ChiMed-GPT

ChiMed-GPT is a Chinese medical large language model (LLM) built by continually training Ziya-v2 on Chinese medical data, with a full training regime comprising pre-training, supervised fine-tuning (SFT), and reinforcement learning from human feedback (RLHF). More information about the model is coming soon.

Citation

If you use or extend our work, please cite the following paper:

@article{USTC-ChiMed-GPT,
  title={ChiMed-GPT: A Chinese Medical Large Language Model with Full Training Regime and Better Alignment to Human Preferences},
  author={Tian, Yuanhe and Gan, Ruyi and Song, Yan and Zhang, Jiaxing and Zhang, Yongdong},
  journal={arXiv preprint arXiv:2311.06025},
  year={2023}
}

Usage

from transformers import AutoTokenizer, LlamaForCausalLM
import torch

ckpt = 'SYNLP/ChiMed-GPT-1.0'

# Prompts follow the "[human]: ... \n[bot]:" template.
query = "[human]:ζ„Ÿε†’ζ€ŽδΉˆε€„η†οΌŸ\n[bot]:"  # "How should a cold be treated?"

# Load the model in fp16 and shard it automatically across available devices.
model = LlamaForCausalLM.from_pretrained(
    ckpt, torch_dtype=torch.float16, device_map="auto"
).eval()
tokenizer = AutoTokenizer.from_pretrained(ckpt)

input_ids = tokenizer(query, return_tensors="pt").input_ids.to('cuda:0')
generate_ids = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    top_p=0.9,
)
output = tokenizer.batch_decode(generate_ids)[0]
print(output)
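
The snippet above handles a single turn. For a multi-turn conversation, one plausible approach is to concatenate earlier turns into the prompt using the same "[human]:"/"[bot]:" markers. This is a minimal sketch under that assumption; the exact multi-turn format is not documented here, and the `chat` helper is illustrative, not part of the released code. It reuses `model` and `tokenizer` from the snippet above.

def chat(model, tokenizer, history, user_message, max_new_tokens=512):
    """history: list of (user, bot) message pairs from earlier turns."""
    # Assumed format: prior turns are simply concatenated with the same markers.
    prompt = ""
    for user_turn, bot_turn in history:
        prompt += f"[human]:{user_turn}\n[bot]:{bot_turn}\n"
    prompt += f"[human]:{user_message}\n[bot]:"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    generate_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
    )
    # Decode only the newly generated tokens so the reply excludes the prompt.
    return tokenizer.decode(
        generate_ids[0][input_ids.shape[-1]:], skip_special_tokens=True
    )

Decoding only the tokens past the prompt length keeps the returned reply free of the echoed conversation history.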

Disclaimer

Please note that the content generated by ChiMed-GPT, including any advice, suggestions, information, or recommendations, does not reflect our views or beliefs. The responses provided by the model should not be considered endorsements, opinions, or advice from us, and we take no responsibility for their accuracy, reliability, or appropriateness. Users should exercise their own judgment and discretion when interpreting and using the information generated by the model.
