---
license: apache-2.0
datasets:
- FreedomIntelligence/medical-o1-reasoning-SFT
language:
- en
- zh
base_model:
- Qwen/Qwen2.5-72B-Instruct
pipeline_tag: text-generation
tags:
- medical
---

<div align="center">
<h1>
HuatuoGPT-o1-72B
</h1>
</div>

<div align="center">
<a href="https://github.com/FreedomIntelligence/HuatuoGPT-o1" target="_blank">GitHub</a> | <a href="https://arxiv.org/abs/2406.19280" target="_blank">Paper</a>
</div>

# <span>Introduction</span>
**HuatuoGPT-o1** is a medical LLM designed for advanced medical reasoning. Before giving its final response, it generates a complex thought process in which it reflects on and refines its reasoning.

For more information, visit our GitHub repository:
[https://github.com/FreedomIntelligence/HuatuoGPT-o1](https://github.com/FreedomIntelligence/HuatuoGPT-o1).

# <span>Model Info</span>
|                      | Backbone      | Supported Languages | Link                                                                    |
| -------------------- | ------------- | ------------------- | ----------------------------------------------------------------------- |
| **HuatuoGPT-o1-8B**  | LLaMA-3.1-8B  | English             | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-8B)  |
| **HuatuoGPT-o1-70B** | LLaMA-3.1-70B | English             | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-70B) |
| **HuatuoGPT-o1-7B**  | Qwen2.5-7B    | English & Chinese   | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-7B)  |
| **HuatuoGPT-o1-72B** | Qwen2.5-72B   | English & Chinese   | [HF Link](https://huggingface.co/FreedomIntelligence/HuatuoGPT-o1-72B) |

# <span>Usage</span>
You can use HuatuoGPT-o1-72B in the same way as `Qwen2.5-72B-Instruct`: deploy it with tools such as [vLLM](https://github.com/vllm-project/vllm) or [SGLang](https://github.com/sgl-project/sglang), or run inference directly:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "FreedomIntelligence/HuatuoGPT-o1-72B", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("FreedomIntelligence/HuatuoGPT-o1-72B")

input_text = "How to stop a cough?"
messages = [{"role": "user", "content": input_text}]

# Apply the chat template, tokenize, and move the inputs onto the model's device.
inputs = tokenizer(
    tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True),
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=2048)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
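
For serving with vLLM, a minimal sketch is shown below. It assumes a recent vLLM release that provides the offline `LLM.chat` API and enough GPU memory for the 72B weights; the `tensor_parallel_size` value is a placeholder to adjust for your hardware.
```python
from vllm import LLM, SamplingParams

# Minimal vLLM sketch (not from the model card): shard the 72B model across GPUs
# and run the same chat-style prompt as above. Adjust tensor_parallel_size to your setup.
llm = LLM(model="FreedomIntelligence/HuatuoGPT-o1-72B", tensor_parallel_size=4)
sampling_params = SamplingParams(temperature=0.7, max_tokens=2048)

messages = [{"role": "user", "content": "How to stop a cough?"}]
outputs = llm.chat(messages, sampling_params)
print(outputs[0].outputs[0].text)
```
Recent vLLM releases also ship an OpenAI-compatible HTTP server (e.g. `vllm serve FreedomIntelligence/HuatuoGPT-o1-72B`) if you prefer to query the model over an API endpoint.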

HuatuoGPT-o1 adopts a *thinks-before-it-answers* approach, with outputs formatted as:

```
## Thinking
[Reasoning process]

## Final Response
[Output]
```
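
If you only need the answer, you can split the decoded text on the `## Final Response` header. The helper below is a minimal sketch that assumes the output follows the format above; the name `extract_final_response` is ours, not part of the repository.
```python
def extract_final_response(generated_text: str) -> str:
    """Return the text after '## Final Response', or the full output if the marker is missing."""
    marker = "## Final Response"
    if marker in generated_text:
        return generated_text.split(marker, 1)[1].strip()
    return generated_text.strip()

# Example, reusing `outputs` and `tokenizer` from the snippet above:
# answer = extract_final_response(tokenizer.decode(outputs[0], skip_special_tokens=True))
```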

# <span>📖 Citation</span>
```
@misc{chen2024huatuogpto1,
  title={HuatuoGPT-o1, Towards Medical Complex Reasoning with LLMs},
  author={Junying Chen and Zhenyang Cai and Ke Ji and Xidong Wang and Wanlong Liu and Rongsheng Wang and Jianye Hou and Benyou Wang},
  year={2024}
}
```