Update README.md
README.md CHANGED
@@ -4,6 +4,7 @@ datasets:
 - FreedomIntelligence/medical-o1-reasoning-SFT
 language:
 - en
+- zh
 base_model:
 - Qwen/Qwen2.5-7B-Instruct
 pipeline_tag: text-generation
@@ -38,7 +39,7 @@ For more information, visit our GitHub repository:
 
 
 # <span>Usage</span>
-You can use HuatuoGPT-o1 in the same way as `Qwen2.5-7B-Instruct`. You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
+You can use HuatuoGPT-o1-7B in the same way as `Qwen2.5-7B-Instruct`. You can deploy it with tools like [vllm](https://github.com/vllm-project/vllm) or [Sglang](https://github.com/sgl-project/sglang), or perform direct inference:
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
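The second hunk cuts off right after the import line, so for context here is a minimal sketch of the "direct inference" path the changed Usage line refers to, using the standard `transformers` chat-template API (the same pattern as `Qwen2.5-7B-Instruct`). The repo ID `FreedomIntelligence/HuatuoGPT-o1-7B` and the example prompt are assumptions for illustration, not taken from this diff.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo ID, following the card's naming; adjust if the actual path differs.
model_id = "FreedomIntelligence/HuatuoGPT-o1-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native dtype
    device_map="auto",    # requires `accelerate`; place layers on available devices
)

# Build a chat-formatted prompt exactly as for Qwen2.5-7B-Instruct.
messages = [{"role": "user", "content": "How should a mild sore throat be managed at home?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate and decode only the newly produced tokens.
outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```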