Upload README.md with huggingface_hub
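The commit title above is the default message that `huggingface_hub` writes when a file is pushed through its upload API. As a minimal sketch of how such an upload could be made (not this repository's actual tooling; the `repo_id` below is an assumption, since the target repository is not named on this page), one might run:

```python
from huggingface_hub import HfApi

api = HfApi()  # assumes a token is already configured via `huggingface-cli login` or HF_TOKEN

api.upload_file(
    path_or_fileobj="README.md",          # local file to push
    path_in_repo="README.md",             # destination path inside the repo
    repo_id="tencent/Hunyuan-1.8B-Instruct",  # hypothetical repo id, for illustration only
    repo_type="model",
    # Leaving commit_message unset typically yields the default
    # "Upload README.md with huggingface_hub" seen in the title above.
)
```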
README.md CHANGED
@@ -1,8 +1,3 @@
----
-base_model:
-- tencent/Hunyuan-1.8B-Pretrain
-library_name: transformers
----
 
 <p align="left">
 <a href="README_CN.md">中文</a>  | English</a>
@@ -15,19 +10,21 @@ library_name: transformers
 
 
 <p align="center">
-🤗 <a href="https://huggingface.co/tencent/
-
-
-🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a> |
-🤖 <a href="https://www.modelscope.cn/models/Tencent-Hunyuan/Hunyuan-1.8B-Instruct"><b>ModelScope</b></a>
+🤗 <a href="https://huggingface.co/tencent/"><b>Hugging Face</b></a> |
+<img src="https://avatars.githubusercontent.com/u/109945100?s=200&v=4" width="16"/> <a href="https://modelscope.cn/models/Tencent-Hunyuan/Hunyuan-A13B-Instruct"><b>ModelScope</b></a> |
+<img src="https://cdn-avatars.huggingface.co/v1/production/uploads/6594d0c6c5f1cd69a48b261d/04ZNQlAfs08Bfg4B1o3XO.png" width="14"/> <a href="https://github.com/Tencent/AngelSlim/tree/main"><b>AngelSlim</b></a>
 </p>
 
+<p align="center">
+🖥️ <a href="https://hunyuan.tencent.com" style="color: red;"><b>Official Website</b></a> |
+🕖 <a href="https://cloud.tencent.com/product/hunyuan"><b>HunyuanAPI</b></a> |
+🕹️ <a href="https://hunyuan.tencent.com/"><b>Demo</b></a>
+</p>
 
 <p align="center">
-<a href="https://github.com/Tencent-Hunyuan/Hunyuan-
-<a href="https://
-<a href="https://
-<a href="https://discord.gg/bsPcMEtV7v"><b>Discord</b></a>
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B"><b>GITHUB</b></a> |
+<a href="https://cnb.cool/tencent/hunyuan/Hunyuan-7B"><b>cnb.cool</b></a> |
+<a href="https://github.com/Tencent-Hunyuan/Hunyuan-7B/blob/main/LICENSE"><b>LICENSE</b></a>
 </p>
 
 
@@ -45,7 +42,7 @@ We have released a series of Hunyuan dense models, comprising both pre-trained a
 - **Efficient Inference**: Utilizes Grouped Query Attention (GQA) and supports multiple quantization formats, enabling highly efficient inference.
 
 ## Related News
-* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** ,
+* 2025.7.30 We have open-sourced **Hunyuan-0.5B-Pretrain** , **Hunyuan-1.8B-Pretrain** , **Hunyuan-4B-Pretrain** , **Hunyuan-7B-Pretrain** , **Hunyuan-0.5B-Instruct** , **Hunyuan-1.8B-Instruct** , **Hunyuan-4B-Instruct** , **Hunyuan-7B-Instruct** on Hugging Face.
 <br>
 
 
@@ -503,4 +500,4 @@ docker run --entrypoint="python3" --gpus all \
 
 ## Contact Us
 
-If you would like to leave a message for our R&D and product teams, Welcome to contact our open-source team . You can also contact us via email ([email protected]).
+If you would like to leave a message for our R&D and product teams, Welcome to contact our open-source team . You can also contact us via email ([email protected]).
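For reference, the first hunk removes the model card's YAML front matter (`base_model: tencent/Hunyuan-1.8B-Pretrain`, `library_name: transformers`). A minimal sketch of setting that metadata back with `huggingface_hub.metadata_update` is shown below; the `repo_id` is again an assumption, since the target repository is not named in this diff:

```python
from huggingface_hub import metadata_update

# Hypothetical repo id; the metadata values mirror the front matter removed in the first hunk.
metadata_update(
    repo_id="tencent/Hunyuan-1.8B-Instruct",
    metadata={
        "base_model": ["tencent/Hunyuan-1.8B-Pretrain"],
        "library_name": "transformers",
    },
    overwrite=True,  # replace existing keys instead of failing on conflicts
)
```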