JosephusCheung committed · 5e04d1c
1 Parent(s): 2a97280
Update README.md

README.md CHANGED
@@ -38,7 +38,7 @@ tags:
# CausalLM 14B - Fully Compatible with Meta LLaMA 2
Use the transformers library, which requires no remote/external code, to load the model via AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer); model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.

-**News: SOTA
+**News: the DPO version ranks #1 among ~13B models - SOTA of its size on the 🤗 Open LLM Leaderboard**

# Recent Updates: [DPO-α Version](https://huggingface.co/CausalLM/14B-DPO-alpha) outperforms Zephyr-β in MT-Bench

@@ -133,12 +133,15 @@ We are currently unable to produce accurate benchmark templates for non-QA tasks
## 🤗 Open LLM Leaderboard
SOTA chat model of its size on 🤗 Open LLM Leaderboard.

-
+Dec 3, 2023
+The DPO version ranks **#1** among non-base models of its size on the 🤗 Open LLM Leaderboard, outperforming **ALL** ~13B chat models.
+
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/8nV0yOTteP208bjbCv5MC.png)

# CausalLM 14B - Fully Compatible with Meta LLaMA 2
Use the transformers library, which requires no remote/external code, to load the model via AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM to load the LM and GPT2Tokenizer to load the tokenizer); model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.

-#
+# News: the DPO version ranks #1 among ~13B models, the highest-scoring model of its size on the 🤗 Open LLM Leaderboard

# Recent Updates: [DPO-α Version](https://huggingface.co/CausalLM/14B-DPO-alpha) outperforms Zephyr-β in MT-Bench

@@ -231,6 +234,6 @@ STEM accuracy: 66.71
*The JCommonsenseQA benchmark result is extremely close to that of [Japanese Stable LM Gamma 7B (83.47)](https://github.com/Stability-AI/lm-evaluation-harness/tree/jp-stable), the current SOTA Japanese LM, even though our model was not specifically trained on large amounts of Japanese text. This seems to reflect a cross-lingual transfer of metalinguistic capability.*

## 🤗 Open LLM Leaderboard
-
-
-![
+Dec 3, 2023
+The DPO version ranks **#1** among **all** ~13B chat models on the 🤗 Open LLM Leaderboard.
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/63468a143ea42ee2cb49ddd1/8nV0yOTteP208bjbCv5MC.png)
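For reference, the loading instructions in the README above (AutoModelForCausalLM and AutoTokenizer, or an explicit LlamaForCausalLM plus GPT2Tokenizer, with no remote/external code) correspond to a transformers call sequence roughly like the following minimal sketch. The Hub id `CausalLM/14B`, the dtype, and the device placement are illustrative assumptions, not part of this commit.

```python
# Minimal loading sketch following the README's instructions; no trust_remote_code needed.
# Assumptions: the Hub id "CausalLM/14B" and the dtype/device settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/14B"  # assumed repository id

# AutoTokenizer picks up the GPT2-style tokenizer shipped with the repo;
# GPT2Tokenizer.from_pretrained(model_id) is the explicit alternative mentioned in the README.
tokenizer = AutoTokenizer.from_pretrained(model_id)

# AutoModelForCausalLM resolves to LlamaForCausalLM for a LLaMA-compatible checkpoint.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # illustrative; choose a dtype your hardware supports
    device_map="auto",           # requires accelerate; otherwise move the model manually
)

prompt = "Hello, who are you?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```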