Commit 2576a37
Parent(s): a0e926c
Update README.md
README.md CHANGED
@@ -37,6 +37,8 @@ tags:
 # CausalLM 14B - Fully Compatible with Meta LLaMA 2
 Use the transformers library, which requires no remote/external code, to load the model via AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM for the LM and GPT2Tokenizer for the tokenizer); model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.

+# Recent Updates: [DPO-α Version](https://huggingface.co/CausalLM/14B-DPO-alpha) outperforms Zephyr-β in MT-Bench
+
 # Friendly reminder: If your VRAM is insufficient, you should use the 7B model instead of the quantized version.
 Compared to the quantized versions, the 7B version and the 14B version demonstrate a high level of consistency.
@@ -125,6 +127,8 @@ We are currently unable to produce accurate benchmark templates for non-QA tasks
 # CausalLM 14B - Fully Compatible with Meta LLaMA 2
 Use the transformers library, which requires no remote/external code, to load the model via AutoModelForCausalLM and AutoTokenizer (or manually specify LlamaForCausalLM for the LM and GPT2Tokenizer for the tokenizer); model quantization is fully compatible with GGUF (llama.cpp), GPTQ, and AWQ.

+# Recent Updates: [DPO-α Version](https://huggingface.co/CausalLM/14B-DPO-alpha) outperforms Zephyr-β on MT-Bench
+
 # Friendly reminder: If your VRAM is insufficient, you should use the 7B model instead of the quantized version.
 Compared to the quantized versions, the 7B and 14B versions demonstrate a high level of consistency.
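For reference, a minimal sketch of the loading flow the changed README lines describe. The repository ID `CausalLM/14B` is an assumption (it is not stated in this diff); everything else uses only the standard transformers classes the README names.

```python
# Minimal sketch of the loading flow the README describes.
# Assumption: the repository ID is "CausalLM/14B" (not stated in this diff).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CausalLM/14B"

# trust_remote_code stays at its default (False): the README's point is that
# no remote/external code is needed to load this model.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Equivalent explicit classes, per the README:
# from transformers import LlamaForCausalLM, GPT2Tokenizer
# model = LlamaForCausalLM.from_pretrained(model_id)
# tokenizer = GPT2Tokenizer.from_pretrained(model_id)
```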