Commit 09042eb · Update README.md
Parent(s): e6fac1b

README.md CHANGED
@@ -35,8 +35,9 @@ tags:
 *Image drawn by GPT-4 DALL·E 3* TL;DR: Perhaps better than all existing models < 70B, in most quantitative evaluations...

 **llama.cpp GGUF models**
-GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) on [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743), new models
-
+GPT2Tokenizer fixed by [Kerfuffle](https://github.com/KerfuffleV2) on [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743), new models are now reuploaded.
+
+Thanks TheBloke for GGUF quants: [https://huggingface.co/TheBloke/CausalLM-14B-GGUF](https://huggingface.co/TheBloke/CausalLM-14B-GGUF)


 # Read Me:
@@ -98,6 +99,8 @@ Win rate **88.26%** on [AlpacaEval Leaderboard](https://tatsu-lab.github.io/alpa
 **llama.cpp GGUF models**
 GPT2Tokenizer support fixed by [Kerfuffle](https://github.com/KerfuffleV2) in [https://github.com/ggerganov/llama.cpp/pull/3743](https://github.com/ggerganov/llama.cpp/pull/3743); new models will be uploaded later.

+Thanks to TheBloke for making the GGUF quantized models: [https://huggingface.co/TheBloke/CausalLM-14B-GGUF](https://huggingface.co/TheBloke/CausalLM-14B-GGUF)
+
 ## Read Me:

 Also see the [7B version](https://huggingface.co/CausalLM/7B)
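As a side note to the GGUF links above, here is a minimal sketch of loading one of the referenced quantizations with llama-cpp-python. The local file name `causallm_14b.Q4_K_M.gguf`, the context/offload settings, and the ChatML-style prompt are assumptions not taken from this commit; check the TheBloke/CausalLM-14B-GGUF model card for the actual file names and the recommended prompt format.

```python
# Minimal sketch, assuming llama-cpp-python is installed and a GGUF file from
# https://huggingface.co/TheBloke/CausalLM-14B-GGUF has been downloaded locally.
# The file name and the ChatML-style prompt below are assumptions, not part of this commit.
from llama_cpp import Llama

llm = Llama(
    model_path="causallm_14b.Q4_K_M.gguf",  # hypothetical local file name
    n_ctx=4096,        # context window size; lower it if memory is tight
    n_gpu_layers=-1,   # offload all layers to GPU when built with GPU support
)

# ChatML-style prompt (assumed format for this model family).
prompt = (
    "<|im_start|>user\n"
    "Briefly introduce yourself.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```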