dranger003 committed
Commit c2546c3
1 Parent(s): 4d61730

Update README.md

Files changed (1): README.md (+3 -2)
README.md CHANGED
@@ -4,8 +4,9 @@ pipeline_tag: text-generation
 library_name: gguf
 base_model: CohereForAI/c4ai-command-r-plus
 ---
-**2024-04-05**: Support for this model is still being worked on - [`PR#6491`](https://github.com/ggerganov/llama.cpp/pull/6491).
-For now, you can test the model using this fork: [https://github.com/dranger003/llama.cpp/tree/Noeda/commandr-plus](https://github.com/dranger003/llama.cpp/tree/Noeda/commandr-plus)
+**2024-04-06**: Support for this model is still being worked on - [`PR#6491`](https://github.com/ggerganov/llama.cpp/pull/6491).
+For now, you can test the model using this fork: [https://github.com/dranger003/llama.cpp/tree/Noeda/commandr-plus](https://github.com/dranger003/llama.cpp/tree/Noeda/commandr-plus)
+If you are using [PR #6491](https://github.com/ggerganov/llama.cpp/pull/6491), the quants here will not load because they are tagged with Command-R+ as a new architecture - [pmysl](https://huggingface.co/pmysl/c4ai-command-r-plus-GGUF) has quants that should work with the PR.
 
 * GGUF importance matrix (imatrix) quants for https://huggingface.co/CohereForAI/c4ai-command-r-plus
 * The importance matrix was trained for ~100K tokens (200 batches of 512 tokens) using [wiki.train.raw](https://huggingface.co/datasets/wikitext).
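The README describes two workflows: testing the model via the linked fork, and an importance matrix trained over 200 batches of 512 tokens (~100K tokens). A minimal sketch of how that might look on the command line — the tool names and flags are assumed from llama.cpp builds of that era, and the GGUF filenames are placeholders, not files from this repo:

```shell
# Sketch only: assumes llama.cpp tool names/flags circa April 2024;
# the .gguf filenames below are hypothetical placeholders.

# Build the fork with Command-R+ support.
git clone --branch Noeda/commandr-plus https://github.com/dranger003/llama.cpp
cd llama.cpp
make -j

# Compute an importance matrix: 200 chunks at a 512-token context,
# i.e. 200 * 512 = 102400 tokens (~100K), matching the README.
./imatrix -m c4ai-command-r-plus-f16.gguf -f wiki.train.raw \
  -c 512 --chunks 200 -o imatrix.dat

# Use the imatrix when quantizing, then run the quantized model.
./quantize --imatrix imatrix.dat \
  c4ai-command-r-plus-f16.gguf c4ai-command-r-plus-iq2_xs.gguf iq2_xs
./main -m c4ai-command-r-plus-iq2_xs.gguf -p "Hello" -n 64
```

The imatrix step is what distinguishes these quants: activation statistics from the calibration text weight which tensors keep more precision during quantization.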