Text Generation
Safetensors
mistral
conversational
Epiculous committed
Commit 4674046
1 Parent(s): b354d94

Update README.md

Files changed (1):
  1. README.md (+4 -2)
README.md CHANGED
@@ -31,8 +31,10 @@ If you are using GGUF I strongly advise using ChatML, for some reason that quant
"<s>[INST] Prompt goes here [/INST]<\s>"
```
### Context and Instruct
- [Mistral-Custom-Context.json](https://files.catbox.moe/l9w0ry.json) <br/>
- [Mistral-Custom-Instruct.json](https://files.catbox.moe/9xiiwb.json) <br/>
+ ~~[Mistral-Custom-Context.json](https://files.catbox.moe/l9w0ry.json)~~<br/>
+ ~~[Mistral-Custom-Instruct.json](https://files.catbox.moe/9xiiwb.json)~~ <br/>
+ [Magnum-123B-Context.json](https://files.catbox.moe/rkyqwg.json) <br/>
+ [Magnum-123B-Instruct.json](https://files.catbox.moe/obb5oe.json) <br/>
*** NOTE *** <br/>
There have been reports of the quantized model misbehaving with the mistral prompt, if you are seeing issues it may be worth trying ChatML Context and Instruct templates.
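For reference, the note above suggests falling back to ChatML templates; a minimal sketch of the ChatML prompt shape is shown below (the system text and placeholder wording are illustrative, not taken from the linked template files):

```
<|im_start|>system
System prompt goes here<|im_end|>
<|im_start|>user
Prompt goes here<|im_end|>
<|im_start|>assistant
```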