xxx777xxxASD committed
Commit bb3d87e · verified · 1 Parent(s): 4379619

Update README.md

Files changed (1):
  1. README.md (+1 -1)
README.md CHANGED
```diff
@@ -8,7 +8,7 @@ tags:
 
 ![image/png](https://i.ibb.co/MRXkh6p/icon2.png)
 
-Test merge. Attempt to get good at RP, ERP, general things model with 128k context. Every model here has [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context) in merge instead of regular MistralYarn 128k. The reason is because i belive Epiculous merged it with Mistral Instruct v0.2 to make first 32k context experience as perfect as possible until we reach YaRN from 32 to 128k, if not - it's sad D:, or, i get something wrong.
+Test merge. An attempt at a model good at RP, ERP, and general tasks with 128k context. Every model here has [Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context](https://huggingface.co/Epiculous/Fett-uccine-Long-Noodle-7B-120k-Context) in the merge instead of regular MistralYarn 128k. The reason is that I believe Epiculous merged it with Mistral Instruct v0.2 to make the first 32k of context as good as possible before YaRN takes over from 32k to 128k; if not, that's sad D:, or I got something wrong.
 
 [Exl2, 4.0 bpw](https://huggingface.co/xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k-exl2-bpw-4.0)
```
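
For context, a minimal usage sketch of the merged model with the Hugging Face `transformers` API. The repo id below is an assumption inferred from the exl2 quant link above (the full-precision weights may live under a different name), and the YaRN 32k-to-128k rope scaling should be picked up from the model's own config rather than set here.

```python
# Minimal usage sketch. Assumption: the full-precision merge lives at this
# repo id, inferred from the exl2 quant link above; adjust if it differs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "xxx777xxxASD/NeuralKunoichi-EroSumika-4x7B-128k"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # shard across available GPUs (requires accelerate)
    torch_dtype="auto",  # use the dtype stored in the checkpoint
)

# Any YaRN rope scaling beyond 32k comes from the model's config;
# nothing extra is needed here beyond passing a (possibly very long) prompt.
prompt = "Write a short scene set in a rainy harbor town."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```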