pipeline_tag: text-generation
---

Good storytelling models that can fit on an RTX 3060 12GB. Updated July 2025.

Some notes on best usage:

- don't underestimate the original instruct models, especially from Mistral
- don't waste time on sampler settings; use the recommended ones and optimize the prompt instead
- don't "overparameterize" by writing too long a prompt
- model size/intelligence is important, but models are just mimics; the dataset is very important
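One way to follow the sampler advice above is to pin a single set of generation settings and iterate only on the prompt. The sketch below shows the shape of such a setup; the numeric values are illustrative placeholders, not any model's published recommendations — check each model card for its own.

```python
# Minimal sketch: keep one fixed set of sampler settings and spend your
# tuning effort on the prompt instead. Values here are placeholders;
# substitute the settings recommended on each model's card.
sampler_settings = {
    "do_sample": True,
    "temperature": 0.7,      # placeholder; some model cards suggest lower
    "top_p": 0.9,            # placeholder
    "max_new_tokens": 512,
}

def build_generation_kwargs(prompt: str) -> dict:
    """Combine a prompt with the fixed sampler settings,
    e.g. for a transformers text-generation pipeline call."""
    return {"text_inputs": prompt, **sampler_settings}

kwargs = build_generation_kwargs(
    "Write a short opening scene set in a lighthouse."
)
print(sorted(kwargs))
```

The point of the helper is that every experiment varies only `text_inputs`, so differences in output quality can be attributed to the prompt rather than to sampler drift.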

# model ranking

- **Winner**: [mistralai/Mistral-Small-3.1-24B-Instruct-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Instruct-2503)
- [Sao10K/MN-12B-Lyra-v4](https://huggingface.co/Sao10K/MN-12B-Lyra-v4)
- [MarinaraSpaghetti/NemoMix-Unleashed-12B](https://huggingface.co/MarinaraSpaghetti/NemoMix-Unleashed-12B)
- [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2)