Update README.md
README.md CHANGED
@@ -2,10 +2,11 @@
 license: apache-2.0
 ---
 
-# LimaRP-Mistral-7B
+# LimaRP-Mistral-7B (Alpaca, flipped instruction experiment)
 
 This is a version of LimaRP for [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) with
-about 2000 training samples _up to_ 9k tokens length.
+about 2000 training samples _up to_ 9k tokens length. The second training epoch used a differently arranged
+system instruction.
 
 For more details about LimaRP, see the model page for the [previously released v2 version for Llama-2](https://huggingface.co/lemonilia/limarp-llama2-v2).
 Most details written there apply for this version as well. Generally speaking, LimaRP is a longform-oriented, novel-style