![image/png](https://cdn-uploads.huggingface.co/production/uploads/653cd3049107029eb004f968/pLcriXAfp3Y9Z0RGwwVUB.png)
Update 240103: I'm currently retraining on Colab with a larger dataset, but I'm running into issues because I've hit the limits of the V100, and A100s don't seem to be available. It may be some time before the new version is done.

QLoRA trained for 5 epochs on 6,400 rows of Q&A drawn from around 1,000 Wikipedia pages, plus around 100 Python questions and examples from eph1/Alpaca-Lora-GPT4-Swedish-Refined (because I had spent so much time cleaning them and didn't want to throw them away). Also a couple of hundred rows of manually gathered examples and some generated using ChatGPT.