Ramikan-BR committed
Commit cd095f1
1 Parent(s): 27d7c2d

Update README.md

Files changed (1): README.md +40 -1
README.md CHANGED
@@ -22,4 +22,43 @@ base_model: unsloth/tinyllama-bnb-4bit
 
  This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
 
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+ ==((====))==   Unsloth - 2x faster free finetuning | Num GPUs = 1
+    \\   /|     Num examples = 967 | Num Epochs = 1
+ O^O/ \_/ \     Batch size per device = 2 | Gradient Accumulation steps = 16
+ \        /     Total batch size = 32 | Total steps = 30
+  "-____-"      Number of trainable parameters = 100,925,440
+ [30/30 26:26, Epoch 0/1]
+ Step    Training Loss
+ 1       1.737000
+ 2       1.738000
+ 3       1.384700
+ 4       1.086400
+ 5       1.009600
+ 6       0.921000
+ 7       0.830400
+ 8       0.808900
+ 9       0.774500
+ 10      0.759900
+ 11      0.736100
+ 12      0.721200
+ 13      0.733200
+ 14      0.701000
+ 15      0.711700
+ 16      0.701400
+ 17      0.689500
+ 18      0.678800
+ 19      0.675200
+ 20      0.680500
+ 21      0.685800
+ 22      0.681200
+ 23      0.672000
+ 24      0.679900
+ 25      0.675500
+ 26      0.666600
+ 27      0.687900
+ 28      0.653600
+ 29      0.672500
+ 30      0.660900
+
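
For context, the logged run maps onto a standard Unsloth + TRL fine-tuning setup. Below is a minimal sketch of how such a run might be launched; the dataset file, sequence length, learning rate, and LoRA settings are illustrative assumptions, not taken from this commit. Only the base model, batch size, gradient accumulation steps, and step count come from the log above.

```python
# Minimal Unsloth + TRL fine-tuning sketch matching the logged hyperparameters.
# Assumptions (not from this commit): dataset file, max_seq_length,
# learning_rate, and the LoRA rank/alpha/target_modules.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit TinyLlama base model named in the README front matter.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and alpha here are illustrative assumptions.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical training file standing in for the 967-example dataset.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # "Batch size per device = 2"
        gradient_accumulation_steps=16,  # "Gradient Accumulation steps = 16"
        max_steps=30,                    # "Total steps = 30"
        learning_rate=2e-4,              # assumption
        logging_steps=1,                 # loss logged every step, as in the table
        output_dir="outputs",
    ),
)
trainer.train()
```

Note that the banner's total batch size of 32 is per_device_train_batch_size × gradient_accumulation_steps (2 × 16).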