Ramikan-BR committed
Commit 9b9569f
1 Parent(s): cd095f1

Update README.md

Files changed (1): README.md (+3, -1)
README.md CHANGED
@@ -24,6 +24,7 @@ This llama model was trained 2x faster with [Unsloth](https://github.com/unsloth
 
 [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
 
+---
 ==((====))==  Unsloth - 2x faster free finetuning | Num GPUs = 1
    \\   /|    Num examples = 967 | Num Epochs = 1
 O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 16
@@ -31,6 +32,7 @@ O^O/ \_/ \    Batch size per device = 2 | Gradient Accumulation steps = 16
  "-____-"     Number of trainable parameters = 100,925,440
 [30/30 26:26, Epoch 0/1]
 Step Training Loss
+
 1 1.737000
 2 1.738000
 3 1.384700
@@ -61,4 +63,4 @@ Step Training Loss
 28 0.653600
 29 0.672500
 30 0.660900
-
+---
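
For reference, the numbers in the log pin down the run's shape: an effective batch of 2 × 16 = 32 sequences, so one epoch over 967 examples works out to the 30 optimizer steps shown as [30/30]. Below is a minimal sketch of an Unsloth/TRL setup consistent with that log; the base model name, dataset file, text column, and LoRA rank are illustrative assumptions not recorded in this commit, while the batch size, accumulation steps, epoch count, and per-step logging come from the log itself.

```python
# Sketch of a training setup matching the logged hyperparameters.
# Only per_device_train_batch_size=2, gradient_accumulation_steps=16,
# num_train_epochs=1, and logging_steps=1 are taken from the README log;
# everything else is a placeholder.
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048  # assumed; not stated in the commit

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/tinyllama-bnb-4bit",  # placeholder base model
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# LoRA adapters; r=16 is illustrative — the rank behind the logged
# 100,925,440 trainable parameters is not recorded in this commit.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
)

dataset = load_dataset("json", data_files="train.json", split="train")  # placeholder data

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # assumes a "text" column in the dataset
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,   # matches "Batch size per device = 2"
        gradient_accumulation_steps=16,  # matches "Gradient Accumulation steps = 16"
        num_train_epochs=1,              # matches "Num Epochs = 1"
        learning_rate=2e-4,
        fp16=True,
        logging_steps=1,                 # loss logged every step, as in the table
        output_dir="outputs",
    ),
)
trainer.train()
```

With 967 examples, a per-device batch of 2, and 16 accumulation steps, one epoch yields roughly 967 / 32 ≈ 30 weight updates, which is why the progress bar reads [30/30] and the loss table runs from step 1 to step 30.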