File size: 117 Bytes
Commit: 4a1a362
Model: google/gemma-3-1b-it
Epochs: 3
Learning rate: 0.0002
Batch size: 2
LoRA r: 16
Device: GPU
Quantization: False
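The recorded settings describe a LoRA fine-tuning run of google/gemma-3-1b-it. As a minimal sketch (not part of the original file), they could map onto a transformers/peft setup along the following lines; the library calls, output_dir, and lora_alpha value are assumptions, and only the values taken from the file are marked as such in the comments.

```python
# Hypothetical mapping of the recorded hyperparameters onto a PEFT/LoRA setup.
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from peft import LoraConfig, get_peft_model

model_name = "google/gemma-3-1b-it"                       # Model (from the file)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Quantization: False -> plain full-precision load, no 4/8-bit loading
model = AutoModelForCausalLM.from_pretrained(model_name)

lora_config = LoraConfig(
    r=16,                                                 # LoRA r: 16 (from the file)
    lora_alpha=32,                                        # assumed; not recorded in the file
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="gemma-3-1b-it-lora",                      # hypothetical output path
    num_train_epochs=3,                                   # Epochs: 3 (from the file)
    learning_rate=2e-4,                                   # Learning rate: 0.0002 (from the file)
    per_device_train_batch_size=2,                        # Batch size: 2 (from the file)
)
```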