---
base_model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3n
license: apache-2.0
language:
- en
datasets:
- chimbiwide/pippa
---
|
|
|
# Gemma3NPC-Float16
|
|
|
#### The "base" model that delivers good general role-playing at great speed. |
|
|
|
We trained this model as a rank-16 LoRA adapter for one epoch over `pippa` on a 40 GB A100 in Google Colab. For this run, we used a learning rate of `2e-5`, a per-device batch size of 1 with 16 gradient accumulation steps (an effective batch size of 16), a cosine learning rate scheduler with an 800-step warmup, and gradient clipping at 0.4.
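
The sketch below reconstructs that configuration with Unsloth and TRL. It is illustrative rather than a copy of our code: the `max_seq_length`, `lora_alpha`, and dataset preprocessing are assumptions, so see the training notebook linked below for the authoritative setup.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer
from unsloth import FastModel

# Load the 4-bit Unsloth base model. max_seq_length is an assumption;
# the card does not state the sequence length used.
model, tokenizer = FastModel.from_pretrained(
    model_name="unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach a rank-16 LoRA adapter, as described above.
# lora_alpha=16 is an assumption (a common rank-matched default).
model = FastModel.get_peft_model(model, r=16, lora_alpha=16)

# PIPPA conversations would still need to be rendered into Gemma's chat
# template; that preprocessing step is omitted here for brevity.
dataset = load_dataset("chimbiwide/pippa", split="train")

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,  # older TRL versions use tokenizer=
    train_dataset=dataset,
    args=SFTConfig(
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,  # effective batch size of 16
        num_train_epochs=1,
        learning_rate=2e-5,
        lr_scheduler_type="cosine",
        warmup_steps=800,
        max_grad_norm=0.4,               # gradient clipping at 0.4
        output_dir="outputs",
    ),
)
trainer.train()
```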
|
|
|
Check out our training notebook [here](https://github.com/chimbiwide/Gemma3NPC/blob/main/Training/Gemma3NPC.ipynb).
|
|
|
---
|
|
|
Here is a graph of the step training loss, logged every 10 steps:
|
|
|
 |
|