---
base_model: meta-llama/Llama-3.1-70B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
datasets:
- kobprof/skolegpt-instruct
---
# Uploaded model
- **Compute sponsored by:** Nvidia and Arrow ECS Denmark through Danish Data Science Community
- **Developed by:** ThatsGroes
- **License:** apache-2.0
- **Finetuned from model:** meta-llama/Llama-3.1-70B-Instruct
This is a LoRA adapter for Llama-3.1-70B loaded in 4-bit precision, trained for 1 epoch with rank = lora_alpha = 8.
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
We ended up using 62.52 GB of GPU memory (79.00%), of which 23.83 GB (30.12%) was used for LoRA.
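
For reference, here is a minimal sketch of how such a run is typically set up with Unsloth and TRL's `SFTTrainer`. Apart from `load_in_4bit`, `r = lora_alpha = 8`, the dataset, and the single epoch, every hyperparameter below is an illustrative assumption, not a value taken from this card:

```python
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the base model with 4-bit quantization so the 70B model fits the GPU budget.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-70B-Instruct",
    max_seq_length=2048,  # assumption
    load_in_4bit=True,
)

# Attach LoRA adapters with rank = lora_alpha = 8, as stated above.
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=8,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],  # typical Unsloth choice
)

dataset = load_dataset("kobprof/skolegpt-instruct", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumption: depends on the dataset's formatting
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,  # assumption
        gradient_accumulation_steps=4,  # assumption
        num_train_epochs=1,
        learning_rate=2e-4,             # assumption
        output_dir="outputs",
    ),
)
trainer.train()
```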
Energy consumption during training, as logged by CodeCarbon:

```
[codecarbon INFO @ 11:07:59] Energy consumed for RAM : 2.574882 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 11:07:59] Energy consumed for all GPUs : 4.045188 kWh. Total GPU Power : 270.22211938762564 W
[codecarbon INFO @ 11:07:59] Energy consumed for all CPUs : 0.579916 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 11:07:59] 7.199986 kWh of electricity used since the beginning.
```
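
These figures come from [CodeCarbon](https://github.com/mlco2/codecarbon)'s `EmissionsTracker`. A minimal sketch of how such tracking is wired around a training run (wrapping `trainer.train()` here purely for illustration):

```python
from codecarbon import EmissionsTracker

# Track energy use (RAM, GPU, CPU) for everything between start() and stop().
tracker = EmissionsTracker()
tracker.start()
try:
    trainer.train()  # the training run being measured
finally:
    emissions = tracker.stop()  # returns estimated emissions in kg CO2-eq

print(f"Estimated emissions: {emissions:.4f} kg CO2-eq")
```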
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)