# meta-llama/Llama-3.2-1B-Instruct-finetuned with Atomic
## Model Description

This model was fine-tuned from meta-llama/Llama-3.2-1B-Instruct on the fka/awesome-chatgpt-prompts dataset using the Atomic System from NOLA AI.
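A minimal inference sketch with the 🤗 Transformers library is shown below. The repo id `your-org/Llama-3.2-1B-Instruct-finetuned` is a placeholder (the actual Hub id for this model is not stated here); substitute the real repository name before running.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder — replace with the actual Hub repo id of this fine-tune.
MODEL_ID = "your-org/Llama-3.2-1B-Instruct-finetuned"

def build_messages(user_prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat format Llama-3.2-Instruct expects."""
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the model, apply the chat template, and decode the reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
    input_ids = tokenizer.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    )
    output = model.generate(input_ids, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
```

Because the base model is instruction-tuned, prompts should go through the chat template rather than being passed as raw text.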
## Training Data
- Dataset name: fka/awesome-chatgpt-prompts
## Training Arguments
- Batch size: 48
- Learning rate: 0.0001
- **Used ATOMIC Speed:** True
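The hyperparameters above can be collected in a plain config dict for reproducibility. Note that `atomic_speed` is an assumed key name mirroring the "Used ATOMIC Speed" flag; the Atomic System's actual option name is not documented here.

```python
# Training hyperparameters as reported in this card.
training_config = {
    "per_device_train_batch_size": 48,
    "learning_rate": 1e-4,
    "atomic_speed": True,  # assumed name for the "Used ATOMIC Speed" flag
}
```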
## Evaluation Results