# meta-llama/Llama-3.2-1B-Instruct-finetuned with Atomic

## Model Description

This model was fine-tuned from meta-llama/Llama-3.2-1B-Instruct on the fka/awesome-chatgpt-prompts dataset using the Atomic System from NOLA AI.
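The fine-tuned weights should load like any Llama 3.2 Instruct checkpoint. Below is a minimal inference sketch with `transformers`; the repo id is a placeholder, since this card does not state where the fine-tuned weights are published:

```python
# Minimal inference sketch. The repo id below is a PLACEHOLDER, not stated on this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<your-org>/Llama-3.2-1B-Instruct-finetuned"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# Llama 3.2 Instruct checkpoints ship a chat template; use it to build the prompt.
messages = [{"role": "user", "content": "Act as a tour guide."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```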

## Training Data

  • Dataset name: fka/awesome-chatgpt-prompts
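
For reference, the dataset can be inspected with the `datasets` library. A minimal sketch follows; the `act`/`prompt` column names come from the public dataset, not from this card:

```python
from datasets import load_dataset

# fka/awesome-chatgpt-prompts is a small collection of (act, prompt) pairs.
ds = load_dataset("fka/awesome-chatgpt-prompts", split="train")
print(ds.column_names)                           # expected: ['act', 'prompt']
print(ds[0]["act"], "->", ds[0]["prompt"][:80])  # peek at the first example
```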

## Training Arguments

  • Batch size: 48
  • Learning rate: 0.0001
  • Used ATOMIC Speed: True
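
These hyperparameters map onto standard `transformers` `TrainingArguments` as sketched below. The Atomic System itself is a NOLA AI product with no public API documented here, so "Used ATOMIC Speed: True" is only recorded as a comment:

```python
from transformers import TrainingArguments

# Hedged sketch of the listed hyperparameters; the actual Atomic System
# training configuration is not published on this card.
args = TrainingArguments(
    output_dir="llama-3.2-1b-instruct-finetuned",  # placeholder output path
    per_device_train_batch_size=48,                # Batch size: 48 (assumed per-device)
    learning_rate=1e-4,                            # Learning rate: 0.0001
    bf16=True,                                     # matches the BF16 tensor type below
)
# Used ATOMIC Speed: True  (Atomic System-specific setting; no transformers equivalent)
```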

## Evaluation Results

No evaluation results are reported on this card.

## Model Details

  • Format: Safetensors
  • Model size: 1.24B params
  • Tensor type: BF16