Llama-3.2-3b-q4_k_m-gguf

  • Developed by: ihumaunkabir
  • License: apache-2.0
  • Fine-tuned from model: unsloth/llama-3.2-3b-unsloth-bnb-4bit

This Llama[1] model was fine-tuned 2x faster with Unsloth and Hugging Face's TRL library.
[1] A. Grattafiori et al., The Llama 3 Herd of Models. 2024. [Online]. Available: https://arxiv.org/abs/2407.21783
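
Below is a minimal sketch of the kind of Unsloth + TRL supervised fine-tuning setup implied above, starting from the stated base model. The dataset, sequence length, LoRA settings, and training arguments are illustrative assumptions, not the actual training configuration, and exact argument names may vary across Unsloth/TRL versions.

```python
# Hedged sketch of an Unsloth + TRL SFT run (assumed configuration,
# not the recipe actually used for this model).
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

# Load the 4-bit base model named in this card.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3.2-3b-unsloth-bnb-4bit",
    max_seq_length=2048,   # assumed
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are placeholders.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Placeholder dataset: any dataset exposing a "text" column works here.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
    ),
)
trainer.train()
```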

GGUF model details

  • Model size: 3.21B parameters
  • Architecture: llama
  • Quantization: 4-bit (Q4_K_M)
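
A minimal sketch of running the quantized file locally with llama-cpp-python is shown below. The GGUF file name and generation settings are assumptions for illustration; point model_path at the actual .gguf file downloaded from this repository.

```python
# Minimal local-inference sketch using llama-cpp-python
# (pip install llama-cpp-python). The GGUF file name below is an
# assumption; substitute the actual file downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3.2-3b-q4_k_m.gguf",  # assumed local path to the GGUF file
    n_ctx=2048,                             # context window, adjust as needed
)

out = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain GGUF quantization in one sentence."},
    ],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

The same file can also be run with the llama.cpp CLI or other GGUF-compatible runtimes such as Ollama or LM Studio.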
