qwen2.5-7b-q4_k_m-gguf

  • Developed by: ihumaunkabir
  • License: apache-2.0
  • Finetuned from model: unsloth/qwen2.5-7b-unsloth-bnb-4bit

This qwen2[1] model was trained 2x faster with Unsloth and Hugging Face's TRL library.

[1] A. Yang et al., “Qwen2 Technical Report,” arXiv preprint arXiv:2407.10671, 2024.
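Since this is a GGUF file, it can be run locally with a llama.cpp-based runtime. The sketch below is a hypothetical usage example, not part of this repository: it assumes the `llama-cpp-python` package is installed and that the GGUF file has been downloaded to a local path (the path shown is illustrative). Qwen2.5 models use the ChatML prompt template, which the helper assembles.

```python
# Minimal local-inference sketch for a Qwen2.5 GGUF model.
# Assumptions: `pip install llama-cpp-python`; the model file path below
# is hypothetical and must point at your downloaded GGUF.
from pathlib import Path


def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt, the chat template used by Qwen2.5 models."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


if __name__ == "__main__":
    model_path = Path("qwen2.5-7b-q4_k_m.gguf")  # hypothetical local path
    if model_path.exists():
        from llama_cpp import Llama

        llm = Llama(model_path=str(model_path), n_ctx=4096)
        out = llm(
            build_chatml_prompt("You are a helpful assistant.", "Hello!"),
            max_tokens=128,
            stop=["<|im_end|>"],
        )
        print(out["choices"][0]["text"])
```

The `stop=["<|im_end|>"]` argument ends generation at the ChatML turn delimiter so the model does not continue into a new turn.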

  • Format: GGUF
  • Model size: 7.62B params
  • Architecture: qwen2
  • Quantization: 4-bit (Q4_K_M)

