Use the latest release of the `transformers` library for this model.
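
The card itself gives no loading recipe, so the snippet below is only a minimal sketch using the standard `transformers` text-generation API. The repository id is the one this card describes; the generation settings are illustrative, and loading an HQQ-quantized checkpoint may additionally require the `hqq` package to be installed (an assumption, not something stated here).

```python
# Minimal loading/generation sketch (standard transformers API; settings are illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "brunopio/Qwen2.5-7B-Instruct-1M-nbits4-GSNone-Axis0-HQQ-T"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # place layers automatically across available devices
    torch_dtype="auto",  # keep the dtypes stored in the checkpoint
)

# Qwen2.5-Instruct models expect the chat template for prompting.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```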

Model size: 4.36B params (Safetensors; tensor types: FP16, I64, U8)

Model tree for brunopio/Qwen2.5-7B-Instruct-1M-nbits4-GSNone-Axis0-HQQ-T: quantized from the base model Qwen/Qwen2.5-7B.