medmekk/Llama-3.2-1B-BNB-INT4 (Quantized)
Description
This model is a quantized version of the original Llama-3.2-1B. It has been quantized to int4 using bitsandbytes.
Quantization Details
- Quantization Type: int4
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
- bnb_4bit_quant_storage: uint8
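For reference, the settings above map onto a bitsandbytes configuration in transformers. The sketch below shows how the same quantization could be reproduced at load time; it assumes the base checkpoint is meta-llama/Llama-3.2-1B, which this card does not state explicitly.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantization settings matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.uint8,
)

# Assumed base checkpoint (not stated in this card); weights are quantized on the fly at load time
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",
    quantization_config=bnb_config,
)
```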
Usage
You can use this model in your applications by loading it directly from the Hugging Face Hub:
from transformers import AutoModel
model = AutoModel.from_pretrained("medmekk/Llama-3.2-1B-BNB-INT4")
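For text generation specifically, loading the checkpoint with AutoModelForCausalLM and its tokenizer is the more typical pattern. The sketch below is illustrative; the prompt and generation settings are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "medmekk/Llama-3.2-1B-BNB-INT4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the 4-bit weights on the available device (requires accelerate)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "The capital of France is"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```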