medmekk/Phi-3.5-mini-instruct-bnb-4bit (Quantized)

Description

This model is a quantized version of the base model microsoft/Phi-3.5-mini-instruct. It has been quantized to int4 using bitsandbytes.

Quantization Details

  • Quantization Type: int4
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: bfloat16
  • bnb_4bit_quant_storage: uint8
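
These settings map directly onto transformers' BitsAndBytesConfig. The sketch below shows how an equivalent quantized load might be reproduced from the base model; it assumes microsoft/Phi-3.5-mini-instruct as the base and a CUDA-capable GPU, since bitsandbytes 4-bit inference requires one:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Quantization settings matching the values listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.uint8,
)

# Quantize the base model on the fly (base model name is an assumption)
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3.5-mini-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)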

Usage

You can use this model in your applications by loading it directly from the Hugging Face Hub:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("medmekk/Phi-3.5-mini-instruct-bnb-4bit")
model = AutoModelForCausalLM.from_pretrained("medmekk/Phi-3.5-mini-instruct-bnb-4bit", device_map="auto")
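
A minimal generation sketch building on the load above (the prompt is hypothetical; 4-bit bitsandbytes inference assumes a CUDA GPU):

messages = [{"role": "user", "content": "Explain NF4 quantization in one sentence."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))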