Llama-2-70b-hf-2bit_g16_s128-HQQ (Deprecated)

This is a version of the Llama-2-70b-hf model quantized to 2-bit via Half-Quadratic Quantization (HQQ): https://mobiusml.github.io/hqq_blog/

This model outperforms an fp16 Llama-2-13B (perplexity 4.13 vs. 4.63) at a comparable size of ~26GB.

Warning: this model is deprecated; it requires older versions of the hqq and transformers libraries.

To run the model, install the HQQ library:

# This model is deprecated and requires pinned older package versions
pip install hqq==0.1.8
pip install transformers==4.46.0

and use it as follows:

from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = 'mobiuslabsgmbh/Llama-2-70b-hf-2bit_g16_s128-HQQ'

# Load the tokenizer and the pre-quantized model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model     = HQQModelForCausalLM.from_quantized(model_id)
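
For a quick sanity check, here is a minimal generation sketch. It assumes a CUDA device is available and that the loaded model exposes the standard transformers generate() API; the prompt and token budget are illustrative, not from the model card:

import torch

prompt = "Explain Half-Quadratic Quantization in one sentence."
# BatchEncoding supports .to(device); the quantized model is assumed to live on the GPU
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=64)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))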

Limitations:
- Only supports single-GPU runtime (see the device-pinning sketch below).
- Not compatible with Hugging Face's PEFT.
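
Since only a single GPU is supported, one common way to pin the process to a specific device is to restrict CUDA visibility before importing torch or hqq. This is an illustrative snippet, not part of the model card, and the device index is an example:

import os

# Make only GPU 0 visible to this process (must be set before importing
# torch/hqq) so all model tensors land on a single device.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"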
