---
license: other
license_name: falcon-llm-license
license_link: https://falconllm.tii.ae/falcon-terms-and-conditions.html
language:
  - en
  - fr
  - es
  - pt
pipeline_tag: text-generation
tags:
  - causal-lm
  - autoround
  - auto-round
  - intel-autoround
  - woq
  - autogptq
  - auto-gptq
  - gptq
  - intel
  - pytorch
  - falcon3
model_name: Falcon3 1B Base
base_model:
  - tiiuae/Falcon3-1B-Base
inference: false
library_name: transformers
model_creator: tiiuae
prompt_template: '{prompt} '
quantized_by: fbaldassarri
---

## Model Information

Quantized version of tiiuae/Falcon3-1B-Base using torch.float32 for quantization tuning.

- 4 bits (INT4)
- group size = 128
- asymmetric quantization
- method: AutoGPTQ

Quantization framework: Intel AutoRound v0.4.4

Note: this INT4 version of Falcon3-1B-Base has been quantized to run inference on CPU.
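
A minimal sketch of loading the quantized checkpoint for CPU inference. The path below is the output directory produced by the replication recipe further down (a local copy of this repository works the same way), and loading GPTQ-format weights additionally requires a GPTQ-capable backend such as auto-gptq:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path to the quantized export (see the recipe below); assumes a GPTQ backend is installed
model_path = "./AutoRound/tiiuae_Falcon3-1B-Base-autogptq-int4-gs128-asym"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, device_map="cpu")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```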

## Replication Recipe

### Step 1: Install Requirements

I suggest installing the requirements in a dedicated Python virtualenv or conda environment.
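
For example, with a plain virtualenv (the environment name here is just an example):

```bash
# Example only: create and activate a dedicated virtual environment
python -m venv autoround-env
source autoround-env/bin/activate
```

Then download the Intel AutoRound sources and install the CPU requirements: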

```bash
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.4.tar.gz
tar -xvzf v0.4.4.tar.gz
cd auto-round-0.4.4
pip install -r requirements-cpu.txt --upgrade
```

### Step 2: Build Intel AutoRound wheel from sources

```bash
pip install -vvv --no-build-isolation -e .[cpu]
```
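
As a quick sanity check (assuming the installed package exposes a `__version__` attribute, as recent releases do):

```bash
python -c "import auto_round; print(auto_round.__version__)"
```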

### Step 3: Script for Quantization

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Load the original full-precision model and tokenizer
model_name = "tiiuae/Falcon3-1B-Base"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# 4-bit, group size 128, asymmetric (sym=False), tuned on CPU without AMP
bits, group_size, sym, device, amp = 4, 128, False, 'cpu', False

autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()

# Export the quantized model in AutoGPTQ format
output_dir = "./AutoRound/tiiuae_Falcon3-1B-Base-autogptq-int4-gs128-asym"
autoround.save_quantized(output_dir, format='auto_gptq', inplace=True)
```
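
The arguments above match the header of this card: `bits=4` and `group_size=128` give the INT4 / group size 128 configuration, `sym=False` yields the asymmetric scheme, `amp=False` keeps tuning in torch.float32, and `format='auto_gptq'` produces the AutoGPTQ-compatible export.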

## License

[Falcon3 License](https://falconllm.tii.ae/falcon-terms-and-conditions.html)

## Disclaimer

This quantized model comes with no warranty. It has been developed only for research purposes.