AI Model Name: Llama 3 70B "Built with Meta Llama 3" https://llama.meta.com/llama3/license/

How to quantize a 70B model so it fits on 2x 4090 GPUs:

I tried EXL2, AutoAWQ, and SqueezeLLM, and they all failed for different reasons (issues opened).

HQQ worked:

I rented a 4x GPU, 1TB RAM instance on RunPod ($19/hr) with a 1024GB container and 1024GB workspace disk. I think you only need 2x GPUs with 80GB VRAM and 512GB+ system RAM, so I probably overpaid.
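
As a back-of-the-envelope check on why 4-bit fits: 70B parameters at bf16 need roughly 130 GiB for weights alone, while 4-bit weights plus per-group scale/zero metadata land under 48 GiB, the combined VRAM of two 4090s. The overhead model below is a simplification of my own (real usage also needs room for activations and the KV cache):

```python
# Rough VRAM estimate for quantized weights. The per-group metadata model
# (one scale + one zero point per group, meta_bits each) is an assumption
# for illustration; with offload_meta=True that overhead can sit in CPU RAM.

def estimate_vram_gb(n_params: float, nbits: int, group_size: int,
                     meta_bits: int = 16) -> float:
    """Approximate weight-storage footprint in GiB."""
    weight_bits = n_params * nbits
    meta_bits_total = (n_params / group_size) * 2 * meta_bits
    return (weight_bits + meta_bits_total) / 8 / 1024**3

print(f"bf16 : {estimate_vram_gb(70e9, 16, group_size=64):.0f} GiB")
print(f"4-bit: {estimate_vram_gb(70e9, 4, group_size=64):.0f} GiB")
```

The 4-bit figure leaves a margin under 48 GiB for activations and the KV cache, which is why offloading the group metadata to system RAM matters.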

Note that you need to fill in the form to get access to the 70B Meta weights.

You can copy/paste this into the console and it will set everything up automatically:

```bash
apt update
apt install git-lfs vim -y

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
~/miniconda3/bin/conda init bash
source ~/.bashrc

conda create -n hqq python=3.10 -y && conda activate hqq

git lfs install
git clone https://github.com/mobiusml/hqq.git
cd hqq

pip install torch
pip install .

pip install "huggingface_hub[hf_transfer]"
export HF_HUB_ENABLE_HF_TRANSFER=1

huggingface-cli login
```

Create the `quantize.py` file by copy/pasting this into the console (a quoted heredoc is more robust than `echo` for multi-line code):

```bash
cat > quantize.py <<'EOF'
import torch

model_id      = 'meta-llama/Meta-Llama-3-70B-Instruct'
save_dir      = 'cat-llama-3-70b-hqq'
compute_dtype = torch.bfloat16

# 4-bit weights in groups of 64; offload_meta keeps the scale/zero
# metadata in CPU RAM instead of VRAM.
from hqq.core.quantize import *
quant_config = BaseQuantizeConfig(nbits=4, group_size=64, offload_meta=True)

# Also quantize the scales and zero points themselves, in groups of 128.
zero_scale_group_size = 128
quant_config['scale_quant_params']['group_size'] = zero_scale_group_size
quant_config['zero_quant_params']['group_size']  = zero_scale_group_size

from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer
model = HQQModelForCausalLM.from_pretrained(model_id)

from hqq.models.hf.base import AutoHQQHFModel
AutoHQQHFModel.quantize_model(model, quant_config=quant_config,
                              compute_dtype=compute_dtype)

AutoHQQHFModel.save_quantized(model, save_dir)

# Reload the saved checkpoint to verify it round-trips.
model = AutoHQQHFModel.from_quantized(save_dir)
model.eval()
EOF
```
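
To see concretely what `nbits=4` and `group_size=64` mean, here is a generic asymmetric (affine) group quantizer in plain Python. This is a simplified illustration of group quantization in general, not HQQ's actual algorithm, which additionally optimizes the scale and zero point with a half-quadratic solver:

```python
# Generic affine group quantization: each group of weights is mapped to
# integers in [0, 2**nbits - 1] via one scale and one zero point.

def quantize_group(weights, nbits=4):
    """Quantize one group of floats; returns (ints, scale, zero)."""
    qmax = 2**nbits - 1
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / qmax or 1.0  # avoid zero scale for constant groups
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo

def dequantize_group(q, scale, zero):
    """Map the integers back to approximate floats."""
    return [v * scale + zero for v in q]

group = [0.12, -0.5, 0.33, 0.9, -0.1, 0.0, 0.45, -0.77]
q, scale, zero = quantize_group(group, nbits=4)
recon = dequantize_group(q, scale, zero)
max_err = max(abs(a - b) for a, b in zip(group, recon))
# Rounding guarantees max_err <= scale / 2.
```

With `group_size=64`, every 64 weights share one scale/zero pair; smaller groups mean lower error but more metadata, which is exactly the overhead `offload_meta=True` pushes to system RAM.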

Run the script:

```bash
python quantize.py
```
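
Once quantization finishes, you can load the saved checkpoint back and generate. A sketch under stated assumptions: the chat template below is the Llama 3 Instruct format written out by hand (`tokenizer.apply_chat_template` produces the same thing), and the generation settings are illustrative, not tuned:

```python
# Sketch: reload the quantized checkpoint on the GPU box and generate.
# Model/tokenizer calls mirror the quantize script above; moving inputs
# to 'cuda' and greedy decoding are assumptions for illustration.

def build_llama3_prompt(user_msg: str) -> str:
    """Single-turn prompt in the Llama 3 Instruct chat format."""
    return ("<|begin_of_text|>"
            "<|start_header_id|>user<|end_header_id|>\n\n"
            f"{user_msg}<|eot_id|>"
            "<|start_header_id|>assistant<|end_header_id|>\n\n")

def generate(prompt: str, save_dir: str = 'cat-llama-3-70b-hqq',
             max_new_tokens: int = 256) -> str:
    import torch
    from transformers import AutoTokenizer
    from hqq.models.hf.base import AutoHQQHFModel

    tokenizer = AutoTokenizer.from_pretrained('meta-llama/Meta-Llama-3-70B-Instruct')
    model = AutoHQQHFModel.from_quantized(save_dir)
    model.eval()

    inputs = tokenizer(build_llama3_prompt(prompt), return_tensors='pt').to('cuda')
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens,
                             do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(out[0][inputs['input_ids'].shape[1]:],
                            skip_special_tokens=True)

# On the rented box:
# print(generate("Explain HQQ quantization in one paragraph."))
```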