---
datasets:
- NeelNanda/pile-10k
license: llama3.2
base_model:
- meta-llama/Llama-3.2-11B-Vision-Instruct
---
## Model Details
This model is an int4 quantization of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) with group_size 128 and symmetric quantization, generated by [intel/auto-round](https://github.com/intel/auto-round). Load the model with `revision="6aeae92"` to use the AutoGPTQ format.
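For intuition, symmetric group-wise quantization stores one scale per group of 128 weights and maps each weight to a 4-bit integer in [-8, 7]. The sketch below is illustrative only: it shows plain round-to-nearest quantization, not auto-round's actual algorithm, which additionally tunes the rounding via signed gradient descent (see the citation at the end of this card). The function name `quantize_sym_int4` is ours, not part of any library.
```python
# Illustrative sketch of symmetric, group-wise int4 quantization.
# Not auto-round's implementation: auto-round further optimizes the
# rounding decisions rather than using plain round-to-nearest.
import torch

def quantize_sym_int4(weight: torch.Tensor, group_size: int = 128):
    out_features, in_features = weight.shape
    w = weight.reshape(out_features, in_features // group_size, group_size)
    # Symmetric: the zero-point is fixed at 0, with one scale per group.
    scale = w.abs().amax(dim=-1, keepdim=True) / 7.0  # int4 range is [-8, 7]
    q = torch.clamp(torch.round(w / scale), -8, 7)    # quantized weights
    dequant = (q * scale).reshape(weight.shape)       # what inference effectively sees
    return q, scale, dequant

w = torch.randn(32, 256)
q, scale, w_hat = quantize_sym_int4(w)
print((w - w_hat).abs().max())  # worst-case per-weight quantization error
```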
## How To Use
### Requirements
- Transformers >= 4.45.0
- AutoRound >= 0.4.1
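A quick, optional sanity check for these versions (assuming both packages expose `__version__`, as most do):
```python
# Optional: verify the environment meets the stated requirements.
from packaging import version
import transformers
import auto_round

assert version.parse(transformers.__version__) >= version.parse("4.45.0")
assert version.parse(auto_round.__version__) >= version.parse("0.4.1")
```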
### INT4 Inference
```python
from auto_round import AutoRoundConfig ## must import for auto-round format
import requests
import torch
from PIL import Image
from transformers import MllamaForConditionalGeneration, AutoProcessor
quantized_model_path="OPEA/Llama-3.2-11B-Vision-Instruct-int4-sym-inc"
model = MllamaForConditionalGeneration.from_pretrained(
quantized_model_path,
torch_dtype="auto",
device_map="auto",
##revision="6aeae92" ##AutoGPTQ format
)
processor = AutoProcessor.from_pretrained(quantized_model_path)
image_url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/0052a70beed5bf71b92610a43a52df6d286cd5f3/diffusers/rabbit.jpg"
messages = [
    {"role": "user", "content": [
        {"type": "image"},
        {"type": "text", "text": "Please write a haiku for this one, it would be: "}
    ]}
]
# Preparation for inference
image = Image.open(requests.get(image_url, stream=True).raw)
input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(
    image,
    input_text,
    add_special_tokens=False,
    return_tensors="pt"
).to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0]))
##INT4:
## Here is a haiku for the rabbit:
## Whiskers twitching bright
## Ears perked up, alert and keen
## Spring's gentle delight<|eot_id|>
##BF16:
## Here is a haiku for the rabbit:
## Whiskers twitching fast
## In a coat of blue and brown
## Hoppy little soul<|eot_id|>
image_url = "http://images.cocodataset.org/train2017/000000411975.jpg"
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": "How many people are on the baseball field in the picture?"}
]}
]
##INT4: There are five people on the baseball field in the picture.
##
##BF16: There are five people on the baseball field in the picture.
##
image_url = "https://intelcorp.scene7.com/is/image/intelcorp/processor-overview-framed-badge:1920-1080?wid=480&hei=270"
messages = [
{"role": "user", "content": [
{"type": "image"},
{"type": "text", "text": "Which company does this picture represent?"}
]}
]
##INT4: This picture represents Intel.
##
##BF16: This image represents Intel, a multinational semiconductor corporation headquartered in Santa Clara, California.
##
```
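Since the later examples only redefine `image_url` and `messages`, a small helper makes re-running the pipeline convenient. This wrapper (the name `generate_answer` is ours, not part of any API) simply packages the preparation and generation steps from the snippet above, reusing the already-loaded `model` and `processor`:
```python
# Convenience wrapper around the preparation and generation steps above.
def generate_answer(image_url, messages, max_new_tokens=50):
    image = Image.open(requests.get(image_url, stream=True).raw)
    input_text = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(
        image,
        input_text,
        add_special_tokens=False,
        return_tensors="pt"
    ).to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return processor.decode(output[0])

print(generate_answer(image_url, messages))
```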
## Evaluate the model
Install VLMEvalKit with `pip3 install git+https://github.com/open-compass/VLMEvalKit.git@7de2dcb`. The evaluation process may encounter errors that require changing the model backend or the evaluation code; detailed instructions will be provided in a future update.
```bash
auto-round-mllm --eval --model OPEA/Llama-3.2-11B-Vision-Instruct-int4-sym-inc --tasks MMBench_DEV_EN_V11,ScienceQA_VAL,TextVQA_VAL,POPE --output_dir "./eval_result"
```
|Metric |BF16 |INT4 (Llava calib)|
|:-------------------|:------|:------|
|avg |66.05 |67.81 |
|MMBench_DEV_EN_V11 |52.86 |53.48 |
|ScienceQA_VAL |68.86 |70.39 |
|TextVQA_VAL |54.49 |59.62 |
|POPE |88.00 |87.76 |
### Generate the model
Here is a sample command to reproduce the model:
```bash
pip install auto-round
auto-round-mllm \
--model meta-llama/Llama-3.2-11B-Vision-Instruct \
--device 0 \
--group_size 128 \
--bits 4 \
--iters 1000 \
--nsample 512 \
--seqlen 512 \
--format 'auto_gptq,auto_round' \
--output_dir "./tmp_autoround"
```
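AutoRound also exposes a Python API. The following is a rough sketch based on the auto-round README's `AutoRoundMLLM` entry point; the exact argument names are assumptions here, so check the [auto-round](https://github.com/intel/auto-round) repository for the current signature:
```python
# Hedged sketch of a Python-API equivalent of the CLI command above.
# Argument names follow the auto-round README; verify against your installed version.
from transformers import AutoProcessor, AutoTokenizer, MllamaForConditionalGeneration
from auto_round import AutoRoundMLLM

model_name = "meta-llama/Llama-3.2-11B-Vision-Instruct"
model = MllamaForConditionalGeneration.from_pretrained(model_name, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_name)
processor = AutoProcessor.from_pretrained(model_name)

autoround = AutoRoundMLLM(
    model, tokenizer, processor,
    bits=4, group_size=128, sym=True,
    iters=1000, nsamples=512, seqlen=512,
)
autoround.quantize()
autoround.save_quantized("./tmp_autoround", format="auto_round", inplace=True)
```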
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here is a useful link to learn more about Intel's AI software:
- [Intel Neural Compressor](https://github.com/intel/neural-compressor)
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite
```bibtex
@article{cheng2023optimize,
  title={Optimize weight rounding via signed gradient descent for the quantization of llms},
  author={Cheng, Wenhua and Zhang, Weiwei and Shen, Haihao and Cai, Yiyang and He, Xin and Lv, Kaokao and Liu, Yi},
  journal={arXiv preprint arXiv:2309.05516},
  year={2023}
}
```
[arxiv](https://arxiv.org/abs/2309.05516) [github](https://github.com/intel/auto-round)