---
pipeline_tag: any-to-any
datasets:
- openbmb/RLAIF-V-Dataset
library_name: transformers
language:
- multilingual
tags:
- minicpm-o
- omni
- vision
- ocr
- multi-image
- video
- custom_code
- audio
- speech
- voice cloning
- live Streaming
- realtime speech conversation
- asr
- tts
---
<h1>A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming on Your Phone</h1>
## MiniCPM-o 2.6 int4
This is the int4 quantized version of [**MiniCPM-o 2.6**](https://huggingface.co/openbmb/MiniCPM-o-2_6).
Running the int4 version uses less GPU memory (about 9 GB).
### Prepare code and install AutoGPTQ
We are submitting a PR to officially support MiniCPM-o 2.6 inference in AutoGPTQ. Until it is merged, install AutoGPTQ from our fork:
```bash
# clone the AutoGPTQ fork and switch to the MiniCPM-o branch
git clone https://github.com/OpenBMB/AutoGPTQ.git && cd AutoGPTQ
git checkout minicpmo

# install AutoGPTQ from source
pip install -vvv --no-build-isolation -e .
```
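To confirm the fork installed correctly, you can run a quick import check (a minimal sanity check; the printed version string will vary with the fork's state):
```python
# Sanity check: the forked AutoGPTQ package should import cleanly.
import auto_gptq
print(auto_gptq.__version__)
```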
### Usage of **MiniCPM-o-2_6-int4**
Compared with the full-precision model, change the model initialization to `AutoGPTQForCausalLM.from_quantized`:
```python
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

# Load the int4 GPTQ-quantized weights.
model = AutoGPTQForCausalLM.from_quantized(
    'openbmb/MiniCPM-o-2_6-int4',
    torch_dtype=torch.bfloat16,
    device="cuda:0",
    trust_remote_code=True,
    disable_exllama=True,
    disable_exllamav2=True
)
tokenizer = AutoTokenizer.from_pretrained(
    'openbmb/MiniCPM-o-2_6-int4',
    trust_remote_code=True
)

# Initialize the TTS module for speech output.
model.init_tts()
```
For detailed usage, see [MiniCPM-o-2_6#usage](https://huggingface.co/openbmb/MiniCPM-o-2_6#usage).
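After initialization, the quantized model is called the same way as the full-precision model. Below is a minimal sketch following the usage pattern documented there; `example.jpg` is a placeholder image path:
```python
import torch
from PIL import Image

# Placeholder image; replace with your own file.
image = Image.open('example.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': [image, 'Describe this image.']}]

# The chat interface mirrors the full-precision MiniCPM-o 2.6 usage.
answer = model.chat(msgs=msgs, tokenizer=tokenizer)
print(answer)

# Optional: check that peak GPU memory stays near the ~9 GB noted above.
print(f"Peak GPU memory: {torch.cuda.max_memory_allocated() / 1e9:.1f} GB")
```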