prithivMLmods committed
Update README.md
README.md CHANGED
@@ -22,7 +22,7 @@ Blazer.1-2B-Vision `4-bit precision` is based on the Qwen2-VL model, fine-tuned

# **Use it With Transformer**

-The bitsandbytes library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 & 4-bit quantization functions.
+The `bitsandbytes` library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 & 4-bit quantization functions.

```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
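# --- Added illustration, not part of the original commit diff (the hunk is ---
# --- truncated after the import line). A minimal sketch of loading the    ---
# --- model in 4-bit with bitsandbytes via transformers' BitsAndBytesConfig.---
# --- The repo id "prithivMLmods/Blazer.1-2B-Vision" is assumed from the   ---
# --- page header and may differ from the actual model id.                 ---
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization config; bitsandbytes handles the quantized kernels
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Load the vision-language model with the quantization config applied
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Blazer.1-2B-Vision",
    quantization_config=bnb_config,
    device_map="auto",
)

# The processor bundles the tokenizer and the image preprocessing
processor = AutoProcessor.from_pretrained("prithivMLmods/Blazer.1-2B-Vision")
```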