prithivMLmods committed dc91237 (verified) · 1 parent: 25d08ba

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED
@@ -22,7 +22,7 @@ Blazer.1-2B-Vision `4-bit precision` is based on the Qwen2-VL model, fine-tuned

  # **Use it With Transformer**

- The bitsandbytes library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 & 4-bit quantization functions.
+ The `bitsandbytes` library is a lightweight Python wrapper around CUDA custom functions, in particular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 & 4-bit quantization functions.

  ```python
  from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
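
For context on the README line this commit touches, below is a minimal sketch of loading the model in 4-bit with `bitsandbytes` via `BitsAndBytesConfig`. The repo id `prithivMLmods/Blazer.1-2B-Vision` and the specific quantization settings are assumptions for illustration, not taken from the diff.

```python
# Sketch: 4-bit loading with bitsandbytes (repo id and settings are assumptions)
import torch
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor, BitsAndBytesConfig

model_id = "prithivMLmods/Blazer.1-2B-Vision"  # assumed checkpoint name

# NF4 4-bit quantization; compute in fp16 for the matmuls
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(model_id)
```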