Camel-Doc-OCR-062825-mmp-GGUF

Camel-Doc-OCR-062825 is a fine-tuned version of Qwen2.5-VL-7B-Instruct, optimized for document retrieval, content extraction, and analysis recognition. Built on the Qwen2.5-VL architecture, it was trained with a focus on the Opendoc2-Analysis-Recognition dataset to improve document comprehension and information extraction.

Model Files

| File Name | Size | Type | Description |
|---|---|---|---|
| Camel-Doc-OCR-062825.Q2_K.gguf | 3.02 GB | Model | Q2_K quantized model |
| Camel-Doc-OCR-062825.Q3_K_M.gguf | 3.81 GB | Model | Q3_K_M quantized model |
| Camel-Doc-OCR-062825.Q4_K_M.gguf | 4.68 GB | Model | Q4_K_M quantized model |
| Camel-Doc-OCR-062825.Q5_K_M.gguf | 5.44 GB | Model | Q5_K_M quantized model |
| Camel-Doc-OCR-062825.Q6_K.gguf | 6.25 GB | Model | Q6_K quantized model |
| Camel-Doc-OCR-062825.Q8_0.gguf | 8.1 GB | Model | Q8_0 quantized model |
| Camel-Doc-OCR-062825.f16.gguf | 15.2 GB | Model | Full-precision f16 model |
| Camel-Doc-OCR-062825.mmproj-Q8_0.gguf | 853 MB | Projection | Q8_0 multimodal projection |
| Camel-Doc-OCR-062825.mmproj-f16.gguf | 1.35 GB | Projection | f16 multimodal projection |
| .gitattributes | 2.14 kB | Config | Git LFS configuration |
| config.json | 36 Bytes | Config | Model configuration |
| README.md | 633 Bytes | Documentation | Repository documentation |
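To run the model, pair one quantized model file with one of the mmproj projection files. As a hedged sketch (file paths, the sample image name, and the llama.cpp multimodal CLI name and flags are assumptions about a recent llama.cpp build; verify against your installed version), downloading and running one pairing might look like:

```shell
# Sketch: fetch one quant plus the Q8_0 projector, then run llama.cpp's
# multimodal CLI on a document image.
REPO=prithivMLmods/Camel-Doc-OCR-062825-mmp-GGUF
MODEL=Camel-Doc-OCR-062825.Q4_K_M.gguf
MMPROJ=Camel-Doc-OCR-062825.mmproj-Q8_0.gguf

# Download the two files into the current directory.
huggingface-cli download "$REPO" "$MODEL" "$MMPROJ" --local-dir .

# Run inference on a page image (sample_page.png is a placeholder).
llama-mtmd-cli -m "$MODEL" --mmproj "$MMPROJ" \
  --image sample_page.png \
  -p "Extract all text from this document."
```

Lower quants (Q2_K, Q3_K_M) trade accuracy for memory; Q5_K_M and above stay closer to f16 quality at larger sizes.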

Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similarly sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

[Graph: quant-type quality comparison by ikawrakow; image not reproduced here]
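The file sizes in the table above scale roughly with bits per weight. As a rough sketch (the bits-per-weight figures below are approximate community values, not from this repository), a quant's file size can be estimated from the 7.62B parameter count:

```python
# Rough GGUF size estimate: params * bits-per-weight / 8 bytes.
# The bpw figures are approximate; real files run somewhat larger or
# smaller because of metadata and mixed-precision tensors.
APPROX_BPW = {"Q2_K": 3.35, "Q4_K_M": 4.85, "Q8_0": 8.5, "F16": 16.0}

def estimated_size_gb(params: float, quant: str) -> float:
    """Approximate file size in GB for `params` weights at the quant's bpw."""
    return params * APPROX_BPW[quant] / 8 / 1e9

# Roughly matches the 4.68 GB Q4_K_M file listed above.
print(round(estimated_size_gb(7.62e9, "Q4_K_M"), 2))
```

This kind of estimate is mainly useful for checking whether a quant will fit in available RAM or VRAM before downloading it.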

Model Details

Format: GGUF
Model size: 7.62B params
Architecture: qwen2vl

Repository: prithivMLmods/Camel-Doc-OCR-062825-mmp-GGUF