docscopeOCR-7B-050425-exp-GGUF

docscopeOCR-7B-050425-exp is a fine-tuned version of Qwen/Qwen2.5-VL-7B-Instruct, optimized for Document-Level Optical Character Recognition (OCR), long-context vision-language understanding, and accurate image-to-text conversion with mathematical LaTeX formatting. Built on the Qwen2.5-VL architecture, it significantly improves document comprehension, structured data extraction, and visual reasoning across diverse input formats. This repository provides GGUF quantizations of that model.

Model Files

| File Name | Size | Format | Description |
|---|---|---|---|
| docscopeOCR-7B-050425-exp.IQ4_XS.gguf | 4.25 GB | GGUF (IQ4_XS) | Int4 extra-small quantized model |
| docscopeOCR-7B-050425-exp.Q2_K.gguf | 3.02 GB | GGUF (Q2_K) | 2-bit quantized model |
| docscopeOCR-7B-050425-exp.Q3_K_L.gguf | 4.09 GB | GGUF (Q3_K_L) | 3-bit large quantized model |
| docscopeOCR-7B-050425-exp.Q3_K_M.gguf | 3.81 GB | GGUF (Q3_K_M) | 3-bit medium quantized model |
| docscopeOCR-7B-050425-exp.Q3_K_S.gguf | 3.49 GB | GGUF (Q3_K_S) | 3-bit small quantized model |
| docscopeOCR-7B-050425-exp.Q4_K_M.gguf | 4.68 GB | GGUF (Q4_K_M) | 4-bit medium quantized model |
| docscopeOCR-7B-050425-exp.Q5_K_M.gguf | 5.44 GB | GGUF (Q5_K_M) | 5-bit medium quantized model |
| docscopeOCR-7B-050425-exp.Q5_K_S.gguf | 5.32 GB | GGUF (Q5_K_S) | 5-bit small quantized model |
| docscopeOCR-7B-050425-exp.Q6_K.gguf | 6.25 GB | GGUF (Q6_K) | 6-bit quantized model |
| docscopeOCR-7B-050425-exp.Q8_0.gguf | 8.1 GB | GGUF (Q8_0) | 8-bit quantized model |
| config.json | 36 B | JSON | Configuration file |
| .gitattributes | 2.25 kB | Text | Git attributes configuration |
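
As a minimal sketch, one way to fetch a single quant from this repository is the huggingface_hub Python library (assuming it is installed; the repository id below is taken from this card's title):

```python
from huggingface_hub import hf_hub_download

# Repository id for this GGUF model card.
REPO_ID = "prithivMLmods/docscopeOCR-7B-050425-exp-GGUF"

# Pick one of the quant files from the table above; Q4_K_M (4.68 GB) is a
# common balance between size and quality.
FILENAME = "docscopeOCR-7B-050425-exp.Q4_K_M.gguf"

# Downloads the file into the local Hugging Face cache and returns its path.
local_path = hf_hub_download(repo_id=REPO_ID, filename=FILENAME)
print(f"Model downloaded to: {local_path}")
```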

Quants Usage

(Sorted by size, not necessarily by quality. IQ quants are often preferable to similarly sized non-IQ quants.)

A handy graph by ikawrakow comparing some lower-quality quant types (lower is better) is referenced here; the image itself is not reproduced in this card.
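
As an illustration only, here is a hedged sketch of a text-only smoke test for a downloaded quant using the llama-cpp-python bindings. This assumes llama-cpp-python is installed and that your build supports the qwen2vl architecture; full document/image OCR additionally requires a multimodal (vision projector) setup that is not shown here.

```python
from llama_cpp import Llama

# Path to a downloaded quant, e.g. the file fetched in the earlier snippet.
MODEL_PATH = "docscopeOCR-7B-050425-exp.Q4_K_M.gguf"

# n_ctx sets the context window; adjust it to the RAM you have available.
llm = Llama(model_path=MODEL_PATH, n_ctx=4096, verbose=False)

# Text-only sanity check that the weights load and generate tokens.
output = llm(
    "Describe, in one sentence, what document-level OCR is.",
    max_tokens=64,
)
print(output["choices"][0]["text"])
```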

Model size: 7.62B params
Architecture: qwen2vl