prithivMLmods committed (verified)
Commit 3741e4a · Parent(s): 2b2266f

Update README.md

Files changed (1): README.md (+103, -1)

README.md (showing the changed region):
base_model:
  - Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
library_name: transformers
---

![Add a heading.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/t0z6C_PSP37WIVZBc_Y8-.png)

# **Behemoth-3B-070225-post0.1**

> **Behemoth-3B-070225-post0.1** is a fine-tuned version of **Qwen2.5-VL-3B-Instruct**, optimized for **Detailed Image Captioning**, **OCR Tasks**, and **Chain-of-Thought Reasoning**. Built on the Qwen2.5-VL architecture, it was trained with a focus on the 50k LLaVA-CoT-o1-Instruct dataset to improve detailed image analysis and step-by-step reasoning.

# Key Enhancements

* **Detailed Image Captioning**: Generates comprehensive, contextually rich descriptions of visual content with fine-grained detail recognition.

* **Enhanced OCR Performance**: Extracts and recognizes text from images with high accuracy across varied fonts, layouts, and image qualities.

* **Chain-of-Thought Reasoning**: Provides step-by-step logical reasoning for complex visual analysis tasks, breaking problems down into manageable components.

* **Superior Visual Understanding**: Optimized for precise interpretation of visual elements, spatial relationships, and contextual information within images.

* **Instruction Following**: Follows detailed instructions for specific image analysis tasks while keeping the reasoning transparent.

* **Strong Performance on Vision Tasks**: Achieves competitive results on visual question answering, image captioning, and OCR benchmarks.

* **Efficient 3B Parameter Model**: Provides strong performance while maintaining computational efficiency for broader accessibility.

* **Multi-Modal Reasoning**: Combines visual perception with logical reasoning chains for comprehensive analysis.

# Quick Start with Transformers

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the fine-tuned model and its processor
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Behemoth-3B-070225-post0.1", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("prithivMLmods/Behemoth-3B-070225-post0.1")

# A single-turn chat message with one image and a text instruction
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Provide a detailed caption for this image and explain your reasoning step by step."},
        ],
    }
]

# Apply the chat template and prepare the multimodal inputs
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to("cuda")

# Generate, then decode only the newly generated tokens
generated_ids = model.generate(**inputs, max_new_tokens=256)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```

# Intended Use

This model is intended for:

* **Detailed Image Captioning**: Generating comprehensive, nuanced descriptions of visual content for accessibility, content creation, and analysis purposes.
* **OCR Applications**: High-accuracy text extraction from images, documents, signs, and handwritten content (a usage sketch follows this list).
* **Chain-of-Thought Visual Analysis**: Providing step-by-step reasoning for complex visual interpretation tasks.
* **Educational Content Creation**: Generating detailed explanations of visual materials with logical reasoning chains.
* **Content Accessibility**: Creating detailed alt-text and descriptions for visually impaired users.
* **Visual Question Answering**: Answering complex questions about images with detailed reasoning processes.
* **Document Analysis**: Processing and understanding visual documents with both text extraction and content comprehension.
* **Research and Analysis**: Supporting academic and professional research requiring detailed visual analysis with transparent reasoning.
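
As an illustration of the OCR use case, here is a minimal sketch that reuses the `model` and `processor` objects loaded in the Quick Start section; the image path (`invoice_scan.png`) and the prompt wording are placeholders, not fixed inputs the model expects.

```python
from qwen_vl_utils import process_vision_info

# Assumes `model` and `processor` are already loaded as in the Quick Start.
# The image can be a local path, a URL, or base64 data; this path is a placeholder.
ocr_messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "invoice_scan.png"},
            {"type": "text", "text": "Extract all readable text from this image, line by line, preserving the layout."},
        ],
    }
]

text = processor.apply_chat_template(ocr_messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(ocr_messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs, padding=True, return_tensors="pt"
).to("cuda")

# A larger generation budget helps when a document contains a lot of text.
generated_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```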

# Base Training Details

* **Base Model**: Qwen2.5-VL-3B-Instruct
* **Training Dataset**: 50k LLaVA-CoT-o1-Instruct dataset
* **Specialized Training Focus**: Chain-of-thought reasoning, detailed captioning, and OCR tasks
* **Model Size**: 3 billion parameters for efficient deployment

# Limitations

* **Computational Requirements**: While more efficient than larger models, it still requires adequate GPU memory for optimal performance (see the sketch after this list).
* **Image Quality Sensitivity**: Performance may degrade on extremely low-quality, heavily occluded, or severely distorted images.
* **Processing Speed**: Chain-of-thought reasoning can lead to longer response times than direct-answer models.
* **Language Coverage**: Primarily optimized for English-language tasks, with variable performance on other languages.
* **Context Length**: Limited by the base model's context window for very long reasoning chains.
* **Hallucination Risk**: May occasionally generate plausible but incorrect details, especially in ambiguous visual scenarios.
* **Resource Constraints**: Not optimized for real-time applications on edge devices or low-resource environments.
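
One way to work within these limits, sketched under stated assumptions rather than as a tuned recipe: the Qwen2.5-VL processor accepts `min_pixels` and `max_pixels` arguments that cap how many visual tokens an image is resized into, and a smaller `max_new_tokens` bounds generation time for chain-of-thought outputs. The specific pixel budgets below are illustrative values, not settings validated for this model.

```python
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor

# Illustrative budgets only: large images are downscaled before encoding,
# trading some fine-text OCR fidelity for lower memory use and latency.
min_pixels = 256 * 28 * 28
max_pixels = 640 * 28 * 28

model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    "prithivMLmods/Behemoth-3B-070225-post0.1",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained(
    "prithivMLmods/Behemoth-3B-070225-post0.1",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)

# Downstream, a smaller generation budget (e.g. max_new_tokens=128) shortens
# chain-of-thought answers, cutting latency at the cost of less detailed reasoning.
```

Lower pixel caps reduce memory and latency but can hurt recognition of small text, so the trade-off should be checked on the target images.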