openfree committed · verified
Commit 0ad895e · 1 Parent(s): a2a6ac0

Update README.md

Files changed (1)
  1. README.md +181 -3

README.md CHANGED
@@ -1,3 +1,181 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: gemma
+ library_name: transformers
+ base_model: google/gemma-3-1b-it
+ language:
+ - en
+ - ko
+ - ja
+ - zh
+ - es
+ - ru
+ - ar
+ - hi
+ - id
+ - ml
+ - fr
+ - de
+ pipeline_tag: image-text-to-text
+ ---
+
+ # Gemma3-R1984-1B
+
+ # Model Overview
+ Gemma3-R1984-1B is a robust Agentic AI platform built on Google's Gemma-3-1B model. It integrates state-of-the-art deep research via web search with multimodal file processing (images, videos, and documents), and it handles long contexts of up to 8,000 tokens. Designed for local deployment on independent servers using NVIDIA L40s, L4, or A100 (ZeroGPU) GPUs, it provides high security, prevents data leakage, and delivers uncensored responses.
+
+ # Key Features
+ **Multimodal Processing:**
+ Supports multiple file types such as images (PNG, JPG, JPEG, GIF, WEBP), videos (MP4), and documents (PDF, CSV, TXT).
+
+ **Deep Research (Web Search):**
+ Automatically extracts keywords from user queries and uses the SERPHouse API to retrieve up to 20 real-time search results. The model incorporates multiple sources and cites them explicitly in the response (see the sketch after this list).
+
+ **Long Context Handling:**
+ Processes inputs of up to 8,000 tokens, ensuring comprehensive analysis of lengthy documents or conversations.
+
+ **Robust Reasoning:**
+ Employs extended chain-of-thought reasoning for systematic and accurate answer generation.
+
+ **Secure Local Deployment:**
+ Operates on independent local servers using NVIDIA L40s GPUs to maximize security and prevent information leakage.
+
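+ A minimal sketch of the web-search step, assuming the public SERPHouse live-search endpoint; the endpoint path, payload fields, and response keys below are assumptions, so check the SERPHouse documentation for your plan:
+
+ ```py
+ import os
+ import requests
+
+ # Assumed endpoint; verify against the SERPHouse documentation.
+ SERPHOUSE_URL = "https://api.serphouse.com/serp/live"
+
+ def web_search(query: str, num_results: int = 20) -> list:
+     """Return up to `num_results` organic results for `query` (hypothetical helper)."""
+     resp = requests.post(
+         SERPHOUSE_URL,
+         headers={"Authorization": f"Bearer {os.environ['SERPHOUSE_API_KEY']}"},
+         json={"q": query, "domain": "google.com", "lang": "en", "num": num_results},
+         timeout=30,
+     )
+     resp.raise_for_status()
+     organic = resp.json().get("results", {}).get("organic", [])
+     return organic[:num_results]
+ ```
+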
+ **Experience the Power of Gemma3-R1984-1B**
+
+ - ✅ **Agentic AI Platform:** An autonomous system designed to make intelligent decisions and act independently.
+ - ✅ **Reasoning & Uncensored:** Delivers clear, accurate, and unfiltered responses by harnessing advanced reasoning capabilities.
+ - ✅ **Multimodal & VLM:** Seamlessly processes and interprets multiple input types (text, images, videos), empowering versatile applications.
+ - ✅ **Deep-Research & RAG:** Integrates state-of-the-art deep research and retrieval-augmented generation to provide comprehensive, real-time insights.
+
+ **Cutting-Edge Hardware for Maximum Security**
+
+ Gemma3-R1984-1B is engineered to run on a dedicated **NVIDIA L40s GPU** within an independent local server environment. This setup guarantees optimal performance and rapid processing, and it enhances security by isolating the model from external networks, effectively preventing information leakage. Whether handling sensitive data or complex queries, the platform ensures that your information remains secure and your AI interactions remain uncompromised.
+
+ # Use Cases
+ - Fast-response conversational agents
+ - Deep research and retrieval-augmented generation (RAG)
+ - Document comparison and detailed analysis
+ - Visual question answering over images and videos
+ - Complex reasoning and research-based inquiries
+
+ # Supported File Formats
+ - Images: PNG, JPG, JPEG, GIF, WEBP
+ - Videos: MP4
+ - Documents: PDF, CSV, TXT
+
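+ Documents and videos are flattened into model-ready text and frames before inference. A minimal sketch using the PyPDF2, pandas, and OpenCV dependencies listed under Requirements; `extract_text` and `sample_video_frames` are hypothetical helpers, not the released code:
+
+ ```py
+ import cv2
+ import pandas as pd
+ from PyPDF2 import PdfReader
+
+ def extract_text(path: str) -> str:
+     """Flatten a PDF, CSV, or TXT file into plain text (hypothetical helper)."""
+     lower = path.lower()
+     if lower.endswith(".pdf"):
+         return "\n".join(page.extract_text() or "" for page in PdfReader(path).pages)
+     if lower.endswith(".csv"):
+         return pd.read_csv(path).to_string(index=False)
+     if lower.endswith(".txt"):
+         with open(path, encoding="utf-8") as f:
+             return f.read()
+     raise ValueError(f"Unsupported document type: {path}")
+
+ def sample_video_frames(path: str, max_frames: int = 8) -> list:
+     """Uniformly sample up to `max_frames` frames from an MP4 (hypothetical helper)."""
+     cap = cv2.VideoCapture(path)
+     total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
+     step = max(total // max_frames, 1)
+     frames = []
+     for idx in range(0, total, step):
+         cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
+         ok, frame = cap.read()
+         if not ok or len(frames) >= max_frames:
+             break
+         frames.append(frame)
+     cap.release()
+     return frames
+ ```
+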
+ # Model Details
+ - Parameter Count: approximately 1B parameters (estimated)
+ - Context Window: up to 8,000 tokens
+ - Hugging Face Model Path: VIDraft/Gemma-3-R1984-1B
+ - License: MIT (Agentic AI code) / Gemma (gemma-3-1b weights)
+
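+ For a quick text-only check of the checkpoint, a minimal sketch using the standard Transformers API (the released multimodal pipeline is the Gradio app described under Running the Model):
+
+ ```py
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "VIDraft/Gemma-3-R1984-1B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id, torch_dtype=torch.bfloat16, device_map="auto"
+ )
+
+ # Build a chat prompt and generate with the low temperature used in the client example
+ messages = [{"role": "user", "content": "Explain retrieval-augmented generation in two sentences."}]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+ output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.15)
+ print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+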
+ # Installation and Setup
+ ## Requirements
+ Ensure you have Python 3.8 or higher installed. The model relies on several libraries:
+
+ - PyTorch (with bfloat16 support)
+ - Transformers
+ - Gradio
+ - OpenCV (opencv-python)
+ - Pillow (PIL)
+ - PyPDF2
+ - Pandas
+ - Loguru
+ - Requests
+
+ Install the dependencies with pip:
+
+ ```bash
+ pip install torch transformers gradio opencv-python pillow PyPDF2 pandas loguru requests
+ ```
+
+ # Environment Variables
+ Set the following environment variables before running the model:
+
+ ## SERPHOUSE_API_KEY
+ Your SERPHouse API key for web-search functionality.
+
+ ## MODEL_ID
+ (Optional) The model identifier; defaults to VIDraft/Gemma-3-R1984-1B.
+
+ ## MAX_NUM_IMAGES
+ (Optional) Maximum number of images allowed per query; defaults to 5.
+
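+ For example (the key value is a placeholder):
+
+ ```bash
+ # Required for web search
+ export SERPHOUSE_API_KEY="your_api_key_here"
+ # Optional overrides (defaults shown)
+ export MODEL_ID="VIDraft/Gemma-3-R1984-1B"
+ export MAX_NUM_IMAGES=5
+ ```
+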
+ # Running the Model
+ Gemma3-R1984-1B comes with a Gradio-based multimodal chat interface. To run the model locally:
+
+ 1. Clone the repository:
+ Ensure you have the repository containing the model code.
+
+ 2. Launch the application by executing the main Python file:
+
+ ```bash
+ python your_filename.py
+ ```
+
+ This will start a local Gradio interface. Open the provided URL in your browser to interact with the model.
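+
+ If you are writing your own entry point, a minimal sketch of a multimodal Gradio chat app; the `respond` handler is a hypothetical stub, and the released Space wires this to model inference, file preprocessing, and web search:
+
+ ```py
+ import gradio as gr
+
+ def respond(message, history):
+     # With multimodal=True, `message` is a dict with "text" and "files" keys.
+     text = message["text"] if isinstance(message, dict) else message
+     # Hypothetical stub; replace with actual model inference.
+     return f"(model reply to: {text})"
+
+ demo = gr.ChatInterface(respond, multimodal=True, title="Gemma3-R1984-1B")
+ demo.launch(server_name="0.0.0.0", server_port=7860)
+ ```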
+
+ # Example Code: Server and Client Request
+ ## Server Example
+ You can deploy the model server locally using the provided Gradio code. Make sure the server is accessible at your designated URL.
+
+ ## Client Request Example
+ Below is an example of how to interact with the model through an HTTP API call (this assumes the server exposes an OpenAI-compatible /v1/chat/completions endpoint):
+
+ ```py
+ import requests
+
+ # Replace with your server URL and token
+ url = "http://<your-server-url>:8000/v1/chat/completions"
+ headers = {
+     "Content-Type": "application/json",
+     "Authorization": "Bearer your_token_here",
+ }
+
+ # Construct the message payload
+ messages = [
+     {"role": "system", "content": "You are a powerful AI assistant."},
+     {"role": "user", "content": "Compare the contents of two PDF files."},
+ ]
+
+ data = {
+     "model": "VIDraft/Gemma-3-R1984-1B",
+     "messages": messages,
+     "temperature": 0.15,
+ }
+
+ # Send the POST request and surface HTTP errors early
+ response = requests.post(url, headers=headers, json=data, timeout=60)
+ response.raise_for_status()
+
+ # Print the response from the model
+ print(response.json())
+ ```
+
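+ The same request as a one-liner with curl (same endpoint assumption as above):
+
+ ```bash
+ curl -s http://<your-server-url>:8000/v1/chat/completions \
+   -H "Content-Type: application/json" \
+   -H "Authorization: Bearer your_token_here" \
+   -d '{"model": "VIDraft/Gemma-3-R1984-1B", "messages": [{"role": "user", "content": "Hello"}], "temperature": 0.15}'
+ ```
+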
+ **Important Deployment Notice:**
+
+ For optimal performance, it is highly recommended to clone the repository using the command below. This model is designed to run on a server equipped with at least an NVIDIA L40s, L4, or A100 (ZeroGPU) GPU. The minimum VRAM requirement is 24 GB, and VRAM usage may temporarily peak at approximately 82 GB during processing.
+
+ ```bash
+ git clone https://huggingface.co/spaces/VIDraft/Gemma-3-R1984-1B
+ ```