openfree committed
Commit 1677a45 · verified · 1 Parent(s): 2bba946

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -21,7 +21,7 @@ pipeline_tag: image-text-to-text
 # Gemma3-R1984-4B
 
 # Model Overview
-Gemma3-R1984-4B is a robust Agentic AI platform built on Google's Gemma-3-4B model. It integrates state-of-the-art deep research via web search with multimodal file processing—including images, videos, and documents—and handles long contexts up to 8,000 tokens. Designed for local deployment on independent servers using NVIDIA L40S GPUs, it provides high security, prevents data leakage, and delivers uncensored responses.
+Gemma3-R1984-4B is a robust Agentic AI platform built on Google's Gemma-3-4B model. It integrates state-of-the-art deep research via web search with multimodal file processing—including images, videos, and documents—and handles long contexts up to 8,000 tokens. Designed for local deployment on independent servers using NVIDIA L40S, L4, and A100 (ZeroGPU) GPUs, it provides high security, prevents data leakage, and delivers uncensored responses.
 
 # Key Features
 Multimodal Processing:
@@ -175,7 +175,7 @@ print(response.json())
 
 **Important Deployment Notice:**
 
-For optimal performance, it is highly recommended to clone the repository using the following command. This model is designed to run on a server equipped with at least an NVIDIA A100 GPU. The minimum VRAM requirement is 53GB, and VRAM usage may temporarily peak at approximately 82GB during processing.
+For optimal performance, it is highly recommended to clone the repository using the following command. This model is designed to run on a server equipped with at least an NVIDIA L40S, L4, or A100 (ZeroGPU) GPU. The minimum VRAM requirement is 24GB, and VRAM usage may temporarily peak at approximately 82GB during processing.
 
 ```bash
 git clone https://huggingface.co/spaces/VIDraft/Gemma-3-R1984-4B
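The revised deployment notice quotes a 24GB minimum and an ~82GB transient peak. A minimal sketch for sanity-checking a GPU against those stated figures; the function name, classification strings, and the idea of encoding the thresholds as constants are illustrative assumptions, not part of the repository:

```python
# Thresholds taken from the deployment notice in this commit (24 GB minimum,
# ~82 GB transient peak). Values are GB of GPU VRAM.
MIN_VRAM_GB = 24
PEAK_VRAM_GB = 82

def vram_headroom(gpu_vram_gb: float) -> str:
    """Classify a GPU's VRAM against the notice's stated requirements.

    Hypothetical helper for illustration; not part of the repository.
    """
    if gpu_vram_gb < MIN_VRAM_GB:
        return "below minimum"
    if gpu_vram_gb < PEAK_VRAM_GB:
        return "meets minimum, may hit the transient peak"
    return "covers the stated peak"

print(vram_headroom(48))  # → meets minimum, may hit the transient peak
```

For example, a 48GB card such as the L40S clears the 24GB minimum but sits below the ~82GB peak, which is consistent with the notice's warning about temporary spikes during processing.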