Moondream is a small vision language model designed to run efficiently everywhere.
This repository contains the 2025-04-14 4-bit release of Moondream. On an NVIDIA RTX 3090, it uses 2,450 MB of VRAM and runs at 184 tokens/second. We used quantization-aware training techniques to build this version of the model, allowing us to achieve a 42% reduction in memory usage with only a 0.6% drop in accuracy.
There's more information about this version of the model in our release blog post. Other revisions, as well as release history, can be found here.
Usage
Make sure to install the requirements (the example below also imports transformers, so install it as well if you haven't already):
pip install pillow torchao
from transformers import AutoModelForCausalLM
from PIL import Image

# Load the image you want to run inference on; the path is a placeholder.
image = Image.open("path/to/image.jpg")
model = AutoModelForCausalLM.from_pretrained(
    "moondream/moondream-2b-2025-04-14-4bit",
    trust_remote_code=True,
    device_map={"": "cuda"}
)
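# The intro quotes roughly 2,450 MB of VRAM on an RTX 3090. A minimal
# sketch for checking the footprint on your own GPU after loading, using
# PyTorch's standard memory queries (numbers vary by GPU and driver):
import torch
print(f"Allocated: {torch.cuda.memory_allocated() / 1024**2:.0f} MB")
print(f"Reserved:  {torch.cuda.memory_reserved() / 1024**2:.0f} MB")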
# Optional, but recommended when running inference on a large number of
# images since it has upfront compilation cost but significantly speeds
# up inference:
model.model.compile()
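# To see the compile trade-off concretely, time the first call (which
# pays the upfront compilation cost) against a warm one. A rough sketch
# using only the standard library; `image` is the PIL image loaded above.
import time
start = time.perf_counter()
model.caption(image, length="short")
print(f"First (compiling) call: {time.perf_counter() - start:.1f}s")
start = time.perf_counter()
model.caption(image, length="short")
print(f"Warm call: {time.perf_counter() - start:.1f}s")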
# Captioning
print("Short caption:")
print(model.caption(image, length="short")["caption"])

print("\nNormal caption:")
for t in model.caption(image, length="normal", stream=True)["caption"]:
    # Streaming generation example, supported for caption() and detect()
    print(t, end="", flush=True)
print()
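# Since compile() pays off over many images, a typical pattern is
# captioning a whole folder. A sketch assuming a directory of JPEGs;
# "images/" is a placeholder path.
from pathlib import Path
for path in sorted(Path("images").glob("*.jpg")):
    print(f"{path.name}: {model.caption(Image.open(path), length='short')['caption']}")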
# Visual Querying
print("\nVisual query: 'How many people are in the image?'")
print(model.query(image, "How many people are in the image?")["answer"])
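# query() takes free-form questions, so several can be run against the
# same image. The questions below are illustrative.
for q in ["What is the main subject of the image?", "What colors dominate the scene?"]:
    print(f"Q: {q}")
    print(f"A: {model.query(image, q)['answer']}")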
# Object Detection
print("\nObject detection: 'face'")
objects = model.detect(image, "face")["objects"]
print(f"Found {len(objects)} face(s)")
# Pointing
print("\nPointing: 'person'")
points = model.point(image, "person")["points"]
print(f"Found {len(points)} person(s)")