vikhyatk committed on
Commit 08ffeb8 · verified · 1 Parent(s): 05d640e

Update README.md

Files changed (1):
  1. README.md +34 -27
README.md CHANGED
@@ -3,43 +3,50 @@ license: apache-2.0
  pipeline_tag: image-text-to-text
  ---
 
- moondream is a small vision language model designed to run efficiently on edge devices. Check out the [GitHub repository](https://github.com/vikhyat/moondream) for details, or try it out on the [Hugging Face Space](https://huggingface.co/spaces/vikhyatk/moondream2)!
 
- **Benchmarks**
 
- | Release | VQAv2 | GQA | TextVQA | DocVQA | TallyQA<br>(simple/full) | POPE<br>(rand/pop/adv) |
- | --- | --- | --- | --- | --- | --- | --- |
- | **2024-08-26** (latest) | 80.3 | 64.3 | 65.2 | 70.5 | 82.6 / 77.6 | 89.6 / 88.8 / 87.2 |
- | 2024-07-23 | 79.4 | 64.9 | 60.2 | 61.9 | 82.0 / 76.8 | 91.3 / 89.7 / 86.9 |
- | 2024-05-20 | 79.4 | 63.1 | 57.2 | 30.5 | 82.1 / 76.6 | 91.5 / 89.6 / 86.2 |
- | 2024-05-08 | 79.0 | 62.7 | 53.1 | 30.5 | 81.6 / 76.1 | 90.6 / 88.3 / 85.0 |
- | 2024-04-02 | 77.7 | 61.7 | 49.7 | 24.3 | 80.1 / 74.2 | - |
- | 2024-03-13 | 76.8 | 60.6 | 46.4 | 22.2 | 79.6 / 73.3 | - |
- | 2024-03-06 | 75.4 | 59.8 | 43.1 | 20.9 | 79.5 / 73.2 | - |
- | 2024-03-04 | 74.2 | 58.5 | 36.4 | - | - | - |
 
  **Usage**
 
- ```bash
- pip install transformers einops
- ```
-
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from PIL import Image
 
- model_id = "vikhyatk/moondream2"
- revision = "2024-08-26"
  model = AutoModelForCausalLM.from_pretrained(
-     model_id, trust_remote_code=True, revision=revision
  )
- tokenizer = AutoTokenizer.from_pretrained(model_id, revision=revision)
-
- image = Image.open('<IMAGE_PATH>')
- enc_image = model.encode_image(image)
- print(model.answer_question(enc_image, "Describe this image.", tokenizer))
- ```
 
- The model is updated regularly, so we recommend pinning the model version to a
- specific release as shown above.
 
  pipeline_tag: image-text-to-text
  ---
 
+ Moondream is a small vision language model designed to run efficiently on edge devices.
+ 
+ * [Website](https://moondream.ai/)
+ * [Demo](https://moondream.ai/playground)
+ * [GitHub](https://github.com/vikhyat/moondream)
+ 
+ This repository contains the latest (**2025-01-09**) release of Moondream, as well as historical releases. The model is updated frequently, so we recommend specifying a revision as shown below if you're using it in a production application.
 
  **Usage**
 
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from PIL import Image
 
  model = AutoModelForCausalLM.from_pretrained(
+     "vikhyatk/moondream2",
+     revision="2025-01-09",
+     trust_remote_code=True,
+     # Uncomment to run on GPU.
+     # device_map={"": "cuda"}
  )
 
+ image = Image.open("<IMAGE_PATH>")
+ 
+ # Captioning
+ print("Short caption:")
+ print(model.caption(image, length="short")["caption"])
+ 
+ print("\nNormal caption:")
+ for t in model.caption(image, length="normal", stream=True)["caption"]:
+     # Streaming generation example, supported for caption() and detect()
+     print(t, end="", flush=True)
+ print(model.caption(image, length="normal"))
+ 
+ # Visual Querying
+ print("\nVisual query: 'How many people are in the image?'")
+ print(model.query(image, "How many people are in the image?")["answer"])
+ 
+ # Object Detection
+ print("\nObject detection: 'face'")
+ objects = model.detect(image, "face")["objects"]
+ print(f"Found {len(objects)} face(s)")
+ 
+ # Pointing
+ print("\nPointing: 'person'")
+ points = model.point(image, "person")["points"]
+ print(f"Found {len(points)} person(s)")
+ ```
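
The detection and pointing examples above only print counts. If you need pixel coordinates from the returned structures, a small post-processing sketch follows. Note that the field names (`x_min`/`y_min`/`x_max`/`y_max` for boxes, `x`/`y` for points) and the normalized [0, 1] coordinate convention are assumptions not documented in this README; check the GitHub repository for the exact schema of the revision you pin.

```python
# Hypothetical helpers for post-processing Moondream detect()/point() output.
# Assumes boxes carry normalized x_min/y_min/x_max/y_max fields and points
# carry normalized x/y fields, all in [0, 1] -- verify against your revision.

def box_to_pixels(obj, width, height):
    """Convert one normalized bounding box to integer pixel coordinates."""
    return (
        int(obj["x_min"] * width),
        int(obj["y_min"] * height),
        int(obj["x_max"] * width),
        int(obj["y_max"] * height),
    )

def point_to_pixels(pt, width, height):
    """Convert one normalized point to integer pixel coordinates."""
    return int(pt["x"] * width), int(pt["y"] * height)

# Example with dummy values standing in for model output:
box = {"x_min": 0.25, "y_min": 0.1, "x_max": 0.75, "y_max": 0.9}
print(box_to_pixels(box, 640, 480))  # (160, 48, 480, 432)
```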