pimpalgaonkar committed
Commit 79f40ff · verified · 1 parent: c01e25c

Update README.md

Files changed (1): README.md (+25, −3)
README.md CHANGED
@@ -12,13 +12,30 @@ pipeline_tag: text-generation
 
 [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1FEmlwGgn9209iQO-rs2-9UHPLoytwZMH?usp=sharing)
 
-This is an open‑source chart‑understanding Vision‑Language Model (VLM) developed at [Bespoke Labs](https://www.bespokelabs.ai/) and maintained by [Liyan Tang](https://www.tangliyan.com/) and Bespoke Labs. It sets a new state‑of‑the‑art in chart question‑answering (Chart‑QA) for 7 billion‑parameter models, outperforming much larger closed models such as Gemini‑1.5‑Pro and Claude‑3.5 on seven public benchmarks.
-
-Please check our blog for more information about how we trained the model <Blog Post Link>
+This is an open‑source chart‑understanding Vision‑Language Model (VLM) developed at [Bespoke Labs](https://www.bespokelabs.ai/) and maintained by [Liyan Tang](https://www.tangliyan.com/) and Bespoke Labs. It sets a new state‑of‑the‑art in chart question‑answering (Chart‑QA) for 7 billion‑parameter models, outperforming much larger closed models such as Gemini‑1.5‑Pro and Claude‑3.5 on six public benchmarks.
+
+1. **Blog Post**: https://www.bespokelabs.ai/blog/bespoke-minichart-7b
+2. **Model Playground**: https://playground.bespokelabs.ai/minichart
+---
+
+# Example Outputs
+
+Below are two example outputs from Bespoke-MiniChart-7B.
+
+<p align="left">
+  <img src="https://cdn-uploads.huggingface.co/production/uploads/6444e4417a7b94ddc2d14e1d/E5WGhi_fVNzCsrKeNeIs3.png" width="700">
+</p>
+
+<p align="left">
+  <img src="https://cdn-uploads.huggingface.co/production/uploads/6444e4417a7b94ddc2d14e1d/bYKXRm3sfOdX3zd_5qUpK.png" width="700">
+</p>
 
 # Model Performance
 
-Our model achieves state-of-the-art performance on chart understanding among models with similar sizes. In addition to that, our models can even surpass closed-models such as Gemini-1.5-Pro and Claude-3.5.
+Bespoke-MiniChart-7B achieves state-of-the-art chart-understanding performance among models of similar size, and even surpasses closed models such as Gemini-1.5-Pro and Claude-3.5.
 
 | Model / Category | ChartQAPro (1637) | ChartQA (2500) | EvoChart (1250) | CharXiv (4000) | ChartX (1152) | ChartBench (2100) | MMC (808) | Average |
 |----------------------------------------|------------------:|---------------:|----------------:|---------------:|--------------:|------------------:|----------:|--------:|
@@ -42,6 +59,10 @@ Our model achieves state-of-the-art performance on chart understanding among mod
 
 # Model Use:
 
+You can try the model in the playground: https://playground.bespokelabs.ai/minichart
+
+You can also run it with the following snippet:
+
 ```python
 import requests
 from PIL import Image
@@ -118,6 +139,7 @@ question = "How many global regions maintained their startup funding losses belo
 print("\n\n=================Model Output:===============\n\n", get_answer(image_url, question))
 ```
 
+---
 # Licence
 
 This work is licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).
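
The README's usage snippet is elided in this diff (only its first and last lines appear as context). As a hedged illustration only, not the README's actual `get_answer` helper, the chat-style multimodal payload that VLM inference stacks of this kind commonly consume can be sketched as follows; `build_messages` and the image URL are hypothetical names introduced here:

```python
# Hypothetical sketch: an OpenAI/Qwen-style multimodal message list pairing a
# chart image with a question. The actual prompt format for Bespoke-MiniChart-7B
# is defined by the full snippet in the README, not by this example.

def build_messages(image_url: str, question: str) -> list:
    """Assemble a single-turn chat payload with one image and one text part."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},  # chart to analyze
                {"type": "text", "text": question},     # question about it
            ],
        }
    ]

msgs = build_messages(
    "https://example.com/chart.png",  # placeholder URL
    "How many global regions maintained their startup funding losses below 30%?",
)
print(msgs[0]["role"])  # user
```

A payload in this shape is typically handed to the model's processor or chat template, which interleaves the image tokens with the question text before generation.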