Update README.md
README.md
CHANGED
@@ -12,7 +12,7 @@ pipeline_tag: text-generation
 
 [](https://colab.research.google.com/drive/1FEmlwGgn9209iQO-rs2-9UHPLoytwZMH?usp=sharing)
 
-This is an open‑source chart‑understanding Vision‑Language Model (VLM) developed at
+This is an open‑source chart‑understanding Vision‑Language Model (VLM) developed at [Bespoke Labs](https://www.bespokelabs.ai/) and maintained by [Liyan Tang](https://www.tangliyan.com/) and Bespoke Labs. It sets a new state‑of‑the‑art in chart question‑answering (Chart‑QA) among 7‑billion‑parameter models, outperforming much larger closed models such as Gemini‑1.5‑Pro and Claude‑3.5 across seven public benchmarks.
 
 Please check our blog for more information about how we trained the model <Blog Post Link>
 
@@ -112,7 +112,7 @@ llm = LLM(
 )
 
 # Running inference
-image_url = "https://github.com/bespokelabsai/
+image_url = "https://github.com/bespokelabsai/minichart-playground-examples/blob/main/images/ilyc9wk4jf8b1.png?raw=true"
 question = "How many global regions maintained their startup funding losses below 30% in 2022?"
 
 print("\n\n=================Model Output:===============\n\n", get_answer(image_url, question))
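The `# Running inference` lines in the second hunk rely on a `get_answer` helper and the `llm = LLM(...)` instance defined earlier in the README, outside these hunks. Below is a minimal sketch of how such a helper could be written against vLLM's multi-modal `generate` API; the model id, the Qwen2.5‑VL‑style prompt tokens, and the sampling settings are illustrative assumptions, not the README's exact code.

```python
# Minimal sketch of a `get_answer` helper. Assumptions: the model id, the
# Qwen2.5-VL-style prompt tokens, and the sampling settings are illustrative only.
from io import BytesIO

import requests
from PIL import Image
from vllm import LLM, SamplingParams

# The README builds this instance earlier via `llm = LLM(...)`; the model id here is a placeholder.
llm = LLM(model="bespokelabs/Bespoke-MiniChart-7B")

def get_answer(image_url: str, question: str) -> str:
    # Download the chart image and hand it to vLLM as multi-modal input.
    image = Image.open(BytesIO(requests.get(image_url).content)).convert("RGB")
    # Assumed Qwen2.5-VL-style chat template with a single image placeholder.
    prompt = (
        "<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>"
        f"{question}<|im_end|>\n<|im_start|>assistant\n"
    )
    outputs = llm.generate(
        {"prompt": prompt, "multi_modal_data": {"image": image}},
        SamplingParams(temperature=0.0, max_tokens=512),
    )
    return outputs[0].outputs[0].text
```

With this in place, the final lines of the hunk, `get_answer(image_url, question)` wrapped in `print`, would fetch the chart at `image_url` and print the model's answer to the funding-losses question.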