arxiv:2410.23218
Kanzhi Cheng (cckevinn)
AI & ML interests: None yet
Recent Activity
Reacted to Symbol-LLM's post with 🚀 · about 1 month ago
🥳 Thrilled to introduce our recent efforts on bootstrapping VLMs for multi-modal chain-of-thought reasoning!
📕 Title: Vision-Language Models Can Self-Improve Reasoning via Reflection
🔗 Link: https://huggingface.co/papers/2411.00855
😇 Takeaways:
- We found that VLMs can self-improve their reasoning performance through a reflection mechanism, and importantly, this approach scales with test-time compute (a rough sketch of such a loop appears after this list).
- Evaluations on comprehensive and diverse vision-language reasoning tasks are included!
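To make the "reflection at test time" idea concrete, here is a minimal, hedged sketch of what such a loop could look like; it is not the paper's implementation. The callables `vlm_generate` and `vlm_reflect` are hypothetical stand-ins for a VLM's answer-generation and self-critique calls, and the stopping criterion is an assumption for illustration only.

```python
# Hedged sketch (not the paper's method): a reflect-and-revise loop whose
# number of rounds is the test-time-compute knob the post alludes to.
# `vlm_generate` and `vlm_reflect` are hypothetical callables supplied by
# the caller; their signatures are assumptions for this example.

def reflective_answer(image, question, vlm_generate, vlm_reflect, max_rounds=3):
    """Produce a chain-of-thought answer, then repeatedly reflect and revise.

    Increasing `max_rounds` spends more test-time compute on reflection.
    """
    answer = vlm_generate(image, question)  # initial chain-of-thought answer
    for _ in range(max_rounds):
        # The model critiques its own reasoning on the same image/question.
        critique = vlm_reflect(image, question, answer)
        if critique.get("is_correct", False):  # stop once the critique finds no issue
            break
        # Regenerate, conditioning on the critique's feedback.
        answer = vlm_generate(image, question, feedback=critique["feedback"])
    return answer
```

The design choice here is simply to expose the reflection depth as a parameter so that accuracy can, in principle, be traded for inference cost; how the actual paper structures its reflection and self-improvement training may differ.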
Reacted to Symbol-LLM's post with 🔥 · about 1 month ago (same post as above)