Visualizations + NLP

AI & ML interests

None defined yet.

VisNLP's recent activity

ahmed-masry posted an update about 4 hours ago
Happy to announce AlignVLM – a novel approach to bridging vision and language latent spaces for multimodal understanding in Vision-Language Models (VLMs).

Read the paper: AlignVLM: Bridging Vision and Language Latent Spaces for Multimodal Understanding (2502.01341)

What's the challenge?
Aligning visual features with language embeddings remains a major bottleneck in VLMs. Existing connectors, such as multi-layer perceptrons (MLPs), often introduce noise that degrades performance.

Our Solution: The ALIGN Connector
We propose AlignVLM, a method that maps vision features into a weighted average of LLM text embeddings, ensuring they remain in a space that the LLM can effectively interpret.
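The post describes the connector only at a high level; a minimal PyTorch sketch of the core idea (vision features re-expressed as a convex combination of the LLM's text embeddings) might look like the following. The single linear projection and the class and argument names are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AlignConnectorSketch(nn.Module):
    """Illustrative sketch: map vision features to a weighted average of LLM text embeddings."""

    def __init__(self, vision_dim: int, llm_embed_weight: torch.Tensor):
        super().__init__()
        vocab_size, llm_dim = llm_embed_weight.shape
        # Project each visual feature to scores over the LLM vocabulary.
        self.to_vocab = nn.Linear(vision_dim, vocab_size)
        # Frozen copy of the LLM's input embedding matrix (vocab_size x llm_dim).
        self.register_buffer("text_embeddings", llm_embed_weight.detach().clone())

    def forward(self, vision_feats: torch.Tensor) -> torch.Tensor:
        # vision_feats: (batch, num_patches, vision_dim)
        weights = self.to_vocab(vision_feats).softmax(dim=-1)  # convex weights over the vocabulary
        return weights @ self.text_embeddings                  # (batch, num_patches, llm_dim)
```

Because the softmax weights are non-negative and sum to one, the mapped features stay within the span of the LLM's existing text embeddings, which is one way to read the claim that the LLM can interpret them effectively.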

How does it perform?
We compared ALIGN against common connectors such as MLPs, the Perceiver Resampler, and Ovis, all trained under similar configurations. The result? ALIGN outperforms them all on diverse document understanding tasks.

Meet the AlignVLM Model Family!
We trained Llama 3.1 (1B, 3B, 8B) using our connector and benchmarked them against various models. The results:
AlignVLM surpasses all base VLMs trained under similar configurations.
Our models also perform competitively against instruct VLMs such as Qwen2-VL and InternVL-2.5.

What about robustness to noise?
We injected Gaussian noise (μ=0, σ=3) into the vision encoder's outputs before feeding them to the connector:
ALIGN Connector: Minimal drop (↓1.67%) – proving its high robustness!
MLP Connector: Severe degradation (↓25.54%) – struggling with noisy inputs.
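As a rough sketch of how such a robustness check can be run, the snippet below adds zero-mean Gaussian noise with σ=3 to the vision encoder outputs before the connector. The `connector` module and `evaluate_fn` scorer are hypothetical placeholders for the actual model and benchmark.

```python
import torch

def noise_robustness_check(connector, vision_feats, evaluate_fn, sigma: float = 3.0):
    """Compare a downstream score with and without Gaussian noise on the vision features.

    `connector` and `evaluate_fn` are placeholders standing in for the trained
    connector module and the benchmark scoring function, respectively.
    """
    clean_score = evaluate_fn(connector(vision_feats))

    noise = torch.randn_like(vision_feats) * sigma          # mean 0, std sigma
    noisy_score = evaluate_fn(connector(vision_feats + noise))

    relative_drop = 100.0 * (clean_score - noisy_score) / clean_score
    return clean_score, noisy_score, relative_drop
```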

Code & model weights coming soon! Stay tuned!
ahmed-masry posted an update 4 months ago
Introducing ColFlor: An Efficient, OCR-Free Vision-Language Document Retrieval Model

Earlier this year, ColPali revolutionized document retrieval by eliminating the need for error-prone OCR pipelines. Instead, it directly processes the document images. However, with its 3 billion parameters, ColPali is computationally heavy for large-scale applications.

That's where ColFlor comes in: a smaller, faster alternative! At 17x smaller than ColPali, ColFlor offers a more efficient, OCR-free document retrieval solution, making it ideal for users with limited computing resources (GPU Poor).

Key Highlights:
174M parameters (vs. 3B for ColPali)
9.8x faster query encoding, 5.25x faster image encoding
Only a 1.8% performance drop on text-rich English documents
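For readers new to this model family: ColPali-style retrievers embed the query as a set of token vectors and each document page as a set of patch vectors, then score them with late interaction (MaxSim). The sketch below assumes ColFlor follows the same scoring recipe; the dimensions and random tensors are stand-ins for real encoder outputs, not ColFlor's actual API (see the blog post and code linked below for that).

```python
import torch
import torch.nn.functional as F

def maxsim_score(query_emb: torch.Tensor, page_emb: torch.Tensor) -> torch.Tensor:
    """Late-interaction (MaxSim) relevance between one query and one document page.

    query_emb: (num_query_tokens, dim) multi-vector query embedding
    page_emb:  (num_patches, dim)      multi-vector page embedding
    Both are assumed to be L2-normalized.
    """
    sim = query_emb @ page_emb.T                # (num_query_tokens, num_patches)
    # Best-matching patch per query token, summed over query tokens.
    return sim.max(dim=-1).values.sum()

# Toy usage with random embeddings standing in for real model outputs.
query = F.normalize(torch.randn(16, 128), dim=-1)
pages = [F.normalize(torch.randn(1024, 128), dim=-1) for _ in range(3)]
scores = torch.stack([maxsim_score(query, p) for p in pages])
best_page = scores.argmax().item()
```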

Check out the full blog post for more insights on modeling, training, and evaluations across various document retrieval tasks!
Also, feel free to try our demo on Hugging Face.

Resources:
Blog post: https://huggingface.co/blog/ahmed-masry/colflor
Model: ahmed-masry/ColFlor
Demo: ahmed-masry/ColFlor-Demo
Training code: https://github.com/AhmedMasryKU/colflor
Evaluation code: https://github.com/AhmedMasryKU/vidore-benchmark-colflor
ahmed-masry posted an update 7 months ago
Exciting News! Our latest paper "ChartGemma" is out!

1/3: ChartGemma overcomes a key limitation of existing chart models, which rely too heavily on underlying data tables. Instead, it is trained on data generated directly from chart images, capturing crucial visual trends.

2/3: ChartGemma builds upon PaliGemma from Google Research and is fine-tuned on a high-quality visual instruction-tuning dataset generated with Gemini Flash 1.5.

3/3: ChartGemma achieves state-of-the-art results on chart summarization, question answering, and fact-checking tasks. It also generates more accurate and realistic chart summaries.

Our model and data are publicly available. We also have a cool web demo. Check it out!
Demo: ahmed-masry/ChartGemma
Code: https://github.com/vis-nlp/ChartGemma
Paper: ChartGemma: Visual Instruction-tuning for Chart Reasoning in the Wild (2407.04172)
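Since ChartGemma builds on PaliGemma, inference through the Hugging Face transformers PaliGemma classes should look roughly like the sketch below. The repository id, image path, prompt, and generation settings are illustrative assumptions; check the model card and demo linked above for the exact usage.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "ahmed-masry/chartgemma"  # assumed repo id; see the model/demo links above
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

image = Image.open("chart.png")  # any chart image
prompt = "What is the highest value shown in the chart?"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)

# Decode only the newly generated tokens (the prompt tokens come first).
generated = output_ids[0, inputs["input_ids"].shape[-1]:]
print(processor.decode(generated, skip_special_tokens=True))
```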