AI & ML interests

Small LMs for small computers

Abhaykoul posted an update about 16 hours ago

🎉 Dhanishtha-2.0-preview-0725 is Now Live

The Intermediate Thinking Model just got even better.
With this update, Dhanishtha is now sharper, smarter, and further trained on tool use, coding, and math.

🧠 What Makes Dhanishtha Different?
Unlike standard CoT models that give one-shot responses, Dhanishtha thinks in layers:

> Think → Answer → Rethink → Improve → Rethink again if needed.
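
Concretely, the thinking arrives as <think>...</think> blocks interleaved with answer text within a single response (see the open-source release post further down this feed). A minimal sketch of separating those phases; the sample response string is illustrative, not real model output:

import re

# Illustrative response interleaving thinking and answer phases.
response = (
    "<think>Estimate the result first.</think>Roughly 40."
    "<think>Recheck the arithmetic.</think>Exactly 42."
)

# re.split with a capture group keeps the <think> contents at odd indices.
parts = re.split(r"<think>(.*?)</think>", response, flags=re.DOTALL)
for i, chunk in enumerate(parts):
    if chunk.strip():
        phase = "think" if i % 2 else "answer"
        print(f"[{phase}] {chunk.strip()}")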

HelpingAI/Dhanishtha-2.0-preview-0725

prithivMLmods posted an update 1 day ago

Open Omega Ω (Forge, Atom, Explora):
A Fusion of Math, Science, and Coding 🧪🤗

Datasets :
⌯⌲ Open-Omega-Forge-1M [Mathematics, Coding, and Science]: prithivMLmods/Open-Omega-Forge-1M
⌯⌲ Open-Omega-Atom-1.5M [Mathematics and Science]: prithivMLmods/Open-Omega-Atom-1.5M
⌯⌲ Open-Omega-Explora-2.5M [Forge + Atom]: prithivMLmods/Open-Omega-Explora-2.5M
⌯⌲ Others [subordinate portion]: curated and blended modular datasets.

Models :
> Omega-Qwen3-Atom-8B : prithivMLmods/Omega-Qwen3-Atom-8B
> Omega-Qwen2.5-Coder-3B : prithivMLmods/Omega-Qwen2.5-Coder-3B

Dataset Collection: prithivMLmods/open-omega-a-fusion-of-math-science-and-coding-68756c37769fa39c4055cc0e

For more information, refer to the dataset card(s).

prithivMLmods posted an update 3 days ago

Excited to share new models that perform exceptionally well in document OCR, image captioning, and visual understanding tasks. Megalodon-OCR and Perseus-Doc-VL have both demonstrated significant improvements across key areas. You can explore live demos on Hugging Face Spaces to compare their performance with other top-tier models on the Hub. 🤗📄

Models & Spaces :
> Megalodon-OCR (3B) : prithivMLmods/Megalodon-OCR-Sync-0713
> Perseus-Doc-vl (7B): prithivMLmods/Perseus-Doc-vl-0712
> Doc-VLMs-OCR : prithivMLmods/Doc-VLMs-OCR
> core-OCR : prithivMLmods/core-OCR
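
As a rough local-inference sketch (untested; it assumes these checkpoints work with the generic image-text-to-text pipeline in a recent transformers release, and the file name and prompt are illustrative):

from transformers import pipeline

# Load the document OCR VLM through the generic image-text-to-text pipeline.
ocr = pipeline("image-text-to-text", model="prithivMLmods/Megalodon-OCR-Sync-0713")

messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "document_page.png"},
        {"type": "text", "text": "Extract all text from this page as Markdown."},
    ],
}]

result = ocr(text=messages, max_new_tokens=1024)
print(result[0]["generated_text"][-1]["content"])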


Datasets Caption Mix :
> Corvus-OCR-Caption-Mix : prithivMLmods/Corvus-OCR-Caption-Mix
> Corvus-OCR-Caption-Mini-Mix : prithivMLmods/Corvus-OCR-Caption-Mini-Mix

Collections :
> Corvus OCR Caption Mix: prithivMLmods/corvus-ocr-caption-mix-687349bfaceffbd10976f0cc
> Captioning / OCR / DocTable : prithivMLmods/captioning-ocr-doctable-687382e1da822008bb5c06f2

GitHub :
> OCR-ReportLab : https://github.com/PRITHIVSAKTHIUR/OCR-ReportLab/blob/main/Megalodon-OCR-Sync-0713-ColabNotebook/Megalodon_OCR_Sync_0713_ReportLab.ipynb

Other Spaces :
> Multimodal-OCR : prithivMLmods/Multimodal-OCR
> Multimodal-VLMs : prithivMLmods/Multimodal-VLMs
> Multimodal-OCR2 : prithivMLmods/Multimodal-OCR2
> Florence-2-Image-Caption : prithivMLmods/Florence-2-Image-Caption
> VisionScope-R2 : prithivMLmods/VisionScope-R2
> DocScope-R1 : prithivMLmods/DocScope-R1

To learn more, visit the respective model cards.

Tonic posted an update 4 days ago

🙋🏻‍♂️ Normalize adding compute & runtime traces to your model cards

prithivMLmods posted an update 7 days ago

Demo of OCR & Math QA using multi-capable VLMs like MonkeyOCR-pro-1.2B, R1-One-Vision, Visionary-R1, Vision-Matters-7B, and ViGaL-7B, all running together with support for both image and video inference.

✦ Demo Spaces :
‷ Multimodal VLMs : prithivMLmods/Multimodal-VLMs

✦ Models :
‷ Visionary-R1 : maifoundations/Visionary-R1
‷ MonkeyOCR [1.2B] : echo840/MonkeyOCR-pro-1.2B
‷ ViGaL-7B : yunfeixie/ViGaL-7B
‷ Lh41-1042-Magellanic-7B-0711 : prithivMLmods/Lh41-1042-Magellanic-7B-0711
‷ Vision-Matters-7B : Yuting6/Vision-Matters-7B
‷ WR30a-Deep-7B-0711 : prithivMLmods/WR30a-Deep-7B-0711

✦ MonkeyOCR-pro-1.2B Colab T4 Demo [notebook]
‷ MonkeyOCR-pro-1.2B-ReportLab : https://github.com/PRITHIVSAKTHIUR/OCR-ReportLab/blob/main/MonkeyOCR-0709/MonkeyOCR-pro-1.2B-ReportLab.ipynb

✦ GitHub : https://github.com/PRITHIVSAKTHIUR/OCR-ReportLab

Special thanks to Hugging Face for the community GPU grant. 🤗🚀

To learn more, visit the respective model cards.

Tonic posted an update 10 days ago

Who's going to Raise Summit in Paris tomorrow?

If you're around, I would love to meet you :-)

prithivMLmods posted an update 13 days ago

Multimodal OCR with ReportLab? On Colab T4? (Nanonets OCR, Monkey OCR, OCRFlux 3B, Typhoon OCR 3B?) Yeah, it's possible. I've made dedicated Colab notebooks to experiment with these models (all built on top of Qwen2.5 VL). 🤗🚀

Download notebooks here :

✦ NanonetsOCR : https://colab.research.google.com/drive/1VvA-amvSVxGdWgIsh4_by6KWOtEs_Iqp
✦ MonkeyOCR : https://colab.research.google.com/drive/1vPCojbmlXjDFUt06FJ1tjgnj_zWK4mUo
✦ OCRFluxOCR : https://colab.research.google.com/drive/1TDoCXzWdF2hxVLbISqW6DjXAzOyI7pzf
✦ TyphoonOCR : https://colab.research.google.com/drive/1_59zvLNnn1kvbiSFxzA1WiqhpbW8RKbz

🜲 GitHub : https://github.com/PRITHIVSAKTHIUR/OCR-ReportLab

What does it do?

1. Performs OCR on the input image
2. Generates a DOCX or PDF file with the input image and the extracted text
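
As a rough illustration of step 2, a minimal ReportLab sketch (file names and layout are illustrative; the notebooks linked above are the working reference) that places the input image and the extracted text on a single PDF page:

from reportlab.lib.pagesizes import A4
from reportlab.lib.units import cm
from reportlab.pdfgen import canvas

def image_and_text_to_pdf(image_path: str, ocr_text: str, out_path: str) -> None:
    """Render the source image on the top half and the OCR text below it."""
    page_w, page_h = A4
    c = canvas.Canvas(out_path, pagesize=A4)

    # Image in the upper half of the page, aspect ratio preserved.
    c.drawImage(image_path, 1 * cm, page_h / 2,
                width=page_w - 2 * cm, height=page_h / 2 - 1 * cm,
                preserveAspectRatio=True)

    # Extracted text flows line by line into the lower half.
    text = c.beginText(1 * cm, page_h / 2 - 1 * cm)
    text.setFont("Helvetica", 10)
    for line in ocr_text.splitlines():
        text.textLine(line)
    c.drawText(text)
    c.save()

# Hypothetical usage: the text would come from one of the OCR models above.
image_and_text_to_pdf("input.png", "extracted OCR text...", "report.pdf")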

To learn more, visit the respective model cards.

prithivMLmods posted an update 15 days ago

A bunch of comparable demos for multimodal VLMs (excelling in OCR, cinematography understanding, spatial reasoning, etc.) is now up on the Hub 🤗, covering recent releases through June 2025.

✦ Demo Spaces :

> [Nanonets-OCR-s, MonkeyOCR, Typhoon-OCR-7B, SmolDocling] : prithivMLmods/Multimodal-OCR2
> [GLM-4.1v, docscopeOCR-7B, MonkeyOCR, coreOCR-7B] : prithivMLmods/core-OCR
> [Camel-Doc-OCR, ViLaSR-7B, OCRFlux-3B, ShotVL-7B] : prithivMLmods/Doc-VLMs-v2-Localization
> [SkyCaptioner-V1, SpaceThinker-3B, coreOCR-7B, SpaceOm-3B] : prithivMLmods/VisionScope-R2
> [RolmOCR-7B, Qwen2-VL-OCR-2B, Aya-Vision-8B, Nanonets-OCR-s] : prithivMLmods/Multimodal-OCR
> [DREX-062225-7B, Typhoon-OCR-3B, olmOCR-7B-0225, VIREX-062225-7B] : prithivMLmods/Doc-VLMs-OCR
> [Cosmos-Reason1-7B, docscopeOCR-7B, Captioner-7B, visionOCR-3B] : prithivMLmods/DocScope-R1

✦ Space Collection : prithivMLmods/multimodal-implementations-67c9982ea04b39f0608badb0

To learn more, visit the respective model cards.

Abhaykoul posted an update 15 days ago

🎉 Dhanishtha 2.0 Preview is Now Open Source!

The world's first Intermediate Thinking Model is now available to everyone!

Dhanishtha 2.0 Preview brings revolutionary intermediate thinking capabilities to the open-source community. Unlike traditional reasoning models that think once, Dhanishtha can think, answer, rethink, answer again, and continue rethinking as needed, using multiple <think> blocks between responses.

🚀 Key Features
- Intermediate thinking: Think → Answer → Rethink → Answer → Rethink if needed...
- Token efficient: Uses up to 79% fewer tokens than DeepSeek R1 on similar queries
- Transparent thinking: See the model's reasoning process in real-time
- Open source: Freely available for research and development


HelpingAI/Dhanishtha-2.0-preview
https://helpingai.co/chat
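
For local experimentation, a minimal sketch (untested; it assumes the checkpoint loads as a standard causal LM with a chat template, and the prompt and generation length are illustrative):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HelpingAI/Dhanishtha-2.0-preview"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many r's are in 'strawberry'?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Keep special tokens so the interleaved <think> blocks stay visible.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=False))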

Nymbo posted an update 16 days ago

Anyone know how to reset Claude web's MCP config? I connected mine when the HF MCP server first released, with just the default example spaces added. I've since added lots of other MCP spaces, but Claude.ai doesn't update the available tools... "Disconnecting" the HF integration does nothing; deleting it and adding it again does nothing.

Refreshing tools works fine in VS Code because I can manually restart it in mcp.json, but claude.ai has no such option. Anyone got any ideas?

prithivMLmods posted an update 17 days ago

The demo for Camel-Doc-OCR-062825 (exp) is optimized for document retrieval and direct Markdown (.md) generation from images and PDFs. Additional demos include OCRFlux-3B (document OCR), ViLaSR (spatial reasoning with visual drawing), and ShotVL (cinematic language understanding).

✦ Space : prithivMLmods/Doc-VLMs-v2-Localization

Models :
‷ camel-doc-ocr-062825 : prithivMLmods/Camel-Doc-OCR-062825
‷ ocrflux-3b : ChatDOC/OCRFlux-3B
‷ vilasr : AntResearchNLP/ViLaSR
‷ shotvl : Vchitect/ShotVL-7B

‷ Multimodal Implementations : prithivMLmods/multimodal-implementations-67c9982ea04b39f0608badb0

Special thanks to Hugging Face for the community GPU grant. This Space supports image and video inference, with a result Markdown canvas and object detection/localization. 🤗🚀
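
As a sketch of how a PDF page might be fed to such a model outside the Space (not the Space's actual implementation; PyMuPDF for rasterization, file names and prompt illustrative):

import fitz  # PyMuPDF: pip install pymupdf
from transformers import pipeline

# Rasterize the first PDF page to a PNG, then ask the model for Markdown.
doc = fitz.open("report.pdf")
doc[0].get_pixmap(dpi=200).save("page0.png")

ocr = pipeline("image-text-to-text", model="prithivMLmods/Camel-Doc-OCR-062825")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "page0.png"},
        {"type": "text", "text": "Convert this page to Markdown."},
    ],
}]
result = ocr(text=messages, max_new_tokens=2048)
print(result[0]["generated_text"][-1]["content"])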

To learn more, visit the respective model cards.

Abhaykoul posted an update 22 days ago

Introducing Dhanishtha 2.0: World's first Intermediate Thinking Model

Dhanishtha 2.0 is the world's first LLM designed to think between responses, unlike other reasoning LLMs, which think just once.

Dhanishtha can think, rethink, self-evaluate, and refine in between responses using multiple <think> blocks.
This technique makes it highly token-efficient: it uses up to 79% fewer tokens than DeepSeek R1.
---

You can try our model here: https://helpingai.co/chat
Also, we're going to open-source Dhanishtha on July 1st.

---
For Devs:
🔑 Get your API key at https://helpingai.co/dashboard
from HelpingAI import HAI  # pip install HelpingAI==1.1.1
from rich import print

hai = HAI(api_key="hl-***********************")

response = hai.chat.completions.create(
    model="Dhanishtha-2.0-preview",
    messages=[{"role": "user", "content": "What is the value of ∫₀^∞ x³/(x−1) dx?"}],
    stream=True,
    hide_think=False,  # set True to hide the model's intermediate <think> blocks
)

# Stream the reply, printing thinking and answer segments as they arrive.
for chunk in response:
    print(chunk.choices[0].delta.content, end="", flush=True)

prithivMLmods posted an update 22 days ago

Demos for DREX-062225-exp (Document Retrieval and Extraction eXpert, experimental), typhoon-ocr-3b (a bilingual document parsing model built specifically for real-world documents), VIREX-062225-exp (Video Information Retrieval and Extraction eXpert, experimental), and olmOCR-7B-0225-preview (a document parsing model based on Qwen2-VL) are now available. 🤗

✦ Demo : prithivMLmods/Doc-VLMs-OCR (with .md canvas)

‷ DREX-062225-exp : prithivMLmods/DREX-062225-exp
‷ typhoon-ocr-3b : scb10x/typhoon-ocr-3b
‷ VIREX-062225-exp : prithivMLmods/VIREX-062225-exp
‷ olmOCR-7B-0225-preview : allenai/olmOCR-7B-0225-preview

‷ Collection : prithivMLmods/doc-vl-685839064a863e1cd23be3f1
‷ Multimodal Implementations : prithivMLmods/multimodal-implementations-67c9982ea04b39f0608badb0

To learn more, visit the respective model cards.
Β·

prithivMLmods posted an update 23 days ago

Updated docscopeOCR-7B-050425-exp to DREX-062225-exp, with improved precision in table structure and line spacing in the Markdown rendering of the document page. Though still experimental, it's expected to perform well in the defined DREX use cases [Document Retrieval and Extraction eXpert, experimental OCR]. 💻

‷ Model : prithivMLmods/DREX-062225-exp
‷ Demo : prithivMLmods/Doc-VLMs-OCR

‷ Collection : prithivMLmods/doc-vl-685839064a863e1cd23be3f1
‷ Multimodal Implementations : prithivMLmods/multimodal-implementations-67c9982ea04b39f0608badb0
‷ Git : https://github.com/PRITHIVSAKTHIUR/DREX.git

To learn more, visit the respective model cards.

prithivMLmods posted an update 27 days ago

The demo for SmolDocling / Nanonets OCR / Typhoon OCR / Monkey OCR explores the document OCR capabilities of various newly released multimodal VLMs in a single Space. If you're demoing OCR on long document images, kindly use the SmolDocling-256M preview [SmolDocling is back in the demo here]. 🤗

✦ Try the demo here : prithivMLmods/Multimodal-OCR2

‷ MonkeyOCR Recognition : echo840/MonkeyOCR
‷ Nanonets-OCR-s : nanonets/Nanonets-OCR-s
‷ SmolDocling-256M-preview : ds4sd/SmolDocling-256M-preview
‷ typhoon-ocr-7b : scb10x/typhoon-ocr-7b

‷ Multimodal Implementations : prithivMLmods/multimodal-implementations-67c9982ea04b39f0608badb0

‷ GitHub : https://github.com/PRITHIVSAKTHIUR/Multimodal-OCR2


Special thanks to Hugging Face for the community GPU grant. 🤗🚀



To learn more, visit the respective model cards.

prithivMLmods posted an update 29 days ago

The demo combines the MonkeyOCR Recognition model, which adopts a Structure-Recognition-Relation (SRR) triplet paradigm, Nanonets-OCR-s, a powerful state-of-the-art image-to-Markdown OCR model that goes far beyond traditional text extraction, and other experimental document OCR models into a single Space.

✦ Try the demo here : prithivMLmods/core-OCR
✦ Try Nanonets-OCR-s demo here : prithivMLmods/Multimodal-OCR

‷ MonkeyOCR Recognition : echo840/MonkeyOCR
‷ docscopeOCR-7B-050425-exp : prithivMLmods/docscopeOCR-7B-050425-exp
‷ coreOCR-7B-050325-preview : prithivMLmods/coreOCR-7B-050325-preview
‷ Nanonets-OCR-s : nanonets/Nanonets-OCR-s

‷ Multimodal Implementations : prithivMLmods/multimodal-implementations-67c9982ea04b39f0608badb0

Also included is a sample OCR test using the VisionOCR-3B-061125 and Qwen2-VL-OCR-2B-Instruct models.
‷ Blog : https://huggingface.co/blog/prithivMLmods/visionocr-3b-061125-vs-qwen2-vl-ocr-2b-instruct

To learn more, visit the respective model cards.

Tonic posted an update about 1 month ago

🙋🏻‍♂️ Hey there folks,

At every bio/med/chem meeting I go to, I always ask the same questions: "Why are you sharing a gdrive link with me for this?" and "Do you have any plans to publish your model weights and datasets on huggingface?" And today I finally got a good answer that explains everything:

Basically, there is some kind of government censorship on this (USA, but I'm sure others too): people are told they are not allowed to share, as it is considered a "data leak", which is illegal!

This is terrible! But the good news is that we can do something about it!

There is a "call for opinions and comments" from the NIH (USA) where we can make our opinions on this topic known: https://osp.od.nih.gov/comment-form-responsibly-developing-and-sharing-generative-artificial-intelligence-tools-using-nih-controlled-access-data/

Kindly consider dropping your opinion and thoughts about this censorship of science, and share this post, the link, or your thoughts widely.

Together, maybe we can start to share data and model weights appropriately and openly, in a good way 🙏🏻🚀

cc. @cyrilzakka

prithivMLmods posted an update about 2 months ago

OpenAI, Google, Hugging Face, and Anthropic have released guides and courses on building agents, prompting techniques, scaling AI use cases, and more. Below are 10+ minimalistic guides and courses that may help you in your progress. 📖

‷ Agents Companion : https://www.kaggle.com/whitepaper-agent-companion
‷ Building Effective Agents : https://www.anthropic.com/engineering/building-effective-agents
‷ Guide to building agents by OpenAI : https://cdn.openai.com/business-guides-and-resources/a-practical-guide-to-building-agents.pdf
‷ Prompt engineering by Google : https://www.kaggle.com/whitepaper-prompt-engineering
‷ Google: 601 real-world gen AI use cases : https://cloud.google.com/transform/101-real-world-generative-ai-use-cases-from-industry-leaders
‷ Prompt engineering by IBM : https://www.ibm.com/think/topics/prompt-engineering-guide
‷ Prompt Engineering by Anthropic : https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview
‷ Scaling AI use cases : https://cdn.openai.com/business-guides-and-resources/identifying-and-scaling-ai-use-cases.pdf
‷ Prompting Guide 101 : https://services.google.com/fh/files/misc/gemini-for-google-workspace-prompting-guide-101.pdf
‷ AI in the Enterprise by OpenAI : https://cdn.openai.com/business-guides-and-resources/ai-in-the-enterprise.pdf

By HF 🤗:
‷ AI Agents Course by Hugging Face : https://huggingface.co/learn/agents-course/unit0/introduction
‷ Smol-agents Docs : https://huggingface.co/docs/smolagents/en/tutorials/building_good_agents
‷ MCP Course by Hugging Face : https://huggingface.co/learn/mcp-course/unit0/introduction
‷ Other Courses (LLM, Computer Vision, Deep RL, Audio, Diffusion, Cookbooks, etc.) : https://huggingface.co/learn

prithivMLmods posted an update about 2 months ago

Just made a demo for Cosmos-Reason1, a physical AI model that understands physical common sense and generates appropriate embodied decisions in natural language through long chain-of-thought reasoning. Also added video understanding support to it. 🤗🚀

✦ Try the demo here : prithivMLmods/DocScope-R1

‷ Cosmos-Reason1-7B : nvidia/Cosmos-Reason1-7B
‷ docscopeOCR-7B-050425-exp : prithivMLmods/docscopeOCR-7B-050425-exp
‷ Captioner-Relaxed : Ertugrul/Qwen2.5-VL-7B-Captioner-Relaxed

‷ Multimodal Implementations : prithivMLmods/multimodal-implementations-67c9982ea04b39f0608badb0

‷ GitHub :
• https://github.com/PRITHIVSAKTHIUR/Cosmos-x-DocScope
• https://github.com/PRITHIVSAKTHIUR/Nvidia-Cosmos-Reason1-Demo

To learn more, visit the respective model cards.

AtAndDev posted an update about 2 months ago

deepseek-ai/DeepSeek-R1-0528

This is the end