Meridiani Fomalhaut

fomalhaut292

fomalhaut292's activity

reacted to zamal's post with 👍 3 days ago
DeepGit: Your GitHub Gold Digger! 💰🚀
Hey Hugging Face gang! Meet DeepGit—my open-source sidekick that rips through GitHub to snag repos that fit you. Done with dead-end searches? Me too. Built it with LangGraph and some dope tricks:
- Embeddings grab the good stuff (HF magic, baby!)
- Re-ranking nails the best picks
- Snoops docs, code, and buzz in one slick flow
- Drops a clean list of hidden gems 💎

Unearth that sneaky ML lib or Python gem—run `python app.py` or `langgraph dev` and boom! Peek it at https://github.com/zamalali/DeepGit. Fork it, tweak it, love it—Docker’s in, HF vibes are strong. Drop a 🌟 or a crazy idea—I’m pumped to jam with you all! 🪂
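A hedged quick-start based on the post: the repo URL and the two launch commands (`python app.py`, `langgraph dev`) come straight from it, but the `requirements.txt` filename is an assumption — check the repo's README for the exact setup steps.

```shell
# Grab DeepGit and run it locally (requirements.txt path is assumed)
git clone https://github.com/zamalali/DeepGit.git
cd DeepGit
pip install -r requirements.txt
python app.py       # run the app directly...
# langgraph dev     # ...or serve it through the LangGraph dev server
```

The post also mentions Docker support, so a containerized run may be available as an alternative to the local install.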
reacted to JLouisBiz's post with 👍 3 days ago
https://www.youtube.com/watch?v=AT0nJybzQ0w

This is a demonstration of how you can turn the shell into your audible assistant: you speak, and you get answers. It's very interesting. You can also bind it to a mouse button. Once you bind it to a mouse button, you can forget about everything else; all you need to do is click the button on the left side of the mouse. Not the main left or right buttons, I mean those extra side buttons that many mice have. For me they are button number 8 and button number 9.

In my opinion, everyone should upgrade their computer to have speech recognition, automatic typing of the transcript, and an interactive way to request information from a digital assistant.

I am using the xbindkeys program to bind LLM software to mouse buttons:

;; specify a mouse button
(xbindkey '("b:8") "rcd-llm-speech-single-input.sh")
(xbindkey '(alt "b:8") "rcd-llm-audible-assistant-single.sh")

(xbindkey '("b:9") "rcd-llm-correct-marked-text.sh")
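A sketch of how bindings like these are typically wired up. The script names are taken from the post; `~/.xbindkeysrc.scm` is xbindkeys' conventional Guile-syntax config path, and `-fg` is its standard flag for loading a Guile config file.

```shell
# Write the Scheme bindings from the post into xbindkeys' Guile config file
cat > "$HOME/.xbindkeysrc.scm" <<'EOF'
;; specify a mouse button
(xbindkey '("b:8") "rcd-llm-speech-single-input.sh")
(xbindkey '(alt "b:8") "rcd-llm-audible-assistant-single.sh")
(xbindkey '("b:9") "rcd-llm-correct-marked-text.sh")
EOF

# Then start xbindkeys against that config; if buttons 8/9 don't match your
# hardware, run `xev` and click the side buttons to discover their numbers.
# xbindkeys -fg "$HOME/.xbindkeysrc.scm"
```

When xbindkeys is built with Guile support, it should also pick up `~/.xbindkeysrc.scm` automatically on a plain `xbindkeys` invocation.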

reacted to etemiz's post with 👍 3 days ago
reacted to clem's post with 🔥 3 days ago
Before 2020, most of the AI field was open and collaborative. For me, that was the key factor that accelerated scientific progress and made the impossible possible—just look at the “T” in ChatGPT, which comes from the Transformer architecture openly shared by Google.

Then came the myth that AI was too dangerous to share, and companies started optimizing for short-term revenue. That led many major AI labs and researchers to stop sharing and collaborating.

With OAI and sama now saying they're willing to share open weights again, we have a real chance to return to a golden age of AI progress and democratization—powered by openness and collaboration, in the US and around the world.

This is incredibly exciting. Let’s go, open science and open-source AI!
reacted to Reality123b's post with 👍 3 days ago
ok, there must be a problem. HF charged me $0.12 for 3 inference requests to text models
reacted to ZhiyuanthePony's post with 🤗 3 days ago
🎉 Thrilled to share our #CVPR2025 accepted work:
Progressive Rendering Distillation: Adapting Stable Diffusion for Instant Text-to-Mesh Generation without 3D Data (2503.21694)

🔥 Key Innovations:
1️⃣ First to adapt SD for direct textured mesh generation (1-2s inference)
2️⃣ Novel teacher-student framework leveraging multi-view diffusion models ([MVDream](https://arxiv.org/abs/2308.16512) & [RichDreamer](https://arxiv.org/abs/2311.16918))
3️⃣ Parameter-efficient tuning - only +2.6% params over base SD
4️⃣ 3D data-free training liberates the model from dataset constraints

💡 Why it matters:
→ A novel 3D-data-free paradigm
→ Outperforms data-driven methods on creative concept generation
→ Unlocks the web-scale text corpus for 3D content creation

🌐 Project: https://theericma.github.io/TriplaneTurbo/
🎮 Demo: ZhiyuanthePony/TriplaneTurbo
💻 Code: https://github.com/theEricMa/TriplaneTurbo