AI & ML interests
Finetune Diffusion, Train Diffusion
Post
2548
PSA for anyone using Nymbo/Nymbo_Theme or Nymbo/Nymbo_Theme_5 in a Gradio space ~
Both of these themes have been updated to fix some of the long-standing inconsistencies that have existed since the transition to Gradio v5. Textboxes are no longer bright green and in-line code is readable now! Both themes are now visually identical across versions.
If your space is already using one of these themes, you just need to restart your space to get the latest version. No code changes needed.
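For reference, applying one of these Hub-hosted themes in a Space is a one-liner; here is a minimal sketch (your actual app layout will differ):
import gradio as gr

# Reference the theme by its Hub ID; Gradio pulls the latest published version,
# so restarting the Space is enough to pick up theme updates.
with gr.Blocks(theme="Nymbo/Nymbo_Theme") as demo:
    gr.Textbox(label="Prompt")
    gr.Markdown("Check that `in-line code` renders legibly.")

demo.launch()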

ehristoforu posted an update 4 months ago
Post
3430
Introducing our first standalone model – FluentlyLM Prinum
Introducing the first standalone model from Project Fluently LM! We worked on it for several months, used different approaches and eventually found the optimal one.
General characteristics:
- Model type: Causal language models (QwenForCausalLM, LM Transformer)
- Number of parameters: 32.5B
- Number of parameters (non-embedding): 31.0B
- Number of layers: 64
- Context: 131,072 tokens
- Language(s) (NLP): English, French, Spanish, Russian, Chinese, Japanese, Persian (officially supported)
- License: MIT
Creation strategy:
The basis of the strategy is shown in Pic. 2.
We used Axolotl & Unsloth for SFT fine-tuning with PEFT LoRA (rank=64, alpha=64) and Mergekit for SLERP and TIES merges.
Evaluation:
🏆 12th place in the Open LLM Leaderboard ( open-llm-leaderboard/open_llm_leaderboard) (21.02.2025)
Detailed results and comparisons are presented in Pic. 3.
Links:
- Model: fluently-lm/FluentlyLM-Prinum
- GGUF version: mradermacher/FluentlyLM-Prinum-GGUF
- Demo on ZeroGPU: ehristoforu/FluentlyLM-Prinum-demo
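For anyone who wants to try it locally, here is a minimal 🤗 Transformers sketch (not an official snippet; it assumes the repo ships a Qwen-style chat template and that you have enough memory for a 32.5B model):
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fluently-lm/FluentlyLM-Prinum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Build a chat prompt and generate a short reply.
messages = [{"role": "user", "content": "Explain SLERP model merging in two sentences."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))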

ameerazam08 posted an update 5 months ago
Post
4719
🔥 THE WAIT IS OVER... HAI-SER IS HERE! 🔥
Yo fam, this ain't just another AI drop— this is the FUTURE of emotional intelligence! 🚀
Introducing HAI-SER, powered by Structured Emotional Reasoning (SER), the next-level AI that doesn’t just understand your words—it feels you, analyzes your emotions, and helps you navigate life’s toughest moments. 💡
💥 What makes HAI-SER a game-changer?
🔹 Emotional Vibe Check – Gets the mood, energy, and what’s really going on 🎭
🔹 Mind-State Analysis – Breaks down your thoughts, beliefs, and patterns 🤯
🔹 Root Cause Deep-Dive – Unpacks the WHY behind your emotions 💡
🔹 Impact Check – Sees how it’s affecting your life and mental health 💔
🔹 Safety Check – Prioritizes your well-being and crisis management 🚨
🔹 Healing Game Plan – Custom strategies to help you bounce back 💪
🔹 Growth Potential – Turns struggles into opportunities for self-improvement 📈
🔹 How to Approach – Teaches you and others how to communicate and heal 🤝
🔹 Personalized Response – Not just generic advice—real talk, tailored to YOU 💯
No more robotic AI responses. No more surface-level advice. HAI-SER gets deep, analyzing emotions with precision and giving real, actionable support.
This ain’t just AI—this is your digital therapist, life coach, and hype squad all in one. Whether it’s mental health, career struggles, relationships, or personal growth, HAI-SER has your back.
🚀 The future of emotionally intelligent AI is HERE.
Are you ready? 🔥💯
HelpingAI/HAI-SER
Post
1081
Hey everyone 🤗!
Check out this new Virtual Try Off model (based on SD1.5): 1aurent/TryOffAnyone
This model isn't as accurate as others (e.g. xiaozaa/cat-try-off-flux based on FLUX.1) but it sure is fast!

ehristoforu posted an update 6 months ago
Post
4264
✒️ Ultraset - all-in-one dataset for SFT training in Alpaca format.
fluently-sets/ultraset
❓ Ultraset is a comprehensive dataset for training Large Language Models (LLMs) using the SFT (Supervised Fine-Tuning) method. This dataset consists of over 785 thousand entries in eight languages: English, Russian, French, Italian, Spanish, German, Chinese, and Korean.
🤯 Ultraset solves the problem faced by users when selecting an appropriate dataset for LLM training. It combines various types of data required to enhance the model's skills in areas such as text writing and editing, mathematics, coding, biology, medicine, finance, and multilingualism.
🤗 For effective use of the dataset, it is recommended to utilize only the "instruction," "input," and "output" columns and train the model for 1-3 epochs. The dataset does not include DPO or Instruct data, making it suitable for training various types of LLM models.
❇️ Ultraset is an excellent tool to improve your language model's skills in diverse knowledge areas.
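A minimal 🤗 Datasets sketch of the recommended usage (the split name is an assumption; check the dataset card for the exact schema):
from datasets import load_dataset

# Load Ultraset and keep only the Alpaca-style columns recommended above.
ds = load_dataset("fluently-sets/ultraset", split="train")
ds = ds.select_columns(["instruction", "input", "output"])
print(ds[0])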
Post
2164
🔥 BIG ANNOUNCEMENT: THE HELPINGAI API IS LIVE! 🔥
Yo, the moment you’ve all been waiting for is here! 🚀 The HelpingAI API is now LIVE and ready to level up your projects! 🔥 We’re bringing that next-level AI goodness straight to your fingertips. 💯
No more waiting— it’s time to build something epic! 🙌
From now on, you can integrate our cutting-edge AI models into your own applications, workflows, and everything in between. Whether you’re a developer, a creator, or just someone looking to make some serious moves, this is your chance to unlock the full potential of emotional intelligence and adaptive AI.
Check out the docs 🔥 and let’s get to work! 🚀
👉 Check out the docs and start building (https://helpingai.co/docs)
👉 Visit the HelpingAI website (https://helpingai.co/)
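No code is included in the announcement, so the sketch below is purely hypothetical: the endpoint URL, payload fields, and model name are placeholders, and the docs linked above are the source of truth.
import os
import requests

# Hypothetical request shape; consult https://helpingai.co/docs for the real endpoint, auth, and model names.
api_key = os.environ["HELPINGAI_API_KEY"]
response = requests.post(
    "https://api.helpingai.co/v1/chat/completions",  # placeholder URL
    headers={"Authorization": f"Bearer {api_key}"},
    json={"model": "helpingai", "messages": [{"role": "user", "content": "I had a rough day."}]},
    timeout=30,
)
response.raise_for_status()
print(response.json())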
Post
11024
Realtime Whisper Large v3 Turbo Demo:
It transcribes audio in about 0.3 seconds.
KingNish/Realtime-whisper-large-v3-turbo
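The Space's real-time streaming code isn't shown in the post, but running the underlying checkpoint offline with 🤗 Transformers looks roughly like this:
import torch
from transformers import pipeline

# Offline (non-realtime) transcription with the same checkpoint the demo is built on.
asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3-turbo",
    torch_dtype=torch.float16,
    device="cuda:0",  # or "cpu"
)
print(asr("sample.wav")["text"])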
Post
8285
Exciting news! Introducing our super-fast AI video assistant, currently in beta, with a minimum latency of under 500 ms and an average latency of just 600 ms.
DEMO LINK:
KingNish/Live-Video-Chat
Post
3739
A super good and fast image inpainting demo is here.
It's super cool and realistic.
Demo by @OzzyGT (Must try):
OzzyGT/diffusers-fast-inpaint
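The Space's exact pipeline and speed-ups aren't reproduced here; as a generic baseline, an inpainting call with diffusers (the checkpoint choice is just an example) looks like this:
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

# Generic diffusers inpainting baseline; the demo above may use a different checkpoint and optimizations.
pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1", torch_dtype=torch.float16
).to("cuda")
image = load_image("room.png")
mask = load_image("room_mask.png")  # white pixels mark the region to repaint
result = pipe(prompt="a modern leather sofa", image=image, mask_image=mask).images[0]
result.save("inpainted.png")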
Post
3611
Mistral Nemo is better than many models at first-grader-level reasoning.
Post
3936
I am experimenting with Flux and trying to push it to its limits without training (as I am GPU-poor 😅).
I found some flaws in the pipelines, which I resolved, and I can now generate images of approximately the same quality as Flux Schnell at 4 steps in just 1 step.
Demo Link:
KingNish/Realtime-FLUX
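The pipeline fixes themselves aren't described in this post, so the sketch below only shows the baseline being compared against: stock FLUX.1-schnell via diffusers with the step count turned down.
import torch
from diffusers import FluxPipeline

# Stock FLUX.1-schnell for comparison; the demo's modified pipeline is not public in this post.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16).to("cuda")
image = pipe(
    "a cozy cabin in a snowy forest at dusk",
    num_inference_steps=1,  # Schnell is distilled for very low step counts
    guidance_scale=0.0,
    max_sequence_length=256,
).images[0]
image.save("flux_1step.png")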
Post
1929
I am excited to announce a major speed update in Voicee, a superfast voice assistant.
It has now achieved a minimum latency of under 250 ms, while its average latency is about 500 ms.
KingNish/Voicee
This became possible thanks to the newly launched @sambanovasystems cloud.
You can also use your own API key to get the fastest speed.
You can get one here: https://cloud.sambanova.ai/apis
For optimal performance, use Google Chrome.
Please try Voicee and share your valuable feedback to help me further improve its performance and usability.
Thank you!
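If you want to plug in your own key, SambaNova Cloud exposes an OpenAI-compatible API (to the best of my knowledge; the base URL and model name below are assumptions to verify against their docs):
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["SAMBANOVA_API_KEY"],
    base_url="https://api.sambanova.ai/v1",  # verify in the SambaNova Cloud docs
)
response = client.chat.completions.create(
    model="Meta-Llama-3.1-8B-Instruct",  # example model id; pick one available on your account
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)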
Post
1393
Hey everyone 🤗!
We (finegrain) have created some custom ComfyUI nodes to use our refiners micro-framework inside comfy! 🎉
We only support our new Box Segmenter at the moment, but we're thinking of adding more nodes since there seems to be a demand for it. We leverage the new (beta) Comfy Registry to host our nodes. They are available at: https://registry.comfy.org/publishers/finegrain/nodes/comfyui-refiners. You can install them by running:
comfy node registry-install comfyui-refiners
Or by unzipping the archive you can download by clicking "Download Latest" into your custom_nodes comfy folder.
We are eager to hear your feedback and suggestions for new nodes and how you'll use them! 🙏
Post
4450
Hey everyone 🤗!
Check out this awesome new model for object segmentation!
finegrain/finegrain-object-cutter.
We (finegrain) trained this new model in partnership with Nfinite, using some of their synthetic data; the resulting model is incredibly accurate 🚀.
It’s all open source under the MIT license ( finegrain/finegrain-box-segmenter), complete with a test set tailored for e-commerce ( finegrain/finegrain-product-masks-lite). Have fun experimenting with it!
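To call the Space programmatically rather than through the UI, gradio_client can discover its endpoints; the predict call is left commented out because its exact signature should be taken from view_api(), not from this sketch:
from gradio_client import Client, handle_file

client = Client("finegrain/finegrain-object-cutter")
print(client.view_api())  # lists the Space's actual endpoints and their parameters

# Illustrative only; adjust api_name and arguments to whatever view_api() reports.
# result = client.predict(handle_file("product_photo.jpg"), api_name="/predict")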
Post
3605
Introducing Voicee, a superfast voice assistant.
KingNish/Voicee
It achieves a minimum latency of under 500 ms, while its average latency is about 700 ms.
It works best in Google Chrome.
Please try it and share your feedback.
Thank you. 🤗

ehristoforu updated a Space 10 months ago
Post
3274
Introducing HelpingAI2-9B, an emotionally intelligent LLM.
Model Link : https://huggingface.co/OEvortex/HelpingAI2-9B
Demo Link: Abhaykoul/HelpingAI2
This model is part of the innovative HelpingAI series and it stands out for its ability to engage users with emotional understanding.
Key Features:
-----------------
* It scores 95.89 on EQ-Bench, higher than all top-notch LLMs, reflecting advanced emotional recognition.
* It gives responses in an empathetic and supportive manner.
Must try our demo: Abhaykoul/HelpingAI2
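A quick local test with 🤗 Transformers might look like this (a generic sketch, assuming a recent transformers version with chat-aware text-generation pipelines; see the model card for recommended settings):
from transformers import pipeline

# Generic chat-style generation; not an official snippet from the model card.
chat = pipeline("text-generation", model="OEvortex/HelpingAI2-9B", torch_dtype="auto", device_map="auto")
messages = [{"role": "user", "content": "I'm feeling overwhelmed at work lately."}]
reply = chat(messages, max_new_tokens=200)[0]["generated_text"][-1]["content"]
print(reply)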