BuiDoan

AI & ML interests

None yet

Recent Activity

liked a model 19 days ago
google/gemma-3n-E4B-it-litert-preview
liked a model 19 days ago
nari-labs/Dia-1.6B
liked a model 19 days ago
sarvamai/sarvam-m

Organizations

Gradio-Blocks-Party

BuiDoan's activity

upvoted an article about 1 month ago
liked a model about 1 month ago
reacted to seawolf2357's post with 👀 about 1 month ago
Samsung Hacking Incident: Samsung Electronics' Official Hugging Face Account Compromised
Samsung Electronics' official Hugging Face account has been hacked. Approximately 17 hours ago, two new large language models (LLMs) were registered under the account:

https://huggingface.co/Samsung/MuTokenZero2-32B
https://huggingface.co/Samsung/MythoMax-L2-13B

The model descriptions contain absurd and false claims, such as being trained on "1 million W200 GPUs," hardware that doesn't even exist.
Moreover, Hugging Face community members who have noticed the issue are repeatedly posting that Samsung Electronics' account has been compromised.
There is concern about further damage if users, trusting Samsung's reputation and unaware of the hack, download the LLMs released under the account.
Samsung Electronics appears to be unaware of the situation, as no visible countermeasures, such as a password reset, have been taken yet.
Source: https://discord.gg/openfreeai
updated a collection about 1 month ago
reacted to Kseniase's post with 👍 about 1 month ago
11 Alignment and Optimization Algorithms for LLMs

To align a model's behavior with desired objectives (helpfulness, accuracy, reasoning, safety, and user preferences), we rely on specialized algorithms. Much of a model's usefulness comes from these post-training optimization methods.

Here are the main optimization algorithms (both classic and new) in one place:

1. PPO (Proximal Policy Optimization) -> Proximal Policy Optimization Algorithms (1707.06347)
Clips the probability ratio to prevent the new policy from diverging too far from the old one, keeping training stable.
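The clipping idea can be sketched in a few lines (a minimal, framework-free illustration; the function and argument names are mine, not from the paper's code):

```python
import math

def ppo_clip_objective(new_logprob, old_logprob, advantage, eps=0.2):
    """Per-token PPO clipped surrogate objective (minimal sketch).

    The probability ratio is clipped to [1 - eps, 1 + eps], so one
    gradient step cannot move the policy too far from the policy
    that collected the data.
    """
    ratio = math.exp(new_logprob - old_logprob)
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    # pessimistic bound: take the smaller of the two surrogate terms
    return min(ratio * advantage, clipped * advantage)
```

With eps=0.2, a ratio of e ≈ 2.72 contributes only as much as a ratio of 1.2 would, which is exactly what caps the update size.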

2. DPO (Direct Preference Optimization) -> Direct Preference Optimization: Your Language Model is Secretly a Reward Model (2305.18290)
A non-RL method in which the LM acts as an implicit reward model. A simple loss raises the preferred answer's probability over the less preferred one.
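For a single preference pair, the DPO loss reduces to a logistic loss on log-probability margins (a minimal sketch; the argument names are illustrative):

```python
import math

def dpo_loss(policy_logp_chosen, policy_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair (minimal sketch).

    The policy is rewarded for assigning the chosen answer a higher
    log-probability than the rejected one, measured relative to a
    frozen reference model; beta controls how strongly.
    """
    margin = beta * ((policy_logp_chosen - ref_logp_chosen)
                     - (policy_logp_rejected - ref_logp_rejected))
    # -log(sigmoid(margin)): shrinks as the chosen answer is favored more
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When the policy matches the reference (margin 0) the loss is log 2, and it falls toward 0 as the chosen answer gains probability mass over the rejected one.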

3. GRPO (Group Relative Policy Optimization) -> DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models (2402.03300)
An RL method that compares a group of model outputs for the same input and updates the policy based on their relative rankings. It doesn't need a separate critic model.
Its latest application is Flow-GRPO, which adds online RL to flow matching models -> Flow-GRPO: Training Flow Matching Models via Online RL (2505.05470)
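The critic-free trick is that advantages are computed by standardizing rewards within the group of samples for one prompt (a minimal sketch, not DeepSeek's code):

```python
def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages as used in GRPO (minimal sketch).

    All completions sampled for the same prompt form one group; each
    reward is standardized against the group's mean and std, so no
    learned value function (critic) is needed.
    """
    n = len(rewards)
    mean = sum(rewards) / n
    std = (sum((r - mean) ** 2 for r in rewards) / n) ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]
```

Outputs that beat the group average get positive advantage, the rest negative, and the advantages sum to roughly zero by construction.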

4. DAPO (Decoupled Clip and Dynamic sAmpling Policy Optimization) -> DAPO: An Open-Source LLM Reinforcement Learning System at Scale (2503.14476)
Decouples the clipping bounds for flexibility, introducing 4 key techniques: clip-higher (to maintain exploration), dynamic sampling (to ensure gradient updates), token-level loss (to balance learning across long outputs), and overlong reward shaping (to handle long, truncated answers)
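The clip-higher technique only changes PPO's clipping range from symmetric to asymmetric (a minimal sketch; the eps defaults here are illustrative, not authoritative):

```python
def dapo_clip_objective(ratio, advantage, eps_low=0.2, eps_high=0.28):
    """DAPO-style decoupled ('clip-higher') surrogate (minimal sketch).

    Unlike PPO's symmetric clip, the upper bound is looser than the
    lower one, leaving more room to boost low-probability tokens and
    so preserving exploration.
    """
    clipped = max(1.0 - eps_low, min(ratio, 1.0 + eps_high))
    return min(ratio * advantage, clipped * advantage)
```

A ratio of 1.25 with positive advantage passes through unclipped here, whereas symmetric PPO with eps=0.2 would cap it at 1.2.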

5. Supervised Fine-Tuning (SFT) -> Training language models to follow instructions with human feedback (2203.02155)
Often the first post-pretraining step. A model is fine-tuned on a dataset of high-quality human-written input-output pairs to directly teach desired behaviors
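Under the hood this is plain maximum likelihood on the human-written targets (a trivial sketch of the per-example objective):

```python
import math

def sft_loss(target_token_logprobs):
    """SFT objective for one example (minimal sketch): mean negative
    log-likelihood of the human-written target tokens under the model.
    """
    return -sum(target_token_logprobs) / len(target_token_logprobs)
```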

More in the comments 👇

If you liked it, also subscribe to the Turing Post: https://www.turingpost.com/subscribe
upvoted an article about 1 month ago
Yes, Transformers are Effective for Time Series Forecasting (+ Autoformer)

By elisim and 2 others
reacted to wolfram's post with 🚀 about 1 month ago
Finally finished my extensive **Qwen 3 evaluations** across a range of formats and quantisations, focusing on **MMLU-Pro** (Computer Science).

A few take-aways stood out - especially for those interested in local deployment and performance trade-offs:

1️⃣ **Qwen3-235B-A22B** (via Fireworks API) tops the table at **83.66%** with ~55 tok/s.
2️⃣ But the **30B-A3B Unsloth** quant delivered **82.20%** while running locally at ~45 tok/s and with zero API spend.
3️⃣ The same Unsloth build is ~5x faster than Qwen's **Qwen3-32B**, which scores **82.20%** as well yet crawls at <10 tok/s.
4️⃣ On Apple silicon, the **30B MLX** port hits **79.51%** while sustaining ~64 tok/s - arguably today's best speed/quality trade-off for Mac setups.
5️⃣ The **0.6B** micro-model races above 180 tok/s but tops out at **37.56%** - that's why it's not even on the graph (50% performance cut-off).

All local runs were done with LM Studio on an M4 MacBook Pro, using Qwen's official recommended settings.

**Conclusion:** Quantised 30B models now get you ~98% of frontier-class accuracy - at a fraction of the latency, cost, and energy. For most local RAG or agent workloads, they're not just good enough - they're the new default.

Well done, Qwen - you really whipped the llama's ass! And to OpenAI: for your upcoming open model, please make it MoE, with toggleable reasoning, and release it in many sizes. *This* is the future!
upvoted an article about 1 month ago
What is MoE 2.0? Update Your Knowledge about Mixture-of-experts

By Kseniase and 1 other
reacted to ginipick's post with ❤️ about 1 month ago
🔮 Mistral Perflexity AI - Local LLM Space with Web Search Capabilities 🌐
Hello AI enthusiasts! Today I'm excited to introduce my special Hugging Face space! 🚀

ginigen/Mistral-Perflexity

✨ Key Features

Powerful Model: Using Private-BitSix-Mistral-Small-3.1-24B-Instruct-2503, optimized through 6-bit quantization to run smoothly on local 4090 GPUs! 💪
Web Search Integration: Leveraging the Brave Search API to provide real-time web search results for user queries! 🔍
Customizable Responses: Shape AI personality and response format through system messages ⚙️
Multilingual Support: Perfect handling of both English and Korean! 🇺🇸🇰🇷

🛠️ Technical Highlights

GGUF Format: Optimized quantized model with excellent memory efficiency
Flash Attention: Applied optimization technology for faster inference speeds
8K Context Window: Capable of handling lengthy conversations and complex queries
Streaming Responses: Watch text being generated in real-time

💡 Use Cases

Complex Q&A requiring real-time information
Programming assistance and code generation
Multilingual content creation and translation
Summarization and explanation of learning materials

🔧 Customization
Adjust various parameters like Temperature, Top-p, Top-k, and repetition penalty to control response creativity and accuracy. Lower temperature (0.1-0.5) produces more deterministic responses, while higher values (0.7-1.0) generate more creative outputs!
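The interaction between temperature and top-p can be sketched with stdlib Python (an illustrative toy, not the Space's actual sampling code; the logits dict is a made-up input):

```python
import math
import random

def sample_token(logits, temperature=0.7, top_p=0.9):
    """Temperature + nucleus (top-p) sampling (minimal sketch)."""
    # temperature < 1 sharpens the distribution, > 1 flattens it
    scaled = {tok: lg / temperature for tok, lg in logits.items()}
    zmax = max(scaled.values())
    exps = {tok: math.exp(lg - zmax) for tok, lg in scaled.items()}
    total = sum(exps.values())
    ranked = sorted(((tok, e / total) for tok, e in exps.items()),
                    key=lambda kv: kv[1], reverse=True)
    # keep the smallest set of top tokens whose cumulative mass >= top_p
    kept, cum = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cum += p
        if cum >= top_p:
            break
    # sample among the kept tokens, renormalized to their total mass
    r = random.uniform(0.0, cum)
    for tok, p in kept:
        r -= p
        if r <= 0.0:
            return tok
    return kept[-1][0]
```

With a low temperature and a tight top-p, one dominant token captures the whole nucleus and sampling becomes effectively deterministic; raising either parameter widens the pool of candidates.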

🌟 Try It Yourself!
This space is available for anyone to use for free. Experience the power of a robust local LLM combined with web search capabilities! Your feedback is always welcome! 😊