
Benhao Tang

benhaotang

AI & ML interests

Master's student in theoretical particle physics at Universität Heidelberg, actively exploring the possibilities of integrating AI into future physics research.

Recent Activity

Organizations

None yet

benhaotang's activity

reacted to dhruv3006's post with πŸš€ 1 day ago
Lumier – Run macOS & Linux VMs in Docker

Lumier is an open-source tool for running macOS virtual machines in Docker containers on Apple Silicon Macs.

When building virtualized environments for AI agents, we needed a reliable way to package and distribute macOS VMs. Inspired by projects like dockur/macos that made macOS running in Docker possible, we wanted to create something similar but optimized for Apple Silicon.

The existing solutions either didn't support M-series chips or relied on KVM/Intel emulation, which was slow and cumbersome. We realized we could leverage Apple's Virtualization Framework to create a much better experience.

Lumier takes a different approach: it uses Docker as a delivery mechanism (not for isolation) and connects to a lightweight virtualization service (lume) running on your Mac.

Lumier is 100% open-source under MIT license and part of C/ua.

GitHub: https://github.com/trycua/cua/tree/main/libs/lumier
Join the discussion here: https://discord.gg/fqrYJvNr4a

reacted to Jaward's post with 🧠 4 days ago
finally, a course that makes diffusion math much easier to grasp, well done πŸ‘ https://diffusion.csail.mit.edu/
reacted to abidlabs's post with πŸ”₯ 13 days ago
HOW TO ADD MCP SUPPORT TO ANY πŸ€— SPACE

Gradio now supports MCP! If you want to convert an existing Space, like this one (hexgrad/Kokoro-TTS), so that you can use it with Claude Desktop / Cursor / Cline / TinyAgents / or any LLM that supports MCP, here's all you need to do:

1. Duplicate the Space (in the Settings Tab)
2. Upgrade the Gradio sdk_version to 5.28 (in the README.md)
3. Set mcp_server=True in launch()
4. (Optionally) add docstrings to the function so that the LLM knows how to use it, like this:

def generate(text, speed=1):
    """
    Convert text to speech audio.

    Parameters:
        text (str): The input text to be converted to speech.
        speed (float, optional): Playback speed of the generated speech.
    """
    ...

That's it! Now your LLM will be able to talk to you 🀯
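Putting steps 3 and 4 together, a minimal Space file might look like the sketch below. The audio-generation body is a placeholder and the `gr.Interface` inputs/outputs are assumptions (a real TTS Space would return audio); the `launch()` call is gated behind an env flag so the file can be imported without starting a server:

```python
import os


def generate(text, speed=1):
    """
    Convert text to speech audio.

    Parameters:
        text (str): The input text to be converted to speech.
        speed (float, optional): Playback speed of the generated speech.
    """
    # Placeholder body -- a real Space would run its TTS model here.
    return f"<audio for {text!r} at {speed}x speed>"


if os.environ.get("RUN_SPACE"):
    import gradio as gr

    # mcp_server=True (Gradio >= 5.28) exposes generate() as an MCP tool,
    # using the docstring above to describe it to the LLM.
    demo = gr.Interface(fn=generate, inputs=["text", "number"], outputs="text")
    demo.launch(mcp_server=True)
```

The docstring matters: it is what the MCP client reads to learn the tool's purpose and parameters.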
reacted to fdaudens's post with πŸ‘ 15 days ago
Want to know which AI models are least likely to hallucinate β€” and how to keep yours from spiking hallucinations by 20%?

A new benchmark called Phare, by Giskard, tested leading models across multiple languages, revealing three key findings:

1️⃣ Popular models aren't necessarily factual. Some models ranking highest in user satisfaction benchmarks like LMArena are actually more prone to hallucination.

2️⃣ The way you ask matters - a lot. When users present claims confidently ("My teacher said..."), models are 15% less likely to correct misinformation vs. neutral framing ("I heard...").

3️⃣ Telling models to "be concise" can increase hallucination by up to 20%.

What's also cool is that the full dataset is public - use it to test your own models or dive deeper into the results! H/t @davidberenstein1957 for the link.

- Study: https://www.giskard.ai/knowledge/good-answers-are-not-necessarily-factual-answers-an-analysis-of-hallucination-in-leading-llms
- Leaderboard: https://phare.giskard.ai/
- Dataset: giskardai/phare
reacted to Reality123b's post with πŸ‘ about 1 month ago
ok, there must be a problem. HF charged me $0.12 for 3 inference requests to text models
replied to Reality123b's post about 1 month ago

look at someone being charged $300 here: https://old.reddit.com/r/huggingface/comments/1jkyj2a/huggingface_just_billed_me_300_on_top_of_the_9/
I'm not saying that making almost 400k requests isn't crazy, but before the price change that would have been covered by the quota (20k × 30 per month); I guess the OP had no idea about the price change and carried on with his usage.
In my own experience, many HF Inference API text models are at least 10x the price since last week; e.g. command-r-plus is now even more expensive than calling it directly from Cohere.
To run my own numbers (I only use text models): in Feb I made 210 requests for $0.26, in Mar 60 for $0.26, and then after the price increase another 20 for $0.14... almost $0.01 per request now, mind you most of my requests are below 1k tokens
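The per-request arithmetic in the reply above can be checked directly (figures taken from the post; the rates are approximate):

```python
# Reported HF Inference API usage from the reply above.
feb_cost, feb_requests = 0.26, 210  # February, before the price change
mar_cost, mar_requests = 0.26, 60   # March, before the price change
new_cost, new_requests = 0.14, 20   # March, after the price change

feb_rate = feb_cost / feb_requests  # ~$0.0012 per request
mar_rate = mar_cost / mar_requests  # ~$0.0043 per request
new_rate = new_cost / new_requests  # $0.007 per request, i.e. "almost $0.01"

print(round(feb_rate, 4), round(mar_rate, 4), round(new_rate, 4))
```

So the post-change rate works out to roughly 6x the February rate, consistent with the claimed 10x-ish increase once token counts per request are factored in.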

reacted to Keltezaa's post with πŸ”₯ about 2 months ago
Dear HF Staff and pro Users.

Why did you remove the "Regen" feature from ZeroGPU?
Is this an error or intended?

I am now limited to 13 images per 24 hrs using my Space.
When I upgraded to Pro, it was exclusively for the 5x usage quota and the faster regen.

The reason I spend my hard earned money on your site was for this feature.
This is totally unacceptable.

########
Other Pro Users please reply and tag others
IF YOU AGREE or DISAGREE.
########
@Always-cheating, @anonymous111110987654321, @Arshili, @bedspirit, @blackedguy, @John6666, @DavidBaloches, @E-07, @f-14, @mindfulpeoples, @multimodalart
replied to julien-c's post about 2 months ago