[FEEDBACK] Local apps

#31
by kramp - opened

Please share your feedback about the Local Apps integration in model pages.

On compatible models, you'll be offered the option to launch supported local apps:

In your settings, you can configure the list of apps and their order:

The list of available local apps is defined in https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/src/local-apps.ts
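
For anyone curious what an entry in that file roughly looks like, here is an illustrative sketch (field names are approximate and the app/command shown is hypothetical; the linked local-apps.ts is the authoritative definition):

```ts
// Illustrative sketch only; see local-apps.ts in @huggingface/tasks for the real LocalApp type.
type ModelInfo = { id: string; tags?: string[] }; // simplified stand-in for the real model data

const exampleApp = {
  prettyLabel: "My Local App", // label shown on the model page
  docsUrl: "https://example.com/docs", // hypothetical docs URL
  mainTask: "text-generation", // primary task the app targets
  // decides whether the app is shown for a given model (this check is illustrative)
  displayOnModelPage: (model: ModelInfo) => model.tags?.includes("gguf") ?? false,
  // builds the command users copy to launch the model locally (command is hypothetical)
  snippet: (model: ModelInfo) => `myapp run hf.co/${model.id}`,
};
```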

I think the tensor-core FP16 FLOPS should be used for GPUs that support them. I note that the V100 is counted at far less than its theoretical 125 TFLOPS, listed e.g. here: https://images.nvidia.com/content/technologies/volta/pdf/tesla-volta-v100-datasheet-letter-fnl-web.pdf
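
To make the gap concrete, a quick back-of-the-envelope from the datasheet numbers (assuming the current estimate uses non-tensor FP16, which is my read of it):

```ts
// V100 figures from the linked NVIDIA datasheet.
const v100Fp32Tflops = 15.7; // non-tensor FP32
const v100Fp16Tflops = 2 * v100Fp32Tflops; // ≈ 31.4 TFLOPS non-tensor FP16
const v100TensorFp16Tflops = 125; // tensor-core FP16
// Counting only non-tensor FP16 understates the V100 by roughly 4x:
console.log(v100TensorFp16Tflops / v100Fp16Tflops); // ≈ 3.98
```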

Hey! Have you guys heard of LangFlow? It is a neat solution for developing AI-powered apps as well!

The GPU list is missing the RTX A4000 (16GB)

PR: https://github.com/huggingface/huggingface.js/pull/817

Would be nice to get Ollama integration.

I suggest adding Ollama as a local app to run LLMs.

I use GPT4All and it is not listed here.

The GPU list is missing the RTX 5070 Ti Mobile (for laptops).

And the CPU list is missing the Intel Core Ultra 9 275HX and other Core Ultra chips.

@pcuenq hey! Could you take a look at our PR adding Atomic Chat as a local app provider? https://github.com/huggingface/huggingface.js/pull/2076
Clem recently followed our project; we would really appreciate a review.

The GPU list is also missing the new Blackwell RTX Pro series:
https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/rtx-pro-5000/

@pcuenq Hello! Atomic Chat has been merged into huggingface.js main (PR here: https://github.com/huggingface/huggingface.js/pull/2076), but we still don't appear in the list of local applications on Hugging Face. What do we need to do for Atomic Chat to appear in the Hugging Face interface as a local app?

Hi! I'm Danny, CPO of Atomic Chat. We were merged in huggingface/huggingface.js#2076 (https://github.com/huggingface/huggingface.js/pull/2076) and our app has been present in @huggingface/tasks since v0.20.24. However, Atomic Chat doesn't appear on the Settings → Local Apps page (huggingface.co/settings/local-apps). It seems this page uses a separate registry on the Hub frontend side. Could you add Atomic Chat to it? cc @Wauplin @julien-c

For Local Apps, one useful direction could be making the “handoff” metadata explicit: what model artifact is being opened, what local app/version receives it, and what command/config was used.

That matters once local apps move beyond simple model loading into agentic workflows. If a local app starts an agent that can call tools or write files, users will eventually want to answer: which local app launched this run, with which model, and what did it do afterwards?

The feature already feels like a natural bridge between HF models and local developer workflows. Adding a little more launch/run metadata would make it easier to debug and trust those workflows.
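
As one purely hypothetical illustration of what such a handoff record could contain (none of these fields exist today; this is not an existing Hub or local-apps API):

```ts
// Hypothetical shape for explicit handoff metadata, sketched for discussion only.
interface LocalAppHandoff {
  modelId: string; // Hub repo the launch originated from, e.g. "org/model"
  revision?: string; // commit/revision of the artifact that was opened
  filename?: string; // specific file, e.g. a particular .gguf quantization
  app: { name: string; version?: string }; // which local app received the handoff
  command: string; // the exact command/config used to launch
  launchedAt: string; // ISO timestamp, useful when auditing agentic runs
}

const example: LocalAppHandoff = {
  modelId: "org/some-model",
  filename: "model-q4_k_m.gguf",
  app: { name: "example-local-app", version: "1.2.3" },
  command: "example-local-app run hf.co/org/some-model",
  launchedAt: new Date().toISOString(),
};
```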

The GPU list is missing the Intel Arc Pro B70
