Caleb Fahlgren (cfahlgren1)

cfahlgren1's activity

reacted to Muhammadreza's post with ❤️ 7 days ago
Hey guys.
This is my first post here on Hugging Face. I'm glad to be part of this amazing community!
reacted to merve's post with 🔥👍 12 days ago
The Hugging Face Hub Python library now comes with easy inference for vision language models! ✨

$ pip install huggingface_hub 🤗
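A minimal sketch of what that can look like. The model id, image URL, and question below are placeholders (not from the post); the messages follow the OpenAI-style chat format the client accepts:

```python
def build_vlm_messages(image_url, question):
    """OpenAI-style chat messages pairing an image URL with a text question."""
    return [{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": image_url}},
            {"type": "text", "text": question},
        ],
    }]

messages = build_vlm_messages("https://example.com/cat.png", "Describe this image.")

# Actual call (needs a network connection and an HF token):
# from huggingface_hub import InferenceClient
# client = InferenceClient("meta-llama/Llama-3.2-11B-Vision-Instruct")
# print(client.chat_completion(messages, max_tokens=100).choices[0].message.content)
```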
posted an update 12 days ago
If you're like me, you like to find up-and-coming datasets and Spaces before everyone else.

I made a trending repos Space, cfahlgren1/trending-repos, which shows:

- New up-and-coming Spaces from the last day
- New up-and-coming Datasets from the last two weeks

It's a really good way to find some new gems before they become popular. For example, someone is working on a way to dynamically create assets inside a video game here: gptcall/AI-Game-Creator
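A similar feed can be pulled straight from the Hub's public REST API; a hedged sketch, assuming the `sort`/`direction`/`limit` query parameters of the list endpoints:

```python
import urllib.parse

HUB_API = "https://huggingface.co/api"

def newest_repos_url(kind, limit=20):
    """URL listing the most recently created repos of a kind
    ("datasets", "spaces", or "models"), newest first."""
    params = urllib.parse.urlencode({"sort": "createdAt", "direction": -1, "limit": limit})
    return f"{HUB_API}/{kind}?{params}"

# Actually fetching requires a network connection:
# import json, urllib.request
# newest_spaces = json.load(urllib.request.urlopen(newest_repos_url("spaces")))
```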

reacted to reach-vb's post with 🚀 22 days ago
What a great day for Open Science! @AIatMeta released models, datasets, and code for many of its research artefacts! 🔥

1. Meta Segment Anything Model 2.1: An updated checkpoint with improved results on visually similar objects, small objects and occlusion handling. A new developer suite will be added to make it easier for developers to build with SAM 2.

Model checkpoints: reach-vb/sam-21-6702d40defe7611a8bafa881

2. Layer Skip: Inference code and fine-tuned checkpoints demonstrating a new method for enhancing LLM performance.

Model checkpoints: facebook/layerskip-666b25c50c8ae90e1965727a

3. SALSA: New code enables researchers to benchmark AI-based attacks to validate security for post-quantum cryptography.

Repo: https://github.com/facebookresearch/LWE-benchmarking

4. Meta Lingua: A lightweight and self-contained codebase designed to train language models at scale.

Repo: https://github.com/facebookresearch/lingua

5. Meta Open Materials: New open source models and the largest dataset to accelerate AI-driven discovery of new inorganic materials.

Model checkpoints: fairchem/OMAT24

6. MEXMA: A new research paper and code for our novel pre-trained cross-lingual sentence encoder covering 80 languages.

Model checkpoint: facebook/MEXMA

7. Self-Taught Evaluator: a new method for generating synthetic preference data to train reward models without relying on human annotations.

Model checkpoint: facebook/Self-taught-evaluator-llama3.1-70B

8. Meta Spirit LM: An open-source language model for seamless speech and text integration.

Repo: https://github.com/facebookresearch/spiritlm
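Any of the model checkpoints above can be mirrored locally with a single `huggingface_hub` call; a minimal sketch, using the MEXMA repo id from the list (cache location is the library default):

```python
def download_kwargs(repo_id, local_dir=None):
    """Keyword arguments for huggingface_hub.snapshot_download."""
    kwargs = {"repo_id": repo_id}
    if local_dir is not None:
        kwargs["local_dir"] = local_dir
    return kwargs

# Actual download (needs network access and disk space):
# from huggingface_hub import snapshot_download
# path = snapshot_download(**download_kwargs("facebook/MEXMA"))
# print(path)  # local directory containing the model files
```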
reacted to victor's post with 🤗🔥 30 days ago
NEW - Inference Playground

Maybe, like me, you've always wanted a super easy way to compare llama3.2-1B vs. llama3.2-3B, or the same model at different temperatures?

Trying and comparing warm Inference API models has never been easier!
Just go to https://hf.co/playground, set your token, and you're ready to go.
We'll keep improving it; feedback welcome 😊
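The same side-by-side comparison can be scripted against the Inference API; a hedged sketch (the model ids, prompt, and sampling values are illustrative, not from the post):

```python
def comparison_grid(models, temperatures):
    """Every (model, temperature) pair to try side by side."""
    return [(m, t) for m in models for t in temperatures]

grid = comparison_grid(
    ["meta-llama/Llama-3.2-1B-Instruct", "meta-llama/Llama-3.2-3B-Instruct"],
    [0.2, 0.9],
)

# Querying each combination (needs network access and an HF token):
# from huggingface_hub import InferenceClient
# client = InferenceClient()
# for model, temp in grid:
#     out = client.text_generation("Tell me a joke.", model=model,
#                                  temperature=temp, max_new_tokens=60)
#     print(model, temp, out)
```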
reacted to reach-vb's post with 👍🔥 about 1 month ago
The on-device AI framework ecosystem is blooming these days:

1. llama.cpp - All things Whisper, LLMs & VLMs - runs across Metal, CUDA, and other backends (AMD, NPU, etc.)
https://github.com/ggerganov/llama.cpp

2. MLC - Deploy LLMs across platforms especially WebGPU (fastest WebGPU LLM implementation out there)
https://github.com/mlc-ai/web-llm

3. MLX - Arguably the fastest general purpose framework (Mac only) - Supports all major Image Generation (Flux, SDXL, etc), Transcription (Whisper), LLMs
https://github.com/ml-explore/mlx-examples

4. Candle - Cross-platform general purpose framework written in Rust - wide coverage across model categories
https://github.com/huggingface/candle

Honorable mentions:

1. Transformers.js - JavaScript (WebGPU) implementation built on top of ONNX Runtime Web
https://github.com/xenova/transformers.js

2. Mistral rs - Rust implementation for LLMs & VLMs, built on top of Candle
https://github.com/EricLBuehler/mistral.rs

3. Ratchet - Cross platform, rust based WebGPU framework built for battle-tested deployments
https://github.com/huggingface/ratchet

4. Zml - Cross platform, Zig based ML framework
https://github.com/zml/zml

Looking forward to how the ecosystem will look a year from now. Quite bullish on the top 4 at the moment, but the open source ecosystem changes quite a bit! 🤗

Also, which frameworks did I miss?
replied to fdaudens's post about 1 month ago
reacted to fdaudens's post with 🚀 about 1 month ago
🚀 1,000,000 public models milestone achieved on Hugging Face! 🤯

This chart by @cfahlgren1 shows the explosive growth of open-source AI. It's not just about numbers - it's a thriving community combining cutting-edge ML with real-world applications. cfahlgren1/hub-stats

Can't wait to see what's next!
replied to their post about 2 months ago
posted an update about 2 months ago
Have you tried the new SQL Console yet?

Would love to hear any queries you've tried, or general feedback! If you haven't, go try it out and let us know 🤗

If you have some interesting queries feel free to share the URLs as well!
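The console's queries also run locally; a hedged sketch assuming DuckDB and its `hf://` dataset paths (the parquet glob below is a placeholder pattern over the hub-stats dataset, not a query from the post):

```python
def count_rows_query(dataset_path):
    """A starter query over a Hub dataset's files,
    e.g. "hf://datasets/user/name/*.parquet"."""
    return f"SELECT COUNT(*) AS n FROM '{dataset_path}'"

query = count_rows_query("hf://datasets/cfahlgren1/hub-stats/*.parquet")

# Running it locally (requires `pip install duckdb`; network on first read):
# import duckdb
# print(duckdb.sql(query).df())
```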
reacted to charlesdedampierre's post with 🔥 about 2 months ago
Please check out the Open Source AI Network: we mapped the top 500 HF users based on their followers' profiles.

The map can be found here: bunkalab/mapping_the_OS_community
reacted to m-ric's post with 🔥 about 2 months ago
๐—”๐—ฟ๐—ฐ๐—ฒ๐—ฒ ๐—ฟ๐—ฒ๐—น๐—ฒ๐—ฎ๐˜€๐—ฒ๐˜€ ๐—ฆ๐˜‚๐—ฝ๐—ฒ๐—ฟ๐—ก๐—ผ๐˜ƒ๐—ฎ, ๐—ฏ๐—ฒ๐˜๐˜๐—ฒ๐—ฟ ๐—ณ๐—ถ๐—ป๐—ฒ-๐˜๐˜‚๐—ป๐—ฒ ๐—ผ๐—ณ ๐—Ÿ๐—น๐—ฎ๐—บ๐—ฎ-๐Ÿฏ.๐Ÿญ-๐Ÿณ๐Ÿฌ๐—•!

2๏ธโƒฃ versions: 70B and 8B
๐Ÿง  Trained by distilling logits from Llama-3.1-405B
๐Ÿฅ Used a clever compression method to reduce dataset weight from 2.9 Petabytes down to 50GB (may share it in a paper)
โš™๏ธ Not all benchmarks are improved: GPQA and MUSR go down a slight bit
๐Ÿค— 8B weights are available on HF (not the 70B)

Read their blog post ๐Ÿ‘‰ https://blog.arcee.ai/arcee-supernova-training-pipeline-and-model-composition/
Model weights (8B) ๐Ÿ‘‰ arcee-ai/Llama-3.1-SuperNova-Lite
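Logit distillation of the kind described above typically minimizes the KL divergence between temperature-softened teacher and student distributions. A minimal sketch of that standard loss (this is the textbook formulation, not Arcee's actual pipeline):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distill_kl(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    The T^2 factor keeps gradient magnitudes comparable across
    temperatures, as is standard in knowledge distillation."""
    p = softmax([x / temperature for x in teacher_logits])
    q = softmax([x / temperature for x in student_logits])
    return temperature ** 2 * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

When student and teacher agree the loss is zero; any disagreement makes it strictly positive.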
replied to their post 2 months ago

Disclaimer: This is being funded by some free credits on Together, so it will only be up while supplies last 🤣 or until someone decides to sponsor it. But here are some cool examples:

posted an update 2 months ago
reacted to Xenova's post with 🔥🚀 3 months ago
I can't believe this... Phi-3.5-mini (3.8B) running in-browser at ~90 tokens/second on WebGPU w/ Transformers.js and ONNX Runtime Web! 🤯 Since everything runs 100% locally, no messages are sent to a server, a huge win for privacy!
- 🤗 Demo: webml-community/phi-3.5-webgpu
- 🧑‍💻 Source code: https://github.com/huggingface/transformers.js-examples/tree/main/phi-3.5-webgpu
reacted to victor's post with 👀 3 months ago
🙋 Calling all Hugging Face users! We want to hear from YOU!

What feature or improvement would make the biggest impact on Hugging Face?

Whether it's the Hub, better documentation, new integrations, or something completely different, we're all ears!

Your feedback shapes the future of Hugging Face. Drop your ideas in the comments below! 👇