Dominik Weckmüller (do-me)
47 followers · 34 following
Links: https://do-me.github.io/SemanticFinder/ · DomeGIS · do-me · domegis.bsky.social
AI & ML interests
Making AI more accessible. Working on semantic search, embeddings and Geospatial AI applications. https://geo.rocks
Recent Activity

Posted an update about 12 hours ago:
Wrote a quick one-liner to run Qwen3-Next-80B-A3B-Instruct-8bit with mlx-lm and uv on macOS:

```bash
curl -sL https://gist.githubusercontent.com/do-me/34516f7f4d8cc701da823089b09a3359/raw/5f3b7e92d3e5199fd1d4f21f817a7de4a8af0aec/prompt.py | uv run --with git+https://github.com/ml-explore/mlx-lm.git python - --prompt "What is the meaning of life?"
```

... or, if you prefer the more secure two-liner version (so you can check the script before executing it):

```bash
curl -sL https://gist.githubusercontent.com/do-me/34516f7f4d8cc701da823089b09a3359/raw/5f3b7e92d3e5199fd1d4f21f817a7de4a8af0aec/prompt.py -o prompt.py
uv run --with git+https://github.com/ml-explore/mlx-lm.git python prompt.py --prompt "What is the meaning of life?"
```

I get around 45-50 tokens per second on an M3 Max, pretty happy with the generation speed! Stats from the video:

```
Prompt: 15 tokens, 80.972 tokens-per-sec
Generation: 256 tokens, 45.061 tokens-per-sec
Peak memory: 84.834 GB
```
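The gist itself isn't reproduced here, but a minimal sketch of what such a prompt script could look like is shown below, using mlx-lm's `load`/`generate` API and an argparse `--prompt` flag. The argument names and defaults are assumptions for illustration, not the contents of the actual gist:

```python
# Minimal sketch of a prompt.py-style script for mlx-lm (assumed, not the actual gist).
import argparse

from mlx_lm import load, generate


def main():
    parser = argparse.ArgumentParser(description="Run a single prompt through an MLX model.")
    parser.add_argument("--prompt", required=True, help="Prompt text to send to the model.")
    parser.add_argument(
        "--model",
        default="mlx-community/Qwen3-Next-80B-A3B-Instruct-8bit",  # model referenced in the post
        help="Hugging Face repo of the MLX-converted model.",
    )
    parser.add_argument("--max-tokens", type=int, default=256, help="Maximum tokens to generate.")
    args = parser.parse_args()

    # Download (if needed) and load the quantized model and its tokenizer.
    model, tokenizer = load(args.model)

    # Wrap the prompt in the model's chat template so the instruct model responds as expected.
    messages = [{"role": "user", "content": args.prompt}]
    prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

    # verbose=True prints tokens-per-second and peak-memory stats like those quoted above.
    generate(model, tokenizer, prompt=prompt, max_tokens=args.max_tokens, verbose=True)


if __name__ == "__main__":
    main()
```

Note that the one-liner installs mlx-lm from the Git main branch rather than PyPI, presumably because qwen3_next support is recent; older releases raise the "ValueError: Model type qwen3_next not supported" error mentioned in the activity entry below.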
New activity about 12 hours ago on mlx-community/Qwen3-Next-80B-A3B-Instruct-8bit:
ValueError: Model type qwen3_next not supported
Liked a Space 15 days ago:
apple/fastvlm-webgpu
do-me's models (5, sorted by most recently updated)
do-me/Eurovoc_English • Updated Feb 9
do-me/twhin-bert-base • Fill-Mask • Updated Mar 1, 2024 • 4
do-me/jina-embeddings-v2-base-en • Feature Extraction • Updated Oct 29, 2023 • 4
do-me/jina-embeddings-v2-small-en • Feature Extraction • Updated Oct 27, 2023 • 3
do-me/test • Fill-Mask • Updated Oct 27, 2023 • 4