Nemotron MLX
A collection by warshanks • updated 21 days ago
Nemotron MLX conversions I've done. A minimal loading sketch follows the model list.
mlx-community/AceReason-Nemotron-7B-4bit • Text Generation • 1B • Updated May 26 • 67
mlx-community/AceReason-Nemotron-7B-8bit • Text Generation • 2B • Updated May 26 • 11
mlx-community/AceReason-Nemotron-7B-bf16 • Text Generation • 8B • Updated May 26 • 9
mlx-community/AceReason-Nemotron-1.1-7B-4bit • Text Generation • 1B • Updated Jun 17 • 21
mlx-community/AceReason-Nemotron-1.1-7B-8bit • Text Generation • 8B • Updated Jun 17 • 12
mlx-community/AceReason-Nemotron-1.1-7B-bf16 • Text Generation • 8B • Updated Jun 17 • 10
mlx-community/AceReason-Nemotron-14B-4bit • Text Generation • 2B • Updated May 24 • 21
mlx-community/AceReason-Nemotron-14B-8bit • Text Generation • 4B • Updated May 24 • 13
mlx-community/AceReason-Nemotron-14B-bf16 • Text Generation • 15B • Updated May 24 • 9
mlx-community/Nemotron-Research-Reasoning-Qwen-1.5B-4bit • 0.3B • Updated Jun 2 • 14
mlx-community/Nemotron-Research-Reasoning-Qwen-1.5B-8bit • 0.5B • Updated Jun 2 • 41 • 2
mlx-community/Nemotron-Research-Reasoning-Qwen-1.5B-bf16 • 2B • Updated Jun 2 • 11
mlx-community/Llama-3.1-Nemotron-Nano-4B-v1.1-4bit • Text Generation • 0.7B • Updated Jun 4 • 43
mlx-community/Llama-3.1-Nemotron-Nano-4B-v1.1-8bit • Text Generation • 1B • Updated Jun 4 • 15
mlx-community/Llama-3.1-Nemotron-Nano-4B-v1.1-bf16 • Text Generation • 5B • Updated Jun 4 • 14
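These repos are standard MLX conversions, so any of them should load with the mlx-lm package on an Apple Silicon Mac. The snippet below is a minimal sketch, assuming mlx-lm is installed (`pip install mlx-lm`); the repo ID used is one entry from this collection and can be swapped for any other.

```python
# Minimal sketch: load one of the collection's MLX conversions with mlx-lm.
# Assumes `pip install mlx-lm` on an Apple Silicon Mac; the repo ID below is
# one entry from this collection and can be swapped for any other.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/AceReason-Nemotron-7B-4bit")

# Build a chat-formatted prompt via the tokenizer's chat template (if the
# converted repo ships one; otherwise pass a raw string as the prompt).
messages = [{"role": "user", "content": "Briefly introduce yourself."}]
prompt = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=False
)

# Generate a short completion and print it.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```

For a one-off test, mlx-lm's `mlx_lm.generate` command line entry point with `--model` and `--prompt` flags does the same without writing any code.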