Active filters: vllm
- matatonic/Mistral-Small-3.2-24B-Instruct-2506-6.5bpw-h8-exl2 • Image-Text-to-Text • 25 downloads • 3 likes
- RedHatAI/Mistral-Small-3.2-24B-Instruct-2506-FP8 • Image-Text-to-Text • 435 downloads • 4 likes
- lmstudio-community/Devstral-Small-2507-MLX-4bit • Text Generation • 24B • 298k downloads • 2 likes
- unsloth/Magistral-Small-2507-GGUF • 24B • 14k downloads • 12 likes
- unsloth/Magistral-Small-2507 • 24B • 126 downloads • 2 likes
- unsloth/Magistral-Small-2507-unsloth-bnb-4bit • 24B • 622 downloads • 2 likes
- unsloth/Magistral-Small-2507-bnb-4bit • 24B • 32 downloads • 1 like
- 2imi9/Qwen3-1.7B-NVFP4A16 • Text Generation • 1B • 31 downloads • 1 like
- nightmedia/gpt-oss-20b-q5-hi-mlx • Text Generation • 21B • 555 downloads • 1 like
- NexVeridian/gpt-oss-120b-5bit • Text Generation • 117B • 452 downloads • 1 like
- prayanksai/gpt-oss-120b-MLX-6bit • Text Generation • 117B • 1.17k downloads • 1 like
- huynguyendbs/gpt-oss-20b-mlx • Text Generation • 21B • 741 downloads • 1 like
- mlx-community/gpt-oss-120b-4bit • Text Generation • 117B • 301 downloads • 1 like
- lokinfey/gpt-oss-20B-mlx-metal-int4 • Text Generation • 21B • 197 downloads • 1 like
- cpatonn/gpt-oss-20b-BF16 • Text Generation • 21B • 8 downloads • 1 like
- Inferless/deciLM-7B-GPTQ • Text Generation • 4 downloads • 1 like
- Inferless/SOLAR-10.7B-Instruct-v1.0-GPTQ • Text Generation • 5 downloads • 2 likes
- Inferless/Mixtral-8x7B-v0.1-int8-GPTQ • Text Generation • 4 downloads • 2 likes
- RedHatAI/Meta-Llama-3-8B-Instruct-FP8 • Text Generation • 8B • 4.85k downloads • 23 likes
- RedHatAI/Mixtral-8x7B-Instruct-v0.1-AutoFP8 • Text Generation • 47B • 61 downloads • 3 likes
- RedHatAI/Meta-Llama-3-8B-Instruct-FP8-KV • Text Generation • 8B • 5.69k downloads • 8 likes
- RedHatAI/Meta-Llama-3-70B-Instruct-FP8 • Text Generation • 71B • 1.87k downloads • 13 likes
- RedHatAI/Qwen2-72B-Instruct-FP8 • Text Generation • 73B • 1.46k downloads • 15 likes
- mradermacher/Mistral-7B-Instruct-v0.3-GGUF • 7B • 682 downloads • 2 likes
- mradermacher/Mistral-7B-Instruct-v0.3-i1-GGUF • 7B • 275 downloads • 1 like
- RedHatAI/Mixtral-8x22B-Instruct-v0.1-AutoFP8 • Text Generation • 141B • 8 downloads • 3 likes
- RedHatAI/Qwen2-0.5B-Instruct-FP8 • Text Generation • 0.5B • 1.4k downloads • 3 likes
- RedHatAI/Qwen2-1.5B-Instruct-FP8 • Text Generation • 2B • 5.83k downloads
- RedHatAI/Qwen2-7B-Instruct-FP8 • Text Generation • 8B • 16.9k downloads • 2 likes
- nm-testing/SparseLlama-3-8B-pruned_50.2of4-FP8 • Text Generation • 8B • 2 downloads
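Many of the entries above are quantized checkpoints (FP8, GPTQ, GGUF, MLX, bnb-4bit, exl2) published for lighter-weight serving. As a minimal sketch, assuming vLLM is installed and the chosen checkpoint is in a format vLLM supports (for example one of the FP8 RedHatAI builds), a model from this listing could be run with vLLM's offline inference API; the prompt and sampling settings below are illustrative only.

```python
# Minimal offline-inference sketch with vLLM (assumes `pip install vllm` and a
# GPU with enough memory for the chosen checkpoint).
from vllm import LLM, SamplingParams

# One of the FP8 checkpoints from the listing above; any other vLLM-compatible
# model ID could be substituted here.
llm = LLM(model="RedHatAI/Meta-Llama-3-8B-Instruct-FP8")

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Summarize what FP8 quantization does."], params)

for out in outputs:
    print(out.outputs[0].text)
```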