Active filters: 4bit
ModelCloud/gemma-2-9b-gptq-4bit • Text Generation • 3B • Updated • 2
legraphista/Phi-3-mini-4k-instruct-update2024_07_03-IMat-GGUF • Text Generation • 4B • Updated • 189
legraphista/internlm2_5-7b-chat-IMat-GGUF • Text Generation • 8B • Updated • 270
legraphista/internlm2_5-7b-chat-1m-IMat-GGUF • Text Generation • 8B • Updated • 244 • 1
legraphista/codegeex4-all-9b-IMat-GGUF • Text Generation • 9B • Updated • 312 • 8
ModelCloud/DeepSeek-V2-Lite-gptq-4bit • Text Generation • 2B • Updated • 16
ModelCloud/internlm-2.5-7b-gptq-4bit • Feature Extraction • 2B • Updated • 2
ModelCloud/internlm-2.5-7b-chat-gptq-4bit • Feature Extraction • 2B • Updated • 2
ModelCloud/internlm-2.5-7b-chat-1m-gptq-4bit • Feature Extraction • 2B • Updated • 2
legraphista/NuminaMath-7B-TIR-IMat-GGUF • Text Generation • 7B • Updated • 62 • 1
legraphista/mathstral-7B-v0.1-IMat-GGUF • Text Generation • 7B • Updated • 178
Xelta/miniXelta_01 • Text Generation • Updated • 2
ModelCloud/Mistral-Nemo-Instruct-2407-gptq-4bit • Text Generation • 3B • Updated • 73 • 4
legraphista/Athene-70B-IMat-GGUF • Text Generation • 71B • Updated • 270 • 3
ModelCloud/gemma-2-27b-it-gptq-4bit • Text Generation • 6B • Updated • 104 • 12
legraphista/Mistral-Nemo-Instruct-2407-IMat-GGUF • Text Generation • 12B • Updated • 258 • 2
legraphista/Meta-Llama-3.1-8B-Instruct-IMat-GGUF • Text Generation • 8B • Updated • 474 • 5
ModelCloud/Meta-Llama-3.1-8B-Instruct-gptq-4bit • Text Generation • 2B • Updated • 1.57k • 4
ModelCloud/Meta-Llama-3.1-8B-gptq-4bit • Text Generation • 2B • Updated • 7
legraphista/Meta-Llama-3.1-70B-Instruct-IMat-GGUF • Text Generation • 71B • Updated • 438 • 11
ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit • Text Generation • 11B • Updated • 8 • 4
legraphista/Llama-Guard-3-8B-IMat-GGUF • Text Generation • 8B • Updated • 252 • 4
legraphista/Mistral-Large-Instruct-2407-IMat-GGUF • Text Generation • 123B • Updated • 195 • 29
jhangmez/CHATPRG-v0.2.1-Meta-Llama-3.1-8B-bnb-4bit-lora-adapters • Text Generation • Updated
jhangmez/CHATPRG-v0.2.1-Meta-Llama-3.1-8B-bnb-4bit-q4_k_m • Text Generation • 8B • Updated • 22 • 1
ModelCloud/Mistral-Large-Instruct-2407-gptq-4bit • Text Generation • 17B • Updated • 4 • 1
legraphista/Meta-Llama-3.1-8B-Instruct-abliterated-IMat-GGUF • Text Generation • 8B • Updated • 114 • 1
ModelCloud/Meta-Llama-3.1-405B-Instruct-gptq-4bit • Text Generation • 59B • Updated • 2 • 2
legraphista/gemma-2-2b-it-IMat-GGUF • Text Generation • 3B • Updated • 837 • 2
legraphista/gemma-2-2b-IMat-GGUF • Text Generation • 3B • Updated • 236 • 1
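The repositories above fall into a few 4-bit families that their names signal directly: GPTQ quantizations ("-gptq-4bit"), llama.cpp-style IMatrix GGUF conversions ("-IMat-GGUF", plus the "q4_k_m" export), and bitsandbytes 4-bit artifacts ("bnb-4bit"). As a minimal sketch, the hypothetical helper `quant_format` below classifies a repo ID by this naming convention alone; it is an assumption drawn from the names in this list, not a Hugging Face Hub API.

```python
# Heuristic classifier for the 4-bit repos listed above, based purely on
# naming conventions observed in this listing (not an official Hub API).

def quant_format(repo_id: str) -> str:
    """Guess the quantization family implied by a repository name."""
    name = repo_id.lower()
    # GGUF markers are checked first so a GGUF export of a bnb-4bit
    # fine-tune (e.g. the "...-bnb-4bit-q4_k_m" repo) counts as GGUF.
    if "gguf" in name or "q4_k_m" in name:
        return "GGUF"          # llama.cpp-compatible quantized files
    if "gptq" in name:
        return "GPTQ"          # GPTQ 4-bit weight quantization
    if "bnb-4bit" in name:
        return "bitsandbytes"  # bitsandbytes 4-bit (e.g. LoRA adapter repos)
    return "unknown"

# A few repo IDs taken verbatim from the listing:
repos = [
    "ModelCloud/gemma-2-9b-gptq-4bit",
    "legraphista/gemma-2-2b-it-IMat-GGUF",
    "jhangmez/CHATPRG-v0.2.1-Meta-Llama-3.1-8B-bnb-4bit-lora-adapters",
]
for r in repos:
    print(f"{r}: {quant_format(r)}")
```

The check order matters: "q4_k_m" and "bnb-4bit" can co-occur in one repo name, and the GGUF marker describes the actual file format in that case, so it takes precedence.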