shortened_name (string, lengths 1-96) | count (int64, 1-170) |
---|---|
model | 170 |
Meta-Llama-3.1-8B-Instruct-Q4_K_M-GGUF | 55 |
Meta-Llama-3-8B-Q4_K_M-GGUF | 50 |
llama-3-8b-Instruct-bnb-4bit-aiaustin-demo | 44 |
Meta-Llama-3-8B-Instruct-GGUF | 39 |
Meta-Llama-3-8B-Instruct-Q4_K_M-GGUF | 36 |
Mistral-7B-Instruct-v0.2-GGUF | 33 |
phi-2-GGUF | 33 |
Meta-Llama-3.1-8B-Instruct-GGUF | 31 |
test | 29 |
Phi-3-mini-4k-instruct-Q4_K_M-GGUF | 25 |
Mistral-7B-Instruct-v0.3-GGUF | 24 |
qwen1.5-llm | 24 |
TinyLlama-1.1B-Chat-v1.0-GGUF | 23 |
qwen1.5-llm-quantized | 23 |
Meta-Llama-3.1-8B-Q4_K_M-GGUF | 22 |
Llama-3.2-3B-Instruct-GGUF | 21 |
Llama-3.1-8B-bnb-4bit-wenyanwen | 21 |
Meta-Llama-3-70B-Instruct-GGUF | 20 |
gemma-2-2b-it-Q4_K_M-GGUF | 20 |
Phi-3-mini-128k-instruct-Q4_K_M-GGUF | 19 |
Meta-Llama-3.1-8B-Instruct-Q8_0-GGUF | 19 |
gemma-2-2b-it-GGUF | 18 |
Mistral-Nemo-Instruct-2407-GGUF | 17 |
Qwen2-7B-Instruct-GGUF | 17 |
gemma-2-9b-it-GGUF | 17 |
Phi-3-mini-4k-instruct-GGUF | 17 |
lora_model | 17 |
llama-3-8b-chat-doctor | 17 |
Llama-3.2-1B-Instruct-GGUF | 16 |
gemma-2b-it-GGUF | 15 |
Llama-3.1-8B-bnb-4bit-python | 15 |
Phi-3.5-mini-instruct-GGUF | 15 |
Llama-3.2-1B-Instruct-Q4_K_M-GGUF | 15 |
EvolCodeLlama-7b-GGUF | 15 |
vicuna-13b-v1.5-gguf | 15 |
TinyLlama-1.1B-Chat-v0.3-GGUF | 15 |
mistral-7b-v3 | 15 |
Mistral-7B-Instruct-v0.3-Q4_K_M-GGUF | 15 |
Indic-gemma-2b-finetuned-sft-Navarasa-2.1.gguf | 15 |
Yi-1.5-9B-Chat-GGUF | 14 |
Codestral-22B-v0.1-GGUF | 14 |
Qwen2-0.5B-Instruct-GGUF | 14 |
gemma-2b-GGUF | 14 |
Qwen2-1.5B-Instruct-GGUF | 14 |
Qwen2.5-7B-Instruct-GGUF | 13 |
Meta-Llama-3-8B-Instruct-Q8_0-GGUF | 13 |
Phi-3-medium-128k-instruct-Q4_K_M-GGUF | 13 |
Qwen2-7B-Instruct-Q4_K_M-GGUF | 13 |
Indic-gemma-2b-finetuned-sft-Navarasa-2.0.gguf | 13 |
Phi-3.5-mini-instruct-Q4_K_M-GGUF | 13 |
Hermes-3-Llama-3.1-8B-GGUF | 12 |
Llama-3.2-3B-Instruct-Q8_0-GGUF | 12 |
Meta-Llama-3.1-70B-Instruct-GGUF | 12 |
Mistral-7B-Instruct-v0.1-GGUF | 12 |
Phi-3-mini-128k-instruct-GGUF | 12 |
Yi-Coder-9B-Chat-GGUF | 12 |
Qwen2.5-0.5B-Instruct-GGUF | 12 |
FineLlama-3.1-8B-GGUF | 12 |
Meta-Llama-3-8B-GGUF | 12 |
Phi-3-mini-128k-instruct-Q8_0-GGUF | 12 |
Yi-1.5-6B-Chat-GGUF | 12 |
Phi-3-medium-128k-instruct-GGUF | 12 |
gemma-2-9b-it-Q4_K_M-GGUF | 12 |
Mistral-Nemo-Instruct-2407-Q4_K_M-GGUF | 12 |
Qwen2.5-3B-Instruct-GGUF | 11 |
Mixtral-8x7B-Instruct-v0.1-GGUF | 11 |
Llama-3-8B-Instruct-Gradient-1048k-GGUF | 11 |
Hermes-2-Pro-Llama-3-8B-GGUF | 11 |
TinyLlama-1.1B-Chat-v1.0-Q4_K_M-GGUF | 11 |
mathstral-7B-v0.1-GGUF | 11 |
Qwen2.5-1.5B-Instruct-GGUF | 11 |
Qwen2.5-14B-Instruct-GGUF | 11 |
NeuralBeagle14-7B-GGUF | 11 |
Starling-LM-7B-beta-GGUF | 11 |
Phi-3-medium-4k-instruct-Q4_K_M-GGUF | 11 |
Mistral-Nemo-Instruct-2407-Q8_0-GGUF | 11 |
gemma-2-2b-it-Q8_0-GGUF | 11 |
llama3.1-Q4_K_M-gguf | 11 |
Llama-3.2-3B-Instruct-Q4_K_M-GGUF | 10 |
gemma-7b-it-GGUF | 10 |
aya-23-8B-GGUF | 10 |
DeepSeek-Coder-V2-Lite-Instruct-GGUF | 10 |
Reflection-Llama-3.1-70B-GGUF | 10 |
Qwen2.5-32B-Instruct-GGUF | 10 |
Qwen2.5-Coder-7B-Instruct-GGUF | 10 |
Meta-Llama-3-8B-Q8_0-GGUF | 10 |
Qwen2-0.5B-Instruct-Q4_K_M-GGUF | 10 |
gemma-2-27b-it-GGUF | 9 |
Qwen2.5-72B-Instruct-GGUF | 9 |
Llama3-ChatQA-1.5-8B-GGUF | 9 |
Mistral-Large-Instruct-2407-GGUF | 9 |
Meta-Llama-3.1-8B-Q8_0-GGUF | 9 |
Yi-Coder-1.5B-Chat-GGUF | 9 |
gemma-7b-GGUF | 9 |
gemma-1.1-7b-it-GGUF | 9 |
gemma-1.1-2b-it-GGUF | 9 |
gpt2-Q4_K_M-GGUF | 9 |
Phi-3-mini-4k-instruct-Q8_0-GGUF | 9 |
Yi-1.5-34B-Chat-GGUF | 9 |
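Below is a minimal sketch of how one might load and inspect a dataset with this schema (a `shortened_name` string column and a `count` int64 column) using the Hugging Face `datasets` library. The repo id is a placeholder, since the actual dataset name is not shown in the preview above.

```python
# Sketch only: the repo id below is hypothetical; substitute the real dataset id.
from datasets import load_dataset

dataset = load_dataset("user/dataset-name", split="train")  # hypothetical repo id

# Inspect the schema: shortened_name (string) and count (int64)
print(dataset.features)

# Print the ten most frequent shortened model names
top_rows = sorted(dataset, key=lambda row: row["count"], reverse=True)[:10]
for row in top_rows:
    print(f'{row["shortened_name"]}: {row["count"]}')
```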