matrixportal committed 07c2990 (verified; parent: c213e33)

Update README with 8 models (20250409-164721)

Files changed (1): README.md (added, +47 lines)

---
tags:
- gguf
- llama.cpp
- quantized
- text-generation
license: other
base_model: TheDrummer/Gemmasutra-Small-4B-v1
datasets:
- Gemmasutra-Small-4B-v1
---

# Gemmasutra-Small-4B-v1 GGUF Quantized Models

## Model Information
- **Base Model:** [TheDrummer/Gemmasutra-Small-4B-v1](https://huggingface.co/TheDrummer/Gemmasutra-Small-4B-v1)
- **Quantized by:** [matrixportal](https://huggingface.co/matrixportal)
- **Format:** GGUF (for llama.cpp compatible tools)
- **Quantized on:** 2025-04-09

## Recommended Downloads
- **Q4_K_M:** [`gemmasutra-small-4b-v1.q4_k_m.gguf`](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q4_k_m.gguf)
- **Q4_0:** [`gemmasutra-small-4b-v1.q4_0.gguf`](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q4_0.gguf)
- **Q8_0:** [`gemmasutra-small-4b-v1.q8_0.gguf`](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q8_0.gguf)

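If you prefer to fetch a file from a script rather than through the links above, here is a minimal download sketch using the `huggingface_hub` Python package. The package is an optional extra (`pip install huggingface_hub`), not something this repository requires:

```python
# Minimal download sketch (assumes `pip install huggingface_hub`).
# Repo ID and filename are taken from the links above; swap in any file from the table below.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="matrixportal/Gemmasutra-Small-4B-v1-GGUF",
    filename="gemmasutra-small-4b-v1.q4_k_m.gguf",
)
print(f"Downloaded to: {model_path}")
```
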
## All Available Quantizations
| File | Download |
|------|----------|
| `gemmasutra-small-4b-v1.f16.gguf` | [Download](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.f16.gguf) |
| `gemmasutra-small-4b-v1.q2_k.gguf` | [Download](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q2_k.gguf) |
| `gemmasutra-small-4b-v1.q3_k_m.gguf` | [Download](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q3_k_m.gguf) |
| `gemmasutra-small-4b-v1.q4_0.gguf` | [Download](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q4_0.gguf) |
| `gemmasutra-small-4b-v1.q4_k_m.gguf` | [Download](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q4_k_m.gguf) |
| `gemmasutra-small-4b-v1.q5_k_m.gguf` | [Download](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q5_k_m.gguf) |
| `gemmasutra-small-4b-v1.q6_k.gguf` | [Download](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q6_k.gguf) |
| `gemmasutra-small-4b-v1.q8_0.gguf` | [Download](https://huggingface.co/matrixportal/Gemmasutra-Small-4B-v1-GGUF/resolve/main/gemmasutra-small-4b-v1.q8_0.gguf) |

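To pick a quantization programmatically instead of copying a link from the table, a short sketch with the same `huggingface_hub` package can list every GGUF file in the repository:

```python
# List the GGUF files in this repository (assumes `pip install huggingface_hub`).
from huggingface_hub import list_repo_files

for name in list_repo_files("matrixportal/Gemmasutra-Small-4B-v1-GGUF"):
    if name.endswith(".gguf"):
        print(name)
```
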
## Usage Instructions
1. Download the desired GGUF file.
2. Use it with a compatible tool:
   - [llama.cpp](https://github.com/ggerganov/llama.cpp)
   - [Ollama](https://ollama.ai/)
   - [LM Studio](https://lmstudio.ai/)
   - [GPT4All](https://gpt4all.io)

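As one concrete illustration of step 2, here is a minimal inference sketch using the `llama-cpp-python` bindings for llama.cpp. This is only one of the compatible tools listed above; the package choice and the file path are assumptions (install with `pip install llama-cpp-python`, and place the Q4_K_M file in the current directory or adjust the path):

```python
# Minimal local-inference sketch with llama-cpp-python (one of several compatible tools).
# Assumes `pip install llama-cpp-python` and a locally downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(
    model_path="gemmasutra-small-4b-v1.q4_k_m.gguf",  # path to the downloaded file
    n_ctx=4096,  # context window; lower this if you are short on RAM
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in two sentences."}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```

The same GGUF file also works unchanged in Ollama (via a `Modelfile` whose `FROM` line points at the local file) and in the LM Studio or GPT4All GUIs.
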
💡 **Tip:** Q4_K_M offers the best balance of quality and file size for most use cases.