prithivMLmods committed
Commit 7b7385a · verified · 1 Parent(s): b69cdb4

Update README.md

Files changed (1)
  1. README.md +45 -3
README.md CHANGED
@@ -1,3 +1,45 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ language:
+ - en
+ base_model:
+ - prithivMLmods/SmolLM2-Rethink-135M
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - text-generation-inference
+ - trl
+ ---
+ # **SmolLM2-Rethink-135M-GGUF**
+
+ > SmolLM2-Rethink-135M is an experimental lightweight model trained on the Celestia3-DeepSeek-R1-0528 reasoning dataset. Built on the SmolLM2-135M-Instruct architecture, it is optimized for reasoning, structured outputs, and efficient small-scale deployment. Despite its compact size (135M parameters), it demonstrates strong capabilities in logical deduction, conversational coherence, and lightweight inference tasks.
+
+ ## Model Files
+
+ | File Name | Size | Type | Description |
+ |-----------|------|------|-------------|
+ | SmolLM2-Rethink-135M.Q2_K.gguf | 88.2 MB | Model | Q2_K quantized model (smallest) |
+ | SmolLM2-Rethink-135M.Q3_K_S.gguf | 88.2 MB | Model | Q3_K_S quantized model |
+ | SmolLM2-Rethink-135M.Q3_K_M.gguf | 93.5 MB | Model | Q3_K_M quantized model |
+ | SmolLM2-Rethink-135M.Q3_K_L.gguf | 97.5 MB | Model | Q3_K_L quantized model |
+ | SmolLM2-Rethink-135M.Q4_K_S.gguf | 102 MB | Model | Q4_K_S quantized model |
+ | SmolLM2-Rethink-135M.Q4_K_M.gguf | 105 MB | Model | Q4_K_M quantized model |
+ | SmolLM2-Rethink-135M.Q5_K_S.gguf | 110 MB | Model | Q5_K_S quantized model |
+ | SmolLM2-Rethink-135M.Q5_K_M.gguf | 112 MB | Model | Q5_K_M quantized model |
+ | SmolLM2-Rethink-135M.Q6_K.gguf | 138 MB | Model | Q6_K quantized model |
+ | SmolLM2-Rethink-135M.Q8_0.gguf | 145 MB | Model | Q8_0 quantized model |
+ | SmolLM2-Rethink-135M.BF16.gguf | 271 MB | Model | BF16 precision model |
+ | SmolLM2-Rethink-135M.F16.gguf | 271 MB | Model | F16 precision model |
+ | SmolLM2-Rethink-135M.F32.gguf | 540 MB | Model | F32 full-precision model (largest) |
+ | .gitattributes | 2.4 kB | Config | Git LFS configuration |
+ | config.json | 29 B | Config | Model configuration |
+ | README.md | 31 B | Documentation | Repository documentation |
+
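+ The GGUF files above load directly in llama.cpp and its bindings. Below is a minimal sketch using llama-cpp-python; the repo id is assumed from this card's title and the filename is taken from the table above, so verify both against the actual repository before running.
+
+ ```python
+ # Minimal sketch: load the Q4_K_M quant with llama-cpp-python and run one chat turn.
+ # Assumes `pip install llama-cpp-python huggingface-hub`; the repo id below is an
+ # assumption based on this card's title, not confirmed by the source README.
+ from llama_cpp import Llama
+
+ llm = Llama.from_pretrained(
+     repo_id="prithivMLmods/SmolLM2-Rethink-135M-GGUF",  # assumed repo id
+     filename="SmolLM2-Rethink-135M.Q4_K_M.gguf",        # balanced size/quality pick
+     n_ctx=2048,  # a modest context window is plenty for a 135M model
+ )
+
+ out = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "Briefly explain why the sky is blue."}],
+     max_tokens=128,
+ )
+ print(out["choices"][0]["message"]["content"])
+ ```
+
+ Q4_K_M is a common default; drop to Q2_K or Q3_K for the smallest footprint, or use Q8_0/F16 when quality matters more than size.
+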
+ ## Quants Usage
+
+ (Sorted by size, which does not necessarily reflect quality; IQ-quants are often preferable to similarly sized non-IQ quants.)
+
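+ Once you have picked a quant level per the note above, you can fetch just that file rather than cloning the whole repo. A sketch using huggingface_hub, with the repo id assumed as above:
+
+ ```python
+ # Sketch: download a single quant file from the Hub with huggingface_hub.
+ # The repo id is an assumption based on this card's title; adjust if it differs.
+ from huggingface_hub import hf_hub_download
+
+ path = hf_hub_download(
+     repo_id="prithivMLmods/SmolLM2-Rethink-135M-GGUF",  # assumed repo id
+     filename="SmolLM2-Rethink-135M.Q8_0.gguf",          # near-lossless, 145 MB
+ )
+ print(path)  # local cache path; pass it to llama.cpp via the -m flag
+ ```
+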
+ Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):
+
+ ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)