prithivMLmods committed on
Commit
03eba8a
·
verified ·
1 Parent(s): 30bb38f

Update README.md

Files changed (1): README.md +40 -3
README.md CHANGED
@@ -1,3 +1,40 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ language:
+ - en
+ base_model:
+ - prithivMLmods/Capricornus-MoT-1.7B-Supreme1
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - text-generation-inference
+ - math
+ - code
+ - science
+ ---
+
+ # **Capricornus-MoT-1.7B-Supreme1-GGUF**
+
+ > **Capricornus-MoT-1.7B-Supreme1** is a **high-precision, multi-domain expert model** fine-tuned from **Qwen3-1.7B**, built for **code generation**, **mathematical reasoning**, **scientific analysis**, and **open technical inference**. Trained on the **Mixture of Thoughts (MoT)** dataset, which combines expert clusters in **code, math, and science**, and further enhanced with an **Open Code Reasoning** dataset, it delivers strong symbolic and structured outputs across a wide range of STEM and reasoning domains.
+
+ ## Model Files
+
+ | File Name | Size | Format | Description |
+ |--------------------------------------------------|---------|---------------|------------------------------------------|
+ | Capricornus-MoT-1.7B-Supreme1.BF16.gguf | 3.45 GB | GGUF (BF16) | BFloat16 precision model file |
+ | Capricornus-MoT-1.7B-Supreme1.F16.gguf | 3.45 GB | GGUF (F16) | Float16 precision model file |
+ | Capricornus-MoT-1.7B-Supreme1.F32.gguf | 6.89 GB | GGUF (F32) | Float32 precision model file |
+ | Capricornus-MoT-1.7B-Supreme1.Q4_K_M.gguf | 1.11 GB | GGUF (Q4_K_M) | 4-bit quantized model file |
+ | Capricornus-MoT-1.7B-Supreme1.Q5_K_M.gguf | 1.26 GB | GGUF (Q5_K_M) | 5-bit quantized model file |
+ | Capricornus-MoT-1.7B-Supreme1.Q8_0.gguf | 1.83 GB | GGUF (Q8_0) | 8-bit quantized model file |
+ | config.json | 31 B | JSON | Configuration file |
+ | .gitattributes | 1.98 kB | Text | Git attributes configuration |
+
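As a rough sanity check on the sizes above, the effective bits per weight of each file can be estimated from the parameter count. This is a minimal sketch, not an exact accounting: the ~1.7B parameter figure is assumed from the model name, and real GGUF files also carry metadata and some non-quantized tensors, so the numbers are approximate.

```python
# Assumption: ~1.7e9 parameters (from the "1.7B" in the model name).
PARAMS = 1.7e9

# File sizes in GB, taken from the table above.
sizes_gb = {
    "F32": 6.89,
    "BF16": 3.45,
    "F16": 3.45,
    "Q8_0": 1.83,
    "Q5_K_M": 1.26,
    "Q4_K_M": 1.11,
}

def bits_per_weight(size_gb: float, params: float = PARAMS) -> float:
    """Approximate bits per weight: file size in bits divided by parameter count."""
    return size_gb * 1e9 * 8 / params

for name, gb in sizes_gb.items():
    print(f"{name}: ~{bits_per_weight(gb):.1f} bits/weight")
```

The estimates line up with the format labels (F32 near 32 bits, Q8_0 near 8, Q4_K_M a little above 4, since K-quants mix block scales and some higher-precision tensors into the file).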
+ ## Quants Usage
+
+ (Sorted by size, not necessarily quality. IQ-quants are often preferable to similar-sized non-IQ quants.)
+
+ Here is a handy graph by ikawrakow comparing some lower-quality quant
+ types (lower is better):
+
+ ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
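These GGUF files can be loaded by any GGUF-compatible runtime. A minimal llama.cpp sketch follows; the repo and file names are taken from this model card, while the specific quant choice and prompt are illustrative:

```shell
# Fetch one quant from the Hub (huggingface-cli ships with the huggingface_hub package)
huggingface-cli download prithivMLmods/Capricornus-MoT-1.7B-Supreme1-GGUF \
  Capricornus-MoT-1.7B-Supreme1.Q4_K_M.gguf --local-dir .

# Start an interactive conversation with llama.cpp's CLI
llama-cli -m Capricornus-MoT-1.7B-Supreme1.Q4_K_M.gguf -cnv \
  -p "You are a helpful assistant for math, code, and science."
```

Q4_K_M is a common starting point for a 1.7B model on modest hardware; the larger Q8_0 or F16 files trade memory for fidelity.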