---
license: apache-2.0
language:
  - en
base_model:
  - prithivMLmods/Capricornus-MoT-1.7B-Supreme1
pipeline_tag: text-generation
library_name: transformers
tags:
  - text-generation-inference
  - math
  - code
  - science
---

# Capricornus-MoT-1.7B-Supreme1-GGUF

Capricornus-MoT-1.7B-Supreme1 is a high-precision, multi-domain expert model fine-tuned from Qwen3-1.7B for code generation, mathematical reasoning, scientific analysis, and open technical inference. It is trained on the Mixture of Thoughts (MoT) dataset, which combines expert clusters in code, math, and science, and is further enhanced with an Open Code Reasoning dataset, delivering strong symbolic and structured outputs across a wide range of STEM and reasoning domains.
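
Below is a minimal sketch of running one of the GGUF quants locally with llama-cpp-python; the chosen quant file, context size, and sampling settings are illustrative assumptions rather than values prescribed by this repository.

```python
# Minimal sketch: run a GGUF quant locally with llama-cpp-python.
# Assumptions: the Q4_K_M file sits in the working directory, and the
# context size / sampling settings below are illustrative defaults.
from llama_cpp import Llama

llm = Llama(
    model_path="Capricornus-MoT-1.7B-Supreme1.Q4_K_M.gguf",
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if available (0 = CPU only)
)

response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Differentiate x^3 * ln(x) and simplify."}
    ],
    max_tokens=512,
    temperature=0.6,
)
print(response["choices"][0]["message"]["content"])
```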

## Model Files

| File Name | Size | Format | Description |
|-----------|------|--------|-------------|
| Capricornus-MoT-1.7B-Supreme1.BF16.gguf | 3.45 GB | GGUF (BF16) | BFloat16 precision model file |
| Capricornus-MoT-1.7B-Supreme1.F16.gguf | 3.45 GB | GGUF (F16) | Float16 precision model file |
| Capricornus-MoT-1.7B-Supreme1.F32.gguf | 6.89 GB | GGUF (F32) | Float32 precision model file |
| Capricornus-MoT-1.7B-Supreme1.Q4_K_M.gguf | 1.11 GB | GGUF (Q4_K_M) | 4-bit quantized model file |
| Capricornus-MoT-1.7B-Supreme1.Q5_K_M.gguf | 1.26 GB | GGUF (Q5_K_M) | 5-bit quantized model file |
| Capricornus-MoT-1.7B-Supreme1.Q8_0.gguf | 1.83 GB | GGUF (Q8_0) | 8-bit quantized model file |
| config.json | 31 B | JSON | Configuration file |
| .gitattributes | 1.98 kB | Text | Git attributes configuration |
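
To fetch a single quant without cloning the whole repository, a sketch using huggingface_hub follows; the repo id is assumed from this model card's title, and the filename can be swapped for any entry in the table above.

```python
# Minimal sketch: download one quant file from the Hugging Face Hub.
# Assumption: the repo id matches this model card's title.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="prithivMLmods/Capricornus-MoT-1.7B-Supreme1-GGUF",
    filename="Capricornus-MoT-1.7B-Supreme1.Q4_K_M.gguf",
)
print(model_path)  # local cache path, usable with llama.cpp or llama-cpp-python
```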

## Quants Usage

(Sorted by size, not necessarily by quality. IQ-quants are often preferable over similar-sized non-IQ quants.)

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

*(graph: quantization type quality comparison by ikawrakow, lower is better)*