Produced by Antigma Labs, Antigma Quantize Space

Follow Antigma Labs on X: https://x.com/antigma_labs

Antigma's GitHub homepage: https://github.com/AntigmaLabs

Quantization Format (GGUF)

We use llama.cpp release b5572 for quantization. Original model: https://huggingface.co/pot99rta/GoldFusionReiV3-12B
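For reference, the sketch below shows the general llama.cpp quantization flow. It is a minimal sketch, assuming a llama.cpp checkout built at release b5572; the paths, build steps, and intermediate filename are illustrative assumptions rather than the exact commands used to produce this file:

```bash
# Build llama.cpp at the pinned release (illustrative; prebuilt binaries also work).
git clone --branch b5572 --depth 1 https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Convert the original HF model to a full-precision GGUF...
python convert_hf_to_gguf.py /path/to/GoldFusionReiV3-12B \
  --outfile goldfusionreiv3-12b-f16.gguf --outtype f16

# ...then quantize it down to Q5_K_M.
./build/bin/llama-quantize goldfusionreiv3-12b-f16.gguf \
  goldfusionreiv3-12b-q5_k_m.gguf Q5_K_M
```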

Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Split |
| --- | --- | --- | --- |
| goldfusionreiv3-12b-q5_k_m.gguf | Q5_K_M | 8.13 GB | False |

Original Model Card

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the TIES merge method, with Delta-Vector/Rei-V3-KTO-12B as the base.

Models Merged

The following models were included in the merge:

- pot99rta/GoldFusion-12B

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Delta-Vector/Rei-V3-KTO-12B
    # no parameters necessary for base model
  - model: Delta-Vector/Rei-V3-KTO-12B
    parameters:
      density: 0.5
      weight: 0.5
  - model: pot99rta/GoldFusion-12B
    parameters:
      density: 0.5
      weight: 0.5

merge_method: ties
base_model: Delta-Vector/Rei-V3-KTO-12B
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
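To reproduce the merge, the YAML above can be passed to mergekit's CLI. This is a minimal sketch, assuming the configuration is saved as config.yaml; the output directory name and the --cuda flag are illustrative choices, not the exact invocation used for this model:

```bash
# Hypothetical reproduction of the TIES merge described above.
pip install mergekit

# Reads config.yaml and writes the merged model to ./GoldFusionReiV3-12B.
mergekit-yaml config.yaml ./GoldFusionReiV3-12B --cuda
```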

Downloading using huggingface-cli

First, make sure you have huggingface-cli installed:

```bash
pip install -U "huggingface_hub[cli]"
```

Then, you can target the specific file you want:

```bash
huggingface-cli download perfectlygray/GoldFusionReiV3-12B-GGUF --include "goldfusionreiv3-12b-q5_k_m.gguf" --local-dir ./
```

If the model is larger than 50 GB, it will have been split into multiple files. To download them all to a local folder, run:

```bash
huggingface-cli download perfectlygray/GoldFusionReiV3-12B-GGUF --include "goldfusionreiv3-12b-q5_k_m.gguf/*" --local-dir ./
```

You can either specify a new --local-dir (e.g. perfectlygray_GoldFusionReiV3-12B-GGUF) or the files will be placed in the default Hugging Face cache.
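Once downloaded, the file can be run directly with llama.cpp. A minimal sketch, assuming a llama.cpp build at b5572 or newer; the context size and prompt are placeholder values:

```bash
# Hypothetical invocation: chat with the Q5_K_M quant via llama.cpp's CLI.
./build/bin/llama-cli \
  -m ./goldfusionreiv3-12b-q5_k_m.gguf \
  -c 4096 \
  -p "You are a helpful assistant."
```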
