Quantized to 6 bits per weight (6-bit head) using the default exllamav3 (0.0.3) quantization process.
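This quant is meant to be loaded with an exllamav3-based backend (for example, TabbyAPI). As a minimal sketch, the snippet below only fetches the weights with huggingface_hub; the local directory name is an arbitrary example:

# Download the 6 bpw exl3 weights for local use with an
# exllamav3-compatible backend.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MetaphoricalCode/Yanfei-v2-Qwen3-32B-exl3-6bpw-hb6",
    local_dir="./Yanfei-v2-Qwen3-32B-exl3-6bpw-hb6",  # example path
)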



Yanfei-v2-Qwen3-32B

A repair of Yanfei-Qwen3-32B, produced by TIES-merging huihui-ai/Qwen3-32B-abliterated, Zhiming-Qwen3-32B, and Menghua-Qwen3-32B with mergekit.

Sponsorship

This model was made possible with compute support from Nectar AI. Thank you! ❤️

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: ./Zhiming-Qwen3-32B-merged
    parameters:
      weight: 1
      density: 1
  - model: ./Menghua-Qwen3-32B-merged
    parameters:
      weight: 1
      density: 1
  - model: huihui-ai/Qwen3-32B-abliterated
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: nbeerbower/Yanfei-Qwen3-32B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
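
To reproduce the merge programmatically, a rough sketch using mergekit's documented Python entry point is shown below. The config filename and output path are placeholders, and the exact API surface (MergeConfiguration, MergeOptions, run_merge) may differ between mergekit versions:

# Run the TIES merge described by the YAML above (saved as config.yaml).
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Yanfei-v2-Qwen3-32B",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)

The same merge can also be run with the mergekit-yaml CLI, pointing it at the config file and an output directory.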
