Quantized using the default exllamav3 (0.0.3) quantization process.
- Original model: https://huggingface.co/nbeerbower/Yanfei-v2-Qwen3-32B
- exllamav3: https://github.com/turboderp-org/exllamav3
Yanfei-v2-Qwen3-32B
A repair of Yanfei-Qwen3-32B, produced by TIES-merging huihui-ai/Qwen3-32B-abliterated, Zhiming-Qwen3-32B, and Menghua-Qwen3-32B with mergekit.
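The `ties` merge method used below follows the TIES procedure: trim each fine-tuned model's delta from the base, elect a per-parameter sign, then average only the deltas that agree with it. A minimal NumPy sketch of the idea on flat parameter vectors (illustrative only; `ties_merge` and its signature are hypothetical, not mergekit's API):

```python
import numpy as np

def ties_merge(base, models, density=1.0, weight=1.0):
    """Toy TIES merge over flat weight vectors.

    base:    1-D array of base-model parameters.
    models:  list of 1-D arrays (fine-tuned parameters, same shape as base).
    density: fraction of each delta kept by magnitude (1.0 = keep all,
             matching the config in this card).
    """
    deltas = [m - base for m in models]

    # 1. Trim: zero out all but the top `density` fraction of each delta.
    trimmed = []
    for d in deltas:
        k = int(round(density * d.size))
        if k < d.size:
            thresh = np.sort(np.abs(d))[-k] if k > 0 else np.inf
            d = np.where(np.abs(d) >= thresh, d, 0.0)
        trimmed.append(d)

    # 2. Elect sign: per parameter, the sign of the summed deltas wins.
    stacked = np.stack(trimmed)
    sign = np.sign(stacked.sum(axis=0))
    sign[sign == 0] = 1.0

    # 3. Disjoint merge: average only deltas agreeing with the elected sign.
    agree = np.sign(stacked) == sign
    counts = np.maximum(agree.sum(axis=0), 1)
    merged_delta = (stacked * agree).sum(axis=0) / counts

    return base + weight * merged_delta
```

With `density: 1` and `weight: 1`, as in the config on this card, no trimming occurs and each parameter is the sign-filtered average of the three models' deltas added back onto the base.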
Sponsorship
This model was made possible with compute support from Nectar AI. Thank you! ❤️
Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: ./Zhiming-Qwen3-32B-merged
    parameters:
      weight: 1
      density: 1
  - model: ./Menghua-Qwen3-32B-merged
    parameters:
      weight: 1
      density: 1
  - model: huihui-ai/Qwen3-32B-abliterated
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: nbeerbower/Yanfei-Qwen3-32B
parameters:
  weight: 1
  density: 1
  normalize: true
  int8_mask: true
dtype: bfloat16
```
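A merge like this can be reproduced with mergekit's `mergekit-yaml` entry point. This is a sketch, not the exact command used here; the config filename and output path are placeholders, and available flags vary by mergekit version:

```shell
pip install mergekit

# Save the YAML above as ties-config.yaml, then run the merge.
# --cuda uses GPU acceleration if available.
mergekit-yaml ties-config.yaml ./Yanfei-v2-Qwen3-32B --cuda
```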
Model tree for MetaphoricalCode/Yanfei-v2-Qwen3-32B-exl3-4.5bpw-hb8
- Base model: nbeerbower/Yanfei-v2-Qwen3-32B