Balls

Nice writing, good ERP potential.

Thinking? It works with LeCeption... but it doesn't continue the story. The model just goes "...Finally, the response should blah-blah-blah" and that's it. It doesn't even close the </think> tag!

So I guess you are better off just using the stepped thinking extension with a <think> prefill rather than trying to get it to work in a single message.
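
If you want to test the prefill approach outside of a frontend, here is a minimal sketch assuming llama-cpp-python and Qwen's ChatML template; the model path, system prompt and user turn are placeholders, not anything shipped with this model:

# Minimal prefill sketch; assumes llama-cpp-python is installed and the GGUF path exists.
from llama_cpp import Llama

llm = Llama(model_path="bols.gguf", n_ctx=8192)  # placeholder path to the Q4_K_S quant

# Qwen-style ChatML prompt; the assistant turn is pre-filled with <think> so the model
# starts inside the reasoning block instead of being asked to open it itself.
prompt = (
    "<|im_start|>system\nYou are a roleplay narrator.<|im_end|>\n"  # placeholder system prompt
    "<|im_start|>user\nContinue the scene.<|im_end|>\n"            # placeholder user turn
    "<|im_start|>assistant\n<think>\n"                              # the prefill discussed above
)

out = llm(prompt, max_tokens=512, temperature=1.0, stop=["<|im_end|>"])
print(out["choices"][0]["text"])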

Settings:

Samplers: top nsigma 1 with temp 1

System prompt: the aforementioned LeCeption or the one from here

Quants

Q4_K_S: https://huggingface.co/Yobenboben/Qwen2.5-32B-Juicy_Snowballs_Q4_K_S/resolve/main/bols.gguf?download=true
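
If you'd rather fetch the file programmatically, a minimal sketch using huggingface_hub, with the repo id and filename taken from the link above:

from huggingface_hub import hf_hub_download

# Downloads bols.gguf from the quant repo and returns the local cache path.
path = hf_hub_download(
    repo_id="Yobenboben/Qwen2.5-32B-Juicy_Snowballs_Q4_K_S",
    filename="bols.gguf",
)
print(path)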

Merge Details

Merge Method

This model was merged using the TIES merge method, with mergekit-community/Qwen2.5-32B-gokgok-step3 as the base.

Models Merged

The following models were included in the merge:

trashpanda-org/Qwen2.5-32B-Marigold-v1
trashpanda-org/Qwen2.5-32B-Marigold-v0
ArliAI/QwQ-32B-ArliAI-RpR-v1

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: trashpanda-org/Qwen2.5-32B-Marigold-v1
    parameters:
      weight: 0.9
      density: 0.9
  - model: trashpanda-org/Qwen2.5-32B-Marigold-v0
    parameters:
      weight: 1
      density: 1
  - model: ArliAI/QwQ-32B-ArliAI-RpR-v1
    parameters:
      weight: 1
      density: 1
merge_method: ties
base_model: mergekit-community/Qwen2.5-32B-gokgok-step3
parameters:
  weight: 0.9
  density: 0.9
  normalize: true
  int8_mask: true
tokenizer_source: ArliAI/QwQ-32B-ArliAI-RpR-v1
dtype: float32
out_dtype: bfloat16
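
To reproduce the merge, save the YAML above to a file and feed it to mergekit; a minimal sketch, assuming mergekit is installed and using an arbitrary output directory name:

import subprocess

# "config.yaml" is wherever you saved the YAML above; the output directory name is arbitrary.
subprocess.run(["mergekit-yaml", "config.yaml", "./Juicy_Snowballs"], check=True)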