---
thumbnail: >-
  https://cdn-uploads.huggingface.co/production/uploads/67c10cfba43d7939d60160ff/o9eVw82xaajZx_zCO9qyJ.png
language:
  - en
license: llama3.3
license_name: llama3.3
license_link: https://github.com/facebookresearch/llama/blob/main/LICENSE
inference: false
tags:
  - nsfw
  - explicit
  - roleplay
  - Furry
base_model:
  - Mawdistical/Wanton-Wolf-70B
base_model_relation: quantized
quantized_by: ArtusDev
---

# Wanton-Wolf-70B

*User Discretion Advised*

A furry fine-tune based on L3.3-Cu-Mai-R1-70b, chosen for its exceptional features. *Tail swish*


## ✧ Quantized Formats


## ✧ Recommended Settings


## ✧ Recommended Templates

The following templates are recommended on the original Cu-Mai model page; adjust as needed:

- **LLam@ception** by @.konnect
- **LeCeption** by @Steel: a completely revamped XML version of Llam@ception 1.5.2 with stepped thinking and reasoning

**LeCeption Reasoning Configuration:**

Start Reply With:

`<think> OK, as an objective, detached narrative analyst, let's think this through carefully:`

Reasoning Formatting (no spaces):

- Prefix: `<think>`
- Suffix: `</think>`
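
If you drive the model from a script rather than a chat front end, these settings can be applied by hand. The sketch below is a minimal illustration, assuming a plain `transformers` text-generation setup; the model ID, prompt handling, and token budget are placeholders rather than recommendations from this card, and it skips the LeCeption chat template that a full setup would use. It seeds the reply with the start string and strips the `<think>...</think>` block before returning the visible text.

```python
import re
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model ID; point this at the quantized copy you actually use.
MODEL_ID = "Mawdistical/Wanton-Wolf-70B"

# The "Start Reply With" string from the card, used to seed the model's reply.
START_REPLY = ("<think> OK, as an objective, detached narrative analyst, "
               "let's think this through carefully:")

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def generate(prompt: str, max_new_tokens: int = 512) -> str:
    # Append the start-reply string so generation continues inside the <think> block.
    text = prompt + "\n" + START_REPLY
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = output_ids[0][inputs["input_ids"].shape[-1]:]
    completion = tokenizer.decode(new_tokens, skip_special_tokens=True)
    # Strip everything up to and including the </think> suffix; keep only the visible reply.
    return re.sub(r"(?s)^.*?</think>", "", START_REPLY + completion).strip()
```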

## ✧ Credits

- Model Author
- Original Model Creator
- Contributors ✨