DansXPantheon-RP-Engine-V1.4-24b-Small-Instruct-Dare-Ties-GGUF

Updated models using the DARE-TIES merge method. I've been tinkering for a while and found that quantizing from F32 increases quality slightly. For the i-matrix quants, I'm still working on a calibration text that can improve story perplexity.
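For reference, here is a minimal sketch of the F32-first quantization path using llama.cpp's stock tools; the checkpoint directory, output filenames, and working directory are placeholders, not the exact commands used for this release.

```python
# Minimal sketch of quantizing from an F32 GGUF with llama.cpp's stock tools.
# The merged-model directory and output filenames are placeholders.
import subprocess

# 1) Convert the merged HF checkpoint to a full-precision F32 GGUF.
subprocess.run(
    [
        "python", "convert_hf_to_gguf.py", "path/to/merged-model",
        "--outtype", "f32",
        "--outfile", "merged-f32.gguf",
    ],
    check=True,
)

# 2) Quantize straight from the F32 file (Q5_K_M here; other K-quants work the same way).
subprocess.run(
    ["./llama-quantize", "merged-f32.gguf", "merged-Q5_K_M.gguf", "Q5_K_M"],
    check=True,
)
```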

  • Slightly more creativity when naming characters.
  • Commitments: at the start of the story the model picks its base behavior, either complying with you or going against you.

In my experience with Q5_K_M the defiance level runs high, and it can be adjusted through temperature.

Temperature:

  • 0.8, normal mode. The model cooperates and won't try to twist or derail your narrative.
  • 0.9, perfect if you want the model to rebel slightly.
  • 1.0, feels natural, like roleplaying with a real person, but you will need to set mirostat to 2 with mirostat_tau at 5 or 6, otherwise it just starts to babble (see the sketch below).
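Below is a minimal sketch of the temperature 1.0 + Mirostat 2 setup using llama-cpp-python; the GGUF filename, context size, and prompt are placeholders, and the same sampler values can be set in most llama.cpp frontends.

```python
# Minimal sketch of the temperature 1.0 + Mirostat 2 setup with llama-cpp-python.
# The GGUF filename, context size, and prompt below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="DansXPantheon-RP-Engine-V1.4-24b-Small-Instruct-Dare-Ties.Q5_K_M.gguf",
    n_ctx=8192,
)

out = llm.create_completion(
    prompt="You are the narrator of a slow-burn fantasy roleplay. Open the first scene.",
    max_tokens=256,
    temperature=1.0,   # the "feels like a real person" setting
    mirostat_mode=2,   # Mirostat 2.0, needed at temp 1.0 to stop the babbling
    mirostat_tau=5.0,  # target entropy; try 5 or 6
)
print(out["choices"][0]["text"])
```

At 0.8 or 0.9 you can drop the two mirostat arguments entirely.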

In the end, do your own experiments; it's fun.

I think this is the best I can get from both models.

These are non-i-matrix quants.

Model details:

  • Format: GGUF
  • Model size: 23.6B params
  • Architecture: llama
  • Quantizations: 3-bit, 4-bit, 5-bit