---
base_model:
- allura-org/GLM4-9B-Neon-v2
library_name: transformers
tags:
- mergekit
- merge
- roleplay
- story-writing
- adventure
- gemma-2
- rp
- nsfw
language:
- en
- zh
- ja
- fr
- ko
- de
- ru
---

| **For RP & story gen,
GLM-4-9B played a safe card and won the secret lottery. It proves the hexagon is powerful even without a blade, that goodness lies in making fewer mistakes.
The 0414 version dropped hot into fine-tuners' hands while the air was still cold.
People see a silent goat fitted into human skin,
graduated from Tsinghua and headed to MIT...

Actually an abliteration would already be enough,
but [huihui-ai/GLM-4-9B-0414-abliterated](https://huggingface.co/huihui-ai/GLM-4-9B-0414-abliterated) hit mergekit with a download error. Anyway, [THUDM/LongReward-glm4-9b-DPO](https://huggingface.co/THUDM/LongReward-glm4-9b-DPO) should lower some censorship,
helping [allura-org/GLM4-9B-Neon-v2](https://huggingface.co/allura-org/GLM4-9B-Neon-v2) go shameless.

It seldom jumps out of the simulation; refreshing is the key to entrancing the Hobbit.
Save your effort for seduction,
possess by defining the fact.** |
|:---:|

*"Young transformers start from playing Rubik's cubes."*

```yaml
models:
  - model: allura-org/GLM4-9B-Neon-v2
  - model: THUDM/LongReward-glm4-9b-DPO
    parameters:
      weight: [0.496, 0.166, 0.166, 0.496, 0.496, 0.166, 0.166, 0.496]
base_model: allura-org/GLM4-9B-Neon-v2
merge_method: sce
parameters:
  select_topk: 0.06
  lambda: 0.66
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
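mergekit treats a list-valued parameter such as the `weight` above as a gradient: the listed anchor values are spread across the layer stack and interpolated in between, so this config alternates between heavier and lighter blending along the depth of the model. A minimal sketch of that expansion, assuming a 40-layer model and evenly spaced anchors with linear interpolation (both details are assumptions here, not guarantees about mergekit's internals):

```python
import numpy as np

def expand_gradient(anchors, num_layers):
    # Place the anchor values evenly from the first to the last layer,
    # then linearly interpolate a per-layer weight between them.
    positions = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), positions, anchors)

# The gradient from the config above, expanded over 40 layers.
weights = expand_gradient(
    [0.496, 0.166, 0.166, 0.496, 0.496, 0.166, 0.166, 0.496], 40
)
```

The resulting per-layer weights start and end at 0.496 and dip toward 0.166 in the alternating middle segments, which is the "wave" pattern the eight-value list encodes.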