For RP & story gen,
GLM-4-9B played a safe card and won the secret lottery. It proves the hexagon is powerful without a blade, that goodness lies in making fewer mistakes.
The 0414 version dropped hot into fine-tuners' hands while the air was still cold.
People see a silent goat fitted in human skin,
graduated from Tsinghua and heading to MIT...

Actually an abliteration is already enough,
though huihui-ai/GLM-4-9B-0414-abliterated hit mergekit with a download error. Anyway, THUDM/LongReward-glm4-9b-DPO should lower some of the censorship,
helping allura-org/GLM4-9B-Neon-v2 go shameless.

It seldom jumps out of the simulation; refreshing is the key to entrance the Hobbit.
Save your effort for seduction;
possess it by defining the facts.
"Young transformers start from playing with Rubik's cubes."
```yaml
models:
  - model: allura-org/GLM4-9B-Neon-v2
  - model: THUDM/LongReward-glm4-9b-DPO
    parameters:
      weight: [0.496, 0.166, 0.166, 0.496, 0.496, 0.166, 0.166, 0.496]
base_model: allura-org/GLM4-9B-Neon-v2
merge_method: sce
parameters:
  select_topk: 0.06
  lambda: 0.66
tokenizer_source: base
dtype: float32
out_dtype: bfloat16
```
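The `sce` method in the config above keeps only the most contested fraction of task-vector elements (`select_topk`) before fusing them back into the base at strength `lambda`. A minimal numpy sketch of that idea, assuming the common select-then-fuse reading of SCE (this is my own simplification; `sce_merge` and its internals are hypothetical, not mergekit's actual implementation):

```python
import numpy as np

def sce_merge(base, models, select_topk=0.06, lam=0.66):
    """Toy SCE-style merge over flat weight arrays (hypothetical sketch):
    select the top-k fraction of elements where the fine-tunes disagree most,
    average those task-vector elements, erase the rest, and add the fused
    delta back onto the base scaled by lambda."""
    deltas = np.stack([m - base for m in models])  # task vectors vs. base
    var = deltas.var(axis=0)                       # element-wise disagreement
    k = max(1, int(select_topk * var.size))        # how many elements survive
    thresh = np.sort(var.ravel())[-k]              # variance cutoff for top-k
    mask = var >= thresh                           # select
    fused = deltas.mean(axis=0) * mask             # calculate + erase
    return base + lam * fused

# Tiny demo: only the single most-contested element is merged in.
base = np.zeros(10)
merged = sce_merge(base, [np.ones(10), np.arange(10.0)],
                   select_topk=0.1, lam=0.5)
```

In the real merge the same selection runs per tensor, and the per-layer `weight` gradient in the YAML biases which model dominates at each block.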