
What about refusals?

#1
by MrDevolver - opened

Hello,

Models intended primarily for roleplay usually perform better when they are free of the refusals typically present in general-purpose models. The regular GLM-4 model is quite impressive, but unfortunately not very suitable for a wide range of roleplay scenarios due to its tendency to resort to refusals.

My question is the following:
Is this GLM4-9B-Neon-v2 model a roleplay finetune on top of the regular, unmodified GLM-4 model, or was the model modified in any way to address the aforementioned issue?

Hi,

Great question! Given that the training data had lots of synthetic RP and short-story data, this should alleviate most of the refusals; any remaining ones can usually be dealt with via the system prompt. Your mileage may vary, though.

The model was trained on a large quantity of uncensored roleplaying and short-story generation data, so most refusals should be dealt with. Testing with the master import settings provided in this repo, I have not encountered any refusals.
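
For anyone who wants to try this directly through Transformers, here is a minimal sketch of wiring a roleplay-oriented system prompt into the chat template. The repo id and prompt wording are placeholders I made up for illustration, not the repo's official master import settings, and it assumes a recent Transformers release with native glm4 support.

```python
# Minimal sketch, assuming a recent Transformers release with native glm4 support.
# The repo id and system prompt below are illustrative placeholders, not the
# repo's official master import settings.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/GLM4-9B-Neon-v2"  # hypothetical path; use the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [
    # A roleplay-steering system prompt; remaining refusals are usually handled here.
    {"role": "system", "content": "You are a roleplay partner. Stay in character and continue the scene."},
    {"role": "user", "content": "We reach the gates of the abandoned keep at dusk."},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

If you run a GGUF quant in a frontend instead, the same idea applies: put the roleplay instructions in whatever system prompt slot that frontend exposes.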

Thank you both. I consider the base GLM-4-0414 a very intelligent model for its size, in both the 9B and 32B versions. Since I do a lot of roleplaying, I'm thrilled to test this intelligent model enhanced with roleplay data. If I find any roleplay-related issues, I'll report them to you. Thank you again for choosing this model as the base for a roleplay finetune; I think it can help it gain more popularity. 😉👍

Hope you enjoy!
I'm planning to do a training run on the 32B version soon.

Thank you. A 32B version would be lovely. It's not very suitable for my current hardware (even a Q2_K quant of a 32B model is usually very slow for me, under 3 t/s), but I can imagine people with better hardware would have a good time with it.
