
This is a test... still trying to get Qwen3 30B to behave.

I started by fine-tuning SuperbEmphasis/Black-Eclipse-Test-ERP-RP-v2 with my improved dataset.

However, this time I cranked the activated experts up to 64/128 to force more parameters to be trained.
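
Roughly, that boils down to overriding `num_experts_per_tok` in the model config before loading it for training. A minimal sketch (assuming a Qwen3-MoE style config with `num_experts` / `num_experts_per_tok` fields and the standard transformers loading path; this is not the actual training script):

```python
# Sketch only: bump the number of activated experts before fine-tuning.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

base = "SuperbEmphasis/Black-Eclipse-Test-ERP-RP-v2"

config = AutoConfig.from_pretrained(base)
config.num_experts_per_tok = 64   # activate 64 of the 128 experts per token while training

model = AutoModelForCausalLM.from_pretrained(
    base,
    config=config,
    torch_dtype=torch.bfloat16,
)
# ...then fine-tune as usual with your trainer of choice.
```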

Then I cranked the experts back down to 24/128.
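
The reverse step works the same way: either edit `num_experts_per_tok` in the saved checkpoint's `config.json`, or override it at load time. Another hedged sketch (the load-time override is an assumption, not necessarily how this repo was produced):

```python
# Sketch only: load the fine-tuned checkpoint with 24 of 128 experts
# activated per token, back down from the 64 used during training.
import torch
from transformers import AutoConfig, AutoModelForCausalLM

ckpt = "SuperbEmphasis/Black-Eclipse-Test-ERP-RP-V3-24E"

config = AutoConfig.from_pretrained(ckpt)
config.num_experts_per_tok = 24

model = AutoModelForCausalLM.from_pretrained(ckpt, config=config, torch_dtype=torch.bfloat16)
```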

It is surprisingly coherent. The biggest issue I have run into so far is that it REALLY doesn't like roleplay scenarios where '{{char}}' can be multiple characters; it wants to stay as the single initial character. This tells me that I need more scenario-based examples...

