Formatting
I'm using your recommended preset settings, but I'm seeing super weird formatting. Do you have any recommended samplers in case they help?
I've tried many and I either get really short responses (like 30 tokens) or garbled text like this:
Edit - changing the instruct format to use " instead of ` has fixed the weird grey text. But messages are still short
Edit 2 - Fixing English
Ouch! Try adding Response: (length = long) or enormous to your System Prompt; that will probably fix the short-length issue.
[tiny, micro, short, medium, long, enormous, huge, massive, humongous] or use "Make huge text length."
Thanks! I'll do some tests...
You are now in roleplay chat mode. Engage in an endless chat, always with a creative Response. Follow lengths very precisely and create paragraphs accurately. Always wait your turn, next actions and responses. Your internal thoughts are wrapped with ` marks. It is vital to follow all the RULES.
Response: (length = enormous)
Or add to the rules: - Make huge text length.
RULES:
- Create detailed narrations for {{char}}'s intimate body parts, appearances, clothes, sight, smell, texture, taste, sound, touch, liquids, feelings, actions, reactions, emotions, events, and anatomy.
- Always be creative, coherent, verbose, vivid, realistic, visually descriptive, graphic and explicit.
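If you're driving the model through an API instead of a frontend, the system prompt above just becomes the system message. A minimal sketch of a request payload for an OpenAI-compatible chat endpoint (the model name is a placeholder, not something from this thread):

```python
# Sketch: wiring the roleplay system prompt above into a request
# payload for an OpenAI-compatible chat endpoint (e.g. a local
# koboldcpp or llama.cpp server). Model name is a placeholder.
SYSTEM_PROMPT = (
    "You are now in roleplay chat mode. Engage in an endless chat, "
    "always with a creative Response. Follow lengths very precisely "
    "and create paragraphs accurately. Always wait your turn, next "
    "actions and responses. Your internal thoughts are wrapped with "
    "` marks. It is vital to follow all the RULES.\n"
    "Response: (length = enormous)\n"
    "RULES:\n"
    "- Make huge text length.\n"
)

payload = {
    "model": "llama3-8b-roleplay",  # placeholder model name
    "messages": [{"role": "system", "content": SYSTEM_PROMPT}],
}
```

The key point is just that the length directive lives in the system message, so it steers every turn without appearing in the chat history itself.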
I use this and I don't get many issues...
Do I use this as my instruct preset?
Model works fine with ### Response: (length = massive).
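That "### Response:" header is the Alpaca-style instruct format. A rough sketch of how the length hint slots into it (the instruction text is a placeholder; exact prefixes depend on your instruct preset):

```python
# Sketch of an Alpaca-style prompt with the length hint appended
# to the response header, as in "### Response: (length = massive)".
def build_alpaca_prompt(instruction: str, length: str = "massive") -> str:
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        f"### Response: (length = {length})\n"
    )

prompt = build_alpaca_prompt("Continue the roleplay scene.")
```

Because the hint sits in the response prefix rather than the user turn, the model reads it as part of its own reply header, which is why it rarely bleeds into the generated text.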
It seems I will have to mess around with my prompting so it doesn't destroy the model.
Also llama3 seems to be very sensitive to samplers.
Good to know it works for you @Virt-io , I usually use this in my System Prompt to avoid the model accidentally generating "(length = massive)..." at the end of the generation.
I used the "length =" with Infinity-v1 and it never bled into responses so hopefully it should be the same.
Weirdly, Hermes 2 Pro responds with extremely short messages too, but if I switch to a llama3 model with llama3 prompts it responds at length.
I'll try lostruins' koboldcpp in the hope that it fixes it
Edit - No luck 🥲
@saishf In fact, that's one of my several trials messing with Llama3, haha. I'm quite lost, but I try to keep it updated...
@Lewdiculous Hmm... I followed the instructions on this pull: 6920, I hope I did it right. About the pic, that's just half; the other part is a little messed up.
I asked because I'm curious too but yeah I'd also just follow the PR on that one, it should be okay.
the other part is a little messed up
messed up good or messed up bad? :3 o/
@Lewdiculous You choose... :3 https://files.catbox.moe/1c6jz9.png It's really useful for drawing reference with some upscaling...
Upscayl ≥
Pretty UI, no cli torture :3
Upscayl is so convenient. Can't live without it.
Yeah, makes sense. I tried uncensoring a Llama 3, and after several trials it went wrong because the model tends to get confused or sometimes complains... I'll wait and see if any other methods come up.
@Lewdiculous I use that one and sometimes Winxvideo AI, it does the job!
Uncensoring llama3 seems insanely hard, but a pattern I've noticed is that most of the models I've tested that are actually uncensored in instruct scenarios (not just claimed to be) have all used a chat prompt other than llama3 chat.
Hermes 2, Solana, Dolphin
I've uploaded v1.6, however the way I've been parsing the instructions is still causing issues for now.
Cut the guidelines from the last assistant prefix and paste them into the system prompt.
Lumimaid seems interesting! It has a unique style that would go well in merges :3
And llama3 is still cursed, worthy of death.
Edit - these are using roleplay v1.6 & latest static samplers
Lumimaid is nice.
Lumimaid + Poppy_Porpoise + InfinityRP + L3-TheSpice + SOVL ?
I noticed Lumimaid can take Alpaca almost perfectly, which is something I've never experienced with llama3; it'd make an awesome base for merges for that reason.
Or possibly retraining LoRAs over it in Alpaca format, and we could have a true Alpaca-Llama.
SOVL trained on top of Lumimaid would be nice.
I think we should move to Llama 3 coping mechanisms - Part 3
SOVL's heritage is quite complex 🥲
Avg Normie V2 is a model_stock merge with these:
The following models were included in the merge:
ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
vicgalle/Roleplay-Llama-3-8B
cgato/L3-TheSpice-8b-v0.1.3
Then more model stock
The following models were included in the merge:
jeiku/Average_Normie_v2_l3_8B + ResplendentAI/Aura_Llama3
jeiku/Average_Normie_v2_l3_8B + ResplendentAI/Smarts_Llama3
jeiku/Average_Normie_v2_l3_8B + ResplendentAI/BlueMoon_Llama3
Then linear
The following models were included in the merge:
jeiku/Average_Test + ResplendentAI/Aura_Llama3
Then more linear
Jeiku/Average_Test_v1 + ResplendentAI/RP_Format_QuoteAsterisk_Llama3
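For anyone wanting to reproduce the first model_stock step above, it would look roughly like this as a mergekit config. This is only a sketch: the base_model choice and dtype are assumptions, not stated in the thread.

```yaml
# Sketch of a mergekit config for the model_stock step above.
# base_model and dtype are assumptions, not from the thread.
merge_method: model_stock
base_model: meta-llama/Meta-Llama-3-8B-Instruct
models:
  - model: ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
  - model: vicgalle/Roleplay-Llama-3-8B
  - model: cgato/L3-TheSpice-8b-v0.1.3
dtype: bfloat16
```

The later "+ LoRA" steps (e.g. Average_Normie_v2 + Aura_Llama3) use mergekit's model+LoRA syntax instead, and the final steps switch merge_method to linear.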