Formatting

#1
by saishf - opened

I'm using your recommended preset settings, but I'm seeing super weird formatting. Do you have any recommended samplers, in case they help?

I've tried many and I either get really short responses (like 30 tokens) or garbled text like this:
Screenshot_20240503-003953.png
Edit - Changing the instruct format to use " instead of ` has fixed the weird grey text, but messages are still short.
Edit 2 - Fixed the English.

@saishf

My Alpaca preset is broken at the moment.

Ouch! Try adding Response: (length = long) or enormous to your System Prompt; that will probably fix the short-length issue.
Options: [tiny, micro, short, medium, long, enormous, huge, massive, humongous], or use "Make huge text length."

Thanks! I'll run some tests...

You are now in roleplay chat mode. Engage in an endless chat, always with a creative Response. Follow lengths very precisely and create paragraphs accurately. Always wait your turn, next actions and responses. Your internal thoughts are wrapped with ` marks. It is vital to follow all the RULES. 

Response: (length = enormous)
Or add to the rules: - Make huge text length.

RULES:

- Create detailed narrations for {{char}}'s intimate body parts, appearances, clothes, sight, smell, texture, taste, sound, touch, liquids, feelings, actions, reactions, emotions, events, and anatomy.
- Always be creative, coherent, verbose, vivid, realistic, visually descriptive, graphic and explicit.

I use this and I don't get many issues...

And I got this:
0_Screenshot 2024-05-02 101143.png

> Ouch! Try adding Response: (length = long) or enormous to your System Prompt; that will probably fix the short-length issue.
> [tiny, micro, short, medium, long, enormous, huge, massive]

This is with the response length in the system prompt; still smol.
Screenshot_20240503-010705.png

> You are now in roleplay chat mode. Engage in an endless chat, always with a creative Response. Follow lengths very precisely and create paragraphs accurately. Always wait your turn, next actions and responses. Your internal thoughts are wrapped with ` marks. It is vital to follow all the RULES.
>
> Response: (length = enormous)
> Or add to the rules: - Make huge text length.
>
> RULES:
>
> - Create detailed narrations for {{char}}'s intimate body parts, appearances, clothes, sight, smell, texture, taste, sound, touch, liquids, feelings, actions, reactions, emotions, events, and anatomy.
> - Always be creative, coherent, verbose, vivid, realistic, visually descriptive, graphic and explicit.
>
> I use this and I don't get many issues...

Do I use this as my instruct preset?

Yeah, the System Prompt!
And I usually use this for tests (but any works):

Screenshot 2024-05-02 101539.png

Model works fine with ### Response: (length = massive).

It seems I will have to mess around with my prompting so it doesn't destroy the model.

Also llama3 seems to be very sensitive to samplers.

Owner

Good to know it works for you @Virt-io. I usually use this in my System Prompt to avoid the model accidentally generating "(length = massive)..." at the end of the generation.


> Good to know it works for you @Virt-io. I usually use this in my System Prompt to avoid the model accidentally generating "(length = massive)..." at the end of the generation.

I used the "length =" with Infinity-v1 and it never bled into responses, so hopefully it should be the same.

I still get mini messages even when matching your samplers and prompts. I'm lost @_@
Screenshot_20240503-012814.png
I did notice the tokens in the config were still Llama 3's.
Screenshot_20240503-011838~2.png
But I don't understand how they work 🥲

To prevent bleeding you can set it in last assistant prefix.

image.png

This way it gets sent, but not saved.
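In case the "sent, but not saved" part is unclear, here is a minimal sketch of the idea (hypothetical names, not SillyTavern's actual code): the length cue lives only in the last assistant prefix used to build the outgoing prompt, while the chat history is always stored with the plain prefix, so the cue can't bleed into saved messages.

```python
# Sketch of the last-assistant-prefix trick (hypothetical, not SillyTavern's code).
ASSISTANT_PREFIX = "### Response:"
LAST_ASSISTANT_PREFIX = "### Response: (length = enormous)"  # the cue lives here only

def build_prompt(history: list[str]) -> str:
    """Join the saved turns, then open the new assistant turn with the cued prefix."""
    return "\n".join(history) + "\n" + LAST_ASSISTANT_PREFIX

def save_reply(history: list[str], reply: str) -> None:
    """Persist the model's reply under the plain prefix -- the cue never lands in history."""
    history.append(f"{ASSISTANT_PREFIX} {reply}")

history = ["### Instruction:\nHello!"]
prompt = build_prompt(history)      # the outgoing prompt carries "(length = enormous)"
save_reply(history, "Hi! *waves*")  # the saved history stays cue-free
```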

@Endevor This will still be quanted with the llama-bpe tokenizers right?

Edit: Cutie!

joazop_upscayl_3x_realesrgan-x4plus-anime.png

Weirdly, Hermes 2 Pro responds in extremely small messages too, but if I switch to a Llama 3 model with Llama 3 prompts it responds in a long sequence.
I'll try LostRuins' koboldcpp in the hope it fixes it.
Edit - No luck 🥲

@saishf In fact, that's one of my several trials messing with Llama 3, haha. I'm quite lost, but I try to keep it updated...

@Lewdiculous Hmm... I followed the instructions on this pull: 6920. I hope I did it right. About the pic, that's just half; the other part is a little messed up.

I asked because I'm curious too, but yeah, I'd also just follow the PR on that one; it should be okay.

> the other part is a little messed up

messed up good or messed up bad? :3 o/

Owner

@Lewdiculous You choose... :3 https://files.catbox.moe/1c6jz9.png It's really useful for drawing reference with some upscaling...

> @saishf In fact, that's one of my several trials messing with Llama 3, haha. I'm quite lost, but I try to keep it updated...

Llama 3 is so unnecessarily complicated; if they'd just gone with ChatML or Alpaca, or an extended version of either, we'd have none of these issues 😕

> @Lewdiculous You choose... :3 https://files.catbox.moe/1c6jz9.png It's really useful for drawing reference with some upscaling...

Upscayl
Pretty UI, no cli torture :3

Upscayl is so convenient. Can't live without it.

Yeah, makes sense. I tried uncensoring a Llama 3 and after several trials it went wrong, because the model tends to get confused or sometimes complains... I will wait and see for any other methods.

@Lewdiculous I use that one and sometimes Winxvideo AI, it does the job!

> Upscayl is so convenient. Can't live without it.

At first I thought I'd never use it and it was a waste of disk space, I was very wrong 😭

Screenshot_20240502-014034~2.png

1000036215_x16.png

So useful :3

> Yeah, makes sense. I tried uncensoring a Llama 3 and after several trials it went wrong, because the model tends to get confused or sometimes complains... I will wait and see for any other methods.

Uncensoring Llama 3 seems insanely hard, but a pattern I've noticed is that most of the models I've tested that are actually uncensored in instruct scenarios (not just claimed to be) have all used a chat prompt other than Llama 3 chat.

Hermes 2, Solana, Dolphin

@saishf

I've uploaded v1.6; however, the way I've been parsing the instructions is still causing issues for now.

Cut the guidelines from the last assistant prefix and paste them into the system prompt.

image.png

image.png

> @saishf
>
> I've uploaded v1.6; however, the way I've been parsing the instructions is still causing issues for now.
>
> Cut the guidelines from the last assistant prefix and paste them into the system prompt.

Lumimaid seems interesting! It has a unique style that would go well in merges :3
Screenshot_20240504-022107.png

Screenshot_20240504-023531.png

And llama3 is still cursed, worthy of death.
Screenshot_20240504-022207.png

Edit - these are using roleplay v1.6 & latest static samplers

@saishf

Lumimaid is nice.

Lumimaid + Poppy_Porpoise + InfinityRP + L3-TheSpice + SOVL ?

> @saishf
>
> Lumimaid is nice.
>
> Lumimaid + Poppy_Porpoise + InfinityRP + L3-TheSpice + SOVL ?

I noticed Lumimaid can take Alpaca almost perfectly, which is something I've never experienced with Llama 3; it'd make an awesome base for merges for that reason.
Or we could retrain LoRAs over it in Alpaca format and have a true Alpaca-Llama.

This is literally just base Alpaca with Lumimaid.
Screenshot_20240504-025149.png
It's slightly less coherent than using Llama 3 prompts, but I think it could be fixed.

Edit - For comparison you can see Poppy go insane with alpaca
Screenshot_20240504-025522.png

SOVL trained on top of Lumimaid would be nice.

I think we should move to Llama 3 coping mechanisms - Part 3

> SOVL trained on top of Lumimaid would be nice.

SOVL's heritage is quite complex 🥲

Avg Normie V2 is a model stock merge of these:

The following models were included in the merge:

ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
vicgalle/Roleplay-Llama-3-8B
cgato/L3-TheSpice-8b-v0.1.3

Then more model stock
The following models were included in the merge:

jeiku/Average_Normie_v2_l3_8B + ResplendentAI/Aura_Llama3
jeiku/Average_Normie_v2_l3_8B + ResplendentAI/Smarts_Llama3
jeiku/Average_Normie_v2_l3_8B + ResplendentAI/BlueMoon_Llama3

Then linear
The following models were included in the merge:

jeiku/Average_Test + ResplendentAI/Aura_Llama3

Then more linear
Jeiku/Average_Test_v1 + ResplendentAI/RP_Format_QuoteAsterisk_Llama3
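For anyone wanting to reproduce something like the first model-stock stage above, a mergekit config would look roughly like this. This is a sketch, not the actual recipe: the base_model and dtype are my assumptions, as the thread doesn't state them.

```yaml
# Hypothetical mergekit config for the first model_stock stage above.
# base_model and dtype are assumptions -- not given in the thread.
models:
  - model: ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B
  - model: vicgalle/Roleplay-Llama-3-8B
  - model: cgato/L3-TheSpice-8b-v0.1.3
merge_method: model_stock
base_model: meta-llama/Meta-Llama-3-8B-Instruct
dtype: bfloat16
```

The later stages (model stock again, then linear) would each be a separate config run over the previous stage's output.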
