---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
tags:
- conversational
- roleplay
- chat
license: llama3.1
---

# Llama 3.1 8b RP Ink

![00200-3491854549.png](https://cdn-uploads.huggingface.co/production/uploads/634262af8d8089ebaefd410e/XLm9ZK0bIPyo3HooA1EPc.png)

A roleplay-focused LoRA finetune of Llama 3.1 8B Instruct. Methodology and hyperparams inspired by [SorcererLM](https://huggingface.co/rAIfle/SorcererLM-8x22b-bf16) and [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush).

Yet another model in the Ink series, following in the footsteps of [the rest of them](https://huggingface.co/collections/allura-org/ink-6772fd1442308781594bbabb).

## Dataset
The worst mix of data you've ever seen. Like, seriously, you do not want to see the things that went into this model. It's bad.

"this is like washing down an adderall with a bottle of methylated rotgut" - inflatebot

Update: I have released the (public datasets in the) data mix publicly already, so here's that.
## Quants
TODO!

## Recommended Settings
Chat template: Llama 3.1

Recommended samplers (not the be-all-end-all, try some on your own!):
- Temp 1.03 / Top A 0.3 / TFS 0.75 / Rep Pen 1.03
- Your samplers can go here! :3

(A hedged sketch of these settings as an API request is at the bottom of this card.)

## Hyperparams
### General
- Epochs = 2
- LR = 6e-5
- LR Scheduler = Cosine
- Optimizer = Paged AdamW 8bit
- Effective batch size = 16

### LoRA
- Rank = 16
- Alpha = 32
- Dropout = 0.25 (Inspiration: [Slush](https://huggingface.co/crestf411/Q2.5-32B-Slush))

(A hedged training-config sketch with these values is at the bottom of this card.)

## Credits
Humongous thanks to the people who created and curated the original data

Big thanks to all Allura members, for testing and emotional support

ilya /platonic

especially to inflatebot who made the model card's image :3
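## Example: applying the recommended samplers

Top A and TFS aren't exposed by vanilla `transformers` generation, so here's a minimal sketch of the recommended samplers above as a request to a KoboldCpp-style backend. The endpoint URL, `max_length`, and the prompt placeholder are assumptions, not part of this card; check your backend's docs for the exact parameter names it supports.

```python
import requests

# Recommended samplers from this card; everything else is an assumption.
payload = {
    "prompt": "<your Llama 3.1 chat-template-formatted prompt here>",
    "max_length": 512,    # assumption: response length, tune to taste
    "temperature": 1.03,  # Temp
    "top_a": 0.3,         # Top A
    "tfs": 0.75,          # Tail-Free Sampling
    "rep_pen": 1.03,      # Rep Pen
}

# assumption: a local KoboldCpp instance on its default port
response = requests.post("http://localhost:5001/api/v1/generate", json=payload)
print(response.json()["results"][0]["text"])
```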
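## Example: training configuration

A minimal sketch of the hyperparams above as a `peft` + `transformers` config. Only the listed values come from this card; the target modules, batch split, precision, and output path are assumptions, not the actual training setup.

```python
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                # Rank
    lora_alpha=32,       # Alpha
    lora_dropout=0.25,   # Dropout (inspired by Slush)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="rp-ink-lora",        # hypothetical path
    num_train_epochs=2,              # Epochs
    learning_rate=6e-5,              # LR
    lr_scheduler_type="cosine",      # LR Scheduler
    optim="paged_adamw_8bit",        # Optimizer
    per_device_train_batch_size=2,   # assumption: 2 * 8 grad-accum
    gradient_accumulation_steps=8,   #   steps = effective batch of 16
    bf16=True,                       # assumption
)
```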