ChatML format. The dataset contains roughly 1,400 entries ranging from 8k to 16k tokens, split three ways between long-context multi-turn chat, long-context summarization, and writing analysis. Full fine-tune using a linear RoPE scale factor of 2.0, trained for five epochs at a learning rate of 1e-5.
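For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch of how entries in that format are rendered (the helper name `to_chatml` is hypothetical, not part of this repo):

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts as a ChatML string.

    Each turn becomes:  <|im_start|>{role}\n{content}<|im_end|>
    """
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    text = "\n".join(parts)
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        text += "\n<|im_start|>assistant\n"
    return text

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the chapter."},
])
```

In practice the same result comes from the tokenizer's built-in chat template via `tokenizer.apply_chat_template(...)`; the sketch just makes the wire format explicit.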

Model tree for openerotica/Llama-3-lima-nsfw-16k-test
