---
license: apache-2.0
---
# ArliAI-RPMax-12B-v1.1
|
|
|
## Overview
|
|
|
This model is based on Mistral-Nemo-Base-2407 (https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) and is governed by the Apache 2.0 license.
|
|
|
## Model Description
|
|
|
ArliAI-RPMax-12B-v1.1 is trained on a diverse set of curated RP datasets with a focus on variety and deduplication. The training approach is designed to make the model highly creative and non-repetitive.
|
|
|
You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/
|
|
|
### Training Details
|
|
|
* **Sequence Length**: 8192
* **Training Duration**: Approximately 2 days on 2x RTX 3090 Ti
* **Epochs**: 1 epoch, to minimize repetition sickness
* **QLoRA**: 64-rank, 128-alpha, resulting in ~2% trainable weights (see the configuration sketch after this list)
* **Learning Rate**: 0.00001
* **Gradient Accumulation**: A low value of 32, for better learning
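
As a rough illustration of the QLoRA settings above, here is a minimal configuration sketch using the Hugging Face `peft` and `transformers` libraries. The dropout value, the 4-bit quantization settings, and the choice of target modules (left at library defaults) are assumptions for illustration; this card does not state the exact training recipe.

```python
# Minimal QLoRA configuration sketch matching the rank/alpha listed above.
# Assumes the peft, transformers, and bitsandbytes libraries are installed;
# dropout and 4-bit settings are illustrative assumptions, not this card's recipe.
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # the "Q" in QLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",              # assumed; a common QLoRA default
    bnb_4bit_compute_dtype=torch.bfloat16,  # assumed compute dtype
)

lora_config = LoraConfig(
    r=64,               # 64-rank adapters, per the list above
    lora_alpha=128,     # 128-alpha, per the list above
    lora_dropout=0.05,  # assumed value, not stated in this card
    bias="none",
    task_type="CAUSAL_LM",
)
```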
|
|
|
## Quantization
|
|
|
The model is available in FP16 and quantized GGUF formats (a loading sketch follows the list):
|
|
|
* **FP16**: https://huggingface.co/ArliAI/ArliAI-RPMax-12B-v1.1
* **GGUF**: https://huggingface.co/ArliAI/ArliAI-RPMax-12B-v1.1-GGUF
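
For reference, a minimal sketch of loading the FP16 weights with the `transformers` library; the model ID comes from the FP16 link above, while the dtype and device-map settings are common defaults rather than settings this card prescribes.

```python
# Minimal sketch: loading the FP16 weights with transformers.
# The model ID is taken from the FP16 link above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArliAI/ArliAI-RPMax-12B-v1.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # FP16, matching the repository above
    device_map="auto",          # assumed convenience setting
)
```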
|
|
|
## Suggested Prompt Format
|
|
|
Use the Mistral Instruct prompt format, as in the sketch below.
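
A minimal sketch of producing that format programmatically, assuming the bundled tokenizer ships a Mistral-style chat template; the example message is purely illustrative.

```python
# Minimal sketch of the Mistral Instruct prompt format, built via the
# tokenizer's chat template (assumes a Mistral-style template is bundled).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ArliAI/ArliAI-RPMax-12B-v1.1")
messages = [{"role": "user", "content": "Describe your character."}]  # illustrative
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # expected shape: <s>[INST] Describe your character. [/INST]
```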
|
|