---
license: llama3.1
---

Llama-3.1-70B-ArliAI-RPMax-v1.1
===============================

Overview
--------

This repository is based on the Meta-Llama-3.1-70B-Instruct model and is governed by the Meta Llama 3.1 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3.1-70B-Instruct

Model Description
-----------------

Llama-3.1-70B-ArliAI-RPMax-v1.1 is a variant of the Meta-Llama-3.1-70B-Instruct model, trained on a diverse set of curated RP datasets with a focus on variety and deduplication. The model is designed to be highly creative and non-repetitive, thanks to a training approach built to minimize repetition.

This version is an early test of the 70B model, trained with only a short sequence length; we are planning to run another training with a higher sequence length.

You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/

Let us know what you think of the model! The 8B and 12B versions of RPMax received great feedback from users, so we expect this 70B version to be one of the best RP models.

Training Details
----------------

* Sequence Length: 4096
* Training Duration: Approximately 5 days on 2x RTX 3090 Ti
* Epochs: 1 epoch, to minimize repetition sickness
* LoRA: rank 64, alpha 128, resulting in ~2% trainable weights (see the configuration sketch below)
* Learning Rate: 0.00001
* Gradient Accumulation: A very low 32, for better learning
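
The card lists hyperparameters but not the training code. As a rough, non-authoritative sketch, these settings map onto a Hugging Face peft/transformers setup along the lines below; the target modules, per-device batch size, output path, and precision are assumptions rather than details from the card.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA adapter config matching the card: rank 64, alpha 128 (~2% trainable weights).
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed, not stated in the card
    task_type="CAUSAL_LM",
)

# Trainer arguments matching the card: 1 epoch, lr 0.00001, gradient accumulation 32.
training_args = TrainingArguments(
    output_dir="rpmax-70b-lora",      # hypothetical path
    num_train_epochs=1,               # single epoch to minimize repetition sickness
    learning_rate=1e-5,               # 0.00001
    gradient_accumulation_steps=32,
    per_device_train_batch_size=1,    # assumed; the 4096 sequence length is applied at tokenization
    bf16=True,                        # assumed precision
)
```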

Quantization
------------

The model is available in quantized formats.
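
Independent of any pre-quantized files, one common way to fit the 70B weights on limited VRAM is to quantize on the fly at load time with bitsandbytes. A minimal sketch, assuming the Hugging Face repo id matches the model name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Repo id assumed from the model name; adjust to the actual Hugging Face path.
model_id = "ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1"

# On-the-fly 4-bit quantization via bitsandbytes (one of several options).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard the 70B weights across available GPUs
)
```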

Suggested Prompt Format
-----------------------

Llama 3 Instruct Format

Example:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are [character]. You have a personality of [personality description]. [Describe scenario]<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
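
Because this is the standard Llama 3 Instruct format, the tokenizer's built-in chat template should produce it automatically. A minimal sketch, again assuming the repo id matches the model name; the user turn is a hypothetical placeholder:

```python
from transformers import AutoTokenizer

# Repo id assumed from the model name; adjust to the actual Hugging Face path.
tokenizer = AutoTokenizer.from_pretrained("ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1")

messages = [
    {"role": "system", "content": "You are [character]. You have a personality of "
                                  "[personality description]. [Describe scenario]"},
    {"role": "user", "content": "Hi there!"},  # hypothetical user turn
]

# The Llama 3 chat template emits the <|start_header_id|>/<|eot_id|> layout shown above.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```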