# NarrativAIV2
This model is a fine-tuned version of LLaMA 3.1, trained on a curated dataset of interactive roleplaying scenarios.
## Model Description
- Base Model: LLaMA 3.1
- Training Data: A diverse dataset of 977 fictional scenarios featuring engaging characters in various settings, emphasizing emotional depth and complex interactions.
- Fine-tuning Method: This model was fine-tuned using supervised learning, focusing on continuing the given roleplay prompt in a consistent, immersive manner.
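A minimal loading-and-generation sketch is shown below, assuming the weights are published on the Hugging Face Hub and loadable with the `transformers` library. The repository id is a placeholder, and the prompt and sampling values are illustrative rather than settings confirmed by this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NarrativAIV2"  # placeholder repo id; replace with the actual repository path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    device_map="auto",
)

# The model is trained to continue a roleplay prompt in a consistent, in-character voice.
prompt = (
    "The tavern door creaks open and a hooded figure steps out of the rain. "
    "You look up from your drink as she approaches your table."
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```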
## Limitations
- Bias: The model's responses may reflect biases present in the training data.
- Factual Accuracy: The model is not designed to provide factual information and may generate inaccurate statements.
- Repetitive Responses: The model may occasionally produce repetitive or predictable continuations; adjusting generation-time sampling settings can help (see the sketch below).
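The sketch below reuses the `model` and `inputs` from the earlier example and shows sampling settings that commonly reduce repetition in `transformers`. The parameter values are illustrative assumptions, not tuned recommendations from the model authors.

```python
# Generation settings that often reduce repetition; values are illustrative only.
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.9,          # higher temperature flattens the token distribution
    top_p=0.92,               # nucleus sampling keeps only the most probable mass
    repetition_penalty=1.1,   # penalizes tokens that have already appeared
    no_repeat_ngram_size=3,   # blocks verbatim repetition of 3-grams
)
```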
## License
This model is released under the MIT license.