---
datasets:
  - Delta-Vector/Hydrus-Instruct-SmolTalk-V2
  - Delta-Vector/Hydrus-SonnetOrca-V2
  - Delta-Vector/Hydrus-FeedSum-ShareGPT
  - Delta-Vector/Hydrus-Tulu-Personas-Filtered-Sharegpt
  - Delta-Vector/Hydrus-No_Robots-R1-Filtered
  - Delta-Vector/Hydrus-Chat_error-Pure-Dove-sharegpt
  - Delta-Vector/Hydrus-HelpSteer2
  - Delta-Vector/Hydrus-R1-Thinking-Sharegpt
  - Delta-Vector/Hydrus-Science-QA-sharegpt
  - Delta-Vector/Hydrus-Claude-Instruct-2.7K
  - Delta-Vector/Hydrus-Claude-Instruct-5K
  - PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4
  - PocketDoc/Dans-Toolmaxx-ShellCommands
  - PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
  - PocketDoc/Dans-Logicmaxx-SAT-AP
  - PocketDoc/Dans-Benchmaxx
  - Nitral-AI/ARES-ShareGPT
  - PocketDoc/Dans-Taskmaxx-TableGPT
  - Delta-Vector/Ursa-Erebus-16K
  - Delta-Vector/Ursa-Books-Light-Novels-V1
  - NewEden/Orion-LIT
  - Delta-Vector/Ursa-Asstr-V2-18k
  - Delta-Vector/Ursa-Books-V2
  - Delta-Vector/Ursa-Scribblehub-7k
  - Delta-Vector/Ursa-Orion-EA-Comp-Filtered
  - Delta-Vector/Ursa-HoneyFeed
  - Delta-Vector/Ursa-Falling-through-the-world
base_model:
  - Delta-Vector/Sol-Reaver-15B-Instruct
base_model_relation: quantized
quantized_by: ArtusDev
tags:
  - roleplay
  - instruct
  - creative_writing
  - story-writing
  - mistral
  - exl3
---

Sol Reaver 15B

Model banner

Model Information

Sol-Reaver-15B-Instruct

15B parameters · Creative / Fresh Prose · Co-writing / Roleplay / Adventure · Generalist

The first in a new series of roleplay / adventure / co-writer models, finetuned on top of Sol-Reaver-15B-Pretrain.

This model has been trained on 200M tokens of high-quality instruct data. Its focus is to provide a base for further finetuning or merging.

Its goal is to deliver refreshing prose, creativity, good instruct following, and the *brains* to match.

Support me on Ko-Fi: https://ko-fi.com/deltavector

Quantized Versions

Available Downloads
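
The specific quant branches aren't listed in this text, but as a hedged sketch, a single EXL3 quant revision can typically be fetched with `huggingface_hub`; the `repo_id` and `revision` strings below are placeholders, not confirmed names, so check the quant repository for the real branches:

```python
# Sketch only: download one EXL3 quant revision with huggingface_hub.
# The repo_id and revision values are placeholders -- substitute the actual
# quant repository and per-bpw branch name.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ArtusDev/Sol-Reaver-15B-Instruct-EXL3",   # placeholder repo id
    revision="4.0bpw_H6",                               # placeholder branch name
    local_dir="Sol-Reaver-15B-Instruct-EXL3-4.0bpw",
)
```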

Prompting

The model has been tuned with ChatML formatting. A typical input looks like this:

```
<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
```
Samplers

For testing this model, I used Temperature = 1.0 and Min-P = 0.1.
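
In Hugging Face transformers terms, a minimal sketch of those sampler settings (assuming a recent transformers release with `min_p` support and enough memory for the full-precision weights) would look like this; other backends expose equivalent Temperature and Min-P sliders:

```python
# Sketch: the recommended samplers (Temperature 1.0, Min-P 0.1) with transformers.
# Uses the full-precision base model, since EXL3 quants need an exllamav3 backend.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Delta-Vector/Sol-Reaver-15B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "<|im_start|>user\nHi there!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    do_sample=True,
    temperature=1.0,   # Temp = 1 as used for testing
    min_p=0.1,         # Min-P = 0.1 as used for testing
    max_new_tokens=256,
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```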

See Axolotl Config

https://files.catbox.moe/u9dakg.yml

Training

Training was done for 2 epochs on 8x H200 GPUs graciously provided by Kalomaze for the fine-tuning of the model.

Credits

Thank you to Lucy Knada, Ateron, Alicat, Intervitens, Cgato, Kubernetes Bad and the rest of Anthracite.