---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- sft
- roleplay
base_model: Alsebay/NarumashiRTS-7B-V2-1
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---
# Alsebay/NarumashiRTS-7B-V2-1 AWQ

- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [NarumashiRTS-7B-V2-1](https://huggingface.co/Alsebay/NarumashiRTS-7B-V2-1)
## Model Summary

> [!IMPORTANT]
> Still experimental.
A remake of [version 2](https://huggingface.co/Alsebay/NarumashiRTS-V2) in safetensors format, saved with a safer and more stable method; not much has changed (based on the model hash). To be honest, the previous version 2 was saved with an unsafe method that could apply the LoRA layers to the model twice, which made the model perform terribly. (Thanks to the Unsloth community for telling me about this :D)
- **Fine-tuned on a roughly translated dataset to improve accuracy on the TSF theme, which is not very popular. (lewd dataset)**
- **Fine-tuned from model:** SanjiWatsuki/Kunoichi-DPO-v2-7B. Thanks a lot to SanjiWatsuki :)
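
## How to use

Since `inference: false` disables the hosted widget, below is a minimal local-loading sketch for this 4-bit AWQ quant. It assumes `autoawq` and a recent `transformers` are installed; the `MODEL_ID` placeholder is hypothetical, so substitute this repository's actual Hub id.

```python
# Minimal sketch: load an AWQ-quantized checkpoint with transformers.
# Requires: pip install autoawq transformers
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "path/to/this-awq-repo"  # placeholder: use this repo's Hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
# transformers detects the AWQ quantization config in the checkpoint;
# device_map="auto" places the 4-bit weights on the available GPU(s).
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    device_map="auto",
)

prompt = "Write a short scene introduction."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```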