
Model Card: Minimalist Assistant

Model Details

  • Base Model: Mistral Instruct v2
  • Tokenizer: based on the Mistral Instruct tokenizer
  • Model size: 7.24B parameters (Safetensors; F32 and FP16 tensors)

Intended Use

  • Serves as an editing assistant for revision and paraphrasing (see the usage sketch after this list)
  • Avoids technical jargon in favor of clear, accessible language
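The sketch below shows one way to send a rewriting request to the model with the Hugging Face transformers library. The repository id kevin009/minirewrite is taken from this card; the prompt text and generation settings are illustrative assumptions, not recommended defaults.

```python
# Minimal usage sketch (assumes transformers with chat-template support
# and the repository id kevin009/minirewrite from this card).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kevin009/minirewrite"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Ask for a plain-language rewrite; the prompt here is only an example.
messages = [{
    "role": "user",
    "content": (
        "Rewrite in plain language: The aforementioned methodology "
        "necessitates further elucidation prior to implementation."
    ),
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```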

Training Data

  • Initial Training: 14,000 conversations written in a minimalist style with accessible language
    • Dataset: kevin009/system-defined-sft-llama3-14k (see the sketch after this list)
  • Further Training: 8,000 revision conversations to strengthen rewriting and paraphrasing.
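The initial SFT dataset named above can be inspected with the datasets library. The column layout is not documented in this card, so the sketch below only prints the first record instead of assuming a schema; the existence of a "train" split is also an assumption.

```python
# Minimal sketch: inspecting the initial SFT dataset.
# Assumes the dataset exposes a "train" split; no column schema is assumed.
from datasets import load_dataset

ds = load_dataset("kevin009/system-defined-sft-llama3-14k", split="train")
print(ds)      # size and column names
print(ds[0])   # first conversation record
```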

Performance and Limitations

  • Limitations:
    • May produce shorter outputs than the base model.
    • May reflect biases present in its training data.

Ethical Considerations

  • Designed for everyday use; potential biases inherited from the training data should be considered
  • The model does not implement safety measures to prevent the generation of potentially harmful or offensive content

Additional Information

  • Fine-tuned to address limitations in writing tasks observed in other models
  • Personalized for everyday use cases
  • Developed because existing models were found lacking for writing tasks
  • Trained with supervised fine-tuning (SFT)
