TextHumanizer - Fine-tuned Ministral-8B-Instruct:
TextHumanizer is a fine-tuned version of the Ministral-8B-Instruct model. It is designed to transform robotic, overly formal, or synthetic-sounding AI-generated text into fluent, natural, human-like language.
This model was fine-tuned using Apple's MLX framework on a custom dataset of AI-generated text paired with humanized rewrites.
Model Details:
- Base model: mistralai/Ministral-8B-Instruct-2410
- Model size: 8B parameters
- Fine-tuned using: MLX (Apple Silicon-native training)
- Fine-tuning method: QLoRA (a setup sketch follows this list)
- Precision: float16
- Hardware used: Apple M1 MacBook Pro
- Training time: ~10 minutes
- Training steps: 200
- Batch size: 4
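QLoRA in MLX means training low-rank adapters on top of a quantized copy of the base model. The exact conversion settings used here aren't published; a minimal sketch of the quantization step with mlx_lm's convert API, with an illustrative output path, would be:

```python
from mlx_lm import convert

# Quantize the base model so LoRA adapters can be trained on top of it
# (the usual QLoRA setup in MLX). The output path is illustrative.
convert(
    "mistralai/Ministral-8B-Instruct-2410",
    mlx_path="ministral-8b-4bit",
    quantize=True,
)
```

Training itself is typically driven by the mlx_lm LoRA CLI, e.g. `python -m mlx_lm.lora --model ministral-8b-4bit --train --data ./data --batch-size 4 --iters 200`, which matches the batch size and step count listed above.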
Training Metrics:
| Metric | Value |
|---|---|
| Final Training Loss | 0.171 |
| Final Validation Loss | 0.175 |
Dataset:
- Source: Custom synthetic-to-human text pairs
- Size: 10k rows
- Structure: pairs of (synthetic_input, humanized_output); an example row is sketched after this list
- Preprocessing: Standard MLX tokenization, formatting for instruct tuning
- License: Public
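The exact schema of the custom dataset isn't published. Assuming the Instruction/Input/Response template used in the usage example below, a plausible row in the newline-delimited JSON format that mlx_lm's LoRA tooling reads might look like this (the rewrite text is invented for illustration):

```python
import json

# Hypothetical example row; the real dataset's fields and phrasing may differ.
row = {
    "text": ("Instruction: Make this sound more natural.\n"
             "Input: The individual proceeded to consume nourishment.\n"
             "Response: The person grabbed something to eat.")
}

# mlx_lm's LoRA training expects train.jsonl / valid.jsonl files in a data directory.
with open("train.jsonl", "a") as f:
    f.write(json.dumps(row) + "\n")
```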
Capabilities:
TextHumanizer is designed to:
- Improve fluency and tone of AI outputs
- Make answers sound more relatable and natural
- Polish robotic or overly formal language
MLX (Apple Silicon):
```python
from mlx_lm import load, generate

model, tokenizer = load("vishal-adithya/ministral-8B-texthumanizer")
prompt = ("Instruction: Make this sound more natural.\n"
          "Input: The individual proceeded to consume nourishment.\n"
          "Response:")
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
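For interactive use, you may prefer to cap output length and stream tokens as they are generated. A minimal sketch using mlx_lm's stream_generate; the chunk's .text field assumes a recent mlx-lm release (older versions yield plain strings):

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("vishal-adithya/ministral-8B-texthumanizer")
prompt = ("Instruction: Make this sound more natural.\n"
          "Input: We regret to inform you that your request cannot be accommodated.\n"
          "Response:")

# Print tokens as they arrive instead of waiting for the full completion.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=128):
    print(chunk.text, end="", flush=True)
print()
```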
Evaluation results (self-reported):
- Training Loss (train split): 0.171
- Validation Loss (validation split): 0.175
- Learning Rate: 0.000
- Tokens per second: 117.298