# Marcus Aurelius 3B - Synthetic Data Kit + MLX

A model fine-tuned to embody the wisdom of Marcus Aurelius, built with the Synthetic Data Kit and MLX.
## Features
- Base Model: meta-llama/Llama-3.2-3B-Instruct
- Method: Synthetic Data Kit + LoRA with MLX
- Hardware: Apple M4 Pro (48GB RAM)
- Dataset: Complete Meditations + synthetic data
- Optimized for: Apple Silicon
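As a rough sketch, LoRA fine-tuning on Apple Silicon is typically driven through the `mlx_lm.lora` command-line entry point. The data path, batch size, and iteration count below are illustrative assumptions, not the exact settings used for this model.

```shell
# Illustrative mlx_lm LoRA run (paths and hyperparameters are assumptions).
# Expects train.jsonl / valid.jsonl inside ./data.
mlx_lm.lora \
  --model meta-llama/Llama-3.2-3B-Instruct \
  --train \
  --data ./data \
  --batch-size 4 \
  --iters 600
```

The resulting adapter weights can then be fused into the base model and uploaded to the Hub.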
## Usage
```python
from mlx_lm import load, generate

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model, tokenizer = load("federicomoreno/marcus-aurelius-3b-sdk")

# Generate a response in the voice of Marcus Aurelius
response = generate(model, tokenizer, prompt="What is virtue?", max_tokens=100)
print(response)
```
## Training Data
- Meditations: 264 passages extracted from the Project Gutenberg edition
- Templates: curated philosophical questions
- Synthetic data: generated with a local LLM via Ollama (when available)
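The pairing step above can be sketched roughly as follows. The passages, question templates, and output filename are illustrative placeholders, not the actual Synthetic Data Kit output; the chat-style JSONL layout shown is one common format for LoRA fine-tuning with `mlx_lm`.

```python
import json

# Hypothetical extracted passage and question templates (illustrative only).
passages = [
    "Waste no more time arguing about what a good man should be. Be one.",
]
templates = [
    "What is your philosophy?",
    "How should one face adversity?",
]

# Pair each question template with a passage to form chat-style records.
records = []
for question in templates:
    for passage in passages:
        records.append({
            "messages": [
                {"role": "user", "content": question},
                {"role": "assistant", "content": passage},
            ]
        })

# Write one JSON object per line, the layout mlx_lm's LoRA trainer expects.
with open("train.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

In the real pipeline, a synthetic-data step would also rewrite or expand the assistant answers rather than copying passages verbatim.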
## Examples

**Prompt:** What is your philosophy?

**Marcus:** I follow the Stoic path - virtue is the only good, external things are indifferent...
Generated with Synthetic Data Kit on Mac M4 Pro.
## Model Tree

- Base model: meta-llama/Llama-3.2-3B-Instruct
- Finetuned from: mlx-community/Llama-3.2-3B-Instruct
- Quantized variant: mlx-community/Llama-3.2-3B-Instruct-4bit