Testing Italian-to-English and English-to-Italian again, with a Qwen2.5 3B fine-tune, Whisper, and TowerInstruct-7B-v0.2.

https://huggingface.co/blog/mrs83/kurtis-e11-qwen25-3b-instruct
Question for Kurtis: Do you think training AI models on copyrighted material without consent is similar to the exploitation of nature in Princess Mononoke? In the film, the forest gods are sacrificed for industrial gain. Are artists today the forest spirits of our digital age, being consumed to fuel the progress of AI? How can we build AI that respects creative ecosystems instead of draining them?
Kurtis replies: ... Artists today do represent our digital age's "forest gods" or creators whose works can be used for various purposes, often including the development of cutting-edge technologies like AI. However, this does not negate their right to consent and control over their creative output. The exploitation you describe in Princess Mononoke could mirror scenarios where intellectual property is mishandled, whether that involves direct theft or misappropriation ...

Testing it right now with https://huggingface.co/Unbabel/TowerInstruct-7B-v0.2
Text is translated from German to English, sent to the LLM, and the output is translated back from English to German.
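For reference, a minimal sketch of that round trip, assuming the standard transformers text-generation pipeline and TowerInstruct's chat template; the run_llm() placeholder, the example sentence, and the generation settings are illustrative assumptions, not the exact code used here:

```python
# Sketch of the DE -> EN -> LLM -> EN -> DE round trip with TowerInstruct.
import torch
from transformers import pipeline

translator = pipeline(
    "text-generation",
    model="Unbabel/TowerInstruct-7B-v0.2",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

def translate(text: str, src: str, tgt: str) -> str:
    """Translate text with TowerInstruct via its chat template."""
    messages = [{
        "role": "user",
        "content": f"Translate the following text from {src} into {tgt}.\n{src}: {text}\n{tgt}:",
    }]
    prompt = translator.tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    out = translator(prompt, max_new_tokens=256, do_sample=False)
    # Keep only the newly generated continuation after the prompt.
    return out[0]["generated_text"][len(prompt):].strip()

def run_llm(prompt_en: str) -> str:
    # Placeholder: call the English-speaking assistant (e.g. Kurtis) here.
    raise NotImplementedError

user_text_de = "Wie kann ich besser schlafen?"  # example input (assumption)
question_en = translate(user_text_de, "German", "English")
answer_en = run_llm(question_en)
answer_de = translate(answer_en, "English", "German")
print(answer_de)
```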

An offline, privacy-first voice assistant built for macOS (Apple Silicon), designed for empathetic, short-form interactions.
Powered by:
- Whisper (via MLX) for speech-to-text: https://pypi.org/project/mlx-whisper/
- Kurtis-E1 (a custom SmolLM2 LLM) via Ollama
- Coqui-TTS XTTSv2 for multilingual TTS
- Optional translation layer via TowerInstruct-13B-v0.1 for non-English voice input/output: Unbabel/TowerInstruct-13B-v0.1
Everything runs entirely on-device (Mac Mini M4 Max, 24 GB): no cloud, no remote API calls, no data leakage.
Code is fully handcrafted (no AI-generated code) and designed to showcase what's possible with local models, even on laptops.
Open to contributions and ideas (e.g., LM Studio for MLX inference, an MLX worker subprocess, optimizing for latency and VRAM usage). A rough pipeline sketch is included at the end of this post.
Video demo (Italian): https://www.youtube.com/watch?v=8-1PcmUStaI
PoC: https://github.com/ethicalabs-ai/Kurtis-E1-MLX-Voice-Agent
Kurtis-E1: ethicalabs/kurtis-e1-67a9148e0836885c44c7902c
Kurtis-E1 WebGPU: ethicalabs/Kurtis-E1-WebGPU
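For anyone curious how the pieces fit together, here is a minimal sketch of the STT -> LLM -> TTS loop, assuming mlx-whisper's transcribe() API, the ollama Python client, and Coqui TTS's XTTS v2. The Ollama model tag "kurtis-e1", the Whisper repo "mlx-community/whisper-large-v3-mlx", and the speaker reference clip are assumptions, not the project's actual configuration:

```python
# Rough sketch of the on-device voice loop: speech-to-text, short-form reply, text-to-speech.
import mlx_whisper
import ollama
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

def answer_voice_query(audio_path: str, language: str = "en") -> str:
    # 1. Speech-to-text with Whisper running on MLX.
    stt = mlx_whisper.transcribe(
        audio_path, path_or_hf_repo="mlx-community/whisper-large-v3-mlx"
    )
    user_text = stt["text"].strip()

    # 2. Short-form reply from the local LLM served by Ollama.
    reply = ollama.chat(
        model="kurtis-e1",  # assumed local model name
        messages=[{"role": "user", "content": user_text}],
    )["message"]["content"]

    # (Optional) For non-English input, a TowerInstruct round trip would
    # translate user_text to English here and the reply back afterwards.

    # 3. Text-to-speech with XTTS v2 (speaker_wav is a short reference clip).
    out_path = "reply.wav"
    tts.tts_to_file(
        text=reply,
        speaker_wav="speaker_reference.wav",
        language=language,
        file_path=out_path,
    )
    return out_path

# Example: answer_voice_query("question_it.wav", language="it")
```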


Code: https://github.com/Jaykef/ai-algorithms/blob/main/smollm2_360M_135M_grpo_gsm8k.ipynb