🧠 We just implemented Andrej Karpathy's "third paradigm" for LLM learning!
System Prompt Learning (SPL) enables LLMs to automatically learn problem-solving strategies from experience, rather than relying on static prompts.
How it works: your LLM builds a database of effective strategies, selects the best ones for each problem, and refines them over time based on success rates.
The best part? All strategies are human-readable and the system gets progressively better at problem types you use frequently.
✨ Key benefits:
- Cumulative learning over time
- Transparent, inspectable strategies
- Works with any OpenAI-compatible API
⚡ Simple integration: just add the "spl-" prefix to your model name
Built as an open-source plugin in optillm. After 500 queries, our system developed 129 strategies and refined 97 of them!
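If you want to try it, here's a minimal sketch of the integration, assuming optillm is running as a local OpenAI-compatible proxy on localhost:8000; the underlying model name is just an example, and the API key handling depends on your provider setup:

```python
# Minimal sketch: calling an optillm proxy with the SPL plugin enabled.
# Assumes optillm is serving an OpenAI-compatible API at localhost:8000;
# adjust base_url and the underlying model name for your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # optillm proxy, not api.openai.com
    api_key="optillm",  # placeholder; your real provider key is configured in optillm
)

response = client.chat.completions.create(
    # The "spl-" prefix routes the request through System Prompt Learning.
    model="spl-gpt-4o-mini",
    messages=[{"role": "user", "content": "Solve: if 3x + 7 = 22, what is x?"}],
)
print(response.choices[0].message.content)
```

Because the proxy speaks the standard OpenAI API, this is the only change needed in existing client code.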
This feels like a genuine step toward AI that learns from experience while staying completely interpretable.
App-Use: Create virtual desktops for AI agents to focus on specific apps.
App-Use lets you scope agents to just the apps they need. Instead of full desktop access, say "only work with Safari and Notes" or "just control iPhone Mirroring" - visual isolation without new processes for perfectly focused automation.
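To give a feel for the workflow, here's a hypothetical sketch; the `app_use` module, `AppUse` class, and `run` method are illustrative placeholders, not the library's confirmed API:

```python
# Hypothetical sketch of scoping an agent to specific apps.
# `app_use`, `AppUse`, and `run` are illustrative names only;
# check the App-Use docs for the real API.
from app_use import AppUse  # hypothetical import

# Create a composited virtual desktop limited to two apps.
desktop = AppUse(apps=["Safari", "Notes"])

# The agent only sees Safari and Notes windows, nothing else on screen.
desktop.run("Research MCP servers in Safari and summarize the findings in Notes")
```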
Running computer-use across the entire desktop often causes agent hallucinations and loss of focus when agents see irrelevant windows and UI elements. App-Use solves this by creating composited views where agents only see what matters, dramatically improving task completion accuracy.
What you can build: Research agents working in Safari while writing agents draft in Notes, iPhone automation for messages and reminders, parallel testing across isolated app sessions, or teams of specialized agents working simultaneously without interference.
Hey! I built RAG MCP Server Space, a simple Gradio MCP server for RAG systems that lets you retrieve just the relevant results instead of passing huge contexts to your LLM.
You can use this space to integrate with your agents and improve the efficiency of your search results. Feel free to try it out and let me know if you have any feedback or questions!
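For example, you could call the Space from Python with the official MCP SDK. A minimal sketch, assuming the Space exposes Gradio's standard MCP endpoint; the Space URL and the tool name "search" are placeholders:

```python
# Minimal sketch: querying a Gradio MCP server with the MCP Python SDK.
# The Space URL and the tool name "search" are placeholders; list_tools()
# shows what the server actually exposes.
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

async def main():
    # Gradio MCP servers expose an SSE endpoint under /gradio_api/mcp/sse.
    url = "https://your-space.hf.space/gradio_api/mcp/sse"  # placeholder URL
    async with sse_client(url) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([t.name for t in tools.tools])  # discover available tools
            result = await session.call_tool("search", {"query": "vector databases"})
            print(result.content)

asyncio.run(main())
```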
CausVid LoRA V2 for Wan 2.1 is just amazing. In this tutorial video I will show you how to use the most powerful video generation model, Wan 2.1, with CausVid LoRA effortlessly. Normally, Wan 2.1 requires 50 steps to get excellent results; with CausVid LoRA we get equally excellent results in only 8 steps. Moreover, with the newest version 2, the quality is now almost identical to base Wan 2.1. I will show how to download and use it in SwarmUI, with 1-click download and application of presets. We will also leverage ComfyUI and the fastest attention (Sage Attention).