
John Smith (PRO)

John6666

AI & ML interests

None yet

Recent Activity

published a model about 4 hours ago
John6666/dixar-2-double-the-dix-sdxl
published a model about 4 hours ago
John6666/phony-illustrious-mix-v10-sdxl

Organizations

open/ acc, Solving Real World Problems, FashionStash Group meeting, No More Copyright

John6666's activity

upvoted an article about 7 hours ago

System Prompt Learning: Teaching LLMs to Learn Problem-Solving Strategies from Experience

By codelion
reacted to codelion's post with 🚀 about 7 hours ago
🧠 We just implemented Andrej Karpathy's "third paradigm" for LLM learning!

System Prompt Learning (SPL) enables LLMs to automatically learn problem-solving strategies from experience, rather than relying on static prompts.

🚀 How it works:
Your LLM builds a database of effective strategies, selects the best ones for each problem, and refines them over time based on success rates.
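
To make the loop concrete, here is an illustrative sketch of a strategy database with selection and refinement; this is a simplified model of the idea, not the actual optillm plugin code:

```python
# Illustrative sketch of the SPL loop (strategy DB -> select -> refine).
# Not the real plugin implementation; names and scoring are simplified.
from dataclasses import dataclass

@dataclass
class Strategy:
    problem_type: str  # e.g. "algebra", "word-problem"
    text: str          # human-readable strategy added to the system prompt
    successes: int = 0
    attempts: int = 0

    @property
    def success_rate(self) -> float:
        return self.successes / self.attempts if self.attempts else 0.0

class StrategyDB:
    def __init__(self) -> None:
        self.strategies: list[Strategy] = []

    def select(self, problem_type: str, k: int = 3) -> list[Strategy]:
        # Pick the k best-performing strategies for this problem type.
        matching = [s for s in self.strategies if s.problem_type == problem_type]
        return sorted(matching, key=lambda s: s.success_rate, reverse=True)[:k]

    def record(self, strategy: Strategy, solved: bool) -> None:
        # Refinement signal: success rates shift future selection.
        strategy.attempts += 1
        strategy.successes += int(solved)
```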

📊 Results across benchmarks:
Arena Hard: 29% → 37.6% (+8.6%)
AIME24: 23.33% → 30% (+6.67%)
OptILLMBench: 61% → 65% (+4%)

The best part? All strategies are human-readable and the system gets progressively better at problem types you use frequently.

✨ Key benefits:
🔄 Cumulative learning over time
📖 Transparent, inspectable strategies
🔌 Works with any OpenAI-compatible API
⚡ Simple integration: just add the "spl-" prefix to your model name (see the usage sketch below)
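
A minimal usage sketch, assuming an optillm proxy running locally (port 8000 here) in front of an OpenAI-compatible backend; the upstream model name is just an example:

```python
# Hedged sketch: assumes optillm is serving an OpenAI-compatible API at
# localhost:8000; the "spl-" prefix routes the request through the SPL plugin.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="sk-...")

response = client.chat.completions.create(
    model="spl-gpt-4o-mini",  # example upstream model; the prefix is what matters
    messages=[{"role": "user", "content": "Solve: if 3x + 7 = 22, what is x?"}],
)
print(response.choices[0].message.content)
```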

Built as an open-source plugin in optillm. After 500 queries, our system developed 129 strategies and refined 97 of them!

This feels like a genuine step toward AI that learns from experience while staying completely interpretable.

🔗 GitHub: https://github.com/codelion/optillm/tree/main/optillm/plugins/spl
📖 Full article: https://huggingface.co/blog/codelion/system-prompt-learning
🐦 Original Karpathy tweet: https://x.com/karpathy/status/1921368644069765486

Have you experimented with advanced system prompting? What strategies would you want your LLM to learn?
reacted to dhruv3006's post with 🚀 about 7 hours ago
App-Use: Create virtual desktops for AI agents to focus on specific apps.

App-Use lets you scope agents to just the apps they need. Instead of granting full desktop access, you can say "only work with Safari and Notes" or "just control iPhone Mirroring": visual isolation without new processes, for perfectly focused automation.

Running computer-use agents on the entire desktop often causes hallucinations and loss of focus when they see irrelevant windows and UI elements. App-Use solves this by creating composited views where agents only see what matters, dramatically improving task completion accuracy.

What you can build: Research agents working in Safari while writing agents draft in Notes, iPhone automation for messages and reminders, parallel testing across isolated app sessions, or teams of specialized agents working simultaneously without interference.
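
A purely hypothetical sketch of what scoping could look like in code; the import, function, and parameter names below are illustrative placeholders, not the actual App-Use API (see the guide linked below for the real interface):

```python
# Hypothetical sketch only -- names are placeholders, not the real App-Use API.
from app_use import create_desktop  # hypothetical import

# Scope the agent to Safari and Notes: it gets a composited view containing
# only those apps' windows, with no new processes spawned.
desktop = create_desktop(apps=["Safari", "Notes"])

# The agent screenshots and controls only this scoped view.
agent = desktop.create_agent(task="Research in Safari, draft notes in Notes")
agent.run()
```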

Currently macOS-only (Quartz compositing engine).

Read the full guide: https://trycua.com/blog/app-use

Github : https://github.com/trycua/cua
reacted to frascuchon's post with 👍 about 8 hours ago
Hey! I built the RAG MCP Server Space, a simple Gradio MCP server for RAG systems that lets you search for relevant results without passing huge contexts to your LLM.

You can use this space to integrate with your agents and improve the efficiency of your search results. Feel free to try it out and let me know if you have any feedback or questions!

frascuchon/rag-mcp-server
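
A connection sketch using the Python MCP SDK, assuming the Space exposes the standard Gradio MCP SSE endpoint; the Space URL below is inferred from the usual Hugging Face Space hosting pattern, and the tool names are discovered at runtime rather than assumed:

```python
# Sketch: list the tools exposed by the Gradio MCP server over SSE.
# Assumes the standard /gradio_api/mcp/sse endpoint for Gradio MCP apps.
import asyncio
from mcp import ClientSession
from mcp.client.sse import sse_client

SPACE_URL = "https://frascuchon-rag-mcp-server.hf.space/gradio_api/mcp/sse"

async def main() -> None:
    async with sse_client(SPACE_URL) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])  # discover the search tool

asyncio.run(main())
```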

Thanks for checking it out!
reacted to MonsterMMORPG's post with 👀 about 9 hours ago
CausVid LoRA V2 of Wan 2.1 Brings Massive Quality Improvements, Better Colors and Saturation > https://youtu.be/1rAwZv0hEcU

Tutorial video : https://youtu.be/1rAwZv0hEcU

CausVid LoRA V2 of Wan 2.1 is just amazing. In this tutorial video I show you how to use the most powerful video generation model, Wan 2.1, with the CausVid LoRA effortlessly. Normally, Wan 2.1 requires 50 steps to get excellent results; with the CausVid LoRA we get such excellent results in only 8 steps. Moreover, with the newest version 2, the quality is now almost identical to base Wan 2.1. I will show how to download and use it in SwarmUI, with 1-click presets to download and apply everything. We will also leverage ComfyUI and the fastest attention implementation (Sage Attention).
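
For readers who prefer code to a UI, here is a minimal diffusers-style sketch of the "8 steps instead of 50" idea; the tutorial itself uses SwarmUI/ComfyUI, and the LoRA path below is a placeholder since the exact diffusers-compatible CausVid weights are not specified here:

```python
# Hedged sketch: Wan 2.1 text-to-video with a distillation-style LoRA.
# The LoRA file path is a placeholder; low step count + guidance_scale=1.0
# is the usual recipe for CausVid-style distilled sampling.
import torch
from diffusers import WanPipeline
from diffusers.utils import export_to_video

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("path/to/causvid_lora.safetensors")  # placeholder path

frames = pipe(
    prompt="a red fox running through snowy woods",
    num_inference_steps=8,   # vs. ~50 steps for the base model
    guidance_scale=1.0,      # distilled LoRAs typically drop CFG
).frames[0]
export_to_video(frames, "fox.mp4", fps=16)
```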

🔗 Follow below link to download the zip file that contains SwarmUI installer and AI models downloader Gradio App - the one used in the tutorial ⤵️
▶️ https://www.patreon.com/posts/SwarmUI-Installer-AI-Videos-Downloader-114517862

▶️ CausVid Main Tutorial : https://youtu.be/fTzlQ0tjxj0

▶️ How to install SwarmUI main tutorial : https://youtu.be/fTzlQ0tjxj0

🔗 Follow below link to download the zip file that contains ComfyUI 1-click installer that has all the Flash Attention, Sage Attention, xFormers, Triton, DeepSpeed, RTX 5000 series support ⤵️
▶️ https://www.patreon.com/posts/Advanced-ComfyUI-1-Click-Installer-105023709

🔗 Python, Git, CUDA, C++, FFMPEG, MSVC installation tutorial - needed for ComfyUI ⤵️
▶️ https://youtu.be/DrhUHnYfwC0

🔗 SECourses Official Discord 10500+ Members ⤵️
▶️ https://discord.com/servers/software-engineering-courses-secourses-772774097734074388

🔗 Stable Diffusion, FLUX, Generative AI Tutorials and Resources GitHub ⤵️
▶️ https://github.com/FurkanGozukara/Stable-Diffusion

🔗 SECourses Official Reddit - Stay Subscribed To Learn All The News and More ⤵️
▶️ https://www.reddit.com/r/SECourses/