C. Emre Karataş

emredeveloper

AI & ML interests

LLM

Recent Activity

liked a model about 9 hours ago
deepseek-ai/DeepSeek-R1-Distill-Qwen-32B
liked a model about 11 hours ago
Wan-AI/Wan2.1-T2V-1.3B
liked a Space about 15 hours ago
Wan-AI/Wan2.1

Organizations

None yet

emredeveloper's activity

upvoted an article 1 day ago
FastRTC: The Real-Time Communication Library for Python (77 upvotes)
reacted to wassemgtk's post with 🤗🔥 1 day ago
# GESAL: Real-Time Adaptation for LLMs


We’re excited to unveil **Graph-Enhanced Singular Adaptive Learning (GESAL)**, a framework that lets LLMs like meta-llama/Llama-3.2-1B adapt in real time using user feedback. Check out the code and white paper on GitHub!

🔗 **Code**: [https://github.com/writer/AI-Adaptive-Learning-GESAL](https://github.com/writer/AI-Adaptive-Learning-GESAL)

---

## Why GESAL?

Static LLMs struggle to adapt without heavy retraining. GESAL solves this with:
- **SVF**: adapts each weight matrix via \( W' = U (\Sigma \cdot z) V^T \), training only the small scaling vector \( z \) instead of the full weights (a minimal sketch follows this list).
- **Graph Memory**: stores learned adaptations in graph nodes so they can be reused and scaled.
- **RL**: updates \( z \) via \( J(z) = \mathbb{E}[\log \pi_z(y|x)\, r] \), driven by user feedback.
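
As a rough illustration of the SVF reparameterization (not GESAL's actual code; the class name and shapes below are assumptions), a frozen weight can be decomposed once and then adapted through a single trainable vector `z`:

```python
import torch
import torch.nn as nn

class SVFLinear(nn.Module):
    """Wraps a frozen weight W and reparameterizes it as
    W' = U diag(S * z) V^T, where only the vector z is trainable."""

    def __init__(self, weight: torch.Tensor):
        super().__init__()
        # Factorize the frozen weight once: W = U diag(S) V^T
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("U", U)
        self.register_buffer("S", S)
        self.register_buffer("Vh", Vh)
        # z rescales the singular values; initialized to 1 so W' == W
        self.z = nn.Parameter(torch.ones_like(S))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reconstruct the adapted weight W' = U diag(S * z) V^T
        W_adapted = self.U @ torch.diag(self.S * self.z) @ self.Vh
        return x @ W_adapted.T

# Example: adapting a 64x64 weight exposes only 64 trainable parameters
layer = SVFLinear(torch.randn(64, 64))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 64
```

Feedback would then update `z` alone, e.g. with a REINFORCE-style gradient matching the \( J(z) \) objective above, which is why each adaptation touches so few parameters.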

---

## How It Works

Ask "How many R’s in ‘strawberry’?" If it says "2" and you say "no," GESAL learns to say "3" next time, avoiding repeats.

---

## Try It

Built with Hugging Face’s `transformers`:

    pip install transformers torch numpy
    python Adaptive_Learning_(GESAL).py

Needs a Hugging Face token for Llama-3.2-1B.
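
If you haven't authenticated yet, one generic way to do it is with a standard `huggingface_hub` login (this snippet is not part of the GESAL script; the token string is a placeholder):

```python
from huggingface_hub import login
from transformers import AutoModelForCausalLM, AutoTokenizer

# Authenticate once with your personal access token
# (equivalently, run `huggingface-cli login` in a shell).
login(token="hf_...")  # placeholder token

# Llama-3.2-1B is gated: your account must have accepted its license on the Hub.
model_id = "meta-llama/Llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```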

---

## Results

GESAL reaches 95% accuracy after 5 rounds of feedback, versus 70% for LoRA, while staying efficient (~0.5M parameters) and scalable.
upvoted an article 2 days ago
Remote VAEs for decoding with HF endpoints 🤗 (28 upvotes)
upvoted an article 5 days ago
Small Language Models (SLMs): A Comprehensive Overview, by jjokah (12 upvotes)
upvoted an article 6 days ago
SigLIP 2: A better multilingual vision language encoder (95 upvotes)