- Beyond A*: Better Planning with Transformers via Search Dynamics Bootstrapping
  Paper • 2402.14083 • Published • 49
- Linear Transformers are Versatile In-Context Learners
  Paper • 2402.14180 • Published • 7
- Training-Free Long-Context Scaling of Large Language Models
  Paper • 2402.17463 • Published • 25
- The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits
  Paper • 2402.17764 • Published • 624
Yang Lee (innovation64)
AI & ML interests: AGI
Recent Activity
- Liked a model (about 1 hour ago): Qwen/Qwen3-32B
- Upvoted a paper (2 days ago): The Landscape of Agentic Reinforcement Learning for LLMs: A Survey
- Upvoted a paper (2 days ago): Loong: Synthesize Long Chain-of-Thoughts at Scale through Verifiers