Forgetting Transformer: Softmax Attention with a Forget Gate (arXiv:2503.02130, published Mar 3, 2025)
L^2M: Mutual Information Scaling Law for Long-Context Language Modeling (arXiv:2503.04725, published Mar 6, 2025)