SageAttention2 Technical Report: Accurate 4 Bit Attention for Plug-and-play Inference Acceleration
Paper • 2411.10958 • Published • 56

SpargeAttn: Accurate Sparse Attention Accelerating Any Model Inference
Paper • 2502.18137 • Published • 57

SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training
Paper • 2505.11594 • Published • 75

SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration
Paper • 2410.02367 • Published • 50
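The papers above share a common idea: replace full-precision attention matmuls with low-bit integer ones plus a floating-point rescale. As a rough illustration only (not the authors' implementation; the kernels, block granularity, and smoothing steps in the papers differ), symmetric per-tensor INT8 quantization of Q and K looks like this:

```python
# Illustrative sketch of INT8-quantized attention scores.
# This is NOT the SageAttention code; it only shows the generic
# quantize -> integer matmul -> dequantize pattern the papers build on.
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor INT8 quantization: int8 values plus one scale."""
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_scores(Q, K):
    """Approximate Q @ K.T via an INT8 matmul and a float rescale."""
    q_q, s_q = quantize_int8(Q)
    k_q, s_k = quantize_int8(K)
    # Accumulate in int32 to avoid overflow, then dequantize.
    s_int = q_q.astype(np.int32) @ k_q.astype(np.int32).T
    return s_int.astype(np.float32) * (s_q * s_k)

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8)).astype(np.float32)
K = rng.standard_normal((4, 8)).astype(np.float32)
exact = Q @ K.T
approx = int8_scores(Q, K)
err = float(np.abs(exact - approx).max())
```

The integer matmul is where the speedup comes from on tensor-core hardware; the papers' contribution is keeping `err` small enough that model outputs are unchanged.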
Jintao Zhang
jt-zhang
AI & ML interests: Efficient ML
Recent Activity
- New activity (22 days ago): huggingface/HuggingDiscussions — [FEEDBACK] Daily Papers
- New activity (about 2 months ago): jt-zhang/SageAttention3 — "Any improvement on Ada Lovelace (RTX 4xxx)?"
- Updated a collection (about 2 months ago): efficient ml