SageAttention 2++ Pre-compiled Wheel

πŸš€ Ultra-fast attention mechanism with 2-3x speedup over FlashAttention2

Pre-compiled Python wheel for high-performance GPU inference, optimized for RTX 4090 and CUDA 12.8+.

πŸš€ Quick Installation

Method 1: Direct Pip Install (Recommended)

wget https://huggingface.co/ModelsLab/Sage_2_plus_plus_build/resolve/main/sageattention-2.2.0-cp311-cp311-linux_x86_64.whl

pip install sageattention-2.2.0-cp311-cp311-linux_x86_64.whl
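The wheel's filename encodes its compatibility requirements: CPython 3.11 (`cp311`) on Linux x86_64. Before downloading, you can check that your interpreter matches those tags. The sketch below is a simplified filename parser for illustration (it handles only simple single-tag wheels, not the full compressed-tag syntax of the wheel spec); the `wheel_compatible` helper is not part of any library.

```python
import platform
import sys


def wheel_compatible(wheel_name: str) -> bool:
    """Rough check that the current interpreter matches a wheel's tags.

    Wheel filename format: {dist}-{version}-{python tag}-{abi tag}-{platform tag}.whl
    (Simplified: assumes one tag per field, as in this wheel's filename.)
    """
    stem = wheel_name[: -len(".whl")]
    dist, version, py_tag, abi_tag, plat_tag = stem.rsplit("-", 4)
    this_py = f"cp{sys.version_info.major}{sys.version_info.minor}"
    this_plat = f"{sys.platform}_{platform.machine()}"  # e.g. "linux_x86_64"
    return py_tag == this_py and plat_tag == this_plat


wheel = "sageattention-2.2.0-cp311-cp311-linux_x86_64.whl"
if not wheel_compatible(wheel):
    print(f"Warning: {wheel} does not match this interpreter/platform")
```

After a successful install, `python -c "import sageattention"` should complete without error on a machine with a compatible CUDA setup.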