SageAttention 2++ Pre-compiled Wheel
Ultra-fast attention mechanism with a 2-3x speedup over FlashAttention2
Pre-compiled Python wheel for high-performance GPU inference, optimized for RTX 4090 and CUDA 12.8+.
Quick Installation
Method 1: Direct Pip Install (Recommended)
Download the wheel, then install it with pip (requires Python 3.11 on Linux x86_64, matching the `cp311`/`linux_x86_64` tags in the filename):
wget https://huggingface.co/ModelsLab/Sage_2_plus_plus_build/resolve/main/sageattention-2.2.0-cp311-cp311-linux_x86_64.whl
pip install sageattention-2.2.0-cp311-cp311-linux_x86_64.whl
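Once installed, the package exposes `sageattn` as a drop-in replacement for standard scaled-dot-product attention. The sketch below is a minimal usage example, assuming the `sageattn(q, k, v, is_causal=...)` call shape of SageAttention 2.x; it guards the GPU path so it degrades gracefully on machines without CUDA or without the wheel installed.

```python
# Minimal usage sketch (assumes the sageattn API of SageAttention 2.x).
# The kernel itself requires a CUDA GPU, e.g. an RTX 4090.
try:
    import torch
    from sageattention import sageattn

    HAVE_SAGE = torch.cuda.is_available()
except ImportError:  # wheel (or torch) not installed in this environment
    HAVE_SAGE = False

if HAVE_SAGE:
    # Typical attention shapes: (batch, heads, seq_len, head_dim), fp16 on GPU.
    q = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")
    k = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")
    v = torch.randn(2, 8, 1024, 64, dtype=torch.float16, device="cuda")
    out = sageattn(q, k, v, is_causal=False)  # output has the same shape as q
    assert out.shape == q.shape
else:
    print("sageattention or CUDA unavailable; skipping GPU demo")
```

In an existing model, this typically means replacing calls to `torch.nn.functional.scaled_dot_product_attention` with `sageattn` at the attention layer, keeping the same tensor shapes.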