# 🔧 Installation Guide - Fixing Dependency Issues

## Problem

The error you encountered occurs because `flash-attn` requires the `packaging` module at build time, and `flash-attn` is notoriously difficult to install on some systems.
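
If you want to confirm that the build prerequisites are present before retrying, a quick check like the one below works. This is a minimal sketch; the module names are the standard Python build tools, nothing project-specific.

```python
# Report whether the build-time dependencies flash-attn expects are importable.
import importlib.util

for name in ("packaging", "setuptools", "wheel"):
    status = "OK" if importlib.util.find_spec(name) else "MISSING"
    print(f"{name}: {status}")
```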

## Solution

### Option 1: Use the Safe Installation Script (Recommended)

**For Windows:**

```powershell
# Run the safe installation script
.\install_dependencies.ps1
```

**For Linux/Mac:**

```bash
# Run the safe installation script
python install_dependencies.py
```

### Option 2: Manual Installation Steps

1. **Upgrade pip and build tools:**

   ```bash
   pip install --upgrade pip setuptools wheel packaging
   ```

2. **Install PyTorch first:**

   ```bash
   # For CUDA support
   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

   # Or CPU-only version
   pip install torch torchvision torchaudio
   ```

3. **Install main requirements (flash-attn excluded):**

   ```bash
   pip install -r requirements.txt
   ```

4. **Optional: Install performance packages manually** (a verification sketch follows this list):

   ```bash
   # xformers (usually works)
   pip install xformers

   # flash-attn (may fail - it's optional)
   pip install flash-attn --no-build-isolation
   ```
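
After steps 2 and 4 you can check what actually got installed. The snippet below is a small verification sketch: it only reports the PyTorch build and whether the optional packages are importable; nothing in it is required by the app.

```python
# Report the installed PyTorch build and whether the optional
# performance packages are importable.
import importlib.util

import torch

print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

for optional in ("xformers", "flash_attn"):
    found = importlib.util.find_spec(optional) is not None
    print(f"{optional}: {'installed' if found else 'not installed (optional)'}")
```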

### Option 3: Skip Problematic Dependencies

The app will work perfectly fine without `flash-attn` and `xformers`. These are performance optimizations, not requirements.
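
In practice the fallback usually looks like the hypothetical sketch below: detect which optional package is importable and choose an attention backend accordingly. This is illustrative only, not the actual selection logic in `app.py`, and the backend labels are just strings used in this sketch.

```python
# Pick an attention backend based on what is installed.
# Illustrative sketch only - not the app's actual selection logic.
import importlib.util

def pick_attention_backend() -> str:
    if importlib.util.find_spec("flash_attn"):
        return "flash_attention_2"  # fastest path, needs flash-attn
    if importlib.util.find_spec("xformers"):
        return "xformers"           # solid fallback on most GPUs
    return "eager"                  # plain PyTorch attention, always available

print(f"Using attention backend: {pick_attention_backend()}")
```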

## What Changed

✅ **Fixed requirements.txt:**

- Added essential build dependencies (`setuptools`, `wheel`, `packaging`)
- Commented out problematic packages (`flash-attn`, `xformers`)
- Made numpy version compatible
- Added proper PyTorch installation notes

✅ **Created safe installation scripts:**

- `install_dependencies.py` - Cross-platform Python script
- `install_dependencies.ps1` - Windows PowerShell script
- Both handle errors gracefully and skip optional packages (see the sketch below)
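
"Handle errors gracefully" means roughly the pattern below. This is a simplified sketch of the idea, not the actual contents of `install_dependencies.py`; the package lists are taken from this guide.

```python
# Simplified sketch: required packages must install, optional ones may fail.
import subprocess
import sys

REQUIRED = ["torch", "transformers", "gradio", "fastapi"]
OPTIONAL = ["xformers", "flash-attn"]

def pip_install(package: str) -> bool:
    """Run pip for a single package and report success."""
    result = subprocess.run([sys.executable, "-m", "pip", "install", package])
    return result.returncode == 0

for pkg in REQUIRED:
    if not pip_install(pkg):
        sys.exit(f"Required package failed to install: {pkg}")

for pkg in OPTIONAL:
    if not pip_install(pkg):
        print(f"Skipping optional package {pkg} (install failed)")
```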

## Verification

After installation, verify everything works:

```bash
python -c "import torch, transformers, gradio, fastapi; print('✅ Core dependencies installed!')"
```

## Next Steps

Once dependencies are installed:

1. **Download OmniAvatar models:**

   ```bash
   python setup_omniavatar.py
   ```

2. **Start the application:**

   ```bash
   python app.py
   ```

## Troubleshooting

**If you still get errors:**

1. **Use a virtual environment:**

   ```bash
   python -m venv omniavatar_env
   source omniavatar_env/bin/activate  # Linux/Mac
   # or
   omniavatar_env\Scripts\activate     # Windows
   ```

2. **Try without optional packages:**

   The app will work fine with just the core dependencies. Performance optimizations like `flash-attn` are nice-to-have, not essential.

3. **Check Python version:**

   Ensure you're using Python 3.8 or later (a version-check sketch follows this list):

   ```bash
   python --version
   ```
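
If the `python` on your PATH is ambiguous, you can also check from inside the interpreter. A small sketch, assuming 3.8 is the minimum as stated above:

```python
# Fail fast if the interpreter is older than Python 3.8.
import sys

if sys.version_info < (3, 8):
    sys.exit(f"Python 3.8+ required, found {sys.version.split()[0]}")
print(f"Python {sys.version.split()[0]} is OK")
```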

The dependency issues have been resolved, and the OmniAvatar integration will work with or without the optional performance packages! 🎉