# Stable Diffusion XL Turbo - PyTorch INT8

This is an INT8 PyTorch version of [stabilityai/sdxl-turbo](https://huggingface.co/stabilityai/sdxl-turbo).
## Model Details
- Model Type: Stable Diffusion XL Turbo
- Parameters: 3.5B
- Backend: PyTorch
- Quantization: INT8
- Memory Usage: ~3.2GB
- Conversion Date: 2025-08-09
## Usage

### PyTorch INT8
```python
from diffusers import StableDiffusionXLPipeline
import torch

# Load the INT8 quantized model.
# Note: torch_dtype must be a floating-point dtype; the INT8
# quantization is baked into the stored weights, so torch.qint8
# is not a valid value here.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Mitchins/sdxl-turbo-torch-int8",
    torch_dtype=torch.float32,
    use_safetensors=True,
)

# For CPU inference
pipe = pipe.to("cpu")

# SDXL Turbo is distilled for few-step sampling: use 1-4 steps
# with classifier-free guidance disabled.
image = pipe(
    "A beautiful landscape",
    num_inference_steps=1,
    guidance_scale=0.0,
).images[0]
image.save("output.png")
```
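To sanity-check the memory footprint after loading, the parameter sizes of the pipeline's components can be tallied directly. `param_bytes` below is a hypothetical helper for illustration, not part of the diffusers API:

```python
import torch.nn as nn

def param_bytes(module: nn.Module) -> int:
    """Total bytes occupied by a module's parameters (hypothetical helper)."""
    return sum(p.numel() * p.element_size() for p in module.parameters())

# e.g. param_bytes(pipe.unet) / 2**30 gives the UNet's size in GiB
```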
## Performance

| Backend | Quantization | Memory | Speed (CPU) | Speed (GPU) | Quality |
|---|---|---|---|---|---|
| PyTorch | INT8 | ~3.2GB | Good | Fast | Slightly reduced |
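The ~3.2GB figure follows directly from the parameter count: at one byte per INT8 weight, 3.5B parameters occupy roughly 3.3 GiB, about half the FP16 footprint. A quick back-of-envelope check:

```python
PARAMS = 3.5e9  # parameter count from the model details above

int8_gib = PARAMS * 1 / 2**30  # 1 byte per weight
fp16_gib = PARAMS * 2 / 2**30  # 2 bytes per weight

print(f"INT8: {int8_gib:.2f} GiB, FP16: {fp16_gib:.2f} GiB")
# → INT8: 3.26 GiB, FP16: 6.52 GiB
```

This counts weights only; activations, the text encoders' intermediate states, and framework overhead push actual usage somewhat higher.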
## Limitations
- INT8 quantization may slightly reduce image quality
- Best suited for CPU inference or memory-constrained environments
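For intuition on where the quality trade-off comes from: INT8 quantization maps continuous float weights onto 256 discrete levels, introducing small rounding errors in every layer. The sketch below applies PyTorch's built-in dynamic quantization to a toy network to show the general mechanism; it is an illustration, not the exact conversion pipeline used for this checkpoint:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import quantize_dynamic

# Toy float model standing in for a much larger network
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

# Quantize Linear weights to INT8; activations stay float and are
# quantized on the fly at inference time
qmodel = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(2, 64)
# Outputs agree closely but not exactly -- that gap is the quality cost
print((model(x) - qmodel(x)).abs().max())
```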
## Citation

```bibtex
@misc{sdxl-turbo-pytorch-int8,
  title     = {Stable Diffusion XL Turbo PyTorch INT8},
  author    = {ImageAI Server Contributors},
  year      = {2024},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/Mitchins/sdxl-turbo-torch-int8}
}
```
Converted using ImageAI Server Model Converter v1.0