Stable Diffusion 1.5 - PyTorch INT8

This is an INT8 PyTorch version of runwayml/stable-diffusion-v1-5.

Model Details

  • Model Type: Stable Diffusion 1.5
  • Parameters: 0.86B
  • Backend: PyTorch
  • Quantization: INT8
  • Memory Usage: ~1.0GB
  • Conversion Date: 2025-08-09
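
The exact conversion procedure is not documented here beyond the converter name in the footer. For orientation only, the sketch below shows one common way such an INT8 PyTorch variant can be produced: dynamic quantization of the UNet's Linear layers with torch.ao.quantization.quantize_dynamic. This is an assumption about the general technique, not the documented ImageAI Server converter.

# Illustrative sketch: dynamic INT8 quantization of SD 1.5 (assumed technique,
# not necessarily how this repository was converted)
import torch
from diffusers import StableDiffusionPipeline

# Start from the original float32 checkpoint
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Swap the UNet's Linear layers for dynamically quantized INT8 versions (CPU inference only)
pipe.unet = torch.ao.quantization.quantize_dynamic(
    pipe.unet, {torch.nn.Linear}, dtype=torch.qint8
)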

Usage

PyTorch INT8



# PyTorch INT8 quantized model
from diffusers import StableDiffusionPipeline

# Load the INT8-quantized checkpoint
# (note: torch.qint8 cannot be passed as torch_dtype to from_pretrained,
#  so no dtype override is used; the weights are loaded as stored in the repo)
pipe = StableDiffusionPipeline.from_pretrained(
    "Mitchins/sd15-torch-int8",
    use_safetensors=True
)

# For CPU inference
pipe = pipe.to("cpu")

# Generate image
image = pipe("A beautiful landscape", num_inference_steps=20).images[0]
image.save("output.png")

Performance

Backend | Quantization | Memory | Speed (CPU) | Speed (GPU) | Quality
PyTorch | INT8         | ~1.0GB | Good        | Fast        | Slightly Reduced
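
The Memory and Speed columns above are qualitative; to check them on your own hardware, a rough timing harness like the one below can be used (it assumes the pipeline has already been loaded as in the Usage section and is not an official benchmark):

import time

# Warm-up run to absorb one-time setup costs
pipe("warm-up", num_inference_steps=5)

# Timed 20-step generation
start = time.perf_counter()
pipe("A beautiful landscape", num_inference_steps=20)
elapsed = time.perf_counter() - start
print(f"20-step generation took {elapsed:.1f}s")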

Limitations

  • INT8 quantization may slightly reduce image quality (see the comparison sketch after this list)
  • Best suited for CPU inference or memory-constrained environments
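
One way to gauge the quality impact is to render the same seeded prompt with this INT8 model and the original float32 runwayml/stable-diffusion-v1-5 and compare the two images; a minimal comparison sketch (illustrative only):

import torch
from diffusers import StableDiffusionPipeline

prompt = "A beautiful landscape"

def render(model_id, out_path):
    # Same prompt, steps, and seed for both models so only the weights differ
    pipe = StableDiffusionPipeline.from_pretrained(model_id, use_safetensors=True).to("cpu")
    image = pipe(
        prompt,
        num_inference_steps=20,
        generator=torch.Generator("cpu").manual_seed(0),
    ).images[0]
    image.save(out_path)

render("Mitchins/sd15-torch-int8", "landscape_int8.png")        # this repository
render("runwayml/stable-diffusion-v1-5", "landscape_fp32.png")  # float32 reference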

Citation

@misc{sd15-pytorch-int8,
  title = {Stable Diffusion 1.5 PyTorch INT8},
  author = {ImageAI Server Contributors},
  year = {2024},
  publisher = {HuggingFace},
  url = {https://huggingface.co/Mitchins/sd15-torch-int8}
}

Converted using ImageAI Server Model Converter v1.0
