Update README.md
README.md
CHANGED
@@ -43,12 +43,7 @@ Each quantization level offers a trade-off between speed and quality. **Q4_K_S**
 
 ## Usage
 
-This model can be used with any **GGUF-compatible** inference engine, such as **
-
-Example (using a compatible loader):
-```bash
-python infer.py --model sdxl-unet-q5_ks.gguf --vae sdxl-vae-fp16.gguf --clip sdxl-clip-fp16.gguf --prompt "A futuristic city at sunset"
-```
+This model can be used with any **GGUF-compatible** inference engine, such as **ComfyUI**, **Kohya's SDXL GGUF loader**, or **custom scripts supporting GGUF-based SDXL inference**.
 
 ## Hardware Requirements
 
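As background for the "GGUF-compatible" wording in the Usage section: any loader can cheaply verify that a file is GGUF before attempting inference, because every GGUF file begins with a fixed 24-byte little-endian header (magic `GGUF`, a uint32 format version, a uint64 tensor count, and a uint64 metadata key-value count). A minimal stdlib-only sketch, using a synthetic header rather than a real model file:

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    """Parse the fixed 24-byte GGUF header (little-endian).

    Layout per the GGUF spec: 4-byte magic b"GGUF", uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }

# Synthetic header for illustration (the counts are made up, not taken
# from this repository's files):
sample = struct.pack("<4sIQQ", b"GGUF", 3, 1131, 24)
print(parse_gguf_header(sample))
```

This only checks the container format; whether a given engine can actually run an SDXL UNet stored in GGUF depends on that engine's support for the tensor layout and quantization types inside the file.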