---
base_model:
- MAGREF-Video/MAGREF
base_model_relation: quantized
library_name: gguf
tags:
- image-to-video
- quantized
language:
- en
license: apache-2.0
---
This is a GGUF conversion of [MAGREF-Video/MAGREF](https://huggingface.co/MAGREF-Video/MAGREF).
All quantized versions were created from the base FP16 model using the conversion scripts provided by city96, available at the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF/tree/main/tools) GitHub repository.
## Usage
The model files can be used in [ComfyUI](https://github.com/comfyanonymous/ComfyUI/) with the [ComfyUI-GGUF](https://github.com/city96/ComfyUI-GGUF) custom node. Place the required model(s) in the following folders:

| Type | Name | Location | Download |
| ------------ | ----------------------------------- | ------------------------------ | ---------------- |
| Main Model | MAGREF_Wan2.1_I2V_14B-GGUF | `ComfyUI/models/unet` | GGUF (this repo) |
| Text Encoder | umt5-xxl-encoder | `ComfyUI/models/text_encoders` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/tree/main/split_files/text_encoders) / [GGUF](https://huggingface.co/city96/umt5-xxl-encoder-gguf/tree/main) |
| CLIP Vision | clip_vision_h | `ComfyUI/models/clip_vision` | [Safetensors](https://huggingface.co/Comfy-Org/Wan_2.1_ComfyUI_repackaged/blob/main/split_files/clip_vision/clip_vision_h.safetensors) |
| VAE | Wan2_1_VAE_bf16 | `ComfyUI/models/vae` | [Safetensors](https://huggingface.co/Kijai/WanVideo_comfy/blob/main/Wan2_1_VAE_bf16.safetensors) |

[**ComfyUI example workflow**](https://huggingface.co/lym00/MAGREF_Wan2.1_I2V_14B-GGUF/blob/main/Magref_example_workflow.json)
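As a minimal sketch, the folder layout from the table above can be prepared like this (the `COMFY` path and the example `.gguf` filename are assumptions; adjust them to your install and to the actual file you download from this repo):

```shell
# Create the ComfyUI model folders referenced in the table above.
# COMFY defaults to a local "ComfyUI" directory; override as needed.
COMFY="${COMFY:-ComfyUI}"
mkdir -p "$COMFY/models/unet" \
         "$COMFY/models/text_encoders" \
         "$COMFY/models/clip_vision" \
         "$COMFY/models/vae"

# Then place the downloaded files, e.g. (filename illustrative):
#   MAGREF_Wan2.1_I2V_14B-Q4_K_M.gguf  -> $COMFY/models/unet/
#   umt5-xxl-encoder GGUF/safetensors  -> $COMFY/models/text_encoders/
#   clip_vision_h.safetensors          -> $COMFY/models/clip_vision/
#   Wan2_1_VAE_bf16.safetensors        -> $COMFY/models/vae/
ls "$COMFY/models"
```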
## Demos
<table border="1">
<tr>
<th colspan="2" align="center">Input</th>
<th align="center">Output</th>
</tr>
<tr>
<td align="center">
<img src="demo/001/1.jpeg" style="max-height: 480px;" alt="Reference Image 1">
</td>
<td align="center">
<img src="demo/001/2.jpeg" style="max-height: 480px;" alt="Reference Image 2">
</td>
<td align="center">
<img src="demo/001/magref_14b_00001.webp" style="max-height: 480px;" alt="Generated Output">
</td>
</tr>
<tr>
<td colspan="3" align="center">
Two men taking a selfie together in an indoor setting. One of them, with a bright and expressive smile, holds the smartphone at arm’s length to frame the shot. He has voluminous, natural-textured hair and appears enthusiastic and energetic. Standing beside him is another man with neatly styled hair and a composed expression, wearing a white athletic jersey with black accents.
</td>
</tr>
</table>
### Notes
*All original licenses and restrictions from the base model still apply.*
## Reference
- For an overview of quantization types, please see the [GGUF quantization types](https://huggingface.co/docs/hub/gguf#quantization-types).
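
To give a feel for what the quantization types mean in practice, here is a rough sketch of how bits-per-weight translates into tensor-data size for a 14B-parameter model. The bits-per-weight values are the nominal llama.cpp block sizes (e.g. Q4_0 stores 32 weights in 18 bytes); real GGUF files also carry metadata, so treat these as approximate lower bounds, not measurements of this repo's files:

```python
# Approximate tensor-data size of a 14B-parameter model at common
# GGUF quantization levels. Bits-per-weight follow the nominal
# llama.cpp block layouts; actual files are slightly larger.
PARAMS = 14e9

BITS_PER_WEIGHT = {
    "F16": 16.0,
    "Q8_0": 8.5,  # 34 bytes per block of 32 weights
    "Q5_0": 5.5,  # 22 bytes per block of 32 weights
    "Q4_0": 4.5,  # 18 bytes per block of 32 weights
}

def approx_size_gib(quant: str, params: float = PARAMS) -> float:
    """Approximate tensor-data size in GiB for a given quant type."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1024**3

for quant in BITS_PER_WEIGHT:
    print(f"{quant:>5}: ~{approx_size_gib(quant):.1f} GiB")
```

This is why a Q4 variant of a 14B model fits comfortably where the FP16 original would not.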