nielsr (HF Staff) committed
Commit 795e4f0 · verified · 1 parent: 1137dac

Add library_name, pipeline_tag and set inference to true


This PR adds the `library_name` tag, which enables the "how to use" button. It also adds the correct `pipeline_tag` for this model, ensuring it can be found at https://huggingface.co/models?pipeline_tag=text-to-video, and enables the `inference` flag.
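With `library_name: diffusers` set, the Hub can show a working loading snippet for this checkpoint. As a minimal sketch of what that looks like (the prompt and generation settings below are illustrative, not part of this PR):

```python
import torch
from diffusers import CogVideoXPipeline
from diffusers.utils import export_to_video

# Load in FP16, the precision the 2B model was trained with.
pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)

# Offload submodules to CPU between forward passes to fit smaller GPUs.
pipe.enable_model_cpu_offload()

video = pipe(
    prompt="A panda playing a guitar in a bamboo forest",  # illustrative prompt
    num_inference_steps=50,
    guidance_scale=6.0,
).frames[0]

export_to_video(video, "output.mp4", fps=8)
```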

Files changed (1): README.md (+10 -8)
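The `pipeline_tag` also makes the model discoverable programmatically, not just through the URL above. A short sketch, assuming the `task`, `library`, and `limit` filters of `huggingface_hub`'s `list_models`:

```python
from huggingface_hub import HfApi

api = HfApi()
# Equivalent to browsing https://huggingface.co/models?pipeline_tag=text-to-video
for model in api.list_models(task="text-to-video", library="diffusers", limit=5):
    print(model.id)
```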
README.md CHANGED

```diff
@@ -1,13 +1,15 @@
 ---
-license: apache-2.0
 language:
-- en
+- en
+license: apache-2.0
+library_name: diffusers
+pipeline_tag: text-to-video
 tags:
-- cogvideox
-- video-generation
-- thudm
-- text-to-video
-inference: false
+- cogvideox
+- video-generation
+- thudm
+- text-to-video
+inference: true
 ---
 
 # CogVideoX-2B
@@ -180,7 +182,7 @@ pipe.vae.enable_tiling()
 + The 2B model is trained with `FP16` precision, and the 5B model is trained with `BF16` precision. We recommend using
   the precision the model was trained with for inference.
 + [PytorchAO](https://github.com/pytorch/ao) and [Optimum-quanto](https://github.com/huggingface/optimum-quanto/) can be
-  used to quantize the text encoder, Transformer, and VAE modules to reduce CogVideoX's memory requirements. This makes
+  used to quantize the text encoder, transformer, and VAE modules to reduce CogVideoX's memory requirements. This makes
   it possible to run the model on a free T4 Colab or GPUs with smaller VRAM! It is also worth noting that TorchAO
   quantization is fully compatible with `torch.compile`, which can significantly improve inference speed. `FP8`
   precision must be used on devices with `NVIDIA H100` or above, which requires installing
```
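For the quantization note touched by the second hunk, a minimal sketch of the TorchAO path it describes, assuming torchao's `quantize_` API (the int8 weight-only recipe is one illustrative choice, not the README's prescribed one):

```python
import torch
from diffusers import CogVideoXPipeline
from torchao.quantization import quantize_, int8_weight_only

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-2b", torch_dtype=torch.float16)

# Quantize the transformer to int8 weight-only to cut VRAM use; the text
# encoder and VAE can be quantized the same way.
quantize_(pipe.transformer, int8_weight_only())

# TorchAO quantization composes with torch.compile for faster inference.
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)
```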