# MochiTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in [Mochi-1 Preview](https://huggingface.co/genmo/mochi-1-preview) by Genmo.

The model can be loaded with the following code snippet.

```python
import torch

from diffusers import MochiTransformer3DModel

transformer = MochiTransformer3DModel.from_pretrained(
    "genmo/mochi-1-preview", subfolder="transformer", torch_dtype=torch.float16
).to("cuda")
```
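
The loaded transformer can then be passed into `MochiPipeline` for text-to-video generation. A minimal sketch, using the standard diffusers pattern of overriding a pipeline component at load time:

```python
from diffusers import MochiPipeline

# Reuse the transformer loaded above instead of letting the pipeline fetch its own copy.
pipe = MochiPipeline.from_pretrained(
    "genmo/mochi-1-preview", transformer=transformer, torch_dtype=torch.float16
).to("cuda")
```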

## MochiTransformer3DModel[[diffusers.MochiTransformer3DModel]]

#### diffusers.MochiTransformer3DModel[[diffusers.MochiTransformer3DModel]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/models/transformers/transformer_mochi.py#L309)

A Transformer model for video-like data introduced in [Mochi](https://huggingface.co/genmo/mochi-1-preview).

**Parameters:**

patch_size (`int`, defaults to `2`) : The size of the patches to use in the patch embedding layer.

num_attention_heads (`int`, defaults to `24`) : The number of heads to use for multi-head attention.

attention_head_dim (`int`, defaults to `128`) : The number of channels in each head.

num_layers (`int`, defaults to `48`) : The number of layers of Transformer blocks to use.

in_channels (`int`, defaults to `12`) : The number of channels in the input.

out_channels (`int`, *optional*, defaults to `None`) : The number of channels in the output. If `None`, falls back to `in_channels`.

qk_norm (`str`, defaults to `"rms_norm"`) : The normalization layer to use.

text_embed_dim (`int`, defaults to `4096`) : Input dimension of text embeddings from the text encoder.

time_embed_dim (`int`, defaults to `256`) : Output dimension of timestep embeddings.

activation_fn (`str`, defaults to `"swiglu"`) : Activation function to use in the feed-forward layers.

max_sequence_length (`int`, defaults to `256`) : The maximum sequence length of text embeddings supported.
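
The configuration above pins down the tensor shapes the model consumes: a latent video with `in_channels` channels patchified with `patch_size`, text embeddings of width `text_embed_dim`, and at most `max_sequence_length` text tokens. The sketch below runs a single forward pass with dummy inputs; the argument names follow the usual diffusers video-transformer interface, and the latent sizes are illustrative (they only need to be divisible by `patch_size`):

```python
import torch

from diffusers import MochiTransformer3DModel

transformer = MochiTransformer3DModel.from_pretrained(
    "genmo/mochi-1-preview", subfolder="transformer", torch_dtype=torch.float16
).to("cuda")

batch = 1
# Latent-space video: (batch, in_channels=12, frames, height, width).
hidden_states = torch.randn(batch, 12, 7, 60, 106, dtype=torch.float16, device="cuda")
# Text embeddings: (batch, max_sequence_length=256, text_embed_dim=4096).
encoder_hidden_states = torch.randn(batch, 256, 4096, dtype=torch.float16, device="cuda")
encoder_attention_mask = torch.ones(batch, 256, dtype=torch.bool, device="cuda")
timestep = torch.tensor([500], device="cuda")

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        timestep=timestep,
        encoder_attention_mask=encoder_attention_mask,
    )

# out_channels defaults to in_channels, so the prediction matches the input latent shape.
print(output.sample.shape)  # torch.Size([1, 12, 7, 60, 106])
```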

## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

#### diffusers.models.modeling_outputs.Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]

[Source](https://github.com/huggingface/diffusers/blob/v0.37.1/src/diffusers/models/modeling_outputs.py#L21)

The output of [Transformer2DModel](/docs/diffusers/v0.37.1/en/api/models/transformer2d#diffusers.Transformer2DModel).

**Parameters:**

sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if [Transformer2DModel](/docs/diffusers/v0.37.1/en/api/models/transformer2d#diffusers.Transformer2DModel) is discrete) : The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.
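
`MochiTransformer3DModel` reuses this generic output class, so its prediction is read from the `sample` field. A small sketch, continuing from the forward pass above and assuming the standard diffusers `return_dict` convention:

```python
# Default: a Transformer2DModelOutput whose `.sample` holds the prediction.
noise_pred = output.sample

# Alternatively, request a plain tuple instead of the output dataclass.
(noise_pred,) = transformer(
    hidden_states=hidden_states,
    encoder_hidden_states=encoder_hidden_states,
    timestep=timestep,
    encoder_attention_mask=encoder_attention_mask,
    return_dict=False,
)
```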

