Last update: 21st July 2025


Motif Vision 6B Preview marks the first step in our “beyond LLM” strategy. This preview version, developed in January 2025, is identical to the model currently deployed in our service: https://model-hub.motiftech.io.

We are actively working on improving the model, and the latest version—along with all accompanying artifacts—will be released in the near future.


Introduction

We are excited to introduce Motif Vision 6B Preview, a powerful text-to-image model trained entirely from scratch. 🖼️✨

This model is built on the MMDiT (Multimodal Diffusion Transformer) architecture and uses Flow Matching for efficient, high-quality image generation. Motif Vision 6B Preview is our latest step in pushing the boundaries of generative AI.
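Flow matching trains the network to regress the velocity of a probability path that carries noise to data; sampling then numerically integrates the learned ODE. The sketch below is a hedged, self-contained illustration of that objective and an Euler sampler on a 2-D toy problem, using a plain linear regressor in NumPy as a stand-in for the velocity network. It is not the actual 6B model or its training code; all names and constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 2

# Toy stand-in for the velocity network (the real model is a 6B MMDiT):
# a linear regressor from features [x_t, t] to a velocity vector.
W = np.zeros((dim + 1, dim))

def velocity(x, t):
    return np.concatenate([x, [t]]) @ W

# Conditional flow matching with a linear interpolation path:
#   x_t = (1 - t) * x0 + t * x1,   target velocity v* = x1 - x0,
# where x0 is Gaussian noise and x1 is a data sample.
losses = []
lr = 0.05
for step in range(4000):
    x1 = np.array([3.0, -1.0]) + 0.05 * rng.standard_normal(dim)  # toy "data"
    x0 = rng.standard_normal(dim)                                  # noise
    t = rng.uniform()
    x_t = (1 - t) * x0 + t * x1
    v_star = x1 - x0
    feats = np.concatenate([x_t, [t]])
    err = feats @ W - v_star
    losses.append(np.mean(err ** 2))
    W -= lr * (2 / dim) * np.outer(feats, err)   # SGD on the MSE objective

# Sampling: integrate dx/dt = v(x, t) from t = 0 (noise) to t = 1
# with fixed-step Euler updates.
x = rng.standard_normal(dim)
n_steps = 50
dt = 1.0 / n_steps
for i in range(n_steps):
    x = x + dt * velocity(x, i * dt)

early, late = np.mean(losses[:200]), np.mean(losses[-200:])
print(f"loss {early:.2f} -> {late:.2f}, sample {np.round(x, 2)}")
```

The linear regressor is far too weak to match the true conditional velocity field, so the generated sample only drifts roughly toward the data region; the point is the shape of the training objective and the sampling loop, not the quality of the toy model.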


Training Information

The model was trained on a large-scale GPU cluster, demonstrating our commitment to developing cutting-edge models.

  • GPUs: 96 AMD Instinct™ MI250 (24 nodes × 4 GPUs)
  • Training Time: 90 days
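As a back-of-the-envelope check, the figures above imply the following total compute budget (GPU-days only; this says nothing about utilization or throughput):

```python
# Derive the compute budget from the reported training setup.
gpus = 24 * 4          # 24 nodes × 4 AMD Instinct MI250 GPUs each
days = 90              # reported wall-clock training time
gpu_days = gpus * days

print(gpus, gpu_days)  # 96 GPUs, 8640 GPU-days
```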

Notice: A detailed technical report will be released at a later time.


Availability

Checkpoints

The model checkpoints are shared directly in this repository and are ready for use.

Live Demo

You can try an interactive demo of Motif Vision 6B Preview right now on the Motif Model Hub.

Code Release

The source code for inference and training will be made publicly available soon. Stay tuned for updates!
