How to use multimodalart/reachy with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    dtype=torch.bfloat16,
    device_map="cuda",
)
pipe.load_lora_weights("multimodalart/reachy")

prompt = "a white reachy robot being carried on a backpack"
image = pipe(prompt).images[0]
```
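The snippet above hard-codes `"cuda"`. A small helper (hypothetical, not part of the model card) can pick the best available PyTorch device so the same script runs on NVIDIA GPUs, Apple Silicon, or CPU:

```python
import torch

def pick_device() -> str:
    """Return the best available torch device string for the pipeline."""
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    if torch.backends.mps.is_available():
        return "mps"   # Apple Silicon
    return "cpu"       # fallback; very slow for FLUX-sized models

# usage: pass the result as device_map when building the pipeline, e.g.
# pipe = DiffusionPipeline.from_pretrained(..., device_map=pick_device())
```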
Reachy Mini Robot LoRA

Example prompts:
- a white reachy robot being carried on a backpack
- a white reachy robot vibing to a big disco ball, illuminated by disco lights
- two reachy robots playing chess
Model description
A LoRA for the Reachy Mini robot, trained on top of black-forest-labs/FLUX.1-dev. Learn more about Reachy Mini.
Trigger words
You should use `reachy robot` in your prompt to trigger the image generation.
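Since the LoRA only activates on its trigger phrase, a tiny helper (hypothetical, not part of the model card) can make sure every prompt includes it before it is passed to the pipeline:

```python
TRIGGER = "reachy robot"  # trigger phrase from the model card

def with_trigger(prompt: str) -> str:
    """Prepend the trigger phrase unless the prompt already contains it."""
    if TRIGGER in prompt.lower():
        return prompt
    return f"{TRIGGER}, {prompt}"

# usage: image = pipe(with_trigger("playing chess in a park")).images[0]
print(with_trigger("playing chess in a park"))
# -> reachy robot, playing chess in a park
```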
Download model
Weights for this model are available in Safetensors format.
Download them in the Files & versions tab.
Training at fal.ai
Training was done using fal.ai/models/fal-ai/flux-lora-fast-training.
Model tree for multimodalart/reachy
- Base model: black-forest-labs/FLUX.1-dev