Use SmolVLA

SmolVLA is designed to be easy to use and integrate—whether you’re finetuning on your own data or plugging it into an existing robotics stack.

Figure 1: SmolVLA architecture. SmolVLA takes as input a sequence of RGB images from multiple cameras, the robot's current sensorimotor state, and a natural language instruction. The VLM encodes these into contextual features, which condition the action expert to generate a continuous sequence of actions.
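For orientation, here is a minimal sketch of what such an input batch might look like. The feature-key naming follows LeRobot's observation conventions, but the camera names, image resolution, and state dimension below are hypothetical placeholders; use the ones from your own robot and dataset config.

import torch

# Hypothetical input batch: RGB images per camera, the sensorimotor state,
# and a language instruction. Key names and shapes are placeholders.
batch = {
    "observation.images.top": torch.rand(1, 3, 256, 256),    # RGB camera 1
    "observation.images.wrist": torch.rand(1, 3, 256, 256),  # RGB camera 2
    "observation.state": torch.rand(1, 6),                   # robot sensorimotor state
    "task": "Grasp a lego block and put it in the bin.",     # language instruction
}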

Install

First, install the required dependencies:

git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e ".[smolvla]"
conda install ffmpeg -c conda-forge
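To check that the install worked, you can try importing the policy class (the same import used later in this guide):

# Sanity check: this import should succeed if the smolvla extra installed correctly.
from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy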

Finetune the pretrained model

Use smolvla_base, our pretrained 450M-parameter model, with the lerobot training framework. Training the model for 20k steps takes around 3 hours on an A100 GPU; increase the number of training steps based on your use case.

Run the command below with your repo_id to start training.

python lerobot/scripts/train.py \
  --policy.path=lerobot/smolvla_base \
  --dataset.repo_id=lerobot/svla_so100_stacking \
  --batch_size=64 \
  --steps=20000 # 10% of total training budget
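The comment above refers to the full training budget: 20k finetuning steps is 10% of the 200k-step schedule used when training from scratch (see below).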

Figure 2: Comparison of SmolVLA across task variations. From left to right: (1) asynchronous pick-place cube counting, (2) synchronous pick-place cube counting, (3) pick-place cube counting under perturbations, and (4) generalization to pick-and-place of a Lego block on a real-world SO101.

Train from scratch

If you'd like to train the architecture (pretrained VLM + action expert) from scratch rather than finetuning a pretrained checkpoint, run:

python lerobot/scripts/train.py \
  --policy.type=smolvla \
  --dataset.repo_id=lerobot/svla_so100_stacking \
  --batch_size=64 \
  --steps=200000

You can also load SmolVLAPolicy directly:

from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy
policy = SmolVLAPolicy.from_pretrained("lerobot/smolvla_base")
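Once loaded, the policy can be queried step by step. Below is a minimal inference sketch, assuming LeRobot's standard select_action/reset policy interface; the observation keys, image resolution, and state dimension are hypothetical and should match your own robot/dataset config.

import torch

from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("lerobot/smolvla_base")
policy.eval()
policy.reset()  # clear the internal action queue before starting a new episode

# Hypothetical observation; replace the camera keys and state dimension
# with the ones from your own setup.
observation = {
    "observation.images.top": torch.rand(1, 3, 256, 256),
    "observation.state": torch.rand(1, 6),
    "task": "Grasp a lego block and put it in the bin.",
}

with torch.no_grad():
    # select_action returns one action per call, popping from the predicted
    # action chunk and re-querying the model when the chunk runs out.
    action = policy.select_action(observation)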

Evaluate the pretrained policy and run it in real time

Important: In the config.json of the pretrained policy, set n_action_steps to 50 so that the robot arm executes a chunk of 50 actions per inference step.
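If you prefer not to edit the file by hand, the same setting can be overridden after loading. A sketch, assuming the loaded config exposes n_action_steps as a plain attribute:

from lerobot.common.policies.smolvla.modeling_smolvla import SmolVLAPolicy

policy = SmolVLAPolicy.from_pretrained("lerobot/smolvla_base")
# Equivalent to editing config.json: execute a chunk of 50 actions
# per inference call instead of re-querying the model at every timestep.
policy.config.n_action_steps = 50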

If you want to record the evaluation process and save the videos to the Hub, log in to your Hugging Face account by running:

huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential

Store your Hugging Face repository name in a variable to run these commands:

HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER

Now, set the path to the policy (lerobot/smolvla_base in this case) and run:


python -m lerobot.record \
  --robot.type=so101_follower \
  --robot.port=/dev/tty.usbmodem58760431541 \
  --robot.id=my_blue_follower_arm \
  --teleop.type=so101_leader \
  --teleop.port=/dev/tty.usbmodem58FA1015821 \
  --teleop.id=my_blue_leader_arm \
  --dataset.fps=30 \
  --dataset.single_task="Grasp a lego block and put it in the bin." \
  --dataset.repo_id=${HF_USER}/eval_svla_base_test \
  --dataset.tags='["tutorial"]' \
  --dataset.episode_time_s=30 \
  --dataset.reset_time_s=30 \
  --dataset.num_episodes=10 \
  --dataset.push_to_hub=true \
  --policy.path=lerobot/smolvla_base

Depending on your evaluation setup, adjust --dataset.episode_time_s, --dataset.reset_time_s, and --dataset.num_episodes to control the duration and number of episodes recorded for your evaluation suite.
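If you'd rather keep the evaluation recordings local, set --dataset.push_to_hub=false.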

Note!

SmolVLA was pretrained on SO100 arms and is not expected to perform well zero-shot on SO101. We strongly recommend finetuning the model on an SO101-based dataset before deploying it or running this command.

Additionally, running the base model zero-shot on SO100 arms with the updated lerobot repo may also lead to issues. The codebase has undergone significant refactoring and now includes a new calibration method, which uses a different range of motions from those used during pretraining. As a result, finetuning is essential before executing the command above and testing on a real robot.
