Asynchronous Inference

In SmolVLA, we introduced a new way to run inference on real-world robots that decouples action prediction from action execution. In this tutorial, we’ll show how to use asynchronous inference (async inference) with SmolVLA and with all the other policies supported by LeRobot.

With async inference your robot keeps acting while the policy server is already busy computing the next chunk of actions—eliminating “wait-for-inference” lag and unlocking smoother, more reactive behaviours. This is fundamentally different from synchronous inference (sync), where the robot stays idle while the policy computes the next chunk of actions.

What you’ll learn:

  1. Why asynchronous inference matters and how it compares to the traditional sequential loop.
  2. How to spin up a PolicyServer and connect a RobotClient, either on the same machine or over the network.
  3. How to tune key parameters (actions_per_chunk, chunk_size_threshold) for your robot and policy.

If you get stuck, hop into our Discord community.

Async vs. synchronous inference

Synchronous inference interleaves action-chunk prediction and action execution. This inherently produces idle frames: frames where the robot sits still, waiting for the policy to output a new action chunk. The result is visible real-time lag, with the robot simply stopping because no actions are available.

Synchronous inference makes the robot idle while the policy is computing the next chunk of actions.

In contrast, async inference overlaps action planning and execution, resulting in (1) higher adaptability and, most importantly, (2) no idle frames. Crucially, with async inference the next action chunk is computed before the current one is exhausted, so the robot never idles. Adaptability is ensured by aggregating the different action chunks on their overlapping portions, yielding an up-to-date plan.

Asynchronous inference results in no idleness because the next chunk is computed before the current chunk is exhausted.
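To make the contrast concrete, here is a minimal, purely illustrative sketch of the two control loops. The objects and methods used here (policy.predict_chunk, robot.get_observation, robot.execute, task_done) are hypothetical placeholders, not LeRobot APIs, and the real RobotClient also aggregates overlapping chunks rather than simply appending them:

```python
import threading
import queue

# --- Synchronous loop: the robot idles while the policy computes ---
def sync_loop(policy, robot, task_done):
    while not task_done():
        obs = robot.get_observation()
        chunk = policy.predict_chunk(obs)   # the robot is idle during this call
        for action in chunk:
            robot.execute(action)

# --- Asynchronous loop: inference overlaps with execution ---
def async_loop(policy, robot, task_done, actions_per_chunk=50, chunk_size_threshold=0.7):
    actions: queue.Queue = queue.Queue()

    def compute_chunk(obs):
        for action in policy.predict_chunk(obs):
            actions.put(action)

    compute_chunk(robot.get_observation())  # prefill the queue with a first chunk
    worker = None
    while not task_done():
        # when the queue is at most `chunk_size_threshold` full, send a fresh
        # observation and compute the next chunk in the background
        queue_is_low = actions.qsize() <= chunk_size_threshold * actions_per_chunk
        if queue_is_low and (worker is None or not worker.is_alive()):
            worker = threading.Thread(target=compute_chunk, args=(robot.get_observation(),))
            worker.start()
        robot.execute(actions.get())  # the robot keeps acting while the worker computes
```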


Getting started with async inference

You can read more about asynchronous inference in the dedicated blog post. Here, we provide a getting-started guide to help you set up and run asynchronous inference on your own setup.

Just install lerobot with the smolvla extra to pull in the additional dependencies (grpcio==1.71.0) required to run async inference.

pip install -e ".[smolvla]"

1 Start the Policy Server

Policy servers are wrappers around a PreTrainedPolicy that interface it with observations coming from a robot client. A policy server is initialized as an empty container and is populated with the requested policy during the initial handshake between the robot client and the policy server. As such, spinning up a policy server is as easy as specifying a host address and port. If you’re running the policy server on the same machine as the robot client, you can use localhost as the host address.

python -m lerobot.scripts.server.policy_server \
    --host="localhost" \
    --port=8080

This listens on localhost:8080 for an incoming connection from the associated RobotClient, which will specify the policy to run during the handshake.
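If the server runs on a different machine (or you simply want to confirm it started correctly), a plain TCP probe is enough to check that something is listening on the chosen host and port. This snippet is a generic connectivity check, not part of the LeRobot API:

```python
import socket

host, port = "localhost", 8080  # match the values passed to the policy server

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2.0)
    try:
        s.connect((host, port))
        print(f"PolicyServer is reachable on {host}:{port}")
    except OSError as exc:
        print(f"Nothing reachable on {host}:{port}: {exc}")
```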


2 Launch the Robot Client

RobotClient is a wrapper around a Robot instance that connects to the (possibly remote) PolicyServer. The RobotClient streams observations to the PolicyServer and receives action chunks obtained by running inference on the server (which we assume has more computational resources than the robot controller).

python -m lerobot.scripts.server.robot_client \
    --server_address="localhost:8080" \
    --robot.type="so100_follower" \  # <-- change this to your robot's type
    --robot.port="/dev/tty.usbmodem585A0076841" \  # <-- change this to your robot's port (find_port.py)
    --robot.id="follower_so100" \  # id of the robot
    --robot.cameras='{"laptop": {"index_or_path": 0, "width": 1920, "height": 1080, "fps": 30}}' \  # cameras of the robot; the keys must match the camera keys expected by the policy
    --policy.type=... \  # <-- change this to the policy type to run
    --policy.pretrained_name_or_path=... \  # <-- path to the policy, or its pretrained name if available on the Hub
    --policy.device="mps" \  # device to run the policy on
    --chunk_size_threshold=0.6 \  # fraction of the queue below which a fresh observation is sent
    --task="Fold my T-shirt"  # textual description of the task the model should perform

The following two parameters are key in every setup:

| Hyperparameter | Default | What it does |
|------|---------|-------------|
| `actions_per_chunk` | 50 | How many actions the policy outputs at once. Typical values: 10-50. |
| `chunk_size_threshold` | 0.7 | Fraction of the queue below which the client sends a fresh observation. With the default, a new observation is sent as soon as the queue is ≤ 70% full. Value in [0, 1]. |

Different values of actions_per_chunk and chunk_size_threshold result in different behaviours. On the one hand, increasing actions_per_chunk reduces the likelihood of ending up with no actions to execute, because more actions remain available while the new chunk is being computed. However, larger values of actions_per_chunk may also result in less precise actions, due to the compounding errors that come with predicting actions over longer timespans.

On the other hand, increasing chunk_size_threshold sends observations to the PolicyServer more often, producing a larger number of updated action chunks that overlap on significant portions. This yields high adaptability: in the limit, one action chunk is predicted for every observation, and each chunk is only marginally consumed before a new one is produced. This also puts more pressure on the inference pipeline, as a consequence of the many requests. Conversely, values of chunk_size_threshold close to 0.0 collapse to the synchronous edge case, where new observations are only sent out once the current chunk is exhausted.

We found the default values of actions_per_chunk and chunk_size_threshold to work well in the experiments we developed for the SmolVLA paper, but recommend experimenting with different values to find the best fit for your setup.
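For intuition on what aggregating chunks on overlapping portions can look like, here is a deliberately simple, purely illustrative scheme: a weighted blend that favours the newer chunk. The aggregation actually used by async inference is described in the SmolVLA paper and the blog post, and may differ from this sketch.

```python
import numpy as np

def merge_chunks(old_chunk: np.ndarray, new_chunk: np.ndarray,
                 steps_consumed: int, new_weight: float = 0.5) -> np.ndarray:
    """Blend two action chunks on their overlapping timesteps (illustrative only).

    old_chunk:      (T, action_dim) chunk predicted from an older observation
    new_chunk:      (T, action_dim) chunk predicted from the latest observation
    steps_consumed: how many actions of old_chunk were already executed, i.e. the
                    offset at which new_chunk starts overlapping with old_chunk
    new_weight:     weight given to the newer chunk on the overlap
                    (0 -> keep old, 1 -> keep new)
    """
    remaining_old = old_chunk[steps_consumed:]         # actions not yet executed
    overlap = min(len(remaining_old), len(new_chunk))  # overlapping horizon
    merged = new_chunk.copy()
    # blend the overlapping portion, favouring the newer prediction
    merged[:overlap] = (1 - new_weight) * remaining_old[:overlap] + new_weight * new_chunk[:overlap]
    return merged
```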

Tuning async inference for your setup

  1. Choose your hardware wisely. PI0 occupies ~14GB of memory at inference time, while SmolVLA requires only ~2GB. Identify the best computational resource for your use case, keeping in mind that smaller policies require fewer computational resources. The combination of policy and device (CPU-only, MPS, or the number of CUDA cores on a given NVIDIA GPU) directly impacts the average inference latency you should expect.
  2. Adjust your fps based on inference latency. While the server generates a new action chunk, the client is not idle: it keeps stepping through its current action queue. If the two processes run at fundamentally different speeds, the client may end up with an empty queue. As such, reduce your fps if you consistently run out of actions in the queue (see the worked example below).
  3. Adjust chunk_size_threshold. Values closer to 1.0 send observations more often, giving more reactive behaviour at the cost of more inference requests; values closer to 0.0 approach the synchronous case. The figure below illustrates the effect on the action queue size.

The action queue size is plotted at runtime when the `--debug-visualize-queue-size` flag is passed, for various levels of `chunk_size_threshold` (`g` in the SmolVLA paper).
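As a back-of-the-envelope check of points 2 and 3 above (all numbers below are illustrative, not measurements), you can estimate how much time the server has to return a new chunk before the queue runs dry:

```python
# Illustrative numbers; plug in your own measurements.
fps = 30                      # control frequency of the robot client
actions_per_chunk = 50        # actions returned by each inference call
chunk_size_threshold = 0.7    # a fresh observation is sent when the queue is <= 70% full
inference_latency_s = 0.35    # measured end-to-end latency (network + model forward pass)

actions_left_at_trigger = chunk_size_threshold * actions_per_chunk  # 35 actions
time_budget_s = actions_left_at_trigger / fps                       # ~1.17 s of actions left

if inference_latency_s < time_budget_s:
    print(f"OK: ~{time_budget_s:.2f}s of actions remain when a new chunk is requested")
else:
    print("Queue may run dry: lower fps, raise actions_per_chunk, or raise chunk_size_threshold")
```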

If you want to discuss this further, hop into our Discord community, or open an issue on our GitHub repository.
