This tutorial will teach you how to use RobotProcessor, a powerful pipeline system for preprocessing robot observations and postprocessing actions. You’ll learn how to create modular, composable data transformations that make your robot learning pipelines more maintainable and reproducible.
You’ll learn what problems RobotProcessor solves, how the EnvTransition and batch data formats work, how to write and register your own processor steps, how to override step configurations when loading saved processors, and how to use processors in real robot control loops.
Before diving into code, let’s understand the real problems RobotProcessor solves. If you’ve worked with robots and learned policies, you’ve likely encountered these challenges:
Your environment might output observations with keys like "rgb_camera_front" and "joint_positions", but your pre-trained policy expects "observation.images.wrist" and "observation.state". You need to rename these keys consistently across your pipeline.
Your robot’s camera captures 1920x1080 RGB images, but your policy was trained on 224x224 cropped images focusing on the workspace. You need to crop to the workspace region, resize to the resolution the policy expects, and convert pixels to the format it was trained on.
Your policy outputs desired end-effector positions in Cartesian space, but your robot expects joint angles. You need to run inverse kinematics to convert Cartesian targets into joint angles and validate them against the robot’s limits.
Your environment provides joint positions, but your policy also needs velocities and accelerations. You need to derive them from consecutive joint positions and append them to the state.
You want to deploy the same policy on different robots with varying camera placements, image resolutions, joint configurations, and control interfaces.
During data collection, you need to apply the same transformations that will be used at training and deployment time, so your data stays consistent and reproducible.
RobotProcessor solves all these problems by providing a modular, composable system where each transformation is a separate, shareable component. Let’s see how.
If you’ve worked with robot learning before, you’ve likely written preprocessing code like this:
# Traditional procedural approach - hard to maintain and share
def preprocess_observation(obs, device='cuda'):
processed_obs = {}
# Process images
if "pixels" in obs:
for cam_name, img in obs["pixels"].items():
# Convert uint8 [0, 255] to float32 [0, 1]
img = img.astype(np.float32) / 255.0
# Convert from HWC to CHW format
img = np.transpose(img, (2, 0, 1))
# Add batch dimension
img = np.expand_dims(img, axis=0)
# Convert to torch and move to device
img = torch.from_numpy(img).to(device)
# Pad to standard size
if img.shape[-2:] != (224, 224):
img = F.pad(img, (0, 224 - img.shape[-1], 0, 224 - img.shape[-2]))
processed_obs[f"observation.images.{cam_name}"] = img
# Process state
if "agent_pos" in obs:
state = torch.from_numpy(obs["agent_pos"]).float()
state = state.unsqueeze(0).to(device)
processed_obs["observation.state"] = state
    return processed_obs

This approach has several problems:
It is monolithic and hard to test or reuse, it cannot easily be shared with others, and device handling is scattered across ad-hoc .to(device) calls.

RobotProcessor solves these issues by providing a declarative pipeline approach where each transformation is a separate, testable, shareable component.
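As a preview, the same preprocessing expressed as a pipeline looks roughly like this (a minimal sketch built from the steps introduced later in this tutorial):

from lerobot.processor.pipeline import RobotProcessor
from lerobot.processor.observation_processor import ImageProcessor, StateProcessor
from lerobot.processor.device_processor import DeviceProcessor

# Each concern lives in its own small, testable step instead of one monolithic function
processor = RobotProcessor([
    ImageProcessor(),                # uint8 HWC [0, 255] -> float32 CHW [0, 1], adds batch dim
    StateProcessor(),                # numpy state -> torch tensor, adds batch dim
    DeviceProcessor(device="cuda"),  # one place for device transfers instead of scattered .to() calls
])

Each step can be tested on its own, swapped out, or shared independently of the others.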
RobotProcessor works with two data formats:
An EnvTransition is a dictionary that represents a complete transition in the environment:
from lerobot.processor.pipeline import TransitionKey
# EnvTransition structure:
transition = {
TransitionKey.OBSERVATION: {"observation.image": ..., "observation.state": ...}, # observation at time t
TransitionKey.ACTION: [0.1, -0.2, 0.3], # action taken at time t
TransitionKey.REWARD: 1.0, # reward received
TransitionKey.DONE: False, # episode done flag
TransitionKey.TRUNCATED: False, # episode truncated flag
TransitionKey.INFO: {"success": True}, # additional info from environment
TransitionKey.COMPLEMENTARY_DATA: {"step_idx": 42} # complementary_data for inter-step communication
}

This is the format used by LeRobot datasets and replay buffers:
# Batch dictionary format (used by LeRobotDataset, ReplayBuffer)
batch = {
"observation.image": torch.tensor(...), # Image observations
"observation.state": torch.tensor(...), # State observations
"action": torch.tensor(...), # Actions
"next.reward": torch.tensor(...), # Rewards
"next.done": torch.tensor(...), # Done flags
"next.truncated": torch.tensor(...), # Truncated flags
"info": {...}, # Info dictionary
# Additional keys are preserved but ignored during conversion
}

RobotProcessor automatically handles both formats:
from lerobot.processor.pipeline import RobotProcessor
from lerobot.processor.observation_processor import ImageProcessor
processor = RobotProcessor([ImageProcessor()])
# Works with EnvTransition dictionaries
transition = {
TransitionKey.OBSERVATION: {"pixels": image_array},
TransitionKey.ACTION: None,
TransitionKey.REWARD: 0.0,
TransitionKey.DONE: False,
TransitionKey.TRUNCATED: False,
TransitionKey.INFO: {},
TransitionKey.COMPLEMENTARY_DATA: {}
}
processed_transition = processor(transition) # Returns EnvTransition dictionary
# Also works with batch dictionaries
batch = {
"observation.pixels": image_tensor,
"action": action_tensor,
"next.reward": reward_tensor,
"next.done": done_tensor,
"next.truncated": truncated_tensor,
"info": info_dict
}
processed_batch = processor(batch)  # Returns batch dictionary

Use the TransitionKey enum to access dictionary elements:
from lerobot.processor.pipeline import TransitionKey
# Good - using TransitionKey
obs = transition[TransitionKey.OBSERVATION]
action = transition[TransitionKey.ACTION]
reward = transition[TransitionKey.REWARD]
done = transition[TransitionKey.DONE]
truncated = transition[TransitionKey.TRUNCATED]
info = transition[TransitionKey.INFO]
comp_data = transition[TransitionKey.COMPLEMENTARY_DATA]
# Alternative - using .get() for optional access
obs = transition.get(TransitionKey.OBSERVATION)
action = transition.get(TransitionKey.ACTION)

RobotProcessor uses these default conversion functions:
def _default_batch_to_transition(batch):
"""Default conversion from batch dict to EnvTransition dictionary."""
# Extract observation keys (anything starting with "observation.")
observation_keys = {k: v for k, v in batch.items() if k.startswith("observation.")}
observation = observation_keys if observation_keys else None
# Extract padding and task keys for complementary data
pad_keys = {k: v for k, v in batch.items() if "_is_pad" in k}
task_key = {"task": batch["task"]} if "task" in batch else {}
complementary_data = {**pad_keys, **task_key} if pad_keys or task_key else {}
transition = {
TransitionKey.OBSERVATION: observation,
TransitionKey.ACTION: batch.get("action"),
TransitionKey.REWARD: batch.get("next.reward", 0.0),
TransitionKey.DONE: batch.get("next.done", False),
TransitionKey.TRUNCATED: batch.get("next.truncated", False),
TransitionKey.INFO: batch.get("info", {}),
TransitionKey.COMPLEMENTARY_DATA: complementary_data,
}
return transition
def _default_transition_to_batch(transition):
"""Default conversion from EnvTransition dictionary to batch dict."""
batch = {
"action": transition.get(TransitionKey.ACTION),
"next.reward": transition.get(TransitionKey.REWARD, 0.0),
"next.done": transition.get(TransitionKey.DONE, False),
"next.truncated": transition.get(TransitionKey.TRUNCATED, False),
"info": transition.get(TransitionKey.INFO, {}),
}
# Add padding and task data from complementary_data
complementary_data = transition.get(TransitionKey.COMPLEMENTARY_DATA)
if complementary_data:
pad_data = {k: v for k, v in complementary_data.items() if "_is_pad" in k}
batch.update(pad_data)
if "task" in complementary_data:
batch["task"] = complementary_data["task"]
# Handle observation - flatten dict to observation.* keys if it's a dict
observation = transition.get(TransitionKey.OBSERVATION)
if isinstance(observation, dict):
batch.update(observation)
    return batch

You can customize how RobotProcessor converts between formats:
def custom_batch_to_transition(batch):
"""Custom conversion from batch dict to EnvTransition dictionary."""
# Extract observation keys (anything starting with "observation.")
observation = {k: v for k, v in batch.items() if k.startswith("observation.")}
return {
TransitionKey.OBSERVATION: observation,
TransitionKey.ACTION: batch.get("action"),
TransitionKey.REWARD: batch.get("reward", 0.0), # Use "reward" instead of "next.reward"
TransitionKey.DONE: batch.get("done", False), # Use "done" instead of "next.done"
TransitionKey.TRUNCATED: batch.get("truncated", False),
TransitionKey.INFO: batch.get("info", {}),
TransitionKey.COMPLEMENTARY_DATA: batch.get("complementary_data", {})
}
def custom_transition_to_batch(transition):
"""Custom conversion from EnvTransition dictionary to batch dict."""
batch = {
"action": transition.get(TransitionKey.ACTION),
"reward": transition.get(TransitionKey.REWARD), # Use "reward" instead of "next.reward"
"done": transition.get(TransitionKey.DONE), # Use "done" instead of "next.done"
"truncated": transition.get(TransitionKey.TRUNCATED),
"info": transition.get(TransitionKey.INFO),
}
# Flatten observation dict
obs = transition.get(TransitionKey.OBSERVATION)
if obs:
batch.update(obs)
return batch
# Use custom converters
processor = RobotProcessor(
steps=[ImageProcessor()],
to_transition=custom_batch_to_transition,
to_output=custom_transition_to_batch
)

The to_output function determines what format is returned when you call the processor with a batch dictionary. Sometimes you want to output EnvTransition dictionaries even when you input batch dictionaries:
# Identity function to always return EnvTransition dictionaries
def keep_as_transition(transition):
"""Always return EnvTransition dictionary regardless of input format."""
return transition
# Processor that always outputs EnvTransition dictionaries
processor = RobotProcessor(
steps=[ImageProcessor(), StateProcessor()],
to_output=keep_as_transition # Always return dictionary format
)
# Even when called with batch dict, returns EnvTransition dictionary
batch = {
"observation.image": image_tensor,
"action": action_tensor,
"next.reward": reward_tensor,
"next.done": done_tensor,
"next.truncated": truncated_tensor,
"info": info_dict
}
result = processor(batch) # Returns EnvTransition dictionary, not batch dict!
print(type(result))  # <class 'dict'>

This is particularly useful for environment interaction where you want consistent dictionary output:
from lerobot.processor.observation_processor import VanillaObservationProcessor
# Create processor that always outputs EnvTransition for environment interaction
# This avoids format conversion overhead during real-time control
env_processor = RobotProcessor(
[VanillaObservationProcessor()],
to_transition=lambda x: x, # Pass through - no conversion needed
to_output=lambda x: x, # Always return EnvTransition dictionary
)
# Environment interaction loop
env = make_env()
obs, info = env.reset()
for step in range(1000):
# Create transition - input is already in dictionary format
transition = {
TransitionKey.OBSERVATION: obs,
TransitionKey.ACTION: None,
TransitionKey.REWARD: 0.0,
TransitionKey.DONE: False,
TransitionKey.TRUNCATED: False,
TransitionKey.INFO: info,
TransitionKey.COMPLEMENTARY_DATA: {"step": step}
}
# Process - output is guaranteed to be EnvTransition dictionary
processed_transition = env_processor(transition)
processed_obs = processed_transition[TransitionKey.OBSERVATION]
# Use with policy
action = policy.select_action(processed_obs)
obs, reward, done, truncated, info = env.step(action)
if done or truncated:
        break

Use EnvTransition dictionary output when you are interacting with an environment in real time, want to avoid format-conversion overhead, or want to access fields through TransitionKey.
Use batch dictionary output (the default) when you are training on LeRobotDataset batches or integrating with existing LeRobot training code.
# For environment interaction - use dictionary output
env_processor = RobotProcessor(
steps=[ImageProcessor(), StateProcessor()],
to_output=lambda x: x # Return EnvTransition dictionary
)
# For training - use batch output (default)
train_processor = RobotProcessor(
steps=[ImageProcessor(), StateProcessor(), NormalizerProcessor(...)],
# to_output defaults to _default_transition_to_batch
)
# Training loop
for batch in dataloader:
processed_batch = train_processor(batch) # Returns batch dict
loss = model.compute_loss(processed_batch)
# Environment loop
for step in range(1000):
transition = {
TransitionKey.OBSERVATION: obs,
TransitionKey.ACTION: None,
TransitionKey.REWARD: 0.0,
TransitionKey.DONE: False,
TransitionKey.TRUNCATED: False,
TransitionKey.INFO: info,
TransitionKey.COMPLEMENTARY_DATA: {}
}
processed_transition = env_processor(transition) # Returns EnvTransition dictionary
obs = processed_transition[TransitionKey.OBSERVATION]
    action = policy.select_action(obs)

The default conversion uses “next.*” prefixes because this matches the standard format used by LeRobot datasets and follows the convention that rewards, done flags, and truncated flags are the result of taking an action (i.e., they come from the “next” state):
# Standard RL transition format
# (s_t,   a_t,   r_{t+1},   done_{t+1},   truncated_{t+1})
#   ^      ^        ^            ^               ^
#   |      |        |            |               |
#   |      |        |            |               +-- Result of action a_t
#   |      |        |            +-- Result of action a_t
#   |      |        +-- Result of action a_t
#   |      +-- Action taken in state s_t
#   +-- State at time t
batch = {
"observation.state": s_t,
"action": a_t,
"next.reward": r_{t+1}, # Reward received after taking action
"next.done": done_{t+1}, # Done flag after taking action
"next.truncated": truncated_{t+1}, # Truncated flag after taking action
}

Let’s create a processor that properly handles image and state preprocessing:
from lerobot.processor.pipeline import RobotProcessor, TransitionKey
from lerobot.processor.observation_processor import ImageProcessor, StateProcessor
import numpy as np
# Create sample data
observation = {
"pixels": {
"camera_front": np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8),
"camera_side": np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
},
"agent_pos": np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7], dtype=np.float32)
}
# Create a full transition
transition = {
TransitionKey.OBSERVATION: observation,
TransitionKey.ACTION: None,
TransitionKey.REWARD: 0.0,
TransitionKey.DONE: False,
TransitionKey.TRUNCATED: False,
TransitionKey.INFO: {},
TransitionKey.COMPLEMENTARY_DATA: {}
}
# Create and use the processor
processor = RobotProcessor([
ImageProcessor(), # Converts uint8[0,255] to float32[0,1], HWC to CHW, adds batch dim
StateProcessor(), # Converts numpy arrays to torch tensors, adds batch dim
])
processed_transition = processor(transition)
processed_obs = processed_transition[TransitionKey.OBSERVATION]
# Check the results
print("Original keys:", observation.keys())
print("Processed keys:", processed_obs.keys())
print("Image shape:", processed_obs["observation.images.camera_front"].shape) # [1, 3, 480, 640]
print("Image dtype:", processed_obs["observation.images.camera_front"].dtype) # torch.float32
print("Image range:", processed_obs["observation.images.camera_front"].min().item(),
"to", processed_obs["observation.images.camera_front"].max().item()) # 0.0 to 1.0RobotProcessor seamlessly works with LeRobot’s batch dictionary format:
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.processor.pipeline import RobotProcessor
from lerobot.processor.normalize_processor import NormalizerProcessor
from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature
# Load a dataset
dataset = LeRobotDataset("lerobot/pusht")
# Define features and normalization
features = {
"observation.image": PolicyFeature(FeatureType.VISUAL, (3, 96, 96)),
"observation.state": PolicyFeature(FeatureType.STATE, (2,)),
"action": PolicyFeature(FeatureType.ACTION, (2,)),
}
norm_map = {
FeatureType.VISUAL: NormalizationMode.MEAN_STD,
FeatureType.STATE: NormalizationMode.MIN_MAX,
FeatureType.ACTION: NormalizationMode.MEAN_STD,
}
# Create processor with normalization
processor = RobotProcessor([
NormalizerProcessor.from_lerobot_dataset(dataset, features, norm_map),
])
# Process a batch from the dataset
batch = dataset[0] # Get first batch
print("Original batch keys:", list(batch.keys()))
print("Original image shape:", batch["observation.image"].shape)
# Process the batch - automatically converts to/from batch format
processed_batch = processor(batch)
print("Processed batch keys:", list(processed_batch.keys()))
print("Processed image range:", processed_batch["observation.image"].min().item(),
"to", processed_batch["observation.image"].max().item())
# Use with DataLoader
from torch.utils.data import DataLoader
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
for batch in dataloader:
# Process entire batch at once
processed_batch = processor(batch)
# Use processed batch for training
# model.train_step(processed_batch)
    break

RobotProcessor works great with replay buffers for online learning:
from lerobot.common.utils.buffer import ReplayBuffer
from lerobot.processor.pipeline import RobotProcessor
from lerobot.processor.device_processor import DeviceProcessor
# Create replay buffer
buffer = ReplayBuffer(capacity=10000)
# Create processor for online data
online_processor = RobotProcessor([
ImageProcessor(),
StateProcessor(),
DeviceProcessor(device="cuda"),
])
# During environment interaction
env = make_env()
obs, info = env.reset()
for step in range(1000):
# Raw environment observation
transition = {
TransitionKey.OBSERVATION: obs,
TransitionKey.ACTION: None,
TransitionKey.REWARD: 0.0,
TransitionKey.DONE: False,
TransitionKey.TRUNCATED: False,
TransitionKey.INFO: info,
TransitionKey.COMPLEMENTARY_DATA: {}
}
# Process for policy input
processed_transition = online_processor(transition)
processed_obs = processed_transition[TransitionKey.OBSERVATION]
# Get action from policy
action = policy.select_action(processed_obs)
# Execute action
next_obs, reward, done, truncated, info = env.step(action)
# Store in replay buffer (can store either format)
buffer.add(obs, action, reward, next_obs, done, truncated, info)
obs = next_obs
if done or truncated:
obs, info = env.reset()
# Sample and process batches for training
batch = buffer.sample(batch_size=32)
processed_batch = online_processor(batch)  # Processes entire batch

A processor step must follow certain conventions. Let’s create a complete example that shows all required and optional methods:
from dataclasses import dataclass
from typing import Any, Dict

import torch
import torch.nn.functional as F

from lerobot.processor.pipeline import EnvTransition, TransitionKey  # assuming EnvTransition is exported alongside TransitionKey
@dataclass
class ImagePadder:
"""Pad images to a standard size for batch processing."""
target_height: int = 224
target_width: int = 224
pad_value: float = 0.0
def __call__(self, transition: EnvTransition) -> EnvTransition:
"""Main processing method - required for all steps."""
obs = transition.get(TransitionKey.OBSERVATION)
if obs is None:
return transition
# Process all image observations
processed_obs = dict(obs) # Create a copy
for key in list(processed_obs.keys()):
if key.startswith("observation.images."):
img = processed_obs[key]
# Calculate padding
_, _, h, w = img.shape
pad_h = max(0, self.target_height - h)
pad_w = max(0, self.target_width - w)
if pad_h > 0 or pad_w > 0:
# Pad symmetrically
pad_left = pad_w // 2
pad_right = pad_w - pad_left
pad_top = pad_h // 2
pad_bottom = pad_h - pad_top
img = F.pad(img, (pad_left, pad_right, pad_top, pad_bottom),
mode='constant', value=self.pad_value)
processed_obs[key] = img
# Return modified transition
new_transition = transition.copy()
new_transition[TransitionKey.OBSERVATION] = processed_obs
return new_transition
def get_config(self) -> Dict[str, Any]:
"""Return JSON-serializable configuration - required for save/load."""
return {
"target_height": self.target_height,
"target_width": self.target_width,
"pad_value": self.pad_value
}
def state_dict(self) -> Dict[str, torch.Tensor]:
"""Return tensor state - only include torch.Tensor objects!"""
# This step has no learnable parameters
return {}
def load_state_dict(self, state: Dict[str, torch.Tensor]) -> None:
"""Load tensor state - required if state_dict returns non-empty dict."""
# Nothing to load for this step
pass
def reset(self) -> None:
"""Reset internal state at episode boundaries - required for stateful steps."""
# This step is stateless, so nothing to reset
        pass
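You can sanity-check a step like this in isolation by calling it directly on a hand-built transition (a minimal sketch; the partial dictionary below only fills in the observation entry):

import torch

padder = ImagePadder(target_height=224, target_width=224)
tiny = {TransitionKey.OBSERVATION: {"observation.images.cam": torch.zeros(1, 3, 200, 300)}}
padded = padder(tiny)
print(padded[TransitionKey.OBSERVATION]["observation.images.cam"].shape)  # torch.Size([1, 3, 224, 300])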
The get_config() and state_dict() methods serve different purposes, and it’s crucial to use them correctly:

@dataclass
class AdaptiveNormalizer:
"""Example showing proper use of get_config and state_dict."""
learning_rate: float = 0.01
epsilon: float = 1e-8
def __post_init__(self):
self.running_mean = None
self.running_var = None
self.num_samples = 0 # Python int, not tensor
def get_config(self) -> Dict[str, Any]:
"""ONLY Python objects that can be JSON serialized!"""
return {
"learning_rate": self.learning_rate, # float ✓
"epsilon": self.epsilon, # float ✓
"num_samples": self.num_samples, # int ✓
# "running_mean": self.running_mean, # torch.Tensor ✗ WRONG!
}
def state_dict(self) -> Dict[str, torch.Tensor]:
"""ONLY torch.Tensor objects!"""
if self.running_mean is None:
return {}
return {
"running_mean": self.running_mean, # torch.Tensor ✓
"running_var": self.running_var, # torch.Tensor ✓
# "num_samples": self.num_samples, # int ✗ WRONG!
# Instead, convert to tensor if needed:
"num_samples_tensor": torch.tensor(self.num_samples)
}
def load_state_dict(self, state: Dict[str, torch.Tensor]) -> None:
"""Load tensors and convert back to Python types if needed."""
self.running_mean = state.get("running_mean")
self.running_var = state.get("running_var")
if "num_samples_tensor" in state:
            self.num_samples = int(state["num_samples_tensor"].item())

The complementary_data field is perfect for passing information between steps without polluting the info dictionary:
@dataclass
class ImageStatisticsCalculator:
"""Calculate image statistics and pass to next steps."""
def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
comp_data = transition.get(TransitionKey.COMPLEMENTARY_DATA) or {}
if obs is None:
return transition
# Calculate statistics for all images
image_stats = {}
for key in obs:
if key.startswith("observation.images."):
img = obs[key]
stats = {
"mean": img.mean().item(),
"std": img.std().item(),
"min": img.min().item(),
"max": img.max().item(),
}
image_stats[key] = stats
# Store in complementary_data for next steps
comp_data = dict(comp_data) # Make a copy
comp_data["image_statistics"] = image_stats
# Return transition with updated complementary_data
new_transition = transition.copy()
new_transition[TransitionKey.COMPLEMENTARY_DATA] = comp_data
return new_transition
@dataclass
class AdaptiveBrightnessAdjuster:
"""Adjust brightness based on statistics from previous step."""
target_brightness: float = 0.5
def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
comp_data = transition.get(TransitionKey.COMPLEMENTARY_DATA) or {}
if obs is None or "image_statistics" not in comp_data:
return transition
# Use statistics from previous step
image_stats = comp_data["image_statistics"]
processed_obs = dict(obs) # Create a copy
for key in processed_obs:
if key.startswith("observation.images.") and key in image_stats:
current_mean = image_stats[key]["mean"]
brightness_adjust = self.target_brightness - current_mean
# Adjust brightness
processed_obs[key] = torch.clamp(processed_obs[key] + brightness_adjust, 0, 1)
new_transition = transition.copy()
new_transition[TransitionKey.OBSERVATION] = processed_obs
return new_transition
# Use them together
processor = RobotProcessor([
ImageProcessor(),
ImageStatisticsCalculator(), # Calculates stats
AdaptiveBrightnessAdjuster(), # Uses stats from previous step
])

One of the most powerful features of RobotProcessor is the ability to override step configurations when loading from saved checkpoints. This is particularly useful for handling non-serializable objects like environment instances, database connections, or hardware interfaces that can’t be saved to JSON.
Imagine you have a processor step that needs a live gym environment:
from dataclasses import dataclass
from typing import Any

import gym
import torch

from lerobot.processor.pipeline import EnvTransition, ProcessorStepRegistry, TransitionKey
@ProcessorStepRegistry.register("action_repeat_step")
@dataclass
class ActionRepeatStep:
"""Step that repeats actions using environment feedback."""
repeat_count: int = 3
env: gym.Env = None # This can't be serialized to JSON!
def __call__(self, transition: EnvTransition) -> EnvTransition:
action = transition.get(TransitionKey.ACTION)
if self.env is not None and action is not None:
# Repeat action multiple times in environment
total_reward = 0
for _ in range(self.repeat_count):
_, r, d, t, _ = self.env.step(action)
total_reward += r
if d or t:
break
# Update reward in transition
new_transition = transition.copy()
new_transition[TransitionKey.REWARD] = total_reward
return new_transition
return transition
def get_config(self) -> dict[str, Any]:
# Note: env is NOT included because it's not serializable
return {"repeat_count": self.repeat_count}
def state_dict(self) -> dict[str, torch.Tensor]:
return {}
def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
        pass

If you try to save and load this processor normally, the env parameter will be lost because it can’t be serialized to JSON.
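Concretely, here is the failure mode (a sketch; it assumes each step is re-instantiated from its saved get_config() output):

env = gym.make("CartPole-v1")
processor = RobotProcessor([ActionRepeatStep(repeat_count=2, env=env)])
processor.save_pretrained("./cartpole_processor")  # only {"repeat_count": 2} is written to the config

reloaded = RobotProcessor.from_pretrained("./cartpole_processor")
# The reloaded ActionRepeatStep comes back with env=None, so it silently passes transitions through unchanged.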
The overrides parameter in from_pretrained() allows you to provide non-serializable objects when loading:
from lerobot.processor.pipeline import RobotProcessor
# Create processor with environment
env = gym.make("CartPole-v1")
action_step = ActionRepeatStep(repeat_count=2, env=env)
processor = RobotProcessor([action_step], name="CartPoleProcessor")
# Save the processor (env won't be saved)
processor.save_pretrained("./cartpole_processor")
# Later, load with environment override
new_env = gym.make("CartPole-v1")
loaded_processor = RobotProcessor.from_pretrained(
"./cartpole_processor",
overrides={
"action_repeat_step": {"env": new_env} # Provide the environment
}
)

The overrides parameter is a dictionary that maps each step’s name to a dictionary of constructor arguments to override:
overrides = {
"StepClassName": {"param1": "new_value", "param2": 42},
"registered_step_name": {"device": "cuda", "learning_rate": 0.01}
}

During loading, the saved configuration of each matching step is merged with your overrides (override values win) before the step is instantiated.
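Conceptually (this is an illustration of the merge, not the library’s actual code):

saved_config = {"repeat_count": 2}                    # read from the saved processor configuration
user_override = {"env": new_env, "repeat_count": 3}   # passed via overrides=...
final_kwargs = {**saved_config, **user_override}      # override values win
step = ActionRepeatStep(**final_kwargs)

The examples below apply the same pattern to other non-serializable dependencies, starting with a kinematics solver: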
@ProcessorStepRegistry.register("ik_solver_step")
@dataclass
class InverseKinematicsStep:
"""Convert Cartesian positions to joint angles."""
robot_model: str = "ur5"
solver_timeout: float = 0.1
kinematics_solver: Any = None # Non-serializable solver instance
    def __call__(self, transition: EnvTransition) -> EnvTransition:
        action = transition.get(TransitionKey.ACTION)
        if self.kinematics_solver is not None and action is not None:
            # Convert Cartesian action to joint angles
            joint_angles = self.kinematics_solver.solve(action, timeout=self.solver_timeout)
            new_transition = transition.copy()
            new_transition[TransitionKey.ACTION] = joint_angles
            return new_transition
        return transition
def get_config(self) -> dict[str, Any]:
return {
"robot_model": self.robot_model,
"solver_timeout": self.solver_timeout
}
# Save processor without solver
processor = RobotProcessor([InverseKinematicsStep(robot_model="ur5")])
processor.save_pretrained("./robot_processor")
# Load with solver instance
from robotics_toolbox import URKinematics
solver = URKinematics("ur5")
loaded_processor = RobotProcessor.from_pretrained(
"./robot_processor",
overrides={
"ik_solver_step": {
"kinematics_solver": solver,
"solver_timeout": 0.05 # Also override timeout
}
}
)

Another common case is a hardware interface, such as a camera, that must be provided at load time:

@ProcessorStepRegistry.register("camera_capture_step")
@dataclass
class CameraCaptureStep:
"""Capture images from physical camera."""
camera_id: int = 0
resolution: tuple = (640, 480)
camera_interface: Any = None # Hardware interface
def get_config(self) -> dict[str, Any]:
return {
"camera_id": self.camera_id,
"resolution": self.resolution
}
# Deploy on different robots with different camera setups
# Robot A
camera_a = CameraInterface("/dev/video0")
processor_a = RobotProcessor.from_pretrained(
"shared/vision_processor",
overrides={
"camera_capture_step": {
"camera_interface": camera_a,
"resolution": (1920, 1080) # High-res camera
}
}
)
# Robot B
camera_b = CameraInterface("/dev/video1")
processor_b = RobotProcessor.from_pretrained(
"shared/vision_processor",
overrides={
"camera_capture_step": {
"camera_interface": camera_b,
"resolution": (640, 480) # Lower-res camera
}
}
)

The same mechanism lets you swap simulation and real-robot dependencies:

# Training processor that works with simulation
@ProcessorStepRegistry.register("physics_validator")
@dataclass
class PhysicsValidatorStep:
"""Validate actions against physics constraints."""
max_force: float = 100.0
physics_engine: Any = None
def get_config(self) -> dict[str, Any]:
return {"max_force": self.max_force}
# Different physics engines for different environments
import pybullet as pb
import mujoco
# Simulation deployment
sim_engine = pb.connect(pb.DIRECT)
sim_processor = RobotProcessor.from_pretrained(
"shared/control_processor",
overrides={
"physics_validator": {
"physics_engine": sim_engine,
"max_force": 150.0 # Higher limits in sim
}
}
)
# Real robot deployment
real_engine = RealRobotInterface()
real_processor = RobotProcessor.from_pretrained(
"shared/control_processor",
overrides={
"physics_validator": {
"physics_engine": real_engine,
"max_force": 50.0 # Conservative limits on real robot
}
}
)

The override system uses exact string matching:
For registered steps, use the registry name (the string passed to @ProcessorStepRegistry.register()):
@ProcessorStepRegistry.register("my_custom_step")
class MyStep:
pass
# Use registry name in overrides
overrides = {"my_custom_step": {"param": "value"}}Use the exact class name:
class MyUnregisteredStep:
pass
# Use class name in overrides
overrides = {"MyUnregisteredStep": {"param": "value"}}The override system provides helpful error messages:
# This will raise KeyError with helpful message
overrides = {"NonExistentStep": {"param": "value"}}
try:
processor = RobotProcessor.from_pretrained("./processor", overrides=overrides)
except KeyError as e:
print(e)
# Output: Override keys ['NonExistentStep'] do not match any step in the saved configuration.
    # Available step keys: ['ActualStepName', 'AnotherStepName']

# Invalid parameter types are caught
overrides = {"MyStep": {"numeric_param": "not_a_number"}}
try:
processor = RobotProcessor.from_pretrained("./processor", overrides=overrides)
except ValueError as e:
print(e)
    # Output: Failed to instantiate processor step 'MyStep' with config: {...}

When you have multiple steps of the same class, all instances get the same override:
step1 = MyStep(param=1)
step2 = MyStep(param=2)
processor = RobotProcessor([step1, step2])
# Both steps will get the override
overrides = {"MyStep": {"param": 999}}
loaded = RobotProcessor.from_pretrained("./processor", overrides=overrides)
# Both steps now have param=999

To override steps individually, use different classes or register with different names:
@ProcessorStepRegistry.register("step_1")
class MyStep:
pass
@ProcessorStepRegistry.register("step_2")
class MyStep: # Same class, different registry names
pass
# Now you can override them separately
overrides = {
"step_1": {"param": 1},
"step_2": {"param": 2}
}

When creating steps that need non-serializable objects, design them with overrides in mind:
@dataclass
class WellDesignedStep:
# Serializable configuration
timeout: float = 1.0
retry_count: int = 3
# Non-serializable objects with default None
database: Any = None
api_client: Any = None
def __post_init__(self):
# Validate that required non-serializable objects are provided
if self.database is None:
raise ValueError("database must be provided via overrides")
def get_config(self) -> dict[str, Any]:
# Only include serializable parameters
return {
"timeout": self.timeout,
"retry_count": self.retry_count
        }
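Loading such a step then looks like this (a sketch; db and api_client stand in for whatever non-serializable objects your deployment provides):

loaded = RobotProcessor.from_pretrained(
    "./processor",
    overrides={
        "WellDesignedStep": {   # unregistered step, so the class name is used as the key
            "database": db,
            "api_client": api_client,
        }
    },
)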
@ProcessorStepRegistry.register("robot_arm_controller")
class ArmControlStep:
pass
@ProcessorStepRegistry.register("gripper_controller")
class GripperControlStep:
pass
# Clear override keys
overrides = {
"robot_arm_controller": {"joint_limits": arm_limits},
"gripper_controller": {"force_limit": gripper_force}
}

Include clear documentation about what overrides are needed:
@ProcessorStepRegistry.register("vision_processor")
class VisionProcessor:
"""Process camera images for robot vision.
Required overrides when loading:
camera_interface: Hardware camera interface object
Optional overrides:
resolution: Camera resolution tuple (default: (640, 480))
fps: Camera frame rate (default: 30)
"""
camera_interface: Any = None
resolution: tuple = (640, 480)
    fps: int = 30

Create configuration helpers for different deployment environments:
# config/simulation.py
def get_simulation_overrides():
return {
"camera_step": {"camera_interface": SimCamera()},
"physics_step": {"engine": SimPhysics()},
"control_step": {"safety_limits": False}
}
# config/production.py
def get_production_overrides():
return {
"camera_step": {"camera_interface": RealCamera("/dev/video0")},
"physics_step": {"engine": RealPhysics()},
"control_step": {"safety_limits": True}
}
# Usage
from config.production import get_production_overrides
processor = RobotProcessor.from_pretrained(
"shared/processor",
overrides=get_production_overrides()
)

Overrides work seamlessly with Hugging Face Hub sharing:
# Save processor without non-serializable objects
processor.push_to_hub("my-lab/robot-processor")
# Anyone can load and provide their own environment
import gym
local_env = gym.make("MyRobotEnv-v1")
processor = RobotProcessor.from_pretrained(
"my-lab/robot-processor",
overrides={"env_step": {"env": local_env}}
)

This enables sharing of preprocessing logic while allowing each user to provide their own environment-specific dependencies.
Here’s a complete example showing proper device management and all features:
from dataclasses import dataclass
from typing import Any, Dict

import numpy as np
import torch
import torch.nn.functional as F

from lerobot.processor.pipeline import EnvTransition, RobotProcessor, ProcessorStepRegistry, TransitionKey
@ProcessorStepRegistry.register("device_mover")
@dataclass
class DeviceMover:
"""Move all tensors to specified device."""
device: str = "cuda"
def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
if obs is None:
return transition
# Move all tensor observations to device
processed_obs = dict(obs) # Create a copy
for key, value in processed_obs.items():
if isinstance(value, torch.Tensor):
processed_obs[key] = value.to(self.device)
# Also handle action if present
action = transition.get(TransitionKey.ACTION)
if action is not None and isinstance(action, torch.Tensor):
action = action.to(self.device)
        new_transition = transition.copy()
        new_transition[TransitionKey.OBSERVATION] = processed_obs
        new_transition[TransitionKey.ACTION] = action
        return new_transition
def get_config(self) -> Dict[str, Any]:
return {"device": str(self.device)}
@ProcessorStepRegistry.register("running_normalizer")
@dataclass
class RunningNormalizer:
"""Normalize using running statistics with proper device handling."""
feature_dim: int
momentum: float = 0.1
def __post_init__(self):
# Initialize as None - will be created on first call with correct device
self.running_mean = None
self.running_var = None
self.initialized = False
def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
if obs is None or "observation.state" not in obs:
return transition
state = obs["observation.state"]
# Initialize on first call with correct device
if not self.initialized:
device = state.device
self.running_mean = torch.zeros(self.feature_dim, device=device)
self.running_var = torch.ones(self.feature_dim, device=device)
self.initialized = True
# Update statistics
with torch.no_grad():
batch_mean = state.mean(dim=0)
batch_var = state.var(dim=0, unbiased=False)
self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * batch_mean
self.running_var = (1 - self.momentum) * self.running_var + self.momentum * batch_var
# Normalize
state_normalized = (state - self.running_mean) / (self.running_var + 1e-8).sqrt()
processed_obs = dict(obs) # Create a copy
processed_obs["observation.state"] = state_normalized
new_transition = transition.copy()
new_transition[TransitionKey.OBSERVATION] = processed_obs
return new_transition
def get_config(self) -> Dict[str, Any]:
return {
"feature_dim": self.feature_dim,
"momentum": self.momentum,
"initialized": self.initialized
}
def state_dict(self) -> Dict[str, torch.Tensor]:
if not self.initialized:
return {}
return {
"running_mean": self.running_mean,
"running_var": self.running_var
}
def load_state_dict(self, state: Dict[str, torch.Tensor]) -> None:
if state:
self.running_mean = state["running_mean"]
self.running_var = state["running_var"]
self.initialized = True
def reset(self) -> None:
# Don't reset statistics - they persist across episodes
pass
# Create a complete pipeline
processor = RobotProcessor([
ImageProcessor(), # Convert images to float32 [0,1]
StateProcessor(), # Convert states to torch tensors
ImagePadder(target_height=224, target_width=224), # Pad images to standard size
DeviceMover("cuda"), # Move everything to GPU
RunningNormalizer(7), # Normalize states
], name="CompletePreprocessor")
# The processor handles device transfers automatically
processor = processor.to("cuda") # Moves all stateful components to GPU
# Use it
obs = {
"pixels": {"cam": np.random.randint(0, 255, (200, 300, 3), dtype=np.uint8)},
"agent_pos": np.random.randn(7).astype(np.float32)
}
transition = {
TransitionKey.OBSERVATION: obs,
TransitionKey.ACTION: None,
TransitionKey.REWARD: 0.0,
TransitionKey.DONE: False,
TransitionKey.TRUNCATED: False,
TransitionKey.INFO: {},
TransitionKey.COMPLEMENTARY_DATA: {}
}
# Everything is processed and on GPU
processed = processor(transition)
print(processed[TransitionKey.OBSERVATION]["observation.images.cam"].device)  # cuda:0

Let’s see how RobotProcessor elegantly solves the scenarios we discussed earlier:
When your environment and policy speak different “languages”:
@ProcessorStepRegistry.register("key_remapper")
@dataclass
class KeyRemapper:
"""Rename observation keys to match policy expectations."""
key_mapping: Dict[str, str] = field(default_factory=lambda: {
"rgb_camera_front": "observation.images.wrist",
"joint_positions": "observation.state",
"gripper_state": "observation.gripper"
})
def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
if obs is None:
return transition
# Create new observation with renamed keys
processed_obs = dict(obs) # Create a copy
renamed_obs = {}
for old_key, new_key in self.key_mapping.items():
if old_key in processed_obs:
renamed_obs[new_key] = processed_obs[old_key]
# Keep any unmapped keys as-is
for key, value in processed_obs.items():
if key not in self.key_mapping:
renamed_obs[key] = value
new_transition = transition.copy()
new_transition[TransitionKey.OBSERVATION] = renamed_obs
        return new_transition

When you need to crop and resize images to focus on the manipulation workspace:
@ProcessorStepRegistry.register("workspace_cropper")
@dataclass
class WorkspaceCropper:
"""Crop and resize images to focus on robot workspace."""
crop_bbox: Tuple[int, int, int, int] = (400, 200, 1200, 800) # (x1, y1, x2, y2)
output_size: Tuple[int, int] = (224, 224)
def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
if obs is None:
return transition
processed_obs = dict(obs) # Create a copy
for key in list(processed_obs.keys()):
if key.startswith("observation.images."):
img = processed_obs[key]
# Crop to workspace
x1, y1, x2, y2 = self.crop_bbox
img_cropped = img[:, :, y1:y2, x1:x2]
# Resize to expected dimensions
img_resized = F.interpolate(
img_cropped,
size=self.output_size,
mode='bilinear',
align_corners=False
)
processed_obs[key] = img_resized
new_transition = transition.copy()
new_transition[TransitionKey.OBSERVATION] = processed_obs
        return new_transition

Now you can compose these steps into robot-specific pipelines:
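The pipelines below also reference VelocityCalculator and ActionSmoother steps that are not defined in this tutorial; here is a minimal sketch of what such steps might look like (the names, parameters, and exact behavior are assumptions):

@dataclass
class VelocityCalculator:
    """Append finite-difference velocities to the state vector (illustrative sketch)."""
    state_key: str = "observation.state"

    def __post_init__(self):
        self.prev_state = None

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        obs = transition.get(TransitionKey.OBSERVATION)
        if obs is None or self.state_key not in obs:
            return transition
        state = obs[self.state_key]
        velocity = torch.zeros_like(state) if self.prev_state is None else state - self.prev_state
        self.prev_state = state.clone()
        processed_obs = dict(obs)
        processed_obs[self.state_key] = torch.cat([state, velocity], dim=-1)
        new_transition = transition.copy()
        new_transition[TransitionKey.OBSERVATION] = processed_obs
        return new_transition

    def get_config(self):
        return {"state_key": self.state_key}

    def reset(self):
        self.prev_state = None

@dataclass
class ActionSmoother:
    """Exponentially smooth actions to reduce jerk on the robot (illustrative sketch)."""
    alpha: float = 0.3

    def __post_init__(self):
        self.prev_action = None

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        action = transition.get(TransitionKey.ACTION)
        if action is None:
            return transition
        smoothed = action if self.prev_action is None else self.alpha * action + (1 - self.alpha) * self.prev_action
        self.prev_action = smoothed.clone()
        new_transition = transition.copy()
        new_transition[TransitionKey.ACTION] = smoothed
        return new_transition

    def get_config(self):
        return {"alpha": self.alpha}

    def reset(self):
        self.prev_action = None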
# Pipeline for Robot A with wrist camera
robot_a_processor = RobotProcessor([
# Observation preprocessing
KeyRemapper({
"wrist_rgb": "observation.images.wrist",
"arm_joints": "observation.state"
}),
ImageProcessor(),
WorkspaceCropper(crop_bbox=(300, 100, 900, 700), output_size=(224, 224)),
StateProcessor(),
VelocityCalculator(state_key="observation.state"),
DeviceMover("cuda"),
# Action postprocessing
ActionSmoother(alpha=0.2),
], name="RobotA_ACT_Processor")
# Pipeline for Robot B with overhead camera
robot_b_processor = RobotProcessor([
# Different key mapping for Robot B
KeyRemapper({
"overhead_cam": "observation.images.wrist", # Reuse same policy!
"joint_encoders": "observation.state"
}),
ImageProcessor(),
WorkspaceCropper(crop_bbox=(100, 50, 1100, 950), output_size=(224, 224)),
StateProcessor(),
VelocityCalculator(state_key="observation.state"),
DeviceMover("cuda"),
ActionSmoother(alpha=0.3), # Different smoothing
], name="RobotB_ACT_Processor")
# Share processors with the community
robot_a_processor.push_to_hub("my-lab/robot-a-act-processor")
robot_b_processor.push_to_hub("my-lab/robot-b-act-processor")
# Now anyone can use the same preprocessing
processor = RobotProcessor.from_pretrained("my-lab/robot-a-act-processor")The beauty of this approach is that:
def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
# Always check if observation exists
if obs is None:
return transition
# Also check for specific keys
if "observation.state" not in obs:
        return transition

Second, return a modified copy of the transition instead of rebuilding it from scratch, so that fields you did not touch are preserved:

# Good - copy the transition and only replace what you changed
new_transition = transition.copy()
new_transition[TransitionKey.OBSERVATION] = modified_obs
return new_transition

# Bad - rebuilding the transition by hand drops reward, info, complementary_data, ...
return {TransitionKey.OBSERVATION: modified_obs, TransitionKey.ACTION: None, TransitionKey.REWARD: 0.0}

Third, clone tensors that you store across calls, so later in-place modifications cannot corrupt your saved state:

def __call__(self, transition: EnvTransition) -> EnvTransition:
obs = transition.get(TransitionKey.OBSERVATION)
if self.store_previous:
# Good - clone to avoid reference issues
self.previous_state = obs["observation.state"].clone()
# Bad - stores reference that might be modified
        # self.previous_state = obs["observation.state"]

Finally, clone tensors returned from state_dict() to avoid memory-sharing issues:

def state_dict(self) -> Dict[str, torch.Tensor]:
if self.buffer is None:
return {}
# Good - clone to avoid memory sharing issues
return {"buffer": self.buffer.clone()}Here’s how to use RobotProcessor in a real robot control loop, showing both tuple and batch formats:
import time
from dataclasses import dataclass
from pathlib import Path
from typing import Any, Dict

import torch

from lerobot.processor.pipeline import EnvTransition, RobotProcessor, ProcessorStepRegistry, TransitionKey
from lerobot.policies.act.modeling_act import ACTPolicy
# Create preprocessing pipeline
preprocessor = RobotProcessor([
ImageProcessor(),
StateProcessor(),
ImagePadder(target_height=224, target_width=224),
DeviceMover("cuda"),
RunningNormalizer(feature_dim=14), # For 14-DOF robot
], name="ACTPreprocessor")
# Create postprocessing pipeline for actions
@ProcessorStepRegistry.register("action_clipper")
@dataclass
class ActionClipper:
"""Clip actions to safe ranges."""
min_value: float = -1.0
max_value: float = 1.0
def __call__(self, transition: EnvTransition) -> EnvTransition:
action = transition.get(TransitionKey.ACTION)
if action is not None:
action = torch.clamp(action, self.min_value, self.max_value)
new_transition = transition.copy()
new_transition[TransitionKey.ACTION] = action
return new_transition
return transition
def get_config(self) -> Dict[str, Any]:
return {"min_value": self.min_value, "max_value": self.max_value}
postprocessor = RobotProcessor([
ActionClipper(min_value=-0.5, max_value=0.5),
ActionSmoother(alpha=0.3),
], name="ACTPostprocessor")
# Load policy
policy = ACTPolicy.from_pretrained("lerobot/act_aloha_sim_transfer_cube_human")
# Move everything to GPU
preprocessor = preprocessor.to("cuda")
postprocessor = postprocessor.to("cuda")
policy = policy.to("cuda")
# Control loop using EnvTransition format
env = make_robot_env()
obs, info = env.reset()
for episode in range(10):
print(f"Episode {episode + 1}")
for step in range(1000):
# Create transition with raw observation
transition = {
TransitionKey.OBSERVATION: obs,
TransitionKey.ACTION: None,
TransitionKey.REWARD: 0.0,
TransitionKey.DONE: False,
TransitionKey.TRUNCATED: False,
TransitionKey.INFO: info,
TransitionKey.COMPLEMENTARY_DATA: {"step": step}
}
# Preprocess - works with dictionary format
processed_transition = preprocessor(transition)
processed_obs = processed_transition.get(TransitionKey.OBSERVATION)
# Get action from policy
with torch.no_grad():
action = policy.select_action(processed_obs)
# Postprocess action
action_transition = {
TransitionKey.OBSERVATION: processed_obs,
TransitionKey.ACTION: action,
TransitionKey.REWARD: 0.0,
TransitionKey.DONE: False,
TransitionKey.TRUNCATED: False,
TransitionKey.INFO: info,
TransitionKey.COMPLEMENTARY_DATA: {"raw_action": action.clone()} # Store raw action in complementary_data
}
processed_action_transition = postprocessor(action_transition)
final_action = processed_action_transition.get(TransitionKey.ACTION)
# Execute action
obs, reward, terminated, truncated, info = env.step(final_action.cpu().numpy())
if terminated or truncated:
# Reset at episode boundary
preprocessor.reset()
postprocessor.reset()
obs, info = env.reset()
break
# Save preprocessor with learned statistics
preprocessor.save_pretrained(f"./checkpoints/preprocessor_ep{episode}")
# Alternative: Using batch dictionary format
# This is useful when integrating with existing LeRobot training code
def control_loop_with_batch_format():
"""Example using batch dictionary format."""
obs, info = env.reset()
for step in range(1000):
# Create batch dictionary
batch = {
"observation.image": torch.from_numpy(obs["pixels"]).unsqueeze(0),
"observation.state": torch.from_numpy(obs["agent_pos"]).unsqueeze(0),
"action": torch.zeros(1, 7), # Placeholder
"next.reward": torch.tensor([0.0]),
"next.done": torch.tensor([False]),
"next.truncated": torch.tensor([False]),
"info": info,
}
# Preprocess - works with batch format
processed_batch = preprocessor(batch)
# Get action from policy
with torch.no_grad():
action = policy.select_action({
k: v for k, v in processed_batch.items()
if k.startswith("observation.")
})
# Add action to batch for postprocessing
processed_batch["action"] = action
# Postprocess
final_batch = postprocessor(processed_batch)
final_action = final_batch["action"]
# Execute action
obs, reward, terminated, truncated, info = env.step(final_action.cpu().numpy())
if terminated or truncated:
break
# Push final version to hub
preprocessor.push_to_hub("my-username/act-preprocessor")
postprocessor.push_to_hub("my-username/act-postprocessor")

Use the full power of RobotProcessor for debugging:
# Enable detailed logging
def log_observation_shapes(step_idx: int, transition: EnvTransition):
obs = transition.get(TransitionKey.OBSERVATION)
if obs:
print(f"Step {step_idx} observations:")
for key, value in obs.items():
if isinstance(value, torch.Tensor):
print(f" {key}: shape={value.shape}, device={value.device}, dtype={value.dtype}")
return None
processor.register_after_step_hook(log_observation_shapes)
# Monitor complementary data flow
def monitor_complementary_data(step_idx: int, transition: EnvTransition):
comp_data = transition.get(TransitionKey.COMPLEMENTARY_DATA)
if comp_data:
print(f"Step {step_idx} complementary_data: {list(comp_data.keys())}")
return None
processor.register_before_step_hook(monitor_complementary_data)
# Validate data integrity
def validate_tensors(step_idx: int, transition: EnvTransition):
obs = transition.get(TransitionKey.OBSERVATION)
if obs:
for key, value in obs.items():
if isinstance(value, torch.Tensor):
if torch.isnan(value).any():
raise ValueError(f"NaN detected in {key} after step {step_idx}")
if torch.isinf(value).any():
raise ValueError(f"Inf detected in {key} after step {step_idx}")
return None
processor.register_after_step_hook(validate_tensors)
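Hooks are also handy for lightweight profiling. A small sketch (it assumes the before and after hooks receive the same step index for a given step):

import time

step_start = {}

def start_timer(step_idx: int, transition: EnvTransition):
    step_start[step_idx] = time.perf_counter()
    return None

def report_timing(step_idx: int, transition: EnvTransition):
    elapsed_ms = (time.perf_counter() - step_start.pop(step_idx, time.perf_counter())) * 1000
    print(f"Step {step_idx} took {elapsed_ms:.2f} ms")
    return None

processor.register_before_step_hook(start_timer)
processor.register_after_step_hook(report_timing)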
RobotProcessor provides a powerful, modular approach to data preprocessing in robotics: transformations become composable, testable steps; device handling lives in one place instead of scattered .to(device) calls; complementary_data gives you a clean channel for passing information between steps; TransitionKey replaces magic numbers for accessing transition fields; and custom to_transition and to_output functions adapt the pipeline to specific needs.

Key advantages of the dual format approach:
The same processor accepts both EnvTransition dictionaries and batch dictionaries, so it can sit in a real-time control loop as well as a training loop, and you can use to_output to control the return format regardless of the input format.

By following these patterns, your preprocessing code becomes more maintainable, shareable, and robust while being compatible with the entire LeRobot ecosystem.
For the full API reference, see the RobotProcessor API documentation.