SteadyArm is an AI-powered robotic camera operator that responds to your voice, your vibe, and your vision. Built on ROS, real-time APIs, and voice AI, SteadyArm lets directors control robotic camera arms using natural, conversational commands — no joystick, no GUI, just intention.
Think:
“Give me a slow push-in, Dutch angle, subject center frame — now track him.”
It just happens.
⸻
🔥 Core Features
🎙 Vibe Directing
Direct like you’re talking to your crew. Say the shot, the angle, the emotion — SteadyArm listens, interprets, and moves. Uses Whisper + GPT-4o + custom shot logic to translate spoken commands into real-time robot motion.
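A minimal sketch of the kind of pipeline this implies, assuming the official `openai` Python client for both Whisper transcription and GPT-4o; `send_shot_command()` is a hypothetical stand-in for whatever hands the parsed shot to the motion layer.

```python
# Hedged sketch: voice -> Whisper -> GPT-4o -> structured shot -> motion layer.
# Assumes the official openai client; send_shot_command() is a placeholder.
import json
from openai import OpenAI

client = OpenAI()

def send_shot_command(shot: dict) -> None:
    # Placeholder for the real motion layer (ROS / URScript bridge).
    print("dispatching shot:", shot)

def direct(audio_path: str) -> dict:
    # 1. Transcribe the director's spoken command with Whisper.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(
            model="whisper-1", file=f
        ).text

    # 2. Ask GPT-4o to map the phrasing onto a structured shot description.
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system",
             "content": "Turn the director's command into JSON with keys: "
                        "move, angle, framing, subject_tracking."},
            {"role": "user", "content": transcript},
        ],
    )
    shot = json.loads(response.choices[0].message.content)

    # 3. Hand the structured shot off to the motion layer.
    send_shot_command(shot)
    return shot
```

Calling `direct("take_01.wav")` on a clip of the command above would return something like `{"move": "push-in", "angle": "dutch", "framing": "center", "subject_tracking": true}`.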
🎨 Vibe Styling
Add flair on the fly. Apply visual styles, lighting moods, or framing references (e.g., “make it look like Blade Runner” or “give me early Spielberg”). Uses ElevenLabs voice interaction + Stable Diffusion for visual feedback + Blender for real-time previews.
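As one hedged illustration of the Stable Diffusion leg of that loop, the sketch below turns a spoken style cue into a quick look-reference frame using the `diffusers` library; the model ID, prompt template, and output path are assumptions, and the ElevenLabs and Blender pieces are left out.

```python
# Hedged sketch: style cue -> Stable Diffusion look-reference frame.
# Model ID and prompt wording are illustrative, not the project's actual setup.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def style_reference(style_cue: str):
    # e.g. style_cue = "make it look like Blade Runner"
    prompt = f"film still, {style_cue}, cinematic lighting, anamorphic framing"
    image = pipe(prompt, num_inference_steps=25).images[0]
    image.save("style_reference.png")
    return image
```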
🧠 Modular Robot Control
• Built on ROS with URScript compatibility (see the sketch after this list)
• Supports the UR5e arm, the Canon XC10, and other camera rigs
• Real-time control via web interface, Lovable-style Vibe Code, or live voice
• Integrated with Blender for shot simulation and pre-viz
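A minimal sketch of the URScript path, assuming the UR controller's secondary client interface on TCP port 30002; the robot IP, joint targets, and speed values are illustrative only.

```python
# Hedged sketch: stream one URScript movej command to a UR5e controller.
# ROBOT_IP and the joint angles are placeholders.
import socket

ROBOT_IP = "192.168.0.10"   # placeholder address of the UR controller
URSCRIPT_PORT = 30002       # UR secondary client interface

def slow_push_in():
    # Joint angles in radians for an illustrative "push-in" pose.
    joints = [0.0, -1.2, 1.4, -1.8, -1.57, 0.0]
    script = "movej({}, a=0.4, v=0.1)\n".format(joints)
    with socket.create_connection((ROBOT_IP, URSCRIPT_PORT), timeout=2.0) as s:
        s.sendall(script.encode("utf-8"))

if __name__ == "__main__":
    slow_push_in()
```

In practice SteadyArm's shot logic would compute the joint targets from the parsed command rather than hard-coding them.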
🖼 Visualizer Mode
Use an overhead camera to visualize actor positions and camera moves. The AI renders real-time, stylized feedback on screens at the director's video village or on the playback monitor.
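A rough sketch of what the overhead pass could look like with OpenCV: grab frames from a top-down camera, mark detected people, and display the annotated feed. The camera index and the stock HOG people detector are stand-ins for the project's actual tracking, not its real pipeline.

```python
# Hedged sketch: overhead camera -> rough actor positions -> annotated feed.
import cv2

def run_overhead_visualizer(camera_index: int = 0):
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    cap = cv2.VideoCapture(camera_index)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Detect rough actor positions in the overhead view.
        boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("SteadyArm overhead", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()
```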