Give your OpenClaw AI agent a Reachy Mini robot body. OpenClaw is the brain: it controls what the robot says, how it moves, and what it sees. The OpenAI Realtime API handles voice I/O.
Works with physical robot OR MuJoCo simulator!
You don't need a physical Reachy Mini robot to use ReachyClaw!
ReachyClaw works with the Reachy Mini Simulator, a MuJoCo-based physics simulation
that runs on your computer. Watch your agent move and express emotions on screen
while you talk.
```bash
# Install simulator support
pip install "reachy-mini[mujoco]"

# Start the simulator (opens 3D window)
reachy-mini-daemon --sim

# In another terminal, run ReachyClaw
reachyclaw --gradio
```
**Mac users:** use `mjpython -m reachy_mini.daemon.app.main --sim` instead
ReachyClaw makes OpenClaw the actual brain: every message, every movement, every decision.
Every user message goes through your OpenClaw agent. No GPT-4o guessing: real responses with full tool access.
The OpenAI Realtime API provides low-latency speech-to-text and text-to-speech. It handles voice I/O only; there is no GPT-4o brain.
OpenClaw controls the robot body via action tags: head movement, emotions, dances, camera, face tracking.
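The action-tag idea above can be sketched as follows. This is a hypothetical illustration only: the tag syntax (`<action:...>`) and the action names (`nod`, `camera`) are invented for this example, and the real format lives in the ReachyClaw source. The core pattern is separating the agent's spoken text from embedded commands before playback.

```python
import re

# Hypothetical tag syntax for illustration; ReachyClaw's real action-tag
# format may differ -- check the project source for the actual grammar.
TAG_PATTERN = re.compile(r"<action:(\w+)>")

def split_reply(reply: str) -> tuple[str, list[str]]:
    """Separate spoken text from embedded action tags."""
    actions = TAG_PATTERN.findall(reply)          # collect tag names in order
    spoken = TAG_PATTERN.sub("", reply).strip()   # strip tags from speech
    return spoken, actions

spoken, actions = split_reply("Sure! <action:nod> Let me look. <action:camera>")
# actions == ['nod', 'camera']; spoken contains no tags
```

The spoken text would go to text-to-speech while each action name is dispatched to the robot (or simulator) in sequence.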
See through the robot's camera. Your agent can look around and describe what it sees.
No robot? Run with MuJoCo simulator and watch your agent move in a 3D window.
No 30-second context fetch. GPT-4o is just a relay, so the session starts immediately.
OpenClaw controls everything
Choose your setup:
Option A: 🤖 Physical Reachy Mini robot
Option B: 🖥️ MuJoCo Simulator (free, no hardware!)
Get ReachyClaw running with the simulator
```bash
# Clone ReachyClaw
git clone https://github.com/EdLuxAI/reachyclaw
cd reachyclaw

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install ReachyClaw + simulator
pip install -e .
pip install "reachy-mini[mujoco]"

# Configure (edit with your OpenClaw URL and OpenAI key)
cp .env.example .env
nano .env

# Terminal 1: Start simulator
reachy-mini-daemon --sim

# Terminal 2: Run ReachyClaw
reachyclaw --gradio
```
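As a rough sketch, the `.env` file will need your OpenClaw endpoint and an OpenAI API key. The variable names below are assumptions for illustration; the authoritative names are in `.env.example`:

```shell
# Hypothetical .env contents -- copy .env.example for the real variable names.
OPENCLAW_URL=http://localhost:18789   # assumed: where your OpenClaw agent is reachable
OPENAI_API_KEY=sk-...                 # your OpenAI key, used for the Realtime API
```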