ReachyClaw
Voice conversation · OpenClaw brain · Full body control
Reachy Mini App

Your OpenClaw agent, embodied.

Give your OpenClaw AI agent a Reachy Mini robot body. OpenClaw is the brain: it controls what the robot says, how it moves, and what it sees. The OpenAI Realtime API handles voice I/O.

🧠 OpenClaw brain 🎙️ OpenAI Realtime voice 💃 Full body control 🖥️ No robot required!
Reachy Mini Robot Dancing

Works with physical robot OR MuJoCo simulator!

๐Ÿ–ฅ๏ธ No Robot? No Problem!

You don't need a physical Reachy Mini robot to use ReachyClaw!

ReachyClaw works with the Reachy Mini Simulator, a MuJoCo-based physics simulation that runs on your computer. Watch your agent move and express emotions on screen while you talk.

# Install simulator support
pip install "reachy-mini[mujoco]"

# Start the simulator (opens 3D window)
reachy-mini-daemon --sim

# In another terminal, run ReachyClaw
reachyclaw --gradio

๐ŸŽ Mac Users: Use mjpython -m reachy_mini.daemon.app.main --sim instead

📚 Simulator Setup Guide

What's inside

ReachyClaw makes OpenClaw the actual brain: every message, every movement, every decision.

🧠

OpenClaw is the brain

Every user message goes through your OpenClaw agent. No GPT-4o guessing; real responses with full tool access.

🎤

Real-time voice

OpenAI Realtime API for low-latency speech-to-text and text-to-speech. Voice I/O only; no GPT-4o brain.

🤖

Full body control

OpenClaw controls the robot body via action tags: head movement, emotions, dances, camera, and face tracking.

👀

Vision

See through the robot's camera. Your agent can look around and describe what it sees.

🖥️

Simulator support

No robot? Run with MuJoCo simulator and watch your agent move in a 3D window.

⚡

Instant startup

No 30-second context fetch. GPT-4o is just a relay; the session starts immediately.

How it works

OpenClaw controls everything

  1. 🎤 Robot captures your voice
  2. 📝 OpenAI Realtime transcribes your speech
  3. 🧠 Your message goes to OpenClaw (the real brain)
  4. 🤖 OpenClaw responds with text + action tags like [EMOTION:happy]
  5. 💃 ReachyClaw executes the actions on the robot
  6. 🔊 Clean text goes to TTS; the robot speaks while moving
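Steps 4–6 amount to a small tag-splitting pass over the agent's reply. The sketch below is a hypothetical illustration: the `[ACTION:argument]` syntax follows the `[EMOTION:happy]` example above, but the actual tag vocabulary and parser live in the ReachyClaw source.

```python
import re

# Matches action tags of the form [EMOTION:happy] or [DANCE:wave].
# The tag names here are illustrative, not ReachyClaw's real vocabulary.
TAG_RE = re.compile(r"\[([A-Z_]+):([a-z_]+)\]")

def split_response(text):
    """Split an agent reply into (clean text for TTS, list of (action, argument))."""
    actions = [(m.group(1), m.group(2)) for m in TAG_RE.finditer(text)]
    clean = TAG_RE.sub("", text)              # strip tags so TTS never speaks them
    clean = re.sub(r"\s{2,}", " ", clean).strip()  # collapse leftover whitespace
    return clean, actions

clean, actions = split_response("Hello! [EMOTION:happy] Nice to see you. [DANCE:wave]")
# clean   -> "Hello! Nice to see you."
# actions -> [("EMOTION", "happy"), ("DANCE", "wave")]
```

The key property is that movement commands and spoken text travel in one response, so the robot can start an action while the cleaned text is sent to TTS.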

Prerequisites

Choose your setup:

🧠 OpenClaw Gateway 🔑 OpenAI API Key 🐍 Python 3.11+

Option A: 🤖 Physical Reachy Mini robot
Option B: 🖥️ MuJoCo Simulator (free, no hardware!)

View installation guide

Quick start

Get ReachyClaw running with the simulator

# Clone ReachyClaw
git clone https://github.com/EdLuxAI/reachyclaw
cd reachyclaw

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install ReachyClaw + simulator
pip install -e .
pip install "reachy-mini[mujoco]"

# Configure (edit with your OpenClaw URL and OpenAI key)
cp .env.example .env
nano .env

# Terminal 1: Start simulator
reachy-mini-daemon --sim

# Terminal 2: Run ReachyClaw
reachyclaw --gradio
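The cp .env.example .env step above creates the config file. A sketch of what it might contain follows; the variable names here are assumptions for illustration only, so check .env.example in the repo for the actual keys.

```shell
# Hypothetical .env sketch -- names are illustrative;
# .env.example defines the real keys.
OPENCLAW_GATEWAY_URL=http://localhost:3000   # where your OpenClaw gateway listens
OPENAI_API_KEY=sk-...                        # used for the Realtime voice API
```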