Overview
This project implements a real-time human pose imitation system for the NAO humanoid robot. A camera captures the human operator’s movements, which are processed through a pose estimation pipeline and mapped to corresponding joint commands on the NAO robot, so that it mirrors human motion in real time.
Key Features
- Real-Time Pose Detection: Uses MediaPipe Pose for fast and accurate human body landmark detection
- Joint Angle Mapping: Converts 3D human pose landmarks to NAO-compatible joint angles
- Smooth Motion: Implements interpolation and filtering for natural robot movements
- Full Upper Body: Maps arms, hands, and head movements
- Low Latency: Optimized pipeline for minimal delay between human motion and robot response
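The smooth-motion feature mentioned above can be illustrated with a simple filter. The sketch below uses an exponential moving average over the joint-angle stream; the class name and the smoothing factor `alpha` are illustrative choices, not values from this project.

```python
import numpy as np

class AngleSmoother:
    """Exponential moving-average filter for a stream of joint angles.

    A minimal sketch of the kind of low-pass filtering that suppresses
    pose-estimation jitter before angles are sent to the robot.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # weight of the newest sample, in (0, 1]
        self.state = None    # last smoothed angle vector

    def update(self, angles):
        angles = np.asarray(angles, dtype=float)
        if self.state is None:
            self.state = angles  # initialise on the first frame
        else:
            # Blend the new sample with the running estimate.
            self.state = self.alpha * angles + (1.0 - self.alpha) * self.state
        return self.state
```

A lower `alpha` gives smoother but laggier motion, so it trades directly against the low-latency goal.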
System Architecture
- Camera Input: Captures video frames of the human operator
- Pose Estimation: MediaPipe extracts 33 body landmarks in 3D
- Angle Computation: Calculates joint angles from landmark positions using inverse kinematics
- NAO Control: Sends joint angle commands to NAO via NAOqi SDK
- Safety Limits: Enforces joint limits and smooth transitions
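The angle-computation and safety-limit stages can be sketched as follows: the interior angle at a joint is computed from three 3D landmarks (e.g. shoulder, elbow, wrist), and the commanded angle is clipped into the joint's range before it is sent to the robot. The joint-limit values below are approximate figures for a few NAO joints and should be checked against the official NAO documentation; the function names are illustrative, not from this project.

```python
import numpy as np

# Approximate NAO joint limits in radians (illustrative subset;
# verify against the NAO documentation for your robot version).
JOINT_LIMITS = {
    "LElbowRoll": (-1.5446, -0.0349),
    "RElbowRoll": (0.0349, 1.5446),
    "HeadYaw":    (-2.0857, 2.0857),
}

def angle_between(a, b, c):
    """Interior angle (radians) at point b formed by 3D points a, b, c."""
    ba = np.asarray(a, float) - np.asarray(b, float)
    bc = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    # Clip guards against round-off pushing |cos| slightly above 1.
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def clamp_to_limits(joint, angle):
    """Clip a commanded angle into the joint's safe range."""
    lo, hi = JOINT_LIMITS[joint]
    return float(np.clip(angle, lo, hi))
```

The clamped angle would then be forwarded to the robot, for example via NAOqi's `ALMotion` proxy; the exact mapping from interior landmark angles to NAO's joint conventions depends on each joint's zero position and sign.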
Technologies Used
- Python
- MediaPipe Pose
- NAOqi SDK
- OpenCV
- NumPy
- NAO V6 humanoid robot
Results
The system achieved responsive real-time imitation with the NAO robot accurately reproducing upper-body human poses, demonstrating potential applications in human-robot interaction, teleoperation, and robotics education.
Contributors
- Imad-Eddine NACIRI