Animatronic Companion Robot
By Erik Masovn Haave and Andreas B. Nore
1) Topic and Motivation — Animatronics
We chose animatronics as our project topic because it offers a more engaging and creative challenge than conventional robotics projects.
Our goal is to design a robot that reacts to people in ways that appear expressive, lifelike, and fun, and that is capable of behaviors that might even be described as cute or personable.
Rather than building another walking robot, we want to focus on personality and interaction.
This involves enabling the robot to respond visually and physically to human presence, gestures, and identity.
We were inspired by ElectronBot [^1] and the CLEVER Project [^2], which demonstrated how human-robot interaction can be implemented successfully. Both the shape and the interaction model are based on these projects, but we have decided to make artistic changes, such as basing the shape on Wheezy from Toy Story [^3] and using a camera with a speaker for the interaction.
[^1]: 稚晖君. (2022, March 13). I made a cute mini desktop robot !. YouTube. https://www.youtube.com/watch?v=FmKTiH5Lca4
[^2]: The Robotics Club. (2025, August 22). I Made A CLEVER Mini Robot. YouTube. https://www.youtube.com/watch?v=bPpk2lbAovk
[^3]: Disney Wiki. (n.d.). Wheezy. Fandom. https://disney.fandom.com/wiki/Wheezy
2) Goals
Our main goals are:
- To create a stationary animatronic robot that detects and reacts to people in its field of view.
- To make the reactions visually expressive, for example through servo-driven gestures or sound clips.
- To design a simple and modular software-hardware interface for combining vision, audio, and motion control.
As a stretch goal, if time and hardware allow, the robot may recognize individual faces and display personalized reactions, for example greeting a known user differently than a stranger via a sound clip or predefined gesture.
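To make the detection goal concrete, below is a minimal sketch of the detect-and-react loop we have in mind. It uses OpenCV's bundled Haar cascade for face detection; the camera index, the print-based reaction hook, and the loop structure are placeholder assumptions rather than the final design.

```python
# Minimal sketch of the detect-and-react loop (placeholder reaction hook).
# Assumes OpenCV is installed and a camera is available at index 0; on the
# Pi itself we would likely swap cv2.VideoCapture for the Picamera2 API.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def detect_faces(frame):
    """Return bounding boxes of faces found in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = detect_faces(frame)
        if len(faces) > 0:
            # Placeholder hook: the real version would trigger a servo
            # gesture and/or a sound clip instead of printing.
            print(f"{len(faces)} face(s) in view -> trigger greeting")
finally:
    cap.release()
```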
3) Sketch and Concept
The robot will combine servos for movement and a small speaker for audible reactions/interactions.
The mechanical design will be inspired by compact desk-friendly companion robots such as ElectronBot.
We plan to design a 3D-printable chassis and integrate sensors for basic visual perception.
Servo control and the camera/vision system will be handled by the Raspberry Pi, although the vision pipeline may also run externally on a host computer as a backup.
Internally, the design includes:
- A Raspberry Pi 3 B+ as the main controller.
- A camera module for visual perception.
- A servo driver board (e.g., PCA9685) to control 5 servos for expressive movement.
- A small speaker for audible feedback.
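As a first check on this setup, the following is a minimal sketch of commanding a single gesture through the PCA9685, assuming the Adafruit ServoKit library and standard I2C wiring to the Pi; the channel assignment and angles are placeholders for the final servo layout.

```python
# Minimal sketch of one expressive gesture driven through the PCA9685,
# using Adafruit's ServoKit library (assumed dependency:
# adafruit-circuitpython-servokit). Channel number and angles are
# placeholders for the final servo layout.
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)  # PCA9685 exposes 16 PWM channels; we use 5

HEAD = 0  # hypothetical channel assignment for the head servo

def nod(times=2):
    """Simple head-nod gesture: sweep the head servo up and down."""
    for _ in range(times):
        kit.servo[HEAD].angle = 60
        time.sleep(0.3)
        kit.servo[HEAD].angle = 120
        time.sleep(0.3)
    kit.servo[HEAD].angle = 90  # return to neutral

nod()
```

Offloading PWM generation to the PCA9685 rather than bit-banging it on the Pi keeps servo timing stable even while the vision pipeline loads the CPU.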

4) Bill of Materials (BOM)
| # | Description | Quantity |
|---|---|---|
| 1 | Hobby servos (SG90 or equivalent) | 5 |
| 2 | Raspberry Pi 3 B+ | 1 |
| 3 | PCA9685 servo driver board | 1 |
| 4 | 3D-printed parts (body, base, arms) | Various |
| 5 | 5 V / 6 A power supply (single brick powering all components) | 1 |
| 6 | DC power splitter or screw terminal block (for power distribution) | 1 |
| 7 | Large electrolytic capacitor (1000–2200 µF, ≥ 10 V) for servo rail stabilization | 1 |
| 8 | Decoupling ceramic capacitors (0.1 µF) near each servo lead | 5 |
| 9 | Raspberry Pi Camera Module 3 | 1 |
| 10 | Small I²S amplifier board (e.g., MAX98357A) | 1 |
| 11 | Small speaker (3 W, 4–8 Ω) | 1 |
| 12 | Assorted wiring (18–22 AWG for power, female-female jumper wires for signal) | Various |
| 13 | Optional inline fuse (3–5 A) on servo power branch | 1 |
Notes:
- The 5 V / 6 A power brick supplies both the Raspberry Pi and servos through two separate branches (star configuration).
- Grounds will be common between the Pi, PCA9685, and servos.
- The large capacitor across V+ and GND at the PCA9685 smooths servo current spikes.
- The I²S amplifier and speaker are powered from the Pi branch to reduce PWM noise from the servo rail.
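On the software side, playing a reaction clip through the amplifier could be as simple as the sketch below, assuming the MAX98357A's I²S overlay is enabled on the Pi so it appears as the default ALSA output; the file name is a placeholder.

```python
# Minimal sketch of playing a reaction sound clip through the I²S amplifier.
# Assumes the MAX98357A overlay is enabled so it is the default ALSA output,
# and that greeting.wav is a placeholder clip shipped with the robot.
import pygame

pygame.mixer.init()                # opens the default ALSA output device
sound = pygame.mixer.Sound("greeting.wav")
channel = sound.play()
while channel.get_busy():          # block until the clip finishes
    pygame.time.wait(100)
```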
5) Plan
We will follow an iterative development process — starting with mechanical design and servo control, followed by vision integration and behavior optimization.
| Week | Milestone | Responsible |
|---|---|---|
| 42 | Finish the project description on GitHub | Andreas |
| 43 | CAD the robot and test the hardware and software | Both |
| 44 | Finalize the CAD model and train face/pose recognition | Andreas |
| 45 | Print the model, assemble the hardware, and implement reactive motion | Erik |
| 46 | Optimize interactions and prepare for demo | Both |
Throughout the project, we will document progress with pictures, videos, and notes for later assignments.