Placeholder for a cool funny video.

Since a video was requested in the course documentation's minimum requirements, I uploaded a placeholder video to the drive I received by email.

That is not the final video I will use to demonstrate the project - I have scripted and filmed a much nicer video, and will edit and color grade it during the Christmas holidays. I just did not have time to do it by this Friday - it’s a lot of work and I want to do it well.

Once the final video is done, I will upload it to Vimeo and embed it right here.


BUSINESS TIME is a performative game in which the player dresses up in formal clothes and tries to balance a coin on a platform by shouting “buy” and “sell” into a phone.

The player puts on an old-fashioned suit jacket and a fedora, and picks up a briefcase. They then grab an old Nokia phone and say “business time” to start the game.

The purpose of the game is to keep the coin in the middle of a platform. The player controls the platform’s rotation by shouting “buy” and “sell” into the phone. The game is challenging, and balancing the coin takes a lot of focus.

As the player focuses on the game, they tend to forget how their actions actually look. A passer-by will see a person dressed up in clothes from the last millennium repeatedly shouting “BUY” and “SELL” into a phone with great passion (and sometimes frustration). The whole spectacle tends to be quite funny for the audience.

I felt like creating a humorous performance, where the visitor is the performer. I wanted to give the visitor a chance to step out of their comfort zone and do something ridiculous that they normally wouldn’t - all while giving them a genuinely interesting game to play.

I gave the player an outfit, so it would be easier for them to dive into the role. And I tried to make the game difficult yet addictive, so it would require complete focus from the player, making them forget that they are performing.

I got a lot of positive feedback for the project, and people seemed to generally like both the game and the performance aspects. The audiences I observed laughed a lot, and the players seemed really invested in the game. So, overall, I was happy with the result.

Player from the side

The process

In this section, I’ll go through each individual part of the work and explain my choices. I’ll also provide some self-reflection for things I could’ve done differently.


Honestly, the theme popped into my head quite randomly. From the beginning, I knew I wanted to create something that would:

  1. allow the visitor to perform something ridiculous.
  2. force me to work with materials I am uncomfortable with, such as metal and plastic.
  3. use the voice as a control mechanism.

I was playing around with several ideas, but one day the stereotypical old-fashioned stockbroker theme just clicked. It probably came from a joke my partner and I sometimes repeat - shouting “buy, sell, buy, sell” into our phones while pretending to be busy. The joke is quite old in general, so I’m sure I had heard it on multiple other occasions as well.

In the end, I really liked the theme choice, since it allowed me to go over-the-top with the tasteless golden aesthetics and OVERUSE OF CAPS LOCK, especially with the term BUSINESS.

Overall setup


I used two Arduino RP2040 Connect modules: one for speech recognition and the other for controlling the platform. Both drew power from the wall through a 5 V, 2 A power supply. I cross-connected the boards’ TX and RX pins and connected their grounds to allow for serial communication (see this tutorial and Stack Exchange).

The speech recognition Arduino was connected to the power supply and the motor Arduino. I soldered a total of four wires into the speech recognition Arduino: TX, RX, Vin and ground.

The motor Arduino was connected to the speech recognition Arduino, a piezo speaker, a servo motor and two LEDs. These connections are straightforward and, if needed, can be inferred from the code and this photo of the motor Arduino breadboard:
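The full firmware is in the repository linked at the end; to illustrate the serial link, here is a minimal sketch of a single-byte command protocol between the two boards. The byte values and command names below are illustrative choices, not necessarily the exact encoding the firmware uses:

```cpp
#include <cstdint>

// Single-byte commands sent from the speech-recognition board to the
// motor board over the UART link. (Byte values are illustrative.)
enum class Command : uint8_t {
    StartGame = 'T',  // "business time"
    Buy       = 'B',  // rotate clockwise
    Sell      = 'S',  // rotate counter-clockwise
    EndGame   = 'D',  // "I'm done"
    Invalid   = 0
};

// Decode one received byte; anything unknown (line noise, a stray
// byte) maps to Invalid and can simply be ignored by the receiver.
Command decodeCommand(uint8_t byte) {
    switch (byte) {
        case 'T': return Command::StartGame;
        case 'B': return Command::Buy;
        case 'S': return Command::Sell;
        case 'D': return Command::EndGame;
        default:  return Command::Invalid;
    }
}
```

On the boards themselves, the sender side would write the byte to the hardware UART (e.g. `Serial1.write(...)` on the RP2040 Connect) and the receiver would run each byte it reads through a decoder like this.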

Photo of the electronics for platform RP2040

Ideally, I would have designed and produced a printed circuit board, but I couldn’t find the time for it. Anyhow, the setup I went for worked without any trouble throughout the exhibition week.

Speech recognition

I saw speech recognition as the most technically difficult part of the project, and hence decided to tackle it first. I needed to develop noise-resistant speech recognition that could recognize four commands:

  1. “Business time” - starting the game
  2. “Buy” - rotating the platform clockwise
  3. “Sell” - rotating the platform counter-clockwise
  4. “I’m done” - ending the game
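These four commands boil down to a very small state machine on the motor board: “business time” arms the game, “buy”/“sell” nudge the servo while playing, and “I’m done” resets everything. A hedged sketch of that logic (the center angle, step size and limits here are placeholders, not the values tuned for the exhibition build):

```cpp
#include <algorithm>

// Illustrative game logic for the motor board. Commands arrive as
// single characters; angles are servo degrees, with 90 = level.
struct Game {
    bool playing = false;
    int  angle   = 90;

    static constexpr int kStep = 5;    // degrees per "buy"/"sell"
    static constexpr int kMin  = 60;   // clamp so the coin can't fly off
    static constexpr int kMax  = 120;

    void onCommand(char cmd) {
        switch (cmd) {
            case 'T': playing = true;  angle = 90; break;  // "business time"
            case 'D': playing = false; angle = 90; break;  // "I'm done"
            case 'B': if (playing) angle = std::min(angle + kStep, kMax); break;
            case 'S': if (playing) angle = std::max(angle - kStep, kMin); break;
        }
    }
};
```

In the real firmware the resulting angle would be written to the servo (e.g. with the Arduino Servo library’s `write()`), but the clamping and the playing/idle gate are the essential parts.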

There were numerous existing methods for speech recognition on desktop computers and through online APIs. However, I wanted to challenge myself by developing something reliable that could work offline and was small enough to run locally on an Arduino RP2040.

I started my journey into speech recognition by studying the basics of audio processing from HuggingFace’s audio course. Quite early on, I realised I wanted to go for a phoneme-based recognition method, since it should generalize best across different speakers.

I did a lot of research looking for different methods, and ended up testing Cyberon’s speech recognition engine. This was licensed software, but the free trial allowed for a good amount of testing. The library was relatively easy to use, and seemed to have sufficient accuracy, so I proceeded with it.

I used the free version until the very last week - until I was completely sure of the commands I wanted to use. Then, I got the license and started using the full version to get rid of the limitations.

The speech recognition engine worked well enough for this project and saved me a lot of time compared to the other solutions I found. However, there were some downsides:

  • The model/library was essentially closed-source, so it wasn’t very flexible.
  • The model was particularly bad at recognizing “business time” from my East Asian classmates. It was very strict with the pronunciation, and generally seemed worse at capturing higher-pitched voices than lower-pitched ones.
  • The monetization scheme for the licensed version is ridiculous - the license is tied to the device AND the model instead of only the device, so the user has to pay 9 € every time they want to change their model. The company seems quite greedy with their BUSINESS.

For this project, I believe the speech recognition engine was still the correct choice, since it allowed me to spend maximal time focusing on other aspects of the work. However, if I had more time, I could’ve also tried the following options that I found during my research:

  • Tweaking and updating uspeech - an old but promising library for phoneme-based speech recognition on Arduino.
  • Taking a closer look at PicoVoice SDK - on a first look it seemed more complex and less flexible than the solution I chose, but it could have potential.
  • Training my own model with the freshly released ExecuTorch by PyTorch.


As the phone, I decided to use my mother’s old Nokia 6110 that I had lying around at home. I chose the Nokia since it’s from the 90s and feels like something from another time. It is also really satisfying to hold: heavy and robust.

I took out most of the electronics inside the phone, coated the walls with electrical tape and hid the Arduino RP2040 Connect within the shell. This Arduino had the speech recognition logic running on it.

Phone on the platform

I connected the Arduino inside the Nokia to the platform-controlling Arduino with cables providing power and serial communication, which meant some cables coming out of the phone. A wireless solution would’ve been more elegant, but I couldn’t get reliable two-way communication working. And I needed cables anyway to power the Arduino inside the Nokia - I didn’t feel like swapping batteries every day.

A possibly better solution would have been an old landline phone - it would’ve brought the player even further back in time, and the wires would’ve been better motivated / less noticeable. Still, I felt like reanimating the Nokia, and it worked well, so I was generally happy with my choice.

Platform setup

I prepared the platform in the metal workshop from leftover sheet metal. The result turned out well, and I became more comfortable working with metal, which was one of my goals.


I 3D-printed a connector between the servo motor and the platform with the help of the mechatronics workshop master. I used the connector and some gaffer tape to bind the platform to the servo motor.

I borrowed a stand from Aalto Takeout and attached the motor to it with gaffer tape. I installed the motor at an angle so that the platform leans backwards, which increases the friction between the coin and the platform’s back wall and makes the coin’s movements more controllable.

I wrapped the stand in thick golden paper, which screams “rich but tasteless”, to create an over-the-top visual look. I used the same paper to mark the center of the platform and to block off its ends.

Platform closeup

I lit the setup from below with an LED tube from Takeout to make it look more dramatic and noticeable. The LED tube wasn’t made for filming, so it flickered in the videos I shot.

Overall, I’m happy with the platform setup and the aesthetic, even though some details could still be refined (e.g. folding the paper in a cleaner way, color/quality of the light).


I have uploaded the code with the full commit history to this repository, which should be visible to anybody logged in to the Aalto GitLab.