Fortu

Discover music through what machines ‘see’

Fortu is a web-based contextual music player that lets people discover music through objects at home. It recommends songs based on what the ‘eye of the machine’ detects through computer vision.

Project Background

Our design challenge was to explore how “machines” could help us cope better with social distancing, remote communication and spending more time at home. This project was developed over four weeks with Jose Chavarria (Costa Rica) and Yoshio Mikamo (Japan) during the machine learning course at CIID.

Design Opportunity

During COVID-19 lockdowns, the unplanned, serendipitous moments we used to experience were suddenly replaced by the same monotonous routines.

Exploring what role machines could play in making isolation at home a bit more fun, our team focused on the generative aspect of machine learning to create something new for people to explore.

How does it work?

Fortu is a web-based application that plays music based on objects detected by the camera. It uses COCO-SSD, a pre-trained TensorFlow object detection model, accessed via ml5.js. Detection results from the model are passed to p5.js, which calls the Spotify API to surface songs related to the detected objects.
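In p5.js terms, the detection loop looks roughly like the sketch below. This is a minimal illustration, assuming ml5.js and p5.js are loaded on the page; querySpotify() is a hypothetical helper standing in for the Spotify call, which is sketched further down.

```javascript
// Minimal sketch of Fortu's detection loop, assuming ml5.js and p5.js
// are loaded via <script> tags. querySpotify() is a hypothetical helper
// for the Spotify lookup (sketched later in this write-up).
let video;
let detector;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(640, 480);
  video.hide();
  // Load the pre-trained COCO-SSD model through ml5.js
  detector = ml5.objectDetector('cocossd', modelReady);
}

function modelReady() {
  detector.detect(video, gotDetections);
}

function gotDetections(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // Each result carries a label (e.g. 'cup', 'book') and a confidence score
  for (const r of results) {
    if (r.confidence > 0.6) {
      querySpotify(r.label); // hypothetical: look up songs for this object
    }
  }
  detector.detect(video, gotDetections); // keep detecting on new frames
}

function draw() {
  image(video, 0, 0);
}
```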

for·tu·i·ty (noun)

a chance occurrence; the state of being controlled by chance rather than design.

Play what your machine ‘sees’

How did we get there?

When the team discussed what we missed most about life before the pandemic, the social and spontaneous aspects of music came up. We missed dancing to music and discovering new music with loved ones. We explored a few concepts before landing on our final one.

01. Music to the Dance

What if your dance moves could command what music to play?

Teachable Machine Image | ML5 PoseNet pose detection | p5.js

You know that great feeling when your favourite song comes on and you just can’t help but dance? What if we could train our machines to recognise our dance moves to play the songs we want to dance to?

We trained our machines to recognise a few distinctive dances, including the Macarena, Stayin’ Alive, and The Ketchup Song. However, it was challenging to develop the concept further, as we ran out of distinctive dance moves for the machine to recognise.
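For illustration, a prototype like this can be wired up with an ml5.js image classifier. The sketch below is a rough approximation, assuming a Teachable Machine image model trained on webcam frames of each dance; the model URL and the playSongFor() helper are placeholders, not the team's actual code.

```javascript
// A minimal sketch of the dance-recognition prototype, assuming a
// Teachable Machine image model trained on webcam frames of each dance.
// MODEL_URL is a placeholder; playSongFor() is a hypothetical helper.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL/';
let video;
let classifier;

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  classifier = ml5.imageClassifier(MODEL_URL + 'model.json', classifyDance);
}

function classifyDance() {
  classifier.classify(video, gotDance);
}

function gotDance(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results[0] is the most confident class, e.g. 'macarena'
  if (results[0].confidence > 0.8) {
    playSongFor(results[0].label); // hypothetical: map dance -> song
  }
  classifyDance(); // classify the next frame
}

function draw() {
  image(video, 0, 0);
}
```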

02. Hum to the Beat

What if we could get our machines to play songs based on just ‘da daaaa da na’?

Teachable Machine Audio | p5.js

We all know how frustrating it is when you can’t remember the name of a song or any of its words, but the tune is stuck in your head. This led to the idea of humming to play songs.

We trained the model with our humming, which represented three genres: electro, hip-hop, and rock. A few songs were selected for each genre, and when humming was detected, a random song from the matching list would be played. This concept was scrapped because it couldn’t scale and a similar product already exists.
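A minimal sketch of this approach, assuming a Teachable Machine audio model with one class per genre; the model URL is a placeholder, and pickRandomSong() is a hypothetical helper that picks from the hand-curated list for that genre.

```javascript
// A rough sketch of the humming classifier, assuming a Teachable Machine
// audio model with one class per genre. MODEL_URL is a placeholder;
// pickRandomSong() is a hypothetical helper.
const MODEL_URL = 'https://teachablemachine.withgoogle.com/models/YOUR_MODEL/';
let classifier;

function preload() {
  classifier = ml5.soundClassifier(MODEL_URL + 'model.json');
}

function setup() {
  noCanvas();
  // Continuously listen to the microphone and classify incoming audio
  classifier.classify(gotHum);
}

function gotHum(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  const top = results[0]; // e.g. { label: 'hip-hop', confidence: 0.91 }
  // Teachable Machine audio models include a 'Background Noise' class
  if (top.label !== 'Background Noise' && top.confidence > 0.75) {
    pickRandomSong(top.label); // hypothetical: play a song from that genre
  }
}
```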

03. Contextual Music Player

What if we could use a machine to generate new music recommendations?

RunwayML | Processing | Spotify API | ml5.js | p5.js

Have you found yourself stuck in a ‘musical echo chamber’ where you listen to the same music over and over again? What if we could generate new music recommendations at home through machines?

This led us to develop the first iteration of ‘Fortu’: a physical speaker, built with RunwayML and Processing, that detects nearby objects and plays music based on what it identifies.

We later moved the sketch from Processing to the web browser and connected it to Spotify to improve scalability. This version was built with ml5.js and p5.js, making calls to the Spotify API; songs that matched the object detection results would play in the browser.
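The Spotify lookup itself can be as small as a single authenticated search request. Below is a hedged sketch, assuming an OAuth access token has already been obtained; the token value is a placeholder, and querySpotify() is the same hypothetical helper name used in the detection sketch above.

```javascript
// A sketch of the Spotify lookup, assuming a valid OAuth access token
// (the value below is a placeholder). It uses Spotify's public search
// endpoint to find tracks matching a detected object label.
const ACCESS_TOKEN = 'YOUR_SPOTIFY_ACCESS_TOKEN'; // placeholder

async function querySpotify(label) {
  const url =
    'https://api.spotify.com/v1/search?type=track&limit=5&q=' +
    encodeURIComponent(label);
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  const data = await response.json();
  // Each track item carries a name, artists, and a URI that can be played
  for (const track of data.tracks.items) {
    console.log(track.name, '-', track.artists[0].name);
  }
}
```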
