
Music Sounds Better With You


December 2018

Neural network for a quadraphonic sound installation

Overview

What makes us dance? Our state of mind, or the music we hear? I trained a neural network to control the “dancey-ness” of a track based on how much the machine thinks you are dancing. The piece was installed in a gallery room that visitors were directed to enter alone. They saw a vague figure of themselves rendered in LEDs, with a mellow loop playing on the speakers, and as soon as they began to dance, the music and lighting became more intense.

Brief

In Fall 2018, I took Gene Kogan’s Neural Aesthetic and Morton Subotnick’s Creating With Interactive Media. The final project for each class was to create a piece using, respectively, a neural network and sound. I wanted to use those finals to continue my investigation into partying, interaction, and spatial norms, so I created a magic room where your body interacts with music in a recursive manner.

Process

First, I had to create a dataset to train the neural network. I built a setup in TouchDesigner to record skeletal data from my test subjects: I asked them to dance and recorded their movements, then asked them not to dance and recorded that as well. I just wanted the network to ‘classify’ each user as a single value between 0 (not dancing) and 1 (dancing).
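The recording step can be sketched as labeled rows of flattened joint coordinates. This is a reconstruction, not the actual patch: the real skeleton data came from TouchDesigner, and the file format, function name, and random stand-in coordinates here are all assumptions.

```python
import csv
import random

def record_session(path, label, n_frames, n_joints=30):
    """Append n_frames of flattened (x, y, z) joint data to a CSV,
    tagged with label 1 (dancing) or 0 (not dancing).
    Random numbers stand in for real skeleton coordinates."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(n_frames):
            joints = [round(random.random(), 3) for _ in range(n_joints * 3)]
            writer.writerow([label] + joints)
```

Each session appends to the same file, so alternating “dance” and “don’t dance” takes builds up one labeled dataset.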

From there, I tried processing the data in various ways and sent it into Wekinator over OSC. Then it was a matter of wrestling with Wekinator’s settings, tweaking my data processing, and expanding the dataset until I saw the output I wanted. In the end, I sent a vector of ~600 values per frame: the squared velocities of 30 joint positions over the last 20 frames.
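That feature vector can be sketched as follows. The buffer size, function name, and frame layout are my reconstruction (the real processing lived in TouchDesigner); the one fixed fact is the shape: 30 joints × 20 frame-to-frame deltas = 600 values.

```python
from collections import deque

NUM_JOINTS = 30  # skeleton points tracked per frame (assumption)
WINDOW = 21      # 21 stored frames yield 20 frame-to-frame velocities

# Rolling buffer holding the most recent skeleton frames.
history = deque(maxlen=WINDOW)

def squared_velocities(frames):
    """frames: a list of WINDOW frames, each a list of (x, y, z)
    joint positions. Returns 30 joints * 20 deltas = 600 squared
    velocities -- the per-frame input vector for Wekinator."""
    feats = []
    for prev, cur in zip(frames, frames[1:]):
        for (x0, y0, z0), (x1, y1, z1) in zip(prev, cur):
            feats.append((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2)
    return feats
```

Squared velocity is a convenient feature here because it ignores *where* the body is and responds only to *how much* it moves, which is roughly what separates dancing from standing still.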

For the audio, I experimented with a number of techniques to get the feeling of the user hearing the music “drop” when they started dancing. I first tried using my generative system, but it was a bit too much to manage. So instead I searched for a song to decompose and loop. I finally settled on “Music Sounds Better With You,” since I could use the song it sampled, a remix of it, and the original to pull out “stems.”
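One simple way to stage that “drop” is to fade the stems in one after another as the dance value rises. This is a sketch under assumptions, not the actual patch: the stem count and the linear crossfade mapping are mine.

```python
def stem_gains(dance, n_stems=4):
    """Map a dance level in [0, 1] to per-stem gains in [0, 1].
    Stem i fades in over its own slice of the range, so layers
    stack as the dancing builds toward the full mix."""
    gains = []
    for i in range(n_stems):
        lo = i / n_stems                 # where stem i starts fading in
        g = (dance - lo) * n_stems       # linear ramp over that slice
        gains.append(max(0.0, min(1.0, g)))
    return gains
```

At `dance = 0` only silence remains over the mellow loop; at `dance = 1` all four stems play at full volume.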

Final Thoughts

I stressed quite a lot over the accuracy of the “dance detection,” but people were elated when they could see the connection between their movements and the music and lighting. It was difficult to use a neural network to classify something as subjective and varied as dancing, so for future iterations I’ll be using more traditional computer vision techniques, which are simpler from the user’s perspective.