Feb 2020 @New Inc., NY
Dec 2019 @The Paper Box, Brooklyn, NY
Contact Mic, Fabricated Metal Sheet, DMX Lights, Ableton Live, Max/MSP, p5.js, Magenta
Developer, Performer
Do you hear the voices inside your head?
Can you make sense of them?
Are there words or merely feelings seeping
through the cracks you fail to hold together?
Using a machine learning model to generate
a composition as internal voices,
3 AM pries open the dialogue with a man
and his uncontrollable thoughts...
"3 AM" is an AI-powered NIME (New Interface for Musical Expression) performance about the hidden struggles we all have as human beings. If a sequence of water drops is presented as an internal voice, what do we hear as humans? How do we choose to respond? Should we obey, question, or fight against it?
Using analog acoustic sounds, AI-generated rhythms, movement sensing, interactive visuals, and lighting effects, "3 AM" pries open an intense, difficult and revealing dialogue inside a human mind.
By asking an AI-powered computational system to play the role of the internal voice, "3 AM" also investigates the possibility of real-time dialogue between human and machine performers on stage.
This is a project I co-created with Nuntinee Tansrisakul.
“The origin. The beginning. The perpetual law of physics that gives birth to
everything.”
A sequence of water drops is generated algorithmically as the initial seed input for
the performance. The rhythm and dynamics of the water drops are analyzed over time
and eventually translated into a MIDI sequence.
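The drop-to-MIDI translation can be sketched roughly as follows. This is a hypothetical illustration, not the actual patch: the function name, the fixed pitch, and the input onset/amplitude lists are all stand-ins for the analysis the performance performs on the live audio.

```python
# Hypothetical sketch: translating an analyzed water-drop sequence into
# MIDI-style note events. Onset times (seconds) and amplitudes (0.0-1.0)
# stand in for the real drop analysis.

def drops_to_midi(onsets, amplitudes, pitch=60, ppq=480, bpm=90):
    """Map each drop to a (pitch, velocity, tick) MIDI-style event."""
    ticks_per_second = ppq * bpm / 60.0
    events = []
    for t, amp in zip(onsets, amplitudes):
        velocity = max(1, min(127, round(amp * 127)))  # loudness -> 1..127
        tick = round(t * ticks_per_second)             # onset -> MIDI ticks
        events.append((pitch, velocity, tick))
    return events

# A short drop pattern that slows down and softens:
events = drops_to_midi([0.0, 0.5, 1.25, 2.25], [0.9, 0.7, 0.5, 0.3])
```

Mapping amplitude to velocity and onset time to ticks preserves exactly the two properties the text says are analyzed: rhythm and dynamics.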
“We ape, we mimic, we mock. We act.”
A deep generative network initiates the "midnight dialogue". A variational
autoencoder (VAE) listens to the translated MIDI sequence and generates accompanying
sequences that either mimic or complement the original. A human performer
chimes in by tapping an umbrella on a fabricated metal sheet. Contact
microphones installed on the sheet pick up the sound and trajectory of the
umbrella's movements for real-time audio playback.
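The VAE's role can be pictured as an encode, sample, decode loop. The toy sketch below is only a stand-in for the data flow (the performance presumably used a trained model such as Magenta's MusicVAE); the "latent" here is just two summary statistics, and all names are illustrative.

```python
# Toy stand-in for the VAE's encode -> sample -> decode loop: the
# "latent" is just (mean pitch, pitch spread), and decoding emits a
# sequence that mimics those statistics, as the accompanying voice does.
import random

def encode(pitches):
    """Collapse a pitch sequence into a tiny 2-D 'latent'."""
    mean = sum(pitches) / len(pitches)
    spread = max(pitches) - min(pitches)
    return mean, spread

def sample_latent(z, temperature, rng):
    """Perturb the latent, as sampling around the posterior would."""
    mean, spread = z
    return mean + rng.gauss(0, temperature), spread

def decode(z, length=4):
    """Emit a pitch sequence matching the encoded statistics."""
    mean, spread = z
    step = spread / max(1, length - 1)
    return [round(mean - spread / 2 + i * step) for i in range(length)]

rng = random.Random(3)          # fixed seed keeps the sketch repeatable
seed = [60, 60, 62, 64]         # the translated drop sequence (MIDI pitches)
variation = decode(sample_latent(encode(seed), temperature=0.5, rng=rng))
```

Raising the temperature makes the sampled latent drift further from the seed, which is the knob that moves the output between "mimic" and "complement".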
“The blue pill, or the red pill?”
An LSTM network follows by generating subsequent notes that carry the patterns and
characteristics of the previous sequences.
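As a minimal stand-in for that continuation step, the sketch below uses a first-order transition table instead of a trained LSTM; it is not the performance's model, but it shows the same idea of extending a sequence with notes that carry its learned patterns.

```python
# Minimal stand-in for the LSTM continuation: a first-order transition
# table "learned" from the prior sequence predicts the next notes.
from collections import defaultdict

def learn_transitions(sequence):
    """Record which pitch tends to follow which."""
    table = defaultdict(list)
    for a, b in zip(sequence, sequence[1:]):
        table[a].append(b)
    return table

def continue_sequence(sequence, table, steps=4):
    """Append notes by following the learned transitions."""
    out = list(sequence)
    for _ in range(steps):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(choices[0])  # deterministic pick; an LSTM would sample
    return out

prior = [60, 62, 64, 62, 60]
cont = continue_sequence(prior, learn_transitions(prior), steps=3)
```

A real LSTM conditions on the whole history rather than just the last note, but the input/output contract (sequence in, continued sequence out) is the same.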
The human performer then mediates between the drop sequence and the AI-generated
sequence by tapping, rubbing, and scratching the metal surface and modulating the
sound with body movements, gradually revealing a third, improvised layer of
composition.
“A battle destined to be lost.”
A dynamic session takes place among the three parties. The water drop sequence, the
AI, and the human alternately listen, engage and
disengage, tangle and untangle, and carry on until a mutual termination.