Team Members: Selin Dursun, Christopher Lock

Type: Opera of the Future (Projects in Media and Music)
Instructor: Tod Machover
Time Frame: February - May 2023 
Keywords: Performance, ML, AI, Improvisation, Human-Computer Interaction, Music, Max/MSP, AI-Based Instrument, Variational Autoencoder, Real-Time Encoding/Decoding

Echo Drifter is a performance piece that interweaves states of humanness and machine-ness. Central to the performance is RAVE (Realtime Audio Variational autoEncoder), a machine learning model run inside the Max/MSP environment. The stage itself is conceptualized as a latent space, in which the performer's spatial position dictates the nature of the machine's sonic response.
As the performer sings, their location on the stage triggers responses drawn from three distinct sound datasets embedded in the system: one is rich in percussive sounds, offering rhythmic textures, while another encompasses melodic material, adding harmonic depth. This spatially aware system creates a dynamic, responsive auditory landscape shaped by the performer's position and vocal input.
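One way to realize this spatial mapping is to divide the stage into overlapping zones, one per dataset, and crossfade between them as the performer moves. The sketch below is a hypothetical illustration (the zone centers, width, and function names are assumptions, not taken from the actual patch), assuming the performer's horizontal position is normalized to the range 0 to 1.

```python
# Hypothetical sketch: map a normalized stage position x (0..1) to crossfade
# weights over three sound datasets (e.g. percussive, melodic, and a third).
# Zone centers and width are illustrative assumptions.

def dataset_weights(x, centers=(0.0, 0.5, 1.0), width=0.5):
    """Return one normalized weight per dataset for stage position x.

    Each zone contributes a triangular response peaking at its center and
    falling to zero `width` away; weights are then normalized to sum to 1.
    """
    raw = [max(0.0, 1.0 - abs(x - c) / width) for c in centers]
    total = sum(raw) or 1.0
    return [w / total for w in raw]
```

At stage left (`x = 0.0`) only the first dataset sounds; between two zone centers, both neighboring datasets blend, so the transition across the stage is continuous rather than an abrupt switch.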
Inspired by the discussion of machine improvisation in the article section "Can Machines be Creative?", Echo Drifter explores this theme through a structured yet fluid dialogue between human and machine. The performance alternates in a call-and-response format: the performer sings a four-bar phrase, and the machine improvises a response of equal length. The result is both a technological feat and a new form of artistic expression in which the boundary between the performer and the digital realm is beautifully blurred.
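The alternation logic itself is simple enough to state precisely. The sketch below is a minimal, hypothetical scheduler (names and the human-starts-first convention are assumptions) that decides, from a running bar count, whose turn it is in the four-bar call-and-response.

```python
# Hypothetical sketch: strict four-bar alternation, human phrase first.
PHRASE_BARS = 4

def active_agent(bar):
    """Return 'human' for bars 0-3, 'machine' for bars 4-7, and so on."""
    return "human" if (bar // PHRASE_BARS) % 2 == 0 else "machine"
```

In performance this decision would gate the machine's output: while `active_agent` returns "human", the system only listens; when it flips to "machine", the improvised response is played back.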
Echo Drifter is more than a performance: it is an exploration of the possibilities of human-computer interaction, pushing the boundaries of improvisation and showcasing the potential of machines as creative partners.
Each region of the latent space is associated with a sound dataset, so as the performer moves across the stage, the responses she receives from the system change completely.
Line of Operations:
Control chain: Camera --> Pose Detection (MediaPipe) --> Latent Space Mapped to Physical Stage (TouchDesigner, via OSC) --> RAVE (nn~ in Max/MSP)
Audio chain: Microphone --> RAVE (nn~ in Max/MSP) --> Mixer --> Speakers
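The hop between pose detection and TouchDesigner/Max travels over OSC. As a self-contained illustration of that link (the `/pose/x` address is a hypothetical name, not taken from the actual patch), the sketch below encodes a single-float OSC 1.0 message using only the standard library; in practice a library such as python-osc would handle this.

```python
import struct

def osc_message(address, value):
    """Encode a single-float OSC 1.0 message: null-padded address string,
    null-padded ',f' typetag string, then a big-endian 32-bit float."""
    def pad(b):
        # OSC strings are null-terminated and padded to a multiple of 4 bytes.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)
```

Such a packet, sent over UDP each frame with the performer's normalized stage coordinate, is all the downstream latent-space mapping needs to steer RAVE's response.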
