Amniotic
November 5th, 2025
Premiered as part of DXARTS Fall Concert: Hum Under the Riverstone
Tech:
# Wearable Sensors
# Machine Learning
# Movement Capture
# Real-time Classification
Media Arts:
# Interactive Performance
# Electro-Acoustic Music
# Immersive Media
# Video Projection
# Sculpture
# Mechatronics
Katharyn Alvord Gerlich Theater,
Meany Center for the Performing Arts
Department of Digital Arts & Experimental Media (DXARTS)
University of Washington
Seattle, WA, USA
Choreographer:
Rose Xu
Performers: Ifeyinwa Onyekonwu, Mary Jane Senger, Dasha Orlov, Samantha Tien, Emily Jiang, Vincent Le, Samantha Spahr, Rose Xu
Music Composition: Natalia Quintanilla Cabrera
Machine Learning: Zoe Cai, Rose Xu
Visual: Cristina Brambila, Rose Xu
Camera and Editing: Maria Thrän, Eunsun Choi, Cristina Brambila, Rose Xu
Special thanks to:
Jennifer Salk
Undercurrent (Alethea Alexander and Hillary Grumman)
Juan Pampin
Zoe Cai
Hum Under the Riverstone was created through a collaboration of the members of the Human-Machine Interaction Lab, coordinated by DXARTS Assistant Professor Laura Luna Castillo.
Amniotic is a choreographic exploration of memory, touch, and the unseen but ubiquitous forces that carry us. Inspired by the fluid support of amniotic waters, the work traces how tenderness and strength continue to live in the body, unfolding through dance, theater, sound, and projected imagery. It lingers in the space where gestures of care ripple outward like water, where tenderness anchors strength, and where human and machine meet to shape new ways of seeing and being with one another.
Technology is not merely a tool but a collaborator: a bespoke, dance-literate machine that listens, remembers, and responds mindfully to the dancers in real time. Built with wearable sensors and custom motion-recognition techniques, the system learns the vocabulary of the performers’ movement, mapping embodied memory into sound and visual media. This framework supports an environment of fluidity, care, and resonance. On stage, movement, story, and technology converge as one.
Notes:
- The dance is inspired by and partially choreographed using Undercurrent dance technique.
- Read more about dance-literate machines.
- Visuals are inspired by and partially created with TouchDesigner sketches by Supermarket Sallad.
- Narrative voice pre-recorded by Ifeyinwa Onyekonwu and triggered live by the ML system. The narrative script is adapted from PauseLab. Enjoy the full meditation score.
Rose (Ziyu) Xu
xrose@uw.edu Seattle, USA
