
Human–Machine Ritual: 

Synergic Performance through Real-Time Motion Recognition


# Wearable Sensors
# Machine Learning
# Movement Capture
# Real-time Classification 




We design and develop a machine learning system that perceives human movement in real time, enabling machine intelligence to participate in performance as a responsive partner rather than a generative replacement.


Department of Digital Arts & Experimental Media (DXARTS)
University of Washington
Seattle, WA, USA

Overview


Human–Machine Ritual introduces a lightweight, real-time movement recognition system that enables synergic interaction between dancers and computational media during live performance. The system combines wearable inertial measurement unit (IMU) sensors with a MiniRocket-based time-series classification model to recognize dancer-specific movements and control responsive audiovisual elements with low latency.

Unlike generative approaches that synthesize new content, this work foregrounds responsive perception: the machine interprets and recalls embodied motion patterns, preserving expressive depth while supporting real-time creative interaction.

Research Motivation


This research reframes human–machine collaboration in performance through the lens of attentive recognition rather than generation. Large-scale AI models often rely on generic datasets and aim for broad generalization; by contrast, this system is trained on dancer-specific motion paired with personally meaningful sound stimuli.

The work explores what it means for a machine to listen and remember movement in context, maintaining sensitivity to temporal ambiguity, somatic memory, and the unique qualities of embodied expression. The machine does not act as an autonomous creator, but as a perceptual partner whose behavior is shaped by the dancer’s training data and movement vocabulary.

System Design



Dancers wear four wireless IMU sensors, placed on the wrists and ankles, each capturing six channels of motion data (three-axis accelerometer and three-axis gyroscope) at 48 Hz.
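
A concrete picture of the data layout may help. The sketch below shows one plausible way to assemble the four sensors' streams into a single fixed-shape window for a time-series classifier; the window length, buffer layout, and stacking order are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    # Four sensors (left/right wrist, left/right ankle), each streaming six
    # channels (ax, ay, az, gx, gy, gz) at 48 Hz.
    NUM_SENSORS = 4
    CHANNELS_PER_SENSOR = 6
    SAMPLE_RATE_HZ = 48
    WINDOW_SECONDS = 2.0                                    # hypothetical window length
    WINDOW_SAMPLES = int(SAMPLE_RATE_HZ * WINDOW_SECONDS)   # 96 samples

    def assemble_window(sensor_buffers):
        """Stack the latest samples from each sensor into one (channels, time) array.

        sensor_buffers: list of four arrays, each (WINDOW_SAMPLES, CHANNELS_PER_SENSOR)
        returns: array of shape (24, WINDOW_SAMPLES)
        """
        stacked = np.concatenate([buf.T for buf in sensor_buffers], axis=0)
        assert stacked.shape == (NUM_SENSORS * CHANNELS_PER_SENSOR, WINDOW_SAMPLES)
        return stacked

    # Synthetic data standing in for live IMU streams:
    buffers = [np.random.randn(WINDOW_SAMPLES, CHANNELS_PER_SENSOR)
               for _ in range(NUM_SENSORS)]
    print(assemble_window(buffers).shape)                   # (24, 96)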

During a pre-performance training phase, the dancer improvises with sound, generating movement-sound pairings rooted in somatic memory. These recordings are preprocessed into fixed-length segments and augmented to enhance dataset diversity. A MiniRocket feature extractor combined with a ridge regression classifier learns to distinguish between multiple motion types.
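
The classification pipeline follows the standard (Mini)ROCKET recipe: random convolutional features followed by a ridge classifier. Below is a minimal training sketch using sktime and scikit-learn; the number of motion classes, segment dimensions, and synthetic data are placeholders for the dancer-specific recordings described above.

    import numpy as np
    from sklearn.linear_model import RidgeClassifierCV
    from sklearn.pipeline import make_pipeline
    from sktime.transformations.panel.rocket import MiniRocketMultivariate

    # X: (n_segments, n_channels, n_timepoints) fixed-length training segments
    # y: (n_segments,) motion-class labels from the pre-performance session.
    # Synthetic data stands in for the recorded movement-sound pairings.
    n_segments, n_channels, n_timepoints = 200, 24, 96
    X = np.random.randn(n_segments, n_channels, n_timepoints)
    y = np.random.randint(0, 5, size=n_segments)        # e.g. five motion classes

    # MiniRocket features + ridge classifier with cross-validated regularization.
    clf = make_pipeline(
        MiniRocketMultivariate(random_state=0),
        RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)),
    )
    clf.fit(X, y)

    # Classify a new fixed-length segment.
    new_segment = np.random.randn(1, n_channels, n_timepoints)
    print(clf.predict(new_segment))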

During live application, real-time IMU streams are classified and mapped to corresponding audiovisual controls. The system maintains end-to-end latency under 50 ms, ensuring synchronous interaction between movement and media without perceptible delay.
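
In the live setting, each incoming window is classified and the predicted class is routed to the media engine. The sketch below assumes an OSC-style mapping via the python-osc library; the addresses, port, and class-to-control table are hypothetical, since the mapping to audiovisual controls is specific to each performance.

    import time
    from pythonosc.udp_client import SimpleUDPClient    # pip install python-osc

    # Hypothetical mapping from predicted motion classes to OSC addresses
    # understood by the audiovisual engine.
    CLASS_TO_OSC = {
        0: "/motion/sweep",
        1: "/motion/spiral",
        2: "/motion/pause",
    }

    client = SimpleUDPClient("127.0.0.1", 9000)          # assumed media-engine address

    def classify_and_route(clf, window):
        """Classify one (1, channels, timepoints) window and trigger the mapped control."""
        start = time.perf_counter()
        label = int(clf.predict(window)[0])
        client.send_message(CLASS_TO_OSC.get(label, "/motion/unknown"), 1.0)
        latency_ms = (time.perf_counter() - start) * 1000.0
        return label, latency_ms                         # monitor inference + send time against the 50 ms budget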



Key Features 


  • Wearable IMU sensing: unobtrusive motion capture from wrists and ankles at performance-ready sampling rates.
  • MiniRocket classification: efficient, high-accuracy time-series model tailored to continuous sensor data.
  • Low latency: system latency under 50 ms, enabling real-time responses in performance conditions.
  • Dancer-specific training: movement classes are grounded in somatic memory and gesture patterns unique to the performer.
  • Responsive media control: predicted motion classes drive sound and projection outputs as part of a tightly coupled human–machine loop.

Quantitatively, the classifier achieved high accuracy and strong discriminability across multiple motion classes, demonstrating the viability of this architecture for live creative contexts.

Integration with Creative Practice


This system was developed in tandem with choreographic development, forming a research and artistic practice that privileges embodied interaction over automated generation. The technical pipeline, from sensing to multimedia control, is designed to support embodied nuance, temporal ambiguity, and contextual responsiveness rather than predefined mappings or fixed choreography.

By situating the dancer’s body as both archive and oracle within the system’s training regime, the project offers a replicable framework for integrating dance-literate machines into creative, educational, and live performance settings.


Research Outputs


Read the full paper:
Zhuodi (Zoe) Cai, Ziyu (Rose) Xu, and Juan Pampin. "Human–Machine Ritual: Synergic Performance through Real-Time Motion Recognition." Proceedings of the Thirty-Ninth Conference on Neural Information Processing Systems (NeurIPS 2025, Creative AI Track).
 
Thank you, Zoe, for being such a brilliant and supportive research partner, and for pulling through the four-hour poster session and panel talk with me at NeurIPS!

Looking Ahead


Moving forward, we plan to expand the system’s movement vocabulary by capturing a broader range of gestures, improve recognition of transitions between motions, and enable on-the-fly learning of new movements for adaptability. Additionally, we aim to integrate the system into interactive gallery installations and other art environments, and to improve its generalizability while preserving its human-centered design. Finally, we intend to open-source the platform, inviting broader creative collaboration and further research development.

