<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:media="http://search.yahoo.com/mrss/"
	>

<channel>
	<title>Rose's Portfolio</title>
	<link>https://roseziyuxu.cargo.site</link>
	<description>Rose's Portfolio</description>
	<pubDate>Wed, 28 Jan 2026 04:08:41 +0000</pubDate>
	<generator>https://roseziyuxu.cargo.site</generator>
	<language>en</language>
	
		
	<item>
		<title>Main</title>
				
		<link>https://roseziyuxu.cargo.site/Main</link>

		<pubDate>Wed, 09 Nov 2022 04:17:57 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/Main</guid>

		<description>“Rose” Ziyu Xu 徐子瑜


Performance Arts + Machine Intelligence = Multimedia Theater + Future




	

After a certain high level of technical skill is achieved, science and art tend to coalesce in aesthetics, plasticity, and form. The greatest scientists are always artists as well.

	

– Theoretical physicist and violin enthusiast, Albert Einstein








	︎ ︎
	
Research Philosophy
What does the future of the arts look like? What is the point of advancing technology? What do artists and designers need?
As a science researcher, performance artist, and programmer, I am curious about how science and art coalesce and what technologies make that possible.
In pursuit of my wild wonders, Einstein’s fantasy, and the horizon of liberal arts education, I think and experiment; I make the impossible possible.


	


</description>
		
	</item>
		
		
	<item>
		<title>Amniotic</title>
				
		<link>https://roseziyuxu.cargo.site/Amniotic</link>

		<pubDate>Wed, 28 Jan 2026 04:08:41 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/Amniotic</guid>

		<description>
	Amniotic



November 5th, 2025. Premiered as part of DXARTS Fall Concert: Hum Under the Riverstone.





Tech: # Wearable Sensors # Machine Learning # Movement Capture # Real-time Classification
Media Arts: # Interactive Performance # Electro-Acoustic Music # Immersive Media # Video Projection # Sculpture # Mechatronics









Katharyn Alvord Gerlich Theater, Meany Center for the Performing Arts
Department of Digital Arts &#38;amp; Experimental Media (DXARTS)
University of Washington
Seattle, WA, USA



	


Choreographer: Rose Xu
Performers: Ifeyinwa Onyekonwu, Mary Jane Senger, Dasha Orlov, Samantha Tien, Emily Jiang, Vincent Le, Samantha Spahr, Rose Xu

Music Composition: Natalia Quintanilla Cabrera

Machine Learning: Zoe Cai, Rose Xu

Visual: Cristina Brambila, Rose Xu

Camera and Editing: Eunsun Choi, Maria Thrän, Cristina Brambila, Rose Xu
Special thanks to: Jennifer Salk, Undercurrent (Alethea Alexander and Hillary Grumman), Juan Pampin, Zoe Cai.
Hum Under the Riverstone was created through a collaboration of the members of the Human-Machine Interaction Lab, coordinated by DXARTS Assistant Professor Laura Luna Castillo.




	

&#60;img width="3840" height="2160" width_o="3840" height_o="2160" data-src="https://freight.cargo.site/t/original/i/c36720db6ee7f6d12cc80e18387650ecfe9c7cb3af3d1c1a5becca5f27f34b2d/IMG_8177-2.JPG" data-mid="244066339" border="0"  src="https://freight.cargo.site/w/1000/i/c36720db6ee7f6d12cc80e18387650ecfe9c7cb3af3d1c1a5becca5f27f34b2d/IMG_8177-2.JPG" /&#62;
&#60;img width="3787" height="2128" width_o="3787" height_o="2128" data-src="https://freight.cargo.site/t/original/i/016545744b349805046f72a6d59c547dc075c50b2dfc59ac2928c23f46e6f92b/IMG_8165-3.jpg" data-mid="244066502" border="0"  src="https://freight.cargo.site/w/1000/i/016545744b349805046f72a6d59c547dc075c50b2dfc59ac2928c23f46e6f92b/IMG_8165-3.jpg" /&#62;
&#60;img width="3802" height="2138" width_o="3802" height_o="2138" data-src="https://freight.cargo.site/t/original/i/de39e0d332a8b7e7968c86f50245a9b8a41179d37b56b7f7fdae9ebcda7f33ff/IMG_9417-2.jpg" data-mid="244066501" border="0"  src="https://freight.cargo.site/w/1000/i/de39e0d332a8b7e7968c86f50245a9b8a41179d37b56b7f7fdae9ebcda7f33ff/IMG_9417-2.jpg" /&#62;
&#60;img width="3553" height="1999" width_o="3553" height_o="1999" data-src="https://freight.cargo.site/t/original/i/fad6429040e8f7f686d9666747270aa05885aeaac60b684d689f454fa93b2d67/IMG_8179-3.jpg" data-mid="244066544" border="0"  src="https://freight.cargo.site/w/1000/i/fad6429040e8f7f686d9666747270aa05885aeaac60b684d689f454fa93b2d67/IMG_8179-3.jpg" /&#62;
Amniotic is a choreographic exploration of memory, touch, and the unseen but ubiquitous forces that carry us. Inspired by the fluid support of amniotic waters, the work traces how tenderness and strength continue to live in the body, unfolding through dance, theater, sound, and projected imagery. It lingers in the space where gestures of care ripple outward like water, where tenderness anchors strength, and where human and machine meet to shape new ways of seeing and being with one another.
Technology is not merely a tool but a collaborator: a bespoke, dance-literate machine that listens, remembers, and responds mindfully to the dancers in real time. Built with wearable sensors and custom motion-recognition techniques, the system learns the vocabulary of the performers’ movement, mapping embodied memory into sound and visual media. This framework supports an environment of fluidity, care, and resonance. On stage, movement, story, and technology converge as one.









The dance is inspired by and partially choreographed using Undercurrent dance technique. Read more about dance-literate machines. Visuals are inspired by and partially created with TouchDesigner sketches by Supermarket Sallad. The narrative voice was pre-recorded by Ifeyinwa Onyekonwu and is triggered live by the ML system. The narrative script is adapted from PauseLab. Enjoy the full meditation score.








Other Projects ︎ </description>
		
	</item>
		
		
	<item>
		<title>Human-Machine Ritual</title>
				
		<link>https://roseziyuxu.cargo.site/Human-Machine-Ritual</link>

		<pubDate>Wed, 28 Jan 2026 01:18:51 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/Human-Machine-Ritual</guid>

		<description>
	Human–Machine Ritual:&#38;nbsp;Synergic Performance through Real-Time Motion Recognition


# Wearable Sensors # Machine Learning # Movement Capture # Real-time Classification

	We design and develop a machine learning system that perceives human movement in real time, enabling machine intelligence to participate in performance as a responsive partner rather than a generative replacement.

Department of Digital Arts &#38;amp; Experimental Media (DXARTS)
University of Washington
Seattle, WA, USA

    







	Overview
Human–Machine Ritual introduces a lightweight, real-time movement recognition system that enables synergic interaction between dancers and computational media during live performance. The system combines wearable inertial measurement unit (IMU) sensors with a MiniRocket-based time-series classification model to recognize dancer-specific movements and control responsive audiovisual elements with low latency. 


Unlike generative approaches that synthesize new content, this work foregrounds responsive perception: the machine interprets and recalls embodied motion patterns, preserving expressive depth while supporting real-time creative interaction. 


Research Motivation

This research reframes human–machine collaboration in performance through the lens of attentive recognition rather than generation. Large-scale AI models often rely on generic datasets and aim for broad generalization; by contrast, this system is trained on dancer-specific motion paired with personally meaningful sound stimuli. 


The work explores what it means for a machine to listen and remember movement in context, maintaining sensitivity to temporal ambiguity, somatic memory, and the unique qualities of embodied expression. The machine does not act as an autonomous creator, but as a perceptual partner whose behavior is shaped by the dancer’s training data and movement vocabulary. 


System Design
&#60;img width="2194" height="990" width_o="2194" height_o="990" data-src="https://freight.cargo.site/t/original/i/364e4ad10cd24aa8ae2caee89c0923014699e97bed7540e3480e451b5bee935f/Screen-Shot-2026-01-27-at-5.59.22-PM.png" data-mid="244064166" border="0"  src="https://freight.cargo.site/w/1000/i/364e4ad10cd24aa8ae2caee89c0923014699e97bed7540e3480e451b5bee935f/Screen-Shot-2026-01-27-at-5.59.22-PM.png" /&#62;
Dancers wear four wireless IMU sensors (placed on wrists and ankles) that capture six channels (three-axis accelerometer and gyroscope) of motion data at 48 Hz. 


During a pre-performance training phase, the dancer improvises with sound, generating movement-sound pairings rooted in somatic memory. These recordings are preprocessed into fixed-length segments and augmented to enhance dataset diversity. A MiniRocket feature extractor combined with a ridge regression classifier learns to distinguish between multiple motion types. 


During live application, real-time IMU streams are classified and mapped to corresponding audiovisual controls. The system maintains end-to-end latency under 50 ms, ensuring synchronous interaction between movement and media without perceptible delay. 
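The training pipeline above can be sketched in a few lines. This is an illustrative sketch, not the production system: the window length (one second at 48 Hz), hop size, and the specific libraries (sktime's MiniRocketMultivariate with scikit-learn's RidgeClassifierCV) are assumptions chosen to mirror the description, and the model-fitting step is only outlined.

```python
# Sketch of the segmentation + classification pipeline described above.
# Window/hop sizes and library choices are illustrative assumptions.

def segment(stream, win, hop):
    """Cut a multichannel stream (a list of per-sample channel tuples)
    into fixed-length, overlapping windows."""
    return [stream[i:i + win] for i in range(0, len(stream) - win + 1, hop)]

def train_classifier(windows, labels):
    """Fit MiniRocket features + ridge classifier.
    Requires sktime and scikit-learn; shapes assume sktime's
    (n_windows, n_channels, n_timepoints) panel convention."""
    import numpy as np
    from sktime.transformations.panel.rocket import MiniRocketMultivariate
    from sklearn.linear_model import RidgeClassifierCV
    from sklearn.pipeline import make_pipeline
    X = np.asarray(windows).transpose(0, 2, 1)  # -> (n, channels, time)
    clf = make_pipeline(
        MiniRocketMultivariate(),
        RidgeClassifierCV(alphas=np.logspace(-3, 3, 10)),
    )
    clf.fit(X, labels)
    return clf

# Example: 4 sensors x 6 channels = 24 channels sampled at 48 Hz;
# a 1-second window is 48 samples, hopped every 12 samples (~250 ms).
stream = [(0.0,) * 24 for _ in range(96)]   # 2 s of dummy samples
windows = segment(stream, win=48, hop=12)
print(len(windows), len(windows[0]))        # 5 windows of 48 samples each
```

The same `segment` function can run over the live IMU stream during performance, with each window passed to the trained classifier and its predicted class mapped to an audiovisual cue.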




&#60;img width="2184" height="744" width_o="2184" height_o="744" data-src="https://freight.cargo.site/t/original/i/66f443642f4b80ece710c0796231b62d7e29a5cf26b5b076dc31a25142ca32a8/Screen-Shot-2026-01-27-at-6.00.10-PM.png" data-mid="244064168" border="0"  src="https://freight.cargo.site/w/1000/i/66f443642f4b80ece710c0796231b62d7e29a5cf26b5b076dc31a25142ca32a8/Screen-Shot-2026-01-27-at-6.00.10-PM.png" /&#62;


	

	Key Features&#38;nbsp;



Wearable IMU sensing: unobtrusive motion capture from wrists and ankles at performance-ready sampling rates. 



MiniRocket classification: efficient, high-accuracy time-series model tailored to continuous sensor data. 



Low latency: system latency under 50 ms, enabling real-time responses in performance conditions. 



Dancer-specific training: movement classes are grounded in somatic memory and gesture patterns unique to the performer. 



Responsive media control: predicted motion classes drive sound and projection outputs as part of a tightly coupled human–machine loop. 




Quantitatively, the classifier achieved high accuracy and strong discriminability across multiple motion classes, demonstrating the viability of this architecture for live creative contexts. 

Integration with Creative Practice

This system was developed in tandem with choreographic development, forming a research and artistic practice that privileges embodied interaction over automated generation. The technical pipeline, from sensing to multimedia control, is designed to support embodied nuance, temporal ambiguity, and contextual responsiveness rather than predefined mappings or fixed choreography. 


By situating the dancer’s body as both archive and oracle within the system’s training regime, the project offers a replicable framework for integrating dance-literate machines into creative, educational, and live performance settings.  
Research Outputs
Paper publication, invited panel talk, and poster presentation at NeurIPS 2025 in San Diego.

Read the full paper: Zhuodi (Zoe) Cai, Ziyu (Rose) Xu, Juan Pampin, Human–Machine Ritual: Synergic Performance through Real-Time Motion Recognition. Proceedings of the Thirty-Ninth Conference on Neural Information Processing Systems (NeurIPS 2025 Creative AI Track).
&#60;img width="828" height="1798" width_o="828" height_o="1798" data-src="https://freight.cargo.site/t/original/i/869c623c8a1b3e6782adbdac7b29f7551fc76d2826215a63dd81d0cf35c0415d/Screen-Shot-2026-01-27-at-7.01.26-PM.png" data-mid="244065059" border="0"  src="https://freight.cargo.site/w/828/i/869c623c8a1b3e6782adbdac7b29f7551fc76d2826215a63dd81d0cf35c0415d/Screen-Shot-2026-01-27-at-7.01.26-PM.png" /&#62;
&#60;img width="1170" height="2532" width_o="1170" height_o="2532" data-src="https://freight.cargo.site/t/original/i/11bca3cea4f0003b60888ad81b348f4eb342086e8305febd0ac121d11b4b46c1/panel.JPG" data-mid="244065062" border="0"  src="https://freight.cargo.site/w/1000/i/11bca3cea4f0003b60888ad81b348f4eb342086e8305febd0ac121d11b4b46c1/panel.JPG" /&#62;
&#60;img width="830" height="1800" width_o="830" height_o="1800" data-src="https://freight.cargo.site/t/original/i/c10ca5a67b85229df01b5a009d42e6be5869b913a66228cadf7d835d25740fdf/Screen-Shot-2026-01-27-at-7.00.19-PM.png" data-mid="244065060" border="0"  src="https://freight.cargo.site/w/830/i/c10ca5a67b85229df01b5a009d42e6be5869b913a66228cadf7d835d25740fdf/Screen-Shot-2026-01-27-at-7.00.19-PM.png" /&#62;

Thank you, Zoe, for being such a brilliant and supportive research partner, and for pulling through the 4-hour poster session and panel talk with me at NeurIPS!

Looking Ahead

Moving forward, we plan to expand the system’s movement vocabulary by capturing a broader range of gestures, improve recognition of transitions between motions, and enable on-the-fly learning of new movements for adaptability. Additionally, we aim to integrate the system into interactive gallery installations and other art environments, and to improve its generalizability while preserving its human-centered design. Finally, we intend to open-source the platform, inviting broader creative collaboration and further research development.

 




	
	




Other Projects ︎ </description>
		
	</item>
		
		
	<item>
		<title>Genesis</title>
				
		<link>https://roseziyuxu.cargo.site/Genesis</link>

		<pubDate>Fri, 04 Apr 2025 22:10:26 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/Genesis</guid>

		<description>
	Genesis
Tech:
# Emotion Recognition # Machine Learning # Computer Vision # Python, TouchDesigner

Art and Science: # Psychology # Video Installation # Performance Art # Data-Driven Art

	An exploration of artificial life through data-driven video installation, live performance, and system art

Dimensions: space: 2 meters x 2 meters; video display: 62 cm x 38 cm

Dec 2nd - 4th, 2024, DXARTS Gallery
Mar 4th - 6th, 2025, School of Art Gallery 10D

Department of Digital Arts &#38;amp; Experimental Media (DXARTS)
University of Washington
Seattle, WA, USA

    







	

Project Description
Genesis is an interactive installation that bridges art and technology to explore artificial life through data-driven video, system art, and live performance. Integrating Python-based emotion recognition with TouchDesigner, the system captures and responds to real-time facial expressions.

Inspired by The Artist Is Present (2010) by Marina Abramović and philosopher Emmanuel Levinas’s notion of the face as the most expressive site of the Other, Genesis creates a one-on-one encounter between the viewer and a screen displaying soundless portraits. Installed in an intimate corner of the DXARTS gallery, a webcam records the viewer’s face, feeding data into a machine learning system that selects response videos based on perceived emotion.

Rooted in definitions of artificial life from DXARTS 200 Machine Art Lecture II, Genesis simulates face perception, emotional inference, and introjection—where unconscious identification transfers affect between entities. This bidirectional exchange allows the system to ‘perform’ uniquely for each participant, creating ephemeral moments of connection.

With audience consent, the system can record novel expressions and add them to its growing archive, allowing the work to evolve over time—it is alive.

Prototype
&#60;img width="1280" height="1125" width_o="1280" height_o="1125" data-src="https://freight.cargo.site/t/original/i/a940140684a3ecb9987ef89b2be1866df945135ee1feb4a3654ba523c8937c1b/Screen-Shot-2025-05-12-at-12.59.58-AM-Photoroom.png" data-mid="233073425" border="0"  src="https://freight.cargo.site/w/1000/i/a940140684a3ecb9987ef89b2be1866df945135ee1feb4a3654ba523c8937c1b/Screen-Shot-2025-05-12-at-12.59.58-AM-Photoroom.png" /&#62;Figure 1. Prototype flowchart for Genesis, an interactive emotion-based video system. This diagram outlines the system’s structure, including three main loops: Standby, Interaction, and Data Growth. Real-time facial analysis is performed using OpenCV and DeepFace, with emotion scores transmitted to TouchDesigner via OSC to trigger responsive video playback. With audience consent, new emotion clips are recorded and added to a growing archive, allowing the system to evolve over time.
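The Interaction loop in Figure 1 can be sketched as follows. This is a hypothetical outline, not the installation's actual script: the OSC address ("/emotion"), host/port, and frame-polling structure are illustrative assumptions, and the live loop requires opencv-python, deepface, and python-osc.

```python
# Hypothetical sketch of the Standby/Interaction loop from Figure 1.
# Address name, host, and port are illustrative assumptions.

def dominant_emotion(scores):
    """Return the highest-scoring emotion label from a
    DeepFace-style {emotion: score} dict."""
    return max(scores, key=scores.get)

def run(host="127.0.0.1", port=7000):
    """Live loop: webcam -> DeepFace emotion scores -> OSC to TouchDesigner.
    Imports are deferred so the helper above stays dependency-free."""
    import cv2
    from deepface import DeepFace
    from pythonosc.udp_client import SimpleUDPClient
    client = SimpleUDPClient(host, port)
    cam = cv2.VideoCapture(0)
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        # enforce_detection=False keeps the loop alive in standby mode,
        # when no face is in front of the camera
        result = DeepFace.analyze(frame, actions=["emotion"],
                                  enforce_detection=False)
        scores = result[0]["emotion"]
        # send the full score vector; TouchDesigner selects a response video
        client.send_message("/emotion", [scores[k] for k in sorted(scores)])

scores = {"happy": 62.1, "sad": 3.4, "neutral": 30.0, "angry": 4.5}
print(dominant_emotion(scores))  # happy
```

With audience consent, the same loop could branch into the Data Growth state, writing the current frames to the video archive alongside their detected emotion label.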

Development
&#60;img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXf-a9IKPNIrFtQXI-UxOsqMQS03ggYBuZIeUmDHgESXgDgcIMI6s8R4iDYo82A-lCSrpuhG8RZ7ZAQLPuuxeXGzw32hwnUf2znCze9fEb8PbU9fxOymA4TrWso7_W2NPVs6kjr73w?key=hmH5u7AvFrUc0NugBGUB6VXv" width="624" height="332" style="width: 408.921px; height: 217.567px;"&#62;Figure 2. Excerpt of the data processing pipeline. The left panel shows a modular folder structure, including Python scripts for video handling and OSC communication. The right panels display key code components and terminal output, highlighting real-time emotion detection, recording status, and system feedback during performance.

&#60;img width="868" height="1166" width_o="868" height_o="1166" data-src="https://freight.cargo.site/t/original/i/2fd6ab557f73aa25a5d9b3c91cd780fcdb7958b492be6a362fab91c89a11f54b/Screen-Shot-2025-05-12-at-1.25.42-AM.png" data-mid="233072387" border="0"  src="https://freight.cargo.site/w/868/i/2fd6ab557f73aa25a5d9b3c91cd780fcdb7958b492be6a362fab91c89a11f54b/Screen-Shot-2025-05-12-at-1.25.42-AM.png" /&#62;
&#60;img width="1370" height="1030" width_o="1370" height_o="1030" data-src="https://freight.cargo.site/t/original/i/f32e403e7df6e9dbdf34379c8fc0c080b0015510ab4ef2bdd4d31609283793e2/Screen-Shot-2025-05-12-at-1.25.51-AM.png" data-mid="233072388" border="0"  src="https://freight.cargo.site/w/1000/i/f32e403e7df6e9dbdf34379c8fc0c080b0015510ab4ef2bdd4d31609283793e2/Screen-Shot-2025-05-12-at-1.25.51-AM.png" /&#62;
Figure 3. Installation diagram of Genesis in DXARTS Gallery.




	

	References
From Totality and Infinity by Emmanuel Levinas: “The face opens the primordial discourse whose first word is obligation… It is that discourse that obliges the entering into discourse. The Other faces me and puts me in question and obliges me.”
From A Cyborg Manifesto by Donna Haraway: “The boundary between human and machine is thoroughly breached; we are living in a world where we are both and neither.”
From The Language of New Media by Lev Manovich: “In new media, the content is an interface; the interface becomes content.”
From A Sea of Data: Apophenia and Pattern (Mis-)Recognition by Hito Steyerl: “In the age of machine vision, humans are no longer the measure of things… but rather, they are a data resource to be quantified and classified.”
From Atlas of AI by Kate Crawford: “Artificial intelligence is not artificial or intelligent. It is made from natural resources and human labor, embedded with histories, and entangled with the lives of all who encounter it.”
Photo Documentation
&#60;img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcJxPbosFoV0B1oHJ8yEjZRMLEuKFpVHJSdNuUZOjAPRNmkqe_iT93TgX33PPct6hgidWjR_6KeOJoQufzDvbnubotT2a4MJXKORarWsDvcvh_UYFwCH8wtYpChIa0393sDzhZkAw?key=hmH5u7AvFrUc0NugBGUB6VXv" width="626.2052146304987" height="248.9450102420047" style="width: 282.155px; height: 111.78px;" data-scale="69"&#62;&#60;img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcoyp36ebXzk70rYZz0hEzcPkJ7bGiLvXSpEeZFbkGjzqG8NWdHKaYN3WphLbllcWgfkqtzpLugcoQwz4emhepcve1bWJd9gwudhCWlZKtIujJ6Lx6Kyo1h0Ry3xQSzMl8zrXKqCQ?key=hmH5u7AvFrUc0NugBGUB6VXv" width="664.416794996824" height="348.6955307798656" style="width: 278.066px; height: 145.733px;" data-scale="68"&#62; Figure 4. Demo of backstage facial analysis (left) and final video feed (right). The left panels show real-time emotion detection via Python, including scores and overlays; the right panels display the corresponding TouchDesigner output. The top row captures “standby” mode with no audience present, while the bottom shows active interaction, with a "sad" expression reflected in the visual response. 
&#60;img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXffeNk_qnhjOPgHCeAwjQhCFLNlo2n38a5s4pbgtrPMxzKYNbRSI0AixML7WByocXu7689aOnmUg8WnWjDcdydT-nl5QjZsxUnY5QSvtgHIUSRt68ISy84SlXJEeevqUsuxHSSZmA?key=hmH5u7AvFrUc0NugBGUB6VXv" width="624" height="372.1500153997902" style="width: 408.921px; height: 243.78px;"&#62;&#60;img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXeNNSIBp3EAXVx4aYUvBaqDMAyQaGu8gURRjowO2YfWYKS2RBI8sCdae4LSXWKMMswfsukF1zy_XPlOJxX9DUf4R3Wq5tBSVBlWQQQf6XcYH4gSRzriaasTOIaL1WmpdOo9fbSgUg?key=hmH5u7AvFrUc0NugBGUB6VXv" width="624" height="364.20592305304893" style="width: 408.921px; height: 238.537px;"&#62;Figure 5. Close-up of the final video feed. The top panel shows a pre-recorded face from the original archive, representing the artificial life at launch; the bottom panel features a clip added post-show, capturing a participant and marking the system’s evolution as audiences become performers.
&#60;img src="https://lh7-rt.googleusercontent.com/docsz/AD_4nXcdkzx16ThYmGlnaJOxM_JKwgHpV-GOGrJ17c2WH73j8ShvCwsxqs_SqqBcPjwZH1z5UnMkX7kslP3PB5My_bctmWhAzP3bLYkjb1zgj-uDfRbCQSydKdTcRHucGzYx4rYumAuv?key=hmH5u7AvFrUc0NugBGUB6VXv" width="624" height="624" style="width: 408.921px; height: 408.921px;"&#62;Figure 6. More photo documentation from the audience’s perspective mid-show. On the second day of installation (Dec. 4th), the monitor was rotated to vertical orientation, thanks to Laura’s advice. The 28-inch 16:9 monitor is 24.4 inches (62 cm) along its long edge, so the head-portrait videos are displayed at close to human scale.


	
	




Other Projects ︎ </description>
		
	</item>
		
		
	<item>
		<title>Fictions in Fugue</title>
				
		<link>https://roseziyuxu.cargo.site/Fictions-in-Fugue</link>

		<pubDate>Mon, 28 Apr 2025 17:22:34 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/Fictions-in-Fugue</guid>

		<description>
	Fictions in Fugue



November 7th - 8th, 2024





Katharyn Alvord Gerlich Theater,Meany Center for the Performing Arts
Department of Digital Arts &#38;amp; Experimental Media (DXARTS)University of WashingtonSeattle, WA, USA




Tech: # AR/XR # Machine Learning # Computer Vision # Physical Simulation # Depth Camera # IMU Movement Sensor # Gesture Recognition # Meshes and Point Clouds
Media Arts: # Interactive Performance # Electro-Acoustic Music # Immersive Media # Video Projection # Sculpture # Mechatronics











	"In classical music, a fugue is a contrapuntal, polyphonic compositional technique in two or more voices, built on a subject that is introduced at the beginning in imitation, which recurs frequently throughout the course of the composition."
Fictions in Fugue is an interdisciplinary collaboration by new media artists and performers who come together to activate Meany Theater as a space in fugue and fragmentation. Combining interactive storytelling, Extended Reality technologies, and Machine Learning experiments, a series of embodied narratives emerges throughout the evening. Inspired by the short story “The Spiral” by Italo Calvino, formlessness and “indeterminate ways of feeling oneself there” emerge as vignettes of narrative possibilities.
Artists/Performers:
Ashley Menestrina
Cristina Brambila
Althea Rao
Derek Crescenti
Eunsun Choi
Emily Schoen Branch
Sadaf Sadri
Maria Thraen
Alex Lee Place
Laura Luna Castillo
Ziyu (Rose) Xu
Daniel Peterson


Cinematography: Cristina Brambila, Eunsun Choi, Shashank Shivashankar
Video Editing: Shashank Shivashankar, Ziyu (Rose) Xu




Presented with support from the Department of Digital Arts and Experimental Media (DXARTS) and Meany Center for the Performing Arts at University of Washington.

	

&#60;img width="4086" height="2286" width_o="4086" height_o="2286" data-src="https://freight.cargo.site/t/original/i/56951db07ed0d3bd1cb4f527beaa862d51fa06a943816ac7dcaff904e6624231/Screenshot-2025-03-24-at-5.00.19PM.png" data-mid="233071158" border="0"  src="https://freight.cargo.site/w/1000/i/56951db07ed0d3bd1cb4f527beaa862d51fa06a943816ac7dcaff904e6624231/Screenshot-2025-03-24-at-5.00.19PM.png" /&#62;
&#60;img width="3540" height="1980" width_o="3540" height_o="1980" data-src="https://freight.cargo.site/t/original/i/79e1b37190b52fa9d2905fce1ff595ba3a30e6ecfe06600a8d836875d335550c/Screenshot-2025-04-07-at-4.08.49PM.png" data-mid="233071163" border="0"  src="https://freight.cargo.site/w/1000/i/79e1b37190b52fa9d2905fce1ff595ba3a30e6ecfe06600a8d836875d335550c/Screenshot-2025-04-07-at-4.08.49PM.png" /&#62;
&#60;img width="3540" height="1970" width_o="3540" height_o="1970" data-src="https://freight.cargo.site/t/original/i/6881b0d75f34f4f9768ac78ae10d72ab00678ef544c75864a139e177dd02257c/Screenshot-2025-04-07-at-4.39.35PM.png" data-mid="233071174" border="0"  src="https://freight.cargo.site/w/1000/i/6881b0d75f34f4f9768ac78ae10d72ab00678ef544c75864a139e177dd02257c/Screenshot-2025-04-07-at-4.39.35PM.png" /&#62;
&#60;img width="4080" height="2284" width_o="4080" height_o="2284" data-src="https://freight.cargo.site/t/original/i/201eb80f319b4de7a8a6d7d0fbaeb50f0f50155602dbe52f5ef743b30c34ab52/Screenshot-2025-03-24-at-5.03.29PM.png" data-mid="233071160" border="0"  src="https://freight.cargo.site/w/1000/i/201eb80f319b4de7a8a6d7d0fbaeb50f0f50155602dbe52f5ef743b30c34ab52/Screenshot-2025-03-24-at-5.03.29PM.png" /&#62;
&#60;img width="3534" height="1982" width_o="3534" height_o="1982" data-src="https://freight.cargo.site/t/original/i/212aa8ec7474a7620a5e80f14aea2e61d9a21d9655e011515e37f77e6dad21ee/Screenshot-2025-04-07-at-4.18.51PM.png" data-mid="233071170" border="0"  src="https://freight.cargo.site/w/1000/i/212aa8ec7474a7620a5e80f14aea2e61d9a21d9655e011515e37f77e6dad21ee/Screenshot-2025-04-07-at-4.18.51PM.png" /&#62;
&#60;img width="3538" height="1976" width_o="3538" height_o="1976" data-src="https://freight.cargo.site/t/original/i/e5e78e236cd5ec207e893fb5818e1400dda8bb1ec2092366f9a4d80ddaa5b869/Screenshot-2025-04-07-at-4.10.54PM.png" data-mid="233071166" border="0"  src="https://freight.cargo.site/w/1000/i/e5e78e236cd5ec207e893fb5818e1400dda8bb1ec2092366f9a4d80ddaa5b869/Screenshot-2025-04-07-at-4.10.54PM.png" /&#62;
&#60;img width="3542" height="1982" width_o="3542" height_o="1982" data-src="https://freight.cargo.site/t/original/i/9358172b15b7c7d3b7f1dbaf76268949d47a40f54df672018a7d1d19c9cdae0d/Screenshot-2025-04-07-at-4.22.41PM.png" data-mid="233071172" border="0"  src="https://freight.cargo.site/w/1000/i/9358172b15b7c7d3b7f1dbaf76268949d47a40f54df672018a7d1d19c9cdae0d/Screenshot-2025-04-07-at-4.22.41PM.png" /&#62;
&#60;img width="4082" height="2284" width_o="4082" height_o="2284" data-src="https://freight.cargo.site/t/original/i/bf8ee080ff421d5e36079ff1361ce585851d71ecf1a82d57835d55cd20e8f3e4/Screenshot-2025-03-24-at-4.58.18PM.png" data-mid="233071199" border="0"  src="https://freight.cargo.site/w/1000/i/bf8ee080ff421d5e36079ff1361ce585851d71ecf1a82d57835d55cd20e8f3e4/Screenshot-2025-03-24-at-4.58.18PM.png" /&#62;
&#60;img width="4084" height="2288" width_o="4084" height_o="2288" data-src="https://freight.cargo.site/t/original/i/09d5d266486677fbc55f3cc714fac7c8913bfdf95b90c2e50bf2d6b516b0c947/Screenshot-2025-03-24-at-4.28.09PM.png" data-mid="233071198" border="0"  src="https://freight.cargo.site/w/1000/i/09d5d266486677fbc55f3cc714fac7c8913bfdf95b90c2e50bf2d6b516b0c947/Screenshot-2025-03-24-at-4.28.09PM.png" /&#62;
&#60;img width="3536" height="1976" width_o="3536" height_o="1976" data-src="https://freight.cargo.site/t/original/i/544dcad4d301a1ad4a5a3ec9af50bb06539e97a0456bf4feec70ed8bc97f4740/Screenshot-2025-04-07-at-4.53.49PM.png" data-mid="233071176" border="0"  src="https://freight.cargo.site/w/1000/i/544dcad4d301a1ad4a5a3ec9af50bb06539e97a0456bf4feec70ed8bc97f4740/Screenshot-2025-04-07-at-4.53.49PM.png" /&#62;
&#60;img width="3532" height="1978" width_o="3532" height_o="1978" data-src="https://freight.cargo.site/t/original/i/efac97c655d1aa14cb0e661d25db7da74c27a29921e0929efa6126315e0d309e/Screenshot-2025-04-07-at-4.14.14PM.png" data-mid="233071169" border="0"  src="https://freight.cargo.site/w/1000/i/efac97c655d1aa14cb0e661d25db7da74c27a29921e0929efa6126315e0d309e/Screenshot-2025-04-07-at-4.14.14PM.png" /&#62;
&#60;img width="3544" height="1970" width_o="3544" height_o="1970" data-src="https://freight.cargo.site/t/original/i/40172497f4efbfdb2d47100711b9a75042d57d7c9b0c2d75c6066249ede7685d/Screenshot-2025-04-07-at-4.12.45PM.png" data-mid="233071168" border="0"  src="https://freight.cargo.site/w/1000/i/40172497f4efbfdb2d47100711b9a75042d57d7c9b0c2d75c6066249ede7685d/Screenshot-2025-04-07-at-4.12.45PM.png" /&#62;





Other Projects ︎ </description>
		
	</item>
		
		
	<item>
		<title>2022: Another Space Odyssey</title>
				
		<link>https://roseziyuxu.cargo.site/2022-Another-Space-Odyssey</link>

		<pubDate>Wed, 09 Nov 2022 04:17:58 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/2022-Another-Space-Odyssey</guid>

		<description>
	
	

	2022: Another Space Odyssey
Research: January - May 2022

Tech:
# Python, Java, Processing
# Computer Vision
# Live Motion Capture
# Live Video Projection

Art and Science:
# Math
# Voronoi Diagram
# Experimental Performance Art
# Interactive Media

	
Performance: May 12th, 13th, and 14th, 2022&#38;nbsp;
At&#38;nbsp;Richard B. Fisher Center for the Performing Arts,&#38;nbsp;Annandale-on-Hudson, NY

Choreographer: Rose Xu (徐子瑜)
Music: Bowsprit – Balmorhea
Visual Designer and Engineer: Rose Xu (徐子瑜)
Performers: Leslie A. Morales, Antonia Salathé, Elsa Wood
Advisor and Concert Coordinator: Prof. Yebel Gallegos
Lighting Designer: Brian Aldous
Costume Designer: Liz Prince
Video Supervisor: Kat Pagsolingan

Stage Manager: Laura Hirschberg
Special thanks to Prof. Keith O’Hara for his generous guidance on the interactive computer programming components of this project.
&#38;nbsp;
Many thanks to Prof. Yebel Gallegos for his wise advice during my creative process and his unwavering support as I move through difficult times.

Much appreciation to Olivia Buzzelle, Jacinta Creel, Gaby Sabogal, and Isabella Spagnuolo for their contribution and dedication to the choreography of this piece.
Invited to perform at Gibney Dance Company, New York City, May 2023.

︎︎︎ Best viewed on a desktop/laptop in fullscreen :)


	
&#60;img width="2306" height="1441" width_o="2306" height_o="1441" data-src="https://freight.cargo.site/t/original/i/abcb8fc94d037def6ff6b310770084dadde95849fffbefa339232e00880ac853/5ppl.jpg" data-mid="160764306" border="0"  src="https://freight.cargo.site/w/1000/i/abcb8fc94d037def6ff6b310770084dadde95849fffbefa339232e00880ac853/5ppl.jpg" /&#62;
&#60;img width="1440" height="1083" width_o="1440" height_o="1083" data-src="https://freight.cargo.site/t/original/i/2d6b2f7b14192e1e79658ae209fefd5e95a885df21d2b293a6b4b08a25480a4d/BC77A30C-C7F3-49EB-9B80-A4A1E5013C37.JPG" data-mid="160301059" border="0"  src="https://freight.cargo.site/w/1000/i/2d6b2f7b14192e1e79658ae209fefd5e95a885df21d2b293a6b4b08a25480a4d/BC77A30C-C7F3-49EB-9B80-A4A1E5013C37.JPG" /&#62;
&#60;img width="4032" height="2688" width_o="4032" height_o="2688" data-src="https://freight.cargo.site/t/original/i/62422ee62c7833e1d09d5b3bff4a820fec4bf7f373139e6c76a37a3a965f29be/IMG_7123.JPG" data-mid="160300985" border="0"  src="https://freight.cargo.site/w/1000/i/62422ee62c7833e1d09d5b3bff4a820fec4bf7f373139e6c76a37a3a965f29be/IMG_7123.JPG" /&#62;
&#60;img width="2672" height="1796" width_o="2672" height_o="1796" data-src="https://freight.cargo.site/t/original/i/fc9cdeb3d05413293fc777fecdeab1c3da7cbee04c13d377444284c82ce7ff57/Odyssey1small.png" data-mid="160299814" border="0"  src="https://freight.cargo.site/w/1000/i/fc9cdeb3d05413293fc777fecdeab1c3da7cbee04c13d377444284c82ce7ff57/Odyssey1small.png" /&#62;
&#60;img width="2398" height="1796" width_o="2398" height_o="1796" data-src="https://freight.cargo.site/t/original/i/bcc985b8f2c0f23d0ee8ea63061eb5409571746fa52cc65fdad4ba4cad8d9db3/Screen-Shot-2022-11-27-at-3.52.37-PM.png" data-mid="160301195" border="0"  src="https://freight.cargo.site/w/1000/i/bcc985b8f2c0f23d0ee8ea63061eb5409571746fa52cc65fdad4ba4cad8d9db3/Screen-Shot-2022-11-27-at-3.52.37-PM.png" /&#62;
&#60;img width="4032" height="2688" width_o="4032" height_o="2688" data-src="https://freight.cargo.site/t/original/i/a8d88bb4afcd88ffb1ee948cc7a47291ba8d50a5494a5f932fcc5db9f2bdbdcf/IMG_7121.JPG" data-mid="160300982" border="0"  src="https://freight.cargo.site/w/1000/i/a8d88bb4afcd88ffb1ee948cc7a47291ba8d50a5494a5f932fcc5db9f2bdbdcf/IMG_7121.JPG" /&#62;
&#60;img width="647" height="1000" width_o="647" height_o="1000" data-src="https://freight.cargo.site/t/original/i/48a11d7df131de879fab6312544c98be5326bcea84ceeee988e6d6c69c054246/IMG_4963.PNG" data-mid="160300979" border="0"  src="https://freight.cargo.site/w/647/i/48a11d7df131de879fab6312544c98be5326bcea84ceeee988e6d6c69c054246/IMG_4963.PNG" /&#62;
&#60;img width="6300" height="4200" width_o="6300" height_o="4200" data-src="https://freight.cargo.site/t/original/i/ab162e38467d86cee7432ee691bb5839759e4b31374bbb96f5d0bac74b143f27/IMG_5206.JPG" data-mid="160300980" border="0"  src="https://freight.cargo.site/w/1000/i/ab162e38467d86cee7432ee691bb5839759e4b31374bbb96f5d0bac74b143f27/IMG_5206.JPG" /&#62;


	Photo credits: Chris Kayden, Dwayne Tang; Costume design:&#38;nbsp;Liz Prince, Rose Xu

    







	Artist’s Notes
American liberal arts education made me realize that many human ideas are essentially the same, merely investigated through different disciplines – just as one sentence can be said in multiple languages. Fascinated, I believe one discipline can offer an inspiring perspective for pushing another forward, given enough creativity and understanding of both. Trained as both an artist and a scientist, in this Dance Senior Thesis Project (part 1*) I investigate how math and dance can express a single idea with the help of technology. I hope this piece fosters more understanding between people in this world.

*part 2 is a completely different project, entitled “Tao, 1, 2, 3, ∞”



	 ︎ Click on figures ︎︎︎&#60;img width="938" height="610" width_o="938" height_o="610" data-src="https://freight.cargo.site/t/original/i/b80b3a0a23d97611ca99a7a6e75066a470bc167a088737afa36b82d2a30cf8ad/Math.png" data-mid="160311167" border="0"  src="https://freight.cargo.site/w/938/i/b80b3a0a23d97611ca99a7a6e75066a470bc167a088737afa36b82d2a30cf8ad/Math.png" /&#62;
Fig. 1: Voronoi dual graph from Discrete and Computational Geometry (Devadoss-O’Rourke)
&#60;img width="1494" height="1166" width_o="1494" height_o="1166" data-src="https://freight.cargo.site/t/original/i/ef78e2be3ac082d6e877aa98c4ef8746a8b8c5cff5d22d51c70e0098c0bde7bb/Math1.1.png" data-mid="160563387" border="0"  src="https://freight.cargo.site/w/1000/i/ef78e2be3ac082d6e877aa98c4ef8746a8b8c5cff5d22d51c70e0098c0bde7bb/Math1.1.png" /&#62;
Fig. 2: Partial code for plotting the Voronoi Diagram (VD) precisely using Delaunay Triangulation

&#60;img width="1052" height="1114" width_o="1052" height_o="1114" data-src="https://freight.cargo.site/t/original/i/2321e8e733aa3e0014c669ddf0b9ab1e063d937db9f6d00e3bff3fad7d16d0a1/Math1.png" data-mid="160563386" border="0"  src="https://freight.cargo.site/w/1000/i/2321e8e733aa3e0014c669ddf0b9ab1e063d937db9f6d00e3bff3fad7d16d0a1/Math1.png" /&#62;
Fig. 3: Example VD produced using the algorithm in Fig. 2
&#60;img width="1720" height="1000" width_o="1720" height_o="1000" data-src="https://freight.cargo.site/t/original/i/9a24f2b2c8d94e44733516ca564414067b537064d260a05ca58adb085c00ce66/translation.png" data-mid="160664645" border="0"  src="https://freight.cargo.site/w/1000/i/9a24f2b2c8d94e44733516ca564414067b537064d260a05ca58adb085c00ce66/translation.png" /&#62;Fig. 4: “Translating” math into dance

&#60;img width="1280" height="720" width_o="1280" height_o="720" data-src="https://freight.cargo.site/t/original/i/df3c1f09e0c58f45d5e06dee686b9fec43b0ee2ceb701b756097067e302fd7cf/Bkg.gif" data-mid="160310089" border="0"  src="https://freight.cargo.site/w/1000/i/df3c1f09e0c58f45d5e06dee686b9fec43b0ee2ceb701b756097067e302fd7cf/Bkg.gif" /&#62;
Fig. 5: Motion capture - detect movement

&#60;img width="1280" height="720" width_o="1280" height_o="720" data-src="https://freight.cargo.site/t/original/i/b72fe7110141a2b51f0e0575329a404252ec5a6f37a89d975787ef794f306c7c/Centers.gif" data-mid="160310096" border="0"  src="https://freight.cargo.site/w/1000/i/b72fe7110141a2b51f0e0575329a404252ec5a6f37a89d975787ef794f306c7c/Centers.gif" /&#62;Fig. 6:&#38;nbsp;Motion capture - locate movement
&#60;img width="1388" height="834" width_o="1388" height_o="834" data-src="https://freight.cargo.site/t/original/i/0ec14c358afe8984829c12ef7d73d4c9a43452822752d724257baa2ce883e82a/Math2.png" data-mid="160563888" border="0"  src="https://freight.cargo.site/w/1000/i/0ec14c358afe8984829c12ef7d73d4c9a43452822752d724257baa2ce883e82a/Math2.png" /&#62;Fig. 7: Plotting VD in motion

&#60;img width="2880" height="1800" width_o="2880" height_o="1800" data-src="https://freight.cargo.site/t/original/i/e7a7c588da7d8e01af6472fcadfd88bc18312b47235fc72559b436205f741f84/Math2.1.png" data-mid="160563889" border="0"  src="https://freight.cargo.site/w/1000/i/e7a7c588da7d8e01af6472fcadfd88bc18312b47235fc72559b436205f741f84/Math2.1.png" /&#62;Fig. 8: The key is approximation&#38;nbsp;
&#60;img width="1280" height="720" width_o="1280" height_o="720" data-src="https://freight.cargo.site/t/original/i/c174e238781a9fcfa568aba2adfb2e96fbdde48632f5ba896a1af1244f293a59/ProbMotion.gif" data-mid="160302816" border="0"  src="https://freight.cargo.site/w/1000/i/c174e238781a9fcfa568aba2adfb2e96fbdde48632f5ba896a1af1244f293a59/ProbMotion.gif" /&#62;Fig. 9: Jagged movement before applying the “gravitational field” and motion-averaging algorithms

&#60;img width="1280" height="720" width_o="1280" height_o="720" data-src="https://freight.cargo.site/t/original/i/835c4f1f87d8ebcbce3f8470a144a31974f0298c6101f9f9f3f0de8bde9eee17/MotionWork.gif" data-mid="160304198" border="0"  src="https://freight.cargo.site/w/1000/i/835c4f1f87d8ebcbce3f8470a144a31974f0298c6101f9f9f3f0de8bde9eee17/MotionWork.gif" /&#62;Fig. 10: Smooth movement after&#38;nbsp;optimizing the algorithm
&#60;img width="1280" height="720" width_o="1280" height_o="720" data-src="https://freight.cargo.site/t/original/i/9cf8c263488754bf00d6f7f31731767b74bf70298b4b7af0d160db871f92393d/VorWork.gif" data-mid="160309081" border="0"  src="https://freight.cargo.site/w/1000/i/9cf8c263488754bf00d6f7f31731767b74bf70298b4b7af0d160db871f92393d/VorWork.gif" /&#62;
Fig. 11: Mock visual rendering before going into the theatre

&#60;img width="854" height="480" width_o="854" height_o="480" data-src="https://freight.cargo.site/t/original/i/55180f17b2855c8325e5f62a034b97e856bcbb3578fea33a8bbc1a2af322f4c3/blackbox30.gif" data-mid="197214667" border="0"  src="https://freight.cargo.site/w/854/i/55180f17b2855c8325e5f62a034b97e856bcbb3578fea33a8bbc1a2af322f4c3/blackbox30.gif" /&#62;
Fig. 12: Tech working properly in the theatre. (The third dancer was outside the camera’s view.)

&#60;img width="2226" height="1584" width_o="2226" height_o="1584" data-src="https://freight.cargo.site/t/original/i/30639ee52108f2995fcf2e375b83e0500ecd163857784fd590a10af6ad09728c/tech.png" data-mid="178876872" border="0"  src="https://freight.cargo.site/w/1000/i/30639ee52108f2995fcf2e375b83e0500ecd163857784fd590a10af6ad09728c/tech.png" /&#62;
Fig. 13: Illustration of the technical setup of the camera-laptop-projector system in the theatre.
Credit:&#38;nbsp;
Katerina Pagsolingan @Fisher Center, Bard College.

	︎︎︎Final Product (Partial)︎︎︎&#38;nbsp;see the full choreography at the bottom of this page

Work Process (Technically)︎︎︎
Inspiration &#38;amp; Math Background
In math, a Voronoi Diagram breaks a plane into regions, each consisting of all points that are closest to that region’s Voronoi site (a designated point). It has many applications in computer science and in urban planning - for example, in siting air-raid shelters for a residential area.
In the textbook Discrete and Computational Geometry (Devadoss-O’Rourke), Corollary 4.25.2 states that,
“Let S ⊆ R2. Suppose that S is finite, that the sites in S are not all collinear, and that no four sites of S are cocircular, then the triangulation induced by the straight-line dual graph of Vor(S) is a triangulation of S.”
In other words, we can mathematically prove that the Voronoi Diagram determines a Delaunay Triangulation. As a student, I was curious whether that relationship could be reversed. Though I could not find theorems to help me prove that the Delaunay Triangulation determines a well-defined Voronoi Diagram, Prof. Ethan Bloch encouraged me to try plotting the Voronoi Diagram (VD) using the Delaunay Triangulation algorithm that we learned from the textbook. 

Shown in Fig. 2, we developed an original Python algorithm to plot a well-defined Voronoi Diagram via Delaunay Triangulation. This does not prove the converse statement, but demonstrates computationally that it holds. The code ended up around a thousand lines, with a runtime of about 1.4 seconds on my machine to plot the VD for the given sites shown in Fig. 3.
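The thousand-line program itself is not reproduced here, but the geometric fact it exploits can be sketched briefly: every vertex of the Voronoi Diagram is the circumcenter of a Delaunay triangle, so a triangulation immediately yields the diagram's vertices. A minimal, self-contained sketch (the function name is mine, not from the thesis code):

```python
def circumcenter(a, b, c):
    """Circumcenter of triangle abc: the point equidistant from all
    three vertices, i.e. the Voronoi vertex shared by the three sites."""
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy)

# The circumcenter of a right triangle is the midpoint of its hypotenuse:
print(circumcenter((0, 0), (2, 0), (0, 2)))  # → (1.0, 1.0)
```

Connecting the circumcenters of adjacent Delaunay triangles traces out the Voronoi edges.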
As a choreographer, although I studied the VD in stillness, my logical brain knew that the diagram would look completely different if even one point (Voronoi site) changed location - which fascinated my creative brain, because it seemed to me like a dance. As Einstein suggests, science and art had come together in my head in aesthetics; but for other people to see that in physical form, new technology was needed. Therefore, I decided to program the VD in motion and create a performance that lets dancers move the diagram. 


Motion Detection

This project aimed to translate an idea conveyed in “math language” into “dance language”, and to put an integral product of science and art on stage with the support of new technology. Specifically, I decomposed the fundamental ideas of the Voronoi Diagram, used dance vocabulary to recreate those ideas with the body and the space (see Fig. 4), and created a series of programs to visualize the ideas in live performance.

To embody this “translation” with technology, I first needed to “teach” my computer to recognize moving dancers. For the performance, an infrared wide-angle camera was mounted above the stage to capture live motion from a bird’s-eye view. I used the OpenCV for Processing package to find the contours of moving bodies (see Fig. 5). Then, I developed my own algorithm to find the center of each person, which later served as a Voronoi site (see the white dots in Fig. 6).
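The center-finding step can be approximated very simply: OpenCV returns each contour as a list of boundary points, and averaging those points gives a usable center. This is only a stand-in sketch for the thesis algorithm, which is not shown (in practice, OpenCV's area-weighted image moments give a more robust centroid):

```python
def contour_center(contour):
    """Approximate a blob's center as the mean of its contour points.
    Each such center later serves as a Voronoi site."""
    n = len(contour)
    cx = sum(x for x, _ in contour) / n
    cy = sum(y for _, y in contour) / n
    return (cx, cy)

# A square dancer-blob outline centered at (5, 5):
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(contour_center(square))  # → (5.0, 5.0)
```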



Moving Voronoi Diagram

While trying to plot the VD in motion using the dancers’ centers as the generating Voronoi sites, I soon realized that my thousand-line VD algorithm was not suited to rendering visuals for live performance - the runtime was too long. So I studied Hoff’s algorithm for plotting approximations of the VD (Hoff, Kenneth E., et al.). That algorithm, especially when written in the Processing environment, is very concise (see Fig. 7) - only about 80 lines - which made it convenient for generating the VD in a loop. The more edges the triangle fans have (the variable n in the code in Fig. 7), the closer the approximation is to a well-defined VD. See Fig. 8 for the number of rays defined by n. 
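Hoff's trick is to draw one cone (a triangle fan with n slices) per site and let the graphics depth buffer keep, at every pixel, the cone whose apex - the site - is nearest. The depth test therefore computes the same labeling as this brute-force Python sketch does explicitly (the actual Processing code is not reproduced here):

```python
def voronoi_raster(sites, width, height):
    """Label each pixel with the index of its nearest site -- the
    per-pixel comparison that the depth buffer performs for free
    when each site is rendered as a cone."""
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            row.append(min(range(len(sites)),
                           key=lambda i: (x - sites[i][0])**2 + (y - sites[i][1])**2))
        grid.append(row)
    return grid

labels = voronoi_raster([(1, 2), (8, 6)], 10, 8)
print(labels[2][1], labels[6][8])  # each pixel belongs to the nearer site → 0 1
```

On the GPU this per-pixel loop is replaced by rasterizing the cones, which is what makes the approximation fast enough for live performance.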
Issues arose when I generated the VD from the dancers’ locations frame by frame - the visuals did not look smooth (see Fig. 9). To solve this, I wrote an algorithm letting the dancers’ centers “attract” the Voronoi sites like a gravitational field. With motion averaging, the Voronoi regions drift smoothly behind the dancers, as shown in Fig. 10.&#38;nbsp;
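The smoothing amounts to a standard exponential-smoothing update: each frame, every Voronoi site moves a fixed fraction of the way toward its dancer's detected center instead of jumping there. The exact constants and update rule in the thesis code may differ; this is an illustrative sketch:

```python
def attract(site, target, strength=0.1):
    """Pull a Voronoi site toward a detected body center like a
    gravitational field; a small strength yields smooth drifting motion."""
    sx, sy = site
    tx, ty = target
    return (sx + (tx - sx) * strength, sy + (ty - sy) * strength)

site = (0.0, 0.0)
for _ in range(3):            # three frames chasing a stationary dancer
    site = attract(site, (10.0, 0.0))
print(round(site[0], 3))  # → 2.71
```

Because each step closes only 10% of the remaining gap, jitter in the detected centers is averaged away while the sites still track sustained movement.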


Real Life Challenge
At this point, I had created the new technology I needed to augment the dance choreography and to embody the connection between art and science. I expected the projection in performance to look similar to Fig. 11. However, life was more dramatic than drama - four of my five performers tested positive for COVID two days before the show. Fortunately, I found two amazing student dancers who learned my choreography in one day, and the four shows went on as scheduled. Thus, the official performance was danced by three performers instead of five, with truncated choreography. 
Moving forward, I plan to stage the five-person full version of the performance in May 2023 at Gibney Dance Company in New York City, with which Bard College collaborates. Grateful to the Bard community that enabled me to create this piece, and eager to let more people fall in love with breaking the boundary between art and science, I will lead a workshop at the Bard Dance Program in March 2023 to share my new tools with the community and let choreographers apply and tweak them in service of their own work.


 ︎ THANK YOU FOR READING&#38;nbsp;︎

&#38;nbsp; See Full Performance Recording︎
Works Cited:
1. Devadoss, Satyan L., and O'Rourke, Joseph.&#38;nbsp;Discrete and Computational Geometry. United States, Princeton University Press, 2011.
2. Hoff, Kenneth E., et al. “Fast Computation of Generalized Voronoi Diagrams Using Graphics Hardware.” Proceedings of the Sixteenth Annual Symposium on Computational Geometry  - SCG '00, 2000, https://doi.org/10.1145/336154.336226. 

Full choreography with the original crew (before the impact of COVID-19*), recorded at a tech rehearsal. Note that the technology was still being debugged in this recording.&#38;nbsp;

* We had to change the cast last minute due to COVID. Many thanks to the dancers who generously stood in for this work and to everyone who kept the performance going. The hard work of the original cast is deeply appreciated and remembered. 




Back to main page&#38;nbsp;︎︎︎
Other Projects ︎&#38;nbsp;</description>
		
	</item>
		
		
	<item>
		<title>The Armadillo </title>
				
		<link>https://roseziyuxu.cargo.site/The-Armadillo</link>

		<pubDate>Wed, 09 Nov 2022 04:17:58 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/The-Armadillo</guid>

		<description>
	The Armadillo&#38;nbsp;October 2022

# Live Performance # Multimedia Projection # Poetry
	In October 2022, I had the chance to learn from and collaborate with multimedia artists Masha Tsimring and Nicholas Hussong. We spent a week in the LUMA Theatre of the Fisher Center for Performing Arts experimenting with ephemeral mediums like lighting and video. Putting old and new technologies to work, our hands ranged from House music to Bach, from oil in water to sand…
I (Rose Xu) am the person wearing a mask and a navy turtleneck&#38;nbsp;︎
 
&#60;img width="2692" height="1800" width_o="2692" height_o="1800" data-src="https://freight.cargo.site/t/original/i/37421787a3dc7f6f35ba80ab37dc766a9af8aeb25fbf32e057d623e9579f18ac/Armadillo3.png" data-mid="162778039" border="0"  src="https://freight.cargo.site/w/1000/i/37421787a3dc7f6f35ba80ab37dc766a9af8aeb25fbf32e057d623e9579f18ac/Armadillo3.png" /&#62;
&#60;img width="2698" height="1800" width_o="2698" height_o="1800" data-src="https://freight.cargo.site/t/original/i/7200d4b9d6e02a528c6feed849782e765214681e385a8a6a8b0efe511d7cac24/Armadillo6.png" data-mid="162778041" border="0"  src="https://freight.cargo.site/w/1000/i/7200d4b9d6e02a528c6feed849782e765214681e385a8a6a8b0efe511d7cac24/Armadillo6.png" /&#62;
&#60;img width="2176" height="1498" width_o="2176" height_o="1498" data-src="https://freight.cargo.site/t/original/i/767a374d85f1e15b6c0f01da010ddc020f731445851e55a2b336818e4291ba99/Armadillo5.png" data-mid="162778838" border="0"  src="https://freight.cargo.site/w/1000/i/767a374d85f1e15b6c0f01da010ddc020f731445851e55a2b336818e4291ba99/Armadillo5.png" /&#62;


	&#60;img width="1406" height="1800" width_o="1406" height_o="1800" data-src="https://freight.cargo.site/t/original/i/8e35e8d05fa6328c23d4b2351ab8b33c33eccd86d12cb0e470f63673c43f0b67/Armadillo4.png" data-mid="162778042" border="0" data-scale="100" src="https://freight.cargo.site/w/1000/i/8e35e8d05fa6328c23d4b2351ab8b33c33eccd86d12cb0e470f63673c43f0b67/Armadillo4.png" /&#62;
	&#60;img width="1202" height="1800" width_o="1202" height_o="1800" data-src="https://freight.cargo.site/t/original/i/e3d123616aa740aeab55f910e4b487c5c68aad3e94c0fb2a851385e021d5d201/Armadillo2.png" data-mid="160191178" border="0"  src="https://freight.cargo.site/w/1000/i/e3d123616aa740aeab55f910e4b487c5c68aad3e94c0fb2a851385e021d5d201/Armadillo2.png" /&#62;
	&#60;img width="1100" height="1800" width_o="1100" height_o="1800" data-src="https://freight.cargo.site/t/original/i/f7f770182e39413db2e1e7d4cb93cd09c0931276a34b2e2c3c6034152fe27d26/poem.png" data-mid="162794819" border="0" data-scale="93" src="https://freight.cargo.site/w/1000/i/f7f770182e39413db2e1e7d4cb93cd09c0931276a34b2e2c3c6034152fe27d26/poem.png" /&#62; 

	On the last day, I put together multiple new and old media to build a live performance along with other students from different disciplines, such as photography, film, and literature. I was able to apply what I learned from Masha and Nicholas to the show - theater lighting design, sand on an overhead projector, an LED projector, and programmed video processed by QLab… Because everyone came from vastly different backgrounds and I had the strongest technical background, I learned to hear others’ needs and program visuals that answered a communal desire.

&#60;img width="2704" height="1800" width_o="2704" height_o="1800" data-src="https://freight.cargo.site/t/original/i/986e508f7d995c1dd03040596137a17eda52a49c66bc708135408394b0cf47fa/Armadillo1.png" data-mid="160191177" border="0"  src="https://freight.cargo.site/w/1000/i/986e508f7d995c1dd03040596137a17eda52a49c66bc708135408394b0cf47fa/Armadillo1.png" /&#62;
︎Rose Xu (me) at the far left, sharing programming ideas for the video design with fellow professors and students.


	Prof. Hussong told us, "many of your ideas may grow better when transplanted into another person's mind." Indeed, I absorbed an idea from a literature student and created a live interaction between poetry recitation and video projection. Realizing that my math and dance work could inspire students from all disciplines, I am eager to lead more workshops to share my ideas and tools in my adventure of finding the coalescence of science and art, and of building the technical platform to enable it.



Projects ︎
</description>
		
	</item>
		
		
	<item>
		<title>Tao, 1, 2, 3, ∞</title>
				
		<link>https://roseziyuxu.cargo.site/Tao-1-2-3</link>

		<pubDate>Wed, 09 Nov 2022 04:17:59 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/Tao-1-2-3</guid>

		<description>
	Tao, 1, 2, 3, ∞&#38;nbsp;September - December 2022

Tech: # programmed interactive media # live motion capture # live video projection
Art and Science: # live dance performance # Taoism philosophy&#38;nbsp;
	
Choreographer and performer: Rose Xu (徐子瑜)
Music: Untitled - Ana Roxanne, Venus - Ana Roxanne
Multimedia Design and Development: Rose Xu (徐子瑜)
Senior Thesis Advisor:&#38;nbsp;Yebel Gallegos
Production Manager: Jessica Myers
Costume Supervisor: Moe Schell
Video Supervisor: Kat Pagsolingan
 Faculty Advisor: Souleymane Badolo
Stage Manager: Daniel Nelson

With great thanks to: Prof. Yebel Gallegos, Prof. Keith O’Hara, and Prof. Tara Lorenzen.
Special thanks to: Betty Wang, Laura MacDonald, Prof. Jean Churchill, Bard Dance Program, and the Fisher Center Production Team.



	
We suggest viewing on desktop/laptop in fullscreen :)&#38;nbsp;


&#60;img width="1350" height="1800" width_o="1350" height_o="1800" data-src="https://freight.cargo.site/t/original/i/d976626f86d587c7d0f46306646aa1604e33150012c1d576ebed28245dc5e3d3/Screen-Shot-2022-12-01-at-7.49.08-PM.png" data-mid="162201340" border="0"  src="https://freight.cargo.site/w/1000/i/d976626f86d587c7d0f46306646aa1604e33150012c1d576ebed28245dc5e3d3/Screen-Shot-2022-12-01-at-7.49.08-PM.png" /&#62;
&#60;img width="2690" height="1800" width_o="2690" height_o="1800" data-src="https://freight.cargo.site/t/original/i/fa913e84f806d9b0438144e1949f42b73a8087e341cfe1c96a79739d941da8fc/dance1.png" data-mid="162201334" border="0"  src="https://freight.cargo.site/w/1000/i/fa913e84f806d9b0438144e1949f42b73a8087e341cfe1c96a79739d941da8fc/dance1.png" /&#62;
&#60;img width="1200" height="1800" width_o="1200" height_o="1800" data-src="https://freight.cargo.site/t/original/i/234058589b5977951848dbb61774a05488855282a7e330890fdfedc47e79d38b/poster2.png" data-mid="162824334" border="0"  src="https://freight.cargo.site/w/1000/i/234058589b5977951848dbb61774a05488855282a7e330890fdfedc47e79d38b/poster2.png" /&#62;
&#60;img width="2700" height="1800" width_o="2700" height_o="1800" data-src="https://freight.cargo.site/t/original/i/70745e087a88de5889db9bcc7080cbe7ad4ef3915bda069a8022c070e6003c32/Screen-Shot-2022-12-17-at-5.44.27-PM.png" data-mid="162210917" border="0"  src="https://freight.cargo.site/w/1000/i/70745e087a88de5889db9bcc7080cbe7ad4ef3915bda069a8022c070e6003c32/Screen-Shot-2022-12-17-at-5.44.27-PM.png" /&#62;
&#60;img width="2696" height="1800" width_o="2696" height_o="1800" data-src="https://freight.cargo.site/t/original/i/3ca31b1823d757fb6249038a1e05933e2e3be5a29d47471f2dc0f9ccfa0b2fa4/poster3.png" data-mid="162824338" border="0"  src="https://freight.cargo.site/w/1000/i/3ca31b1823d757fb6249038a1e05933e2e3be5a29d47471f2dc0f9ccfa0b2fa4/poster3.png" /&#62;
&#60;img width="2694" height="1798" width_o="2694" height_o="1798" data-src="https://freight.cargo.site/t/original/i/4fccb7b733c5f64730033124f9f3c29efa9bf403a719595e623cd6ab86948b12/dance2.png" data-mid="162201970" border="0"  src="https://freight.cargo.site/w/1000/i/4fccb7b733c5f64730033124f9f3c29efa9bf403a719595e623cd6ab86948b12/dance2.png" /&#62;

	Photo credit: Queenie Si,&#38;nbsp;Chris Kayden. Costume design: Moe Schell, Rose Xu.



	&#60;img width="2274" height="1176" width_o="2274" height_o="1176" data-src="https://freight.cargo.site/t/original/i/fea26376219feb60cf51152c63328c479c593de222abce25bf1057ef37957b25/Screen-Shot-2022-12-18-at-1.57.50-PM.png" data-mid="162258677" border="0"  src="https://freight.cargo.site/w/1000/i/fea26376219feb60cf51152c63328c479c593de222abce25bf1057ef37957b25/Screen-Shot-2022-12-18-at-1.57.50-PM.png" /&#62;Fig 1. Illustration of the two parallel Newtonian fluid worlds.&#60;img width="1944" height="1026" width_o="1944" height_o="1026" data-src="https://freight.cargo.site/t/original/i/30a41ca372c91972622e5d74feb7eca1830c0f05d907e081b1e37c6c27b49ca2/Screen-Shot-2022-12-18-at-9.53.35-PM.png" data-mid="162281728" border="0"  src="https://freight.cargo.site/w/1000/i/30a41ca372c91972622e5d74feb7eca1830c0f05d907e081b1e37c6c27b49ca2/Screen-Shot-2022-12-18-at-9.53.35-PM.png" /&#62;Fig 2. Illustration of the four abstract levels of Taoism cosmological notions.
&#60;img width="1318" height="1204" width_o="1318" height_o="1204" data-src="https://freight.cargo.site/t/original/i/cc34e69db8331cc530a04796029bc4df419262d98160e2a687c391077193e1cc/Screen-Shot-2022-12-18-at-9.51.41-PM.png" data-mid="162281692" border="0"  src="https://freight.cargo.site/w/1000/i/cc34e69db8331cc530a04796029bc4df419262d98160e2a687c391077193e1cc/Screen-Shot-2022-12-18-at-9.51.41-PM.png" /&#62;Fig 3. Four-phases acknowledging the in betweenness of the two-poles.

Tech Details&#38;nbsp;︎

&#60;img width="960" height="540" width_o="960" height_o="540" data-src="https://freight.cargo.site/t/original/i/87efc68447b1ee22cf8360bafa69dad0411d417df903baabbc6a5b0e0e36a1e2/Demosmall.gif" data-mid="162745266" border="0"  src="https://freight.cargo.site/w/960/i/87efc68447b1ee22cf8360bafa69dad0411d417df903baabbc6a5b0e0e36a1e2/Demosmall.gif" /&#62;Fig 4. Demo of water ripples on a video of clouds.
&#60;img width="1280" height="720" width_o="1280" height_o="720" data-src="https://freight.cargo.site/t/original/i/12d77281417d3f1af0e4a2f26db044e272ed3a8c8657aaf1fd32a711b6b86ab4/motionDetection.gif" data-mid="162744645" border="0"  src="https://freight.cargo.site/w/1000/i/12d77281417d3f1af0e4a2f26db044e272ed3a8c8657aaf1fd32a711b6b86ab4/motionDetection.gif" /&#62;
Fig 5. Water ripples on a slow cloud video (to be projected in the theatre), guided by the human body.
Motion Detection

In Taoism, one element of our physical world described by the Eight-Images (八卦) is the ocean. Taoists describe the ocean as a symbol of heaven, because its vastness can reflect what is above the clouds. Inspired, I filmed the clouds above the Bard College campus, where I lived for four years, and programmed water ripples onto the sky - creating an ocean surface that reflects heaven (see Fig. 4). In the performance, the stage is in the round, with the audience sitting on each side of the square stage. As the solo performer, I travel around the stage with the clouds projected on the floor and water ripples drifting behind me live. The code is developed in Processing. The algorithm is based on the 2D water simulation created by Neil Wallis and Rodrigo Amaya. The motion-detection component I developed myself using the OpenCV package (see Fig. 5), similar to what I used in the project 2022: Another Space Odyssey (Fig. 6). Recordings of the performance are coming in a couple of weeks - please stay tuned! :)
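The Wallis-style water effect keeps two height buffers: each frame, every interior cell becomes half the sum of its four neighbors in the current buffer, minus the cell's value in the previous buffer, with damping applied before the buffers swap. A sketch of that standard scheme (constants are illustrative, not taken from the actual Processing sketch):

```python
def ripple_step(curr, prev, damping=0.97):
    """One frame of the classic two-buffer water simulation:
    new height = (sum of 4 neighbors) / 2 - previous height, damped."""
    h, w = len(curr), len(curr[0])
    new = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbors = (curr[y-1][x] + curr[y+1][x]
                         + curr[y][x-1] + curr[y][x+1]) / 2
            new[y][x] = (neighbors - prev[y][x]) * damping
    return new

# Drop a disturbance in the middle of a 5x5 pool:
prev = [[0.0] * 5 for _ in range(5)]
curr = [[0.0] * 5 for _ in range(5)]
curr[2][2] = 100.0
nxt = ripple_step(curr, prev)
print(nxt[2][1])  # energy has spread to the neighboring cell
```

In the live version, the detected body positions inject disturbances into the current buffer, and the resulting height field displaces the cloud video's pixels.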



	︎︎︎Please scroll to the bottom to see the technical details in the left column︎︎︎&#38;nbsp;


What about Water?
I have not traveled home since freshman year because of the global pandemic restrictions, but I ended up spending a lot of time staring at the beautiful sky above Bard College’s 1000-acre campus. From a physics class, I learned that a Newtonian fluid is any substance whose internal shear stresses are proportional to the local rate of strain - how quickly the flow’s speed varies across space - and that air and water are both Newtonian fluids. I was never able to grasp this idea intuitively until watching the clouds this summer.

I realized how similar the two worlds are: the space between the sky and the ocean, the world that we live in, and the space between the sea level and the seafloor, the world of water creatures (see Fig. 1). Although at first glance the air world and the water world seem entirely different, humans and sea creatures alike breathe and live in Newtonian fluids. At the bottom of the air world, humans found fire, invented light, and started our civilization; at the bottom of the water world, sea creatures found luminescent symbiotic bacteria to light up their ground. At the top of the water world, the ocean waves never stop raging; at the top of the air world, the rumbling clouds are likewise a huge presence - both constantly reminding us how tiny we humans are… After studying the water cycle in ecology - the transformation among ocean, clouds, and rain - I became more curious about what lies behind the constant physical transformation of water, and about what the parallel worlds above and below sea level could teach me philosophically. Through research, the focus of this project narrowed to the connotations of water in Chinese indigenous philosophy and art.
Water as the Tao
Taoism is an influential indigenous Chinese school of philosophy whose roots trace back to the 4th century BCE. It defines the Tao (道) as the origin of everything and the ultimate principle underlying reality: the Tao is the natural order and the intrinsic rule of all events. It turns out that in Taoism, the physical cycle of water is a symbol of the notion of the Tao. Furthermore, the Tao describes all energy in the world as being in a constant cycle of transformation. In my personal interpretation, the entire Taoist philosophy is built upon one sentence, 

“道生一，一生二，二生三，三生万物” which is the first sentence in Chapter 42 of the classical text Tao Te Ching by the philosopher Lao Zi. Directly translating this sentence would give us,

“Tao begat one; One begat two; Two begat three; Three begat all things”.
Using the Yin-Yang Theory, I would interpret this sentence as,

“The Tao has produced Wu Ji (Non-Pole); Wuji has produced Liang Yi (Two-Poles or Yin-Yang); Liang Yi has produced Si Xiang (Four-Phases); Si Xiang has produced Ba Gua (Eight-Images)”.
Each phrase represents a step in understanding the real world, moving from a metaphysical perspective down to a physical one. Based on each phrase, I choreographed a dance to represent this system of understanding the universe; the piece is an attempt to embody Taoist thought, as well as a product of my experiencing a Taoist’s process of creating the system and a Taoist’s life of embodying it.
Trying not to include too many details here, I just want to elaborate on the third step, which particularly intrigues me. It says “Two begat three” (“二生三”), where the “two” refers to the Yin-Yang Poles and the “three” refers to the Four-Phases. The Four-Phases almost deny the previous level of understanding - the one-dimensional opposition of the “Two-Poles”. Instead, Taoists start to describe the world’s forces and transformations of energy in two dimensions, recognizing the in-betweenness rather than just the two extremes of the Yin-Yang poles. The Yin and Yang poles are referred to in the Four-Phases as Full Yin (老阴) and Full Yang (老阳), as I would translate them. It is noteworthy that none of the four phases is a stable equilibrium. All beings and all energy constantly transform in this cycle defined by the Tao, like how water cycles in the natural world. In sum, the Four-Phases argue that everything in the universe lies on the spectrum between the Yin-Yang Poles, the black and the white, or any pair of opposing extremes.
What about the Tao

After learning about Taoism and the inevitable disequilibrium described by the Four-Phases, I knew that I had been in the energetic Full Yang phase last semester while working on 2022: Another Space Odyssey, and that, according to the natural cycle, it is impossible to stay at the peak for too long. Perfection is rarely possible. Without knowing this Tao, at the beginning of this semester I was depressed about the previous imperfection and my low-energy state, so I drifted closer and closer to the Full Yin phase, almost forgetting my passion for life. Now I have bounced off the bottom of the valley and gained some Yang energy from nature, from the people around me, and from the creative process of this dance. Next time I reach the highest point of the cycle, I will be more careful with my energy; and when I bounce off the ceiling, I will try to descend with more peace and acceptance - like landing with a parachute in control of my speed, rather than in an uncontrolled free fall, as if kicked off a cliff. Such growth is what I gained from searching for the Tao in dance.

 Recordings of the performances are coming in a couple of weeks!&#38;nbsp;

&#60;img width="4200" height="6300" width_o="4200" height_o="6300" data-src="https://freight.cargo.site/t/original/i/4f56daceac386aefe97c68b12317240479d1c7f2b3e1f305b90e1984f0dc51ff/Senior_Dance_Concert_2022_12_7_435.jpg" data-mid="165459327" border="0"  src="https://freight.cargo.site/w/1000/i/4f56daceac386aefe97c68b12317240479d1c7f2b3e1f305b90e1984f0dc51ff/Senior_Dance_Concert_2022_12_7_435.jpg" /&#62;
&#60;img width="2698" height="1800" width_o="2698" height_o="1800" data-src="https://freight.cargo.site/t/original/i/ab49e5be0ac87c1e94341feccb3afb39e7c16768877c60c0ab72e15ef31fd12d/projectiononStage.jpg" data-mid="165459366" border="0"  src="https://freight.cargo.site/w/1000/i/ab49e5be0ac87c1e94341feccb3afb39e7c16768877c60c0ab72e15ef31fd12d/projectiononStage.jpg" /&#62;
&#60;img width="6300" height="4200" width_o="6300" height_o="4200" data-src="https://freight.cargo.site/t/original/i/b23807e6ed9f4b575bc3f83732849ecf1bf31a22b97fa7660f1679b9379b5c4b/Senior_Dance_Concert_2022_12_7_380.jpg" data-mid="165459330" border="0"  src="https://freight.cargo.site/w/1000/i/b23807e6ed9f4b575bc3f83732849ecf1bf31a22b97fa7660f1679b9379b5c4b/Senior_Dance_Concert_2022_12_7_380.jpg" /&#62;
&#60;img width="6300" height="4200" width_o="6300" height_o="4200" data-src="https://freight.cargo.site/t/original/i/74d027e29efbe5c3bdce2504c817eb2ba48bdcf1bd5f7f5040eab5e59c8f72aa/Senior_Dance_Concert_2022_12_7_407.jpg" data-mid="165459328" border="0"  src="https://freight.cargo.site/w/1000/i/74d027e29efbe5c3bdce2504c817eb2ba48bdcf1bd5f7f5040eab5e59c8f72aa/Senior_Dance_Concert_2022_12_7_407.jpg" /&#62;
&#60;img width="6300" height="4200" width_o="6300" height_o="4200" data-src="https://freight.cargo.site/t/original/i/1090775e9d1c78fdfe865584bb0c77f8e00dd45a03ba4b4d1c3095a6d50cb72d/Senior_Dance_Concert_2022_12_7_397.jpg" data-mid="165459329" border="0"  src="https://freight.cargo.site/w/1000/i/1090775e9d1c78fdfe865584bb0c77f8e00dd45a03ba4b4d1c3095a6d50cb72d/Senior_Dance_Concert_2022_12_7_397.jpg" /&#62;
Projects ︎
</description>
		
	</item>
		
		
	<item>
		<title>Collaboration and Workshop</title>
				
		<link>https://roseziyuxu.cargo.site/Collaboration-and-Workshop</link>

		<pubDate>Wed, 09 Nov 2022 04:17:59 +0000</pubDate>

		<dc:creator>Rose's Portfolio</dc:creator>

		<guid isPermaLink="true">https://roseziyuxu.cargo.site/Collaboration-and-Workshop</guid>

		<description>
	Collaboration and Workshop Sept 2022 - May 2023&#38;nbsp;
# Multimedia Theatre# Motion Capture# Creative Coding# Dance Choreography 
# Experimentalism Pedagogy&#38;nbsp;

	
We think actively and act thoughtfully. We pursue research with purpose. Our work results in clear, tangible, meaningful applications. We teach in order to make better thinkers who transform ideas into reality &#38;nbsp; &#38;nbsp; &#38;nbsp; &#38;nbsp;

	&#60;img width="2440" height="1796" width_o="2440" height_o="1796" data-src="https://freight.cargo.site/t/original/i/26dc34f78530d10acfbb434c52f408f9a2a1457ceedbc09b996d90a643f03109/Screen-Shot-2022-12-01-at-7.57.57-PM.png" data-mid="160768594" border="0"  src="https://freight.cargo.site/w/1000/i/26dc34f78530d10acfbb434c52f408f9a2a1457ceedbc09b996d90a643f03109/Screen-Shot-2022-12-01-at-7.57.57-PM.png" /&#62;Fig 1. The Luma Theatre at the Fisher Center for the Performing Arts - a great place for experimenting and sharing ideas. 

	
Bard College Dance Workshop
The Bard College Dance Program holds the Dance Workshop every Tuesday during the school year - it is a time and place for all Bard Dance faculty and students, as well as any enthusiasts, to gather for conversation on dance creation and performance. In March 2023, I (Rose Xu) am scheduled to lead a workshop session to share the ideas and tools from my adventure of finding the coalescence of science and art, as well as building the technical platform to enable it. I plan to use two spaces: a camera-projector system will be set up in the Luma Theatre, allowing some choreographers to experience and experiment with the motion capture and visual rendering tools that I have developed for my past multimedia projects; in the Thorne Dance Studio, the other participants will discuss creative ideas they have had in the past that couldn’t be realized with current technology, what tools would be needed to bring their imaginations to reality, the impacts and consequences of introducing new technologies to the traditional dance world, and so forth. At the end of the session, we will return to the theatre to see what the choreographers create with the new technology. After the workshop, I will be happy to discuss further how to adapt my tools for any interested choreographer and to develop new tools for their original needs.
 

	&#60;img width="1220" height="915" width_o="1220" height_o="915" data-src="https://freight.cargo.site/t/original/i/34787148f4c80cc13c8aa98d2a77f71d6807c1fa6175f1f3f4832898e74f9114/IMG_2969.jpg" data-mid="160768727" border="0"  src="https://freight.cargo.site/w/1000/i/34787148f4c80cc13c8aa98d2a77f71d6807c1fa6175f1f3f4832898e74f9114/IMG_2969.jpg" /&#62;Fig 2. (top center) Antonio Arvelo performing in Rose Xu’s dance piece Tārā.

	Antonio Arvelo, Bard ’23 Computer Science Major; Programmer, Dancer, Multimedia Designer 
Antonio and I (Rose Xu) have collaborated on scientific research and crafting performances over the past years. For his senior thesis, Antonio is now leading a research project on the interdisciplinary possibilities of machine learning and dance performance. He is interested in using dance as the input to his programs, as well as finding ways to generate dance as meaningful output of his original code. 
As Antonio’s collaborator, I am involved in the programming component of the project as well as the final dance performance scheduled for April 2023. The performance is designed to involve motion sensors, supported by software such as TensorFlow and OpenCV within Processing.
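For readers curious about the principle behind camera-based motion sensing, the simplest building block is frame differencing: comparing successive camera frames and counting the pixels that changed. The sketch below is a hypothetical, pure-Python illustration of that idea only - it is not the project’s actual code, which uses TensorFlow and OpenCV inside Processing, and the function names are my own invention.

```python
# Hypothetical sketch of frame differencing, the simplest form of
# camera-based motion detection. Frames are lists of rows of grayscale
# pixel values (0-255). Not the project's actual code.

def motion_mask(prev_frame, curr_frame, threshold=10):
    """Per-pixel mask: True where the grayscale value changed by more
    than `threshold` between two frames of equal size."""
    return [
        [abs(c - p) > threshold for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev_frame, curr_frame)
    ]

def motion_amount(prev_frame, curr_frame, threshold=10):
    """Fraction of pixels that moved - a single number a performance
    system could map to sound volume or projected visuals."""
    mask = motion_mask(prev_frame, curr_frame, threshold)
    flat = [px for row in mask for px in row]
    return sum(flat) / len(flat)

# Two tiny 2x3 "frames": one pixel brightens sharply, as if a dancer's
# arm swept through that region between frames.
frame_a = [[0, 0, 0], [0, 0, 0]]
frame_b = [[0, 200, 0], [0, 0, 0]]
print(motion_amount(frame_a, frame_b))  # 1 of 6 pixels changed
```

In a real performance system this per-frame number would be smoothed over time and mapped to a visual or sonic parameter; libraries such as OpenCV provide hardware-accelerated versions of the same differencing operation.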



	
Projects ︎
</description>
		
	</item>
		
	</channel>
</rss>