
Genesis


Tech:
# Emotion Recognition
# Machine Learning
# Computer Vision
# Python, TouchDesigner

Art and Science:
# Psychology
# Video Installation
# Performance Art
# Data-Driven Art




An exploration of artificial life through data-driven video installation, live performance, and system art

Dimensions: space: 2 m × 2 m; video display: 62 cm × 38 cm

Dec 2nd - 4th 2024, DXARTS Gallery
Mar 4th - 6th 2025, School of Art Gallery 10D

Department of Digital Arts & Experimental Media (DXARTS)
University of Washington
Seattle, WA, USA

Project Description


Genesis is an interactive installation that bridges art and technology to explore artificial life through data-driven video, system art, and live performance. Integrating Python-based emotion recognition with TouchDesigner, the system captures and responds to facial expressions in real time.

Inspired by The Artist Is Present (2010) by Marina Abramović and philosopher Emmanuel Levinas’s notion of the face as the most expressive site of the Other, Genesis creates a one-on-one encounter between the viewer and a screen displaying soundless portraits. The work is installed in an intimate corner of the DXARTS gallery, where a webcam records the viewer’s face and feeds the data into a machine learning system that selects response videos based on perceived emotion.
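
The selection step itself can be small. A minimal sketch in Python, assuming a hypothetical archive laid out as one folder of clips per emotion label (paths and names here are illustrative, not the project’s actual structure):

    import random
    from pathlib import Path

    # Hypothetical layout: archive/happy/*.mp4, archive/sad/*.mp4, ...
    ARCHIVE = Path("archive")

    def select_response_clip(dominant_emotion: str) -> Path | None:
        """Pick a random pre-recorded portrait clip matching the perceived emotion."""
        pool = sorted((ARCHIVE / dominant_emotion).glob("*.mp4"))
        return random.choice(pool) if pool else None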

Rooted in definitions of artificial life from DXARTS 200 Machine Art Lecture II, Genesis simulates face perception, emotional inference, and introjection, the unconscious identification through which affect transfers between entities. This bidirectional exchange allows the system to ‘perform’ uniquely for each participant, creating ephemeral moments of connection.

With audience consent, the system can record novel expressions and add them to its growing archive, allowing the work to evolve over time—it is alive.
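
This Data Growth step can be sketched as a consent-gated recording routine. The sketch below reuses the hypothetical archive layout from above and assumes the target folder already exists; nothing is written unless consent is given:

    import time
    import cv2

    def record_expression(cap: cv2.VideoCapture, label: str,
                          seconds: float = 5.0, consent: bool = False) -> None:
        """Record a short clip of the participant into the emotion archive."""
        if not consent:
            return  # no frame is ever saved without explicit audience consent
        w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
        h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
        writer = cv2.VideoWriter(f"archive/{label}/{int(time.time())}.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"), 30.0, (w, h))
        end = time.time() + seconds
        while time.time() < end:
            ok, frame = cap.read()
            if ok:
                writer.write(frame)
        writer.release()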


Prototype



Figure 1. Prototype flowchart for Genesis, an interactive emotion-based video system. This diagram outlines the system’s structure, including three main loops: Standby, Interaction, and Data Growth. Real-time facial analysis is performed using OpenCV and DeepFace, with emotion scores transmitted to TouchDesigner via OSC to trigger responsive video playback. With audience consent, new emotion clips are recorded and added to a growing archive, allowing the system to evolve over time.
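
As a rough approximation of the Interaction loop, a few lines of Python can capture a webcam frame, score it with DeepFace, and forward the scores to TouchDesigner over OSC. The port and address pattern below are assumptions; recent versions of DeepFace return one result dictionary per detected face:

    import cv2
    from deepface import DeepFace
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient("127.0.0.1", 7000)  # TouchDesigner listens on this port
    cap = cv2.VideoCapture(0)

    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        # enforce_detection=False keeps the loop alive when no face is present,
        # which corresponds to the Standby state.
        results = DeepFace.analyze(frame, actions=["emotion"], enforce_detection=False)
        scores = results[0]["emotion"]  # e.g. {"happy": 93.1, "sad": 0.4, ...}
        for emotion, score in scores.items():
            client.send_message(f"/emotion/{emotion}", float(score))
        client.send_message("/emotion/dominant", results[0]["dominant_emotion"])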

Development



Figure 2. Excerpt of the data processing pipeline. The left panel shows a modular folder structure, including Python scripts for video handling and OSC communication. The right panels display key code components and terminal output, highlighting real-time emotion detection, recording status, and system feedback during performance.
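
On the receiving side, TouchDesigner can expose the incoming values as channels through an OSC In CHOP, or handle them in Python through an OSC In DAT callback. A minimal sketch of the latter, with hypothetical operator names (the emotion labels match DeepFace’s seven categories):

    # Callbacks script attached to an OSC In DAT inside TouchDesigner.
    EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

    def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
        # Route the dominant emotion to a Switch TOP that picks the response video.
        if address == "/emotion/dominant" and args and args[0] in EMOTIONS:
            op("switch1").par.index = EMOTIONS.index(args[0])
        return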



Figure 3. Installation diagram of Genesis in the DXARTS Gallery.


References


From Totality and Infinity by Emmanuel Levinas, “The face opens the primordial discourse whose first word is obligation… It is that discourse that obliges the entering into discourse. The Other faces me and puts me in question and obliges me.”

From A Cyborg Manifesto by Donna Haraway, “The boundary between human and machine is thoroughly breached; we are living in a world where we are both and neither.”

From The Language of New Media by Lev Manovich, “In new media, the content is an interface; the interface becomes content.”

From A Sea of Data: Apophenia and Pattern (Mis-)Recognition by Hito Steyerl, “In the age of machine vision, humans are no longer the measure of things… but rather, they are a data resource to be quantified and classified.”

From Atlas of AI by Kate Crawford, “Artificial intelligence is not artificial or intelligent. It is made from natural resources and human labor, embedded with histories, and entangled with the lives of all who encounter it.”



Photo Documentation



Figure 4. Demo of backstage facial analysis (left) and final video feed (right). The left panels show real-time emotion detection via Python, including scores and overlays; the right panels display the corresponding TouchDesigner output. The top row captures “standby” mode with no audience present, while the bottom shows active interaction, with a “sad” expression reflected in the visual response.


Figure 5. Close-up of the final video feed. The top panel shows a pre-recorded face from the original archive, representing the artificial life at launch; the bottom panel features a clip added post-show, capturing a participant and marking the system’s evolution as audiences become performers.


Figure 6. Additional photo documentation from the audience’s perspective mid-show. On Dec. 4th, the monitor was rotated to a vertical orientation on Laura’s advice. The 28-inch 16:9 monitor measures 24.4 inches (62 cm) along its long edge (28 × 16 / √(16² + 9²) ≈ 24.4 in), so the head portrait videos appear at close to human scale.

Rose (Ziyu) Xu
zx303@cam.ac.uk        Cambridge, UK