Week 8: Combining Real-Time Audio and the 3D Particle Cloud in TouchDesigner
Step 1: Building a Base for Real-Time Audio Input and Particle Dynamics
In this stage, I focused on combining real-time audio data with the 3D particle system of the virtual plant model. My goal was to allow the plant’s biodata-driven sound to influence the shape, motion, and behavior of its digital counterpart.
To achieve this, I created a new Base component that acted as a container for sound analysis. Within this base, I used a CHOP input to bring in waveform data in real time. The incoming audio signals were processed through a series of math, filter, and logic nodes, which smoothed and scaled the data to suit visual modulation. The processed values were then mapped to parameters such as position offset, particle spread, and visual noise.
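As a sketch of what this mapping looks like in practice, the callback below shows how one smoothed audio value could be pushed onto particle parameters from a CHOP Execute DAT. The operator names (audio_smoothed, noise1, geo1) and the scaling factors are illustrative placeholders rather than the exact ones in my network.

```python
# CHOP Execute DAT attached to the smoothed audio CHOP (placeholder name 'audio_smoothed').
# Fires whenever the channel value changes and forwards it to the visual parameters.

def onValueChange(channel, sampleIndex, val, prev):
    # 'val' has already been smoothed and scaled by the math/filter/logic chain (roughly 0..1).
    noise = op('noise1')   # noise operator driving particle spread (assumed name)
    geo = op('geo1')       # component holding the particle geometry (assumed name)

    noise.par.amp = 0.2 + val * 1.5   # louder input -> wider particle spread
    geo.par.ty = val * 0.3            # small vertical position offset driven by the signal
    return
```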

Step 2: Creating a Visual Feedback Loop with Reset Control
To enhance the responsiveness and continuity of the 3D plant visualization, I added a feedback loop to allow the particle cloud to evolve over time. This created a visual memory effect, where the movement and form of the plant particles gradually changed rather than resetting abruptly.
I started with the in1 node, which receives the 3D geometry or particle cloud generated in the earlier steps. This output is passed through feedback1, which stores the previous frame and combines it with the current frame. As a result, the plant model begins to develop a visual trail, making its transformation feel more fluid and organic.
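Conceptually, the feedback stage blends each new frame with a decayed copy of the previous output. The sketch below expresses that idea in plain Python; the blend mode and decay value are assumptions for illustration, not the exact settings inside feedback1.

```python
# Minimal model of a feedback trail: each output frame mixes the new frame with
# the previous output, so motion leaves behind a slowly fading visual memory.

DECAY = 0.92   # assumed trail persistence per frame (0 = no trail, 1 = never fades)

def feedback_step(current_frame, previous_output):
    """Linear blend between the incoming frame and the decayed history."""
    return current_frame * (1.0 - DECAY) + previous_output * DECAY
```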
To allow for control over this system, I used an LFO and a trigger connected to a keyboard input. By pressing the number 1 key, I can reset the feedback loop. The final null2 node serves as the output of this loop and connects directly to the main render pipeline. This ensures that every visual element seen by the audience is the result of both the real-time biodata input and the accumulated visual history of the feedback loop.
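The reset itself can be scripted with a small callback. The version below assumes feedback1 is a Feedback TOP (whose reset pulse parameter is resetpulse) and that the keyboard/trigger signal arrives through a CHOP named null_reset; both names are assumptions about my setup rather than fixed requirements.

```python
# CHOP Execute DAT watching the keyboard/trigger channel (placeholder CHOP 'null_reset').
# When the '1' key channel turns on, clear the accumulated feedback.

def onOffToOn(channel, sampleIndex, val, prev):
    fb = op('feedback1')
    # Assumes a Feedback TOP; a different feedback operator would expose a differently named reset.
    fb.par.resetpulse.pulse()
    return
```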

Step 3: Processing Audio Channels to Drive Particle Behavior
To make the virtual plant visually respond to sound in real time, I built a signal processing network using CHOPs to handle incoming audio waveforms. This section of the system receives sound data and prepares it to control the particle cloud of the plant.
I began with the in3 node, which receives audio signals as two separate channels. These values were then passed through a logic node that helped filter out unwanted spikes and define thresholds for triggering visual changes. The logic output was split and processed using several math nodes (math2, math3, math4) to scale and normalize the values into ranges suitable for visual control.
To ensure smoother transitions, I used filter1 and filter2 to apply temporal smoothing to the sound data. This reduced jittery movement and created more fluid visual responses. Finally, I connected the filtered outputs to null nodes (null3, null5, null6) as reference points for connecting with the 3D visual pipeline.
Each processed channel could then be routed into different aspects of the 3D particle system, such as position offset, spread, or color intensity. This setup allowed the virtual plant to exhibit real-time reactions to sound, enhancing the immersive experience with continuous visual feedback that felt both organic and expressive.
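The per-channel conditioning performed by the logic, math, and filter nodes can be summarised in plain Python as a gate, a range remap, and a one-pole smoother. The threshold, ranges, and smoothing constant below are illustrative values, not the exact settings of the network.

```python
# Stand-alone sketch of the conditioning applied to each audio channel.

THRESHOLD = 0.05   # logic node: ignore values below this (noise gate)
SMOOTHING = 0.1    # filter1/filter2: 0..1, lower = smoother response

state = {'left': 0.0, 'right': 0.0}   # last smoothed value per channel

def remap(x, in_lo, in_hi, out_lo, out_hi):
    """Math-node style range remap with clamping."""
    x = min(max(x, in_lo), in_hi)
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def condition(name, raw):
    """Gate, scale, and smooth one channel so it is safe to drive visuals."""
    gated = raw if abs(raw) >= THRESHOLD else 0.0        # logic
    scaled = remap(abs(gated), 0.0, 1.0, 0.0, 2.0)       # math2/math3/math4
    state[name] += SMOOTHING * (scaled - state[name])    # one-pole smoothing
    return state[name]
```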

Step 4: Designing a Color Modulation System for the Plant Surface
To enhance the responsiveness of the visual system, I implemented a color change effect that allows the virtual plant’s surface to shift based on real-time signals. This not only increases visual variation but also communicates the plant’s internal activity more intuitively.
I started with a constant1 node, which provided a flat base color as the foundation for the final visual blend. This color signal was then sent into a sub1 node, where it was combined with incoming texture data. The subtraction operation introduced contrast and made room for modulation.
The result of this combination was passed through a math1 node to remap the values and then into multiply1, where the intensity could be adjusted dynamically. This path controlled the primary color modulation of the surface.
At the same time, I used an in2 node to import a second image source. This could be a depth map, an alpha texture, or a shader-driven visual. I processed this input using a chroma key node to isolate specific tones, followed by a threshold operation to further control what visual information is preserved or discarded.
Finally, both visual streams were merged using add2. This gave me a composited output that allowed the base color to blend with the dynamic input, creating a layered surface that reacts in real time. The result is a virtual plant whose color shifts and pulses according to changes in signal strength (sound).
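As a rough reference, the function below reproduces the same compositing chain per pixel with NumPy. The base colour, key colour, tolerance, and remap factor are assumptions chosen to make the example self-contained, not the values used in the actual TOP network.

```python
import numpy as np

def modulate_surface(texture, second_input, base_color=(0.1, 0.4, 0.2),
                     intensity=1.0, key_color=(0.0, 1.0, 0.0),
                     key_tolerance=0.3, threshold=0.5):
    """texture, second_input: float arrays of shape (H, W, 3) with values in 0..1."""
    base = np.array(base_color)

    # constant1 -> sub1: subtract the texture from the flat base colour
    primary = np.clip(base - texture, 0.0, 1.0)
    # math1 -> multiply1: remap the result and scale it by a sound-driven intensity
    primary = np.clip(primary * 2.0, 0.0, 1.0) * intensity

    # in2 -> chroma key -> threshold: keep only pixels far enough from the key colour
    distance = np.linalg.norm(second_input - np.array(key_color), axis=-1)
    matte = np.clip(distance / key_tolerance, 0.0, 1.0)      # soft chroma-key matte
    mask = (matte > threshold).astype(float)[..., None]      # hard threshold: keep or discard
    secondary = second_input * mask

    # add2: composite the two streams into the final surface colour
    return np.clip(primary + secondary, 0.0, 1.0)
```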

Step 5: Adding Sound-Reactive Noise to Control Particle Cloud Dispersion
In this step, I added a sound-reactive noise system to control the dispersion behavior of the particle cloud. The goal was to allow the shape of the plant to be disrupted or expanded in real time based on incoming sound frequencies and volume levels.
I used two noise nodes, noise1 and noise2, each with different parameters for amplitude and frequency. These noises were mapped to control the position offsets of the plant particles. One of them produces a more subtle flow, while the other generates a more intense distortion.
To dynamically select between the two, I introduced a switch node. This allowed the system to alternate between different noise behaviors based on conditions such as the intensity of the soundwave or predefined logic. For instance, when the amplitude of the soundwave increases, the system could switch to a more chaotic dispersion mode.
In this way, the period of the soundwave influences the speed of dispersion, and the amplitude controls the degree of distortion. This gives the plant a behavior that visually responds to its own sonic presence, reinforcing the impression that it is actively reacting and participating in its virtual environment.
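A small callback is enough to implement this switching behaviour. The sketch below assumes the analysed level arrives through a CHOP called null_level and that the switch node is switch1; the threshold and scaling are placeholder values.

```python
# CHOP Execute DAT on the analysed audio level (placeholder CHOP 'null_level').

SWITCH_LEVEL = 0.6   # assumed amplitude above which the chaotic noise takes over

def onValueChange(channel, sampleIndex, val, prev):
    # Choose between noise1 (subtle flow) and noise2 (intense distortion).
    op('switch1').par.index = 1 if val > SWITCH_LEVEL else 0

    # Louder input also increases the strength of the distortion itself.
    op('noise2').par.amp = val * 2.0
    return
```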

Step 6: Outputting the Final Particle Cloud to the Visual System
After completing the audio input processing, visual feedback loop, and color modulation system, I connected all the visual outputs into a final rendering chain. This last step ensures that the transformed 3D particle system can be visualized in real time and output to a screen or projection environment.
I used an add1 node to combine multiple visual streams that had been modulated by biodata and sound. This merged result was passed into a pointTransform node, which applied transformation values such as scaling, position shifts, and rotational movement to the point cloud. These transformations were controlled by the incoming signals from previous steps, allowing the particle structure to remain reactive and alive.
The transformed data was then passed through a null1 node for stability and clarity before being routed into out1. This final output node made the fully processed visual data accessible to the rendering engine or external systems, completing the visual pipeline.
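For reference, parameters on the pointTransform node can also be driven with small Python expressions rather than wired CHOP exports. The two expressions below (for a rotation and a uniform scale) assume the processed signal is available in a CHOP named null_level with a channel chan1; both names are placeholders.

```python
# Expression for a rotation parameter: slow spin whose speed rises with the signal level.
absTime.seconds * 10 * (1 + op('null_level')['chan1'].eval())

# Expression for a uniform scale parameter: the cloud breathes with the smoothed level.
1 + op('null_level')['chan1'].eval() * 0.5
```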
At this point, the entire digital plant becomes fully generative and responsive, shaped continuously by the flow of biodata, audio, and internal feedback. The system is now ready to be experienced as an immersive, real-time installation where the plant expresses itself through light, motion, and sound.

Test Outcome:
1. Without real-time audio input:


2. With real-time music input, to test the particle cloud effect:

