Week 1 to Week 4: Idea Brainstorm & Research Process
1. Introduction:
People often perceive plants as static and passive beings, but research suggests that plants have a language of their own. In my research, numerous studies showed that under stress such as drought, injury, or environmental pressure, plants emit ultrasonic sounds, produce bioelectrical signals, and communicate through colour changes and chemical cues. For instance, trees release chemical compounds to warn nearby peers of approaching pests, and flowers can even “hear” the sounds of pollinators, adjusting nectar concentration in response to their environment.
Plants are more than mere decoration: they regulate temperature, purify the air, relieve stress, and play a vital role in ecosystems and climate regulation. Yet do we truly understand their expressions? This led me to ask: if plants could “speak,” would human behaviour change?
Incorporating this question into my project design, my intention is to collect and analyse plants’ bioelectrical signals, translating them into virtual forms that allow audiences to “listen” to plants up close. In the virtual space, plants are no longer passive ornaments but active “speakers” that respond to their surroundings.
2. Objective:
My project aims to create a dialogue between humans and nature, deepening our understanding of plants’ significance in ecological systems. By visualising plants and making their expressions perceptible, my project encourages the audience to reflect: if plants could talk, would we be willing to listen? In modern society, have we overlooked beings whose voices we cannot hear, or species that communicate in a language different from ours? When plants’ expressions become perceptible, will we reconsider our relationship with nature and technology? This is not just a question of ecological balance; it also concerns how we perceive the “other” in our world.
3. Research Content:
Plants are often thought of as silent beings that endure harsh environments and severe damage without any apparent response. However, a study by Khait et al. (2023) discovered that plants such as tomato and tobacco emit airborne sounds when under stress. The researchers compared tomato and tobacco plants under drought stress, stem cutting, and normal cultivation, and found that the stressed plants produced significantly more sounds than the comparison group. Because these sounds fall within the ultrasonic range of 20–150 kHz, beyond the limits of human hearing, people cannot perceive them in daily life; animals such as mice and bats, however, can detect these distress calls.
Based on my further research, a study by Veits et al. (2019) demonstrated that plants can also respond to environmental sounds. For example, the evening primrose can detect the wing vibrations of bees and respond by producing sweeter nectar, potentially increasing its chances of attracting pollinators from a distance (Fig 1).
Plants are natural sensors, capable of detecting light, temperature, humidity, and chemical changes in their environment. Samuelsson (2016) has shown that plants interact with the world through their root systems, leaves, and volatile compounds; these complex behaviours can be seen as their “language.”

4. Possible Methodology:
A) Arduino Device Creation
During my research process, I revisited the questions I had previously explored and used them as a framework to investigate potential directions for project implementation. Along the way, I discovered a device called PlantWave, which collects plants’ bioelectrical signals and converts them into MIDI music, offering a sound-based approach to expressing plant-human interaction.
Further into my research, I found inspiration in a YouTube video titled Plant Sound Installation, Synthetic Garden, Biodata Sonification Kit. In the video, Arduino devices and electrode pads are used to collect signals from plants: the voltage is amplified and fed into an Arduino to be digitised, and the resulting numeric values are passed to oscillators, producing a dynamic soundscape. These signals reflect the plant’s responses to environmental changes such as humidity, temperature, light, and touch.
This appeared to be a feasible methodology for my project, especially given my prior experience with Arduino in interactive installations. Motivated by this, I began researching how Arduino could be used to collect plant bioelectrical data. I found two practical tutorials, one from Electricity for Progress and another from the Autodesk Instructables website (Figure 2 and Figure 2b). Both provided clear, step-by-step guidance on building an Arduino-based device for capturing plant signal data. These tutorials translated my conceptual ideas into actionable steps, bridging the gap between abstract research and hands-on implementation. They offered not only circuit diagrams and code samples but also insights into how plant signals can be stabilised, amplified, and mapped to MIDI output, making previously intangible aspects of plant activity accessible and expressible through digital media.
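To make the mapping idea concrete for myself before building the circuit, I sketched the core signal-to-note logic in Python. This is only an illustration of the kind of mapping the tutorials describe (their own code runs on the Arduino itself); the note scale, threshold, and the 0–1023 reading range are my own placeholder assumptions.

```python
# Illustrative sketch only: the tutorials implement this logic on the Arduino.
# The scale, threshold, and value range below are placeholder assumptions.

SCALE = [60, 62, 64, 67, 69, 72, 74, 76]  # MIDI note numbers forming a simple scale

def reading_to_midi_note(reading, min_val=0, max_val=1023):
    """Map a raw analog reading (e.g. 0-1023 from an Arduino ADC) onto the scale."""
    reading = max(min_val, min(max_val, reading))
    index = int((reading - min_val) / (max_val - min_val) * (len(SCALE) - 1))
    return SCALE[index]

def change_to_velocity(previous, current, threshold=10):
    """Trigger a note only when the signal changes enough; bigger changes play louder."""
    delta = abs(current - previous)
    if delta < threshold:
        return None                     # signal too stable, no note
    return min(127, 40 + delta)         # clamp to the MIDI velocity range

# Example: a mid-range reading maps to note 67; a jump of 50 gives velocity 90.
print(reading_to_midi_note(512), change_to_velocity(500, 550))
```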


B) Python Transfer into Sound Output
After closely studying the tutorials from Electricity for Progress and Autodesk Instructables, I realised that the Arduino-based device is limited to converting plant signals into MIDI note output; it does not produce sound directly. This led me to explore how MIDI notes can be translated into actual sound output.
Since I had no prior experience with MIDI synthesis or with how MIDI data is structured, I first took time to understand how MIDI works. I learned that MIDI does not carry audio itself; rather, it sends digital instructions to a synthesiser or software instrument. A standard MIDI message includes key components such as Note On/Off, Control Change (CC), Channel, Note Number, and Velocity (Figure 3). These elements determine when a note begins and ends, what pitch it represents, how forcefully it is played, and which MIDI channel it belongs to, altogether forming a communication system between devices and sound generators.
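To check my understanding of that message structure, I wrote out the byte layout of the three message types in Python. This is a small learning sketch of the standard MIDI byte format rather than code taken from any tutorial.

```python
# Minimal sketch of how the MIDI messages described above are built.
# Each message is a status byte (message type + channel) followed by data bytes.

def note_on(note, velocity, channel=0):
    """Note On: status 0x90 ORed with the channel, then note number and velocity."""
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """Note Off: status 0x80 ORed with the channel; velocity is left at 0 here."""
    return bytes([0x80 | channel, note & 0x7F, 0])

def control_change(controller, value, channel=0):
    """Control Change (CC): status 0xB0, then controller number and value."""
    return bytes([0xB0 | channel, controller & 0x7F, value & 0x7F])

# Middle C (note 60) at velocity 100 on channel 1 (index 0):
print(note_on(60, 100).hex())   # '903c64'
print(note_off(60).hex())       # '803c00'
```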

This understanding helped me grasp the logic behind translating plant bio-signals into musical expressions. It also laid the foundation for my next step: finding a method to convert MIDI notes into audible sound using software synthesis tools.
Building on this understanding, I began exploring practical methods for converting MIDI note data into actual sound output. Through further research, I discovered that PySerial can be used to read real-time data from the Arduino over its serial port, the same stream shown in the Arduino Serial Monitor (Fig 4). This allowed me to capture the MIDI-related signals being generated.
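As a first test, a minimal PySerial read loop is enough to see the plant values arriving in Python. The port name and baud rate below are placeholders and have to match the actual Arduino setup.

```python
import serial  # PySerial

# Placeholder values: the port name differs per machine (e.g. "COM3" on Windows)
# and the baud rate must match the one set in the Arduino sketch.
PORT = "/dev/ttyUSB0"
BAUD = 9600

with serial.Serial(PORT, BAUD, timeout=1) as arduino:
    while True:
        line = arduino.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue                    # nothing arrived before the timeout
        print("plant signal:", line)    # raw value streamed by the Arduino
```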
At the same time, I came across FluidSynth, a real-time software synthesiser that renders MIDI note data into audible sound using SoundFont files (Fig 4b). These files contain collections of instrument samples that define how each MIDI note should sound. By combining PySerial with FluidSynth, I was able to design a workflow in which plant-generated signals are read from the Arduino, processed in Python, and converted into musical tones in real time. This became a crucial step in transforming invisible plant activity into an audible and interactive experience.
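The sketch below shows roughly how I expect the two libraries to fit together, using the pyFluidSynth bindings. It assumes the Arduino prints one MIDI note number per line and that a General MIDI SoundFont file is available locally; the SoundFont name, serial port, audio driver, and timing values are placeholders.

```python
import time
import serial        # PySerial: reads the Arduino's serial stream
import fluidsynth    # pyFluidSynth: renders MIDI notes through a SoundFont

# Placeholder assumptions: SoundFont file, serial port, and audio driver
SOUNDFONT = "FluidR3_GM.sf2"
PORT, BAUD = "/dev/ttyUSB0", 9600

fs = fluidsynth.Synth()
fs.start(driver="alsa")            # e.g. "coreaudio" on macOS, "dsound" on Windows
sfid = fs.sfload(SOUNDFONT)
fs.program_select(0, sfid, 0, 0)   # channel 0, bank 0, preset 0

with serial.Serial(PORT, BAUD, timeout=1) as arduino:
    while True:
        line = arduino.readline().decode("ascii", errors="ignore").strip()
        if not line.isdigit():
            continue                   # skip noise and empty reads
        note = int(line)
        fs.noteon(0, note, 100)        # play the plant-driven note
        time.sleep(0.3)                # hold it briefly
        fs.noteoff(0, note)
```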


C) Blender 3D Modelling and Visualisation
With the sound output pathway established, I shifted my focus to exploring the possibilities for visual output. My idea was to create a virtual plant model in Blender and import it into TouchDesigner to serve as the visual component of the installation.
Drawing on my prior experience with Blender modelling and deploying visuals in TouchDesigner, I began experimenting in Week 4 with how the output might look and feel, using AI tools to produce concept sketches. This involved refining the aesthetic of the virtual plant and envisioning how its form and motion could dynamically respond to the real-time sound signals generated from plant bioactivity.
D) TouchDesigner Audiovisual Combination
At this stage, I began exploring how to link the sound output to the visual behavior of the virtual plant in TouchDesigner. My goal was to allow the audio (generated from plant bioelectrical signals) to drive dynamic changes in the shape, motion, or visual properties of the plant model.
To achieve this, I researched different audiovisual interaction techniques within TouchDesigner. I studied tutorials that demonstrated how audio waveforms and frequency data could be used to manipulate 3D geometry, particle systems, and shaders. These resources provided me with a variety of methods for mapping sound to visual transformation, such as using audio amplitude to trigger scale deformations or filtering frequencies to drive subtle oscillations in the plant’s form.
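As a concrete starting point, this kind of audio-reactive behaviour can be scripted in TouchDesigner with a CHOP Execute DAT. The sketch below assumes an audio level/analysis CHOP drives the DAT and that the imported plant model sits in a Geometry COMP I have called 'plant_geo'; the operator name and mapping ranges are my own placeholders, not values from the tutorials.

```python
# CHOP Execute DAT callback in TouchDesigner (Python).
# Assumes an audio level/analysis CHOP drives this DAT and that the plant
# model sits in a Geometry COMP named 'plant_geo' (placeholder name).

def onValueChange(channel, sampleIndex, val, prev):
    plant = op('plant_geo')

    # Map loudness (roughly 0..1) to a gentle "breathing" uniform scale of 1.0-1.3.
    level = min(max(val, 0.0), 1.0)
    plant.par.scale = 1.0 + level * 0.3

    # Sudden jumps in loudness nudge the rotation, so bursts of plant
    # activity read as visible movement.
    plant.par.ry += (val - prev) * 20
    return
```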
By experimenting with these techniques, I aim to design a system where the virtual plant becomes a living, responsive entity—visually echoing the invisible biological signals of the real plant in real time.
References:
https://en.wikipedia.org/wiki/Plant_bioacoustics
https://www.instructables.com/Send-and-Receive-MIDI-with-Arduino
https://wiki.python.org/moin/PythonInMusic
https://electricityforprogress.com
https://web.archive.org/web/20241231134905/https://electricityforprogress.com/biodata-breadboard-kit/
https://www.instructables.com/Biodata-Sonification
https://www.instructables.com/Simple-MIDI-Buzzer
https://www.audiolabs-erlangen.de/resources/MIR/FMP/C1/C1S2_MIDI.html