Living Symphonies

Daniel Jones and James Bulley. Next at Thetford Forest, 24–30 May 2014. www.livingsymphonies.com

Daniel Jones in Thetford Forest, UK, the first location for his installation Living Symphonies. Credit: Jade Hoffman/Nature

What is Living Symphonies?

It is an ever-changing piece of music that grows in the same way as a forest ecosystem, from the interactions of countless tiny elements. Each element reflects the activity of an individual organism, ranging from moss and fungi to deer and birds of prey. The outcome is an organic, emergent symphony comprising thousands of musical motifs, each portraying a different aspect of the ecosystem.

How do you build each forest installation?

We start with a survey of the animals and plants. We add behavioural insights from ecologists: what times of day is a blackbird active and foraging, how does it move, what are its preferred food sources? We feed this information into a computer model of the ecosystem (programmed in C++ with the Cinder visualization library) that simulates the second-by-second interactions between species. This model is linked to a custom piece of audio software that orchestrates music in real time from a vast array of motifs representing each species. The music is played through a network of speakers in the canopy and undergrowth.
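The kind of agent-based model described here can be sketched in C++, the language the installation uses. Everything below is an illustrative assumption rather than the project's actual code: the `Agent` struct, the daily activity curve, and the `tick` function are invented names standing in for a far richer simulation.

```cpp
// Minimal sketch of an agent-based ecosystem tick. All names here
// (Agent, activityAt, tick) are illustrative assumptions, not the
// installation's actual C++/Cinder code.
#include <algorithm>
#include <cmath>
#include <string>
#include <vector>

struct Agent {
    std::string species;   // e.g. "blackbird", "moss"
    double peakHour;       // hour of day when the organism is most active
    double activity;       // 0..1, drives the intensity of its motif
};

// Activity follows a simple daily curve peaking at the species'
// preferred hour, wrapping around midnight.
double activityAt(const Agent& a, double hour) {
    double d = std::fabs(hour - a.peakHour);
    d = std::min(d, 24.0 - d);              // distance on a 24 h circle
    return std::max(0.0, 1.0 - d / 12.0);   // linear falloff over half a day
}

// One second-by-second simulation tick: update each agent's activity,
// which the audio layer would then use to scale that species' motif.
void tick(std::vector<Agent>& forest, double hour) {
    for (auto& a : forest)
        a.activity = activityAt(a, hour);
}
```

In a real model of this kind, the tick would also handle movement and species interactions (foraging, predation); the point here is only the shape of the loop: simulate, then hand each agent's state to the music engine.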

How does the composition sound?

The motifs for each organism are drawn from fragments that we composed and recorded with orchestral musicians. Our goal is a work with such a great number of interdependent elements, in a nearly infinite combinatorial space, that even we are surprised by the patterns and permutations that arise. This 'emergent' approach runs through all of my work. Emergent phenomena pervade economics, ecology, linguistics and neuroscience; examples include the flocking of birds, and the interactions of neurons that give rise to cognition. By translating the dynamics of a forest ecosystem into music, Living Symphonies aims to heighten awareness of the adaptive and often creative behaviours of these complex systems.

Have any animals responded to the piece?

When we ran our first forest prototype last autumn, there was some worry that it would scare off the wildlife. Yet when we activated the speakers high in the trees, we were surprised to find that inquisitive wrens appeared, seemingly chirping along. We hope that the effects on wildlife are transient; each installation should be brief enough to have no lasting impact.

You have also created an installation about the weather. How does that work?

Variable 4 is an eight-speaker outdoor installation that translates weather conditions into musical patterns. It uses sensors to track temperature, humidity, wind, rain and sun. The weather acts as a kind of virtual conductor, with custom software using real-time data to generate harmonic structures. The installation has toured UK locations selected for their wild and unpredictable weather, including Dungeness in Kent, which has been designated a Site of Special Scientific Interest for its unusual geology and ecology. The next edition of Variable 4 can be heard from 5 to 14 September in Portland, Dorset.
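One way the "virtual conductor" idea could work is a direct mapping from sensor readings to harmonic parameters. The mapping below (temperature to register, wind to note density, rain to mode) is our own toy assumption for illustration, not Variable 4's actual algorithm.

```cpp
// Hedged sketch: weather readings steering harmonic structure.
// The specific mappings are assumptions, not the installation's code.
#include <algorithm>

struct Weather { double tempC, windMs, rainMmH; };

struct HarmonicState {
    int    rootMidi;   // base pitch: warmer weather -> higher register
    double density;    // notes per second: stronger wind -> busier texture
    bool   minorMode;  // rain nudges the harmony toward minor
};

HarmonicState conduct(const Weather& w) {
    HarmonicState s;
    // Map -5..35 degrees C onto MIDI notes 36..84 (C2..C6), clamped.
    double t = std::clamp((w.tempC + 5.0) / 40.0, 0.0, 1.0);
    s.rootMidi  = 36 + static_cast<int>(t * 48.0);
    s.density   = 0.5 + std::min(w.windMs, 20.0) * 0.25;
    s.minorMode = w.rainMmH > 0.5;
    return s;
}
```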

What about your work with genetic transfer in bacteria?

That came out of time I spent with computational biologists at the National Institute for Medical Research in London. They were investigating how bacteria swap genetic information with their neighbours through a mechanism called plasmid exchange. In my 2011 installation Horizontal Transmission, sounds in the gallery are detected by a microphone and transformed into 'sonic chromosomes', which are assimilated into the behaviour of a virtual bacterial population. Visitors can navigate through the population using a three-dimensional control interface, exploring cellular dynamics and communication patterns.
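The plasmid-exchange metaphor can be sketched very simply: a 'sonic chromosome' as a small feature vector derived from the microphone, passed between neighbouring virtual bacteria. The representation and function names below are assumptions for illustration only.

```cpp
// Illustrative sketch of 'sonic chromosomes' spreading by horizontal
// transfer. The data layout is an assumption, not the installation's code.
#include <vector>

using Chromosome = std::vector<double>;  // e.g. band energies from the mic

struct Bacterium {
    std::vector<Chromosome> genome;  // chromosomes accumulated so far
};

// Plasmid-like transfer: the donor shares a copy of its latest chromosome
// with a neighbour, so sounds heard in the gallery spread through the colony.
void transfer(const Bacterium& donor, Bacterium& recipient) {
    if (!donor.genome.empty())
        recipient.genome.push_back(donor.genome.back());
}
```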

How has Twitter inspired you?

The Listening Machine was a collaboration with cellist Peter Gregson, created with the Britten Sinfonia chamber orchestra. At its heart was an automated system that continuously generated music based on the real-time activity of a few hundred UK Twitter users. Linguistic software analysed their tweets for sentiment, rhythms of speech and subject matter, and translated them into musical patterns using orchestral fragments representing, for example, phonemes — distinct units of pronunciation. The resultant composition could be heard live through any web-connected device. It ran for nine months starting in May 2012, with a daily rhythm that reflected the real rhythms in a communicating society: peaks of musical density at rush hour and sparse, reflective periods late at night.
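The sentiment-analysis step described here can be caricatured with a word-list scorer that selects a motif character. The word lists and the major/minor mapping are toy assumptions; The Listening Machine's actual linguistic software was far more sophisticated.

```cpp
// Hedged sketch of a tweet-to-motif step: score sentiment from word
// lists, then choose a motif character. Word lists are toy assumptions.
#include <set>
#include <sstream>
#include <string>

int sentimentScore(const std::string& tweet) {
    static const std::set<std::string> pos = {"love", "great", "happy"};
    static const std::set<std::string> neg = {"hate", "awful", "sad"};
    int score = 0;
    std::istringstream words(tweet);
    std::string w;
    while (words >> w) {
        if (pos.count(w)) ++score;
        if (neg.count(w)) --score;
    }
    return score;
}

// Positive tweets select brighter (major) motifs, negative ones darker.
std::string motifFor(const std::string& tweet) {
    int s = sentimentScore(tweet);
    if (s > 0) return "major";
    if (s < 0) return "minor";
    return "neutral";
}
```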