Sonify: music and sound art

Reviews 18.07.2023

Sonification - the use of non-verbal sound to convey information or to make data perceptible, to take the most common definition - is now present just about everywhere, in science and technology as much as in artistic practices (sonic and otherwise). Its first uses, however, go back a long way (the principle of the Geiger counter was devised in 1913), and its use in music dates back to the 1960s. Starting from a few examples borrowed, for the most part, from sound art, we will try to draw out some threads in the labyrinth of sonification.

Between January 5 and 31, 1969, American artist Robert Barry took part in a group show organized by Seth Siegelaub in two rooms of a New York apartment building. What he exhibited was neither visible nor audible. The catalog, available in one of the two rooms, listed the contents of his installation: a 40 kHz ultrasonic wave, two radio waves (AM and FM, radiated by transmitters hidden in a cupboard), radioactive waves produced by a sample of barium-133 buried in Central Park and a radio wave emitted from an apartment in the Bronx to a radio amateur in Luxembourg.

In 2003, German artist Christina Kubisch, after a long period working with electromagnetic induction devices, developed wireless headphones sensitive to electromagnetic waves (by means of an induction coil). Since then, she has been organizing urban walks in which participants are invited to listen to the inaudible waves in which we are unknowingly immersed.

On May 5, 1965, at a concert organized by John Cage at Brandeis University's Rose Art Museum, Alvin Lucier presented Music for Solo Performer, for enormously amplified brain waves and percussion. The work consisted of "enormously" amplifying the alpha waves generated by the composer's brain, so that their vibrations, transmitted through loudspeakers, excited various percussion instruments arranged in the concert hall. While Alvin Lucier concentrated, electrodes attached to his temples, John Cage, acting as his assistant, distributed the signals in space.

In October 2001, American artist Greg Niemeyer exhibited an installation entitled Oxygen Flute at the Kroeber Museum at the University of California, Berkeley. Visitors entered a bamboo-filled room, where four computer-modeled flutes could be heard playing through four loudspeakers. The sound of the flutes depended on the level of carbon dioxide present in the chamber, with sensors recording its concentration twice per second. The flutes reacted not only to the breathing of visitors in the chamber, which produced short-term fluctuations, but also to the slow evolution of carbon dioxide levels due to the joint presence of humans and bamboo. The rise in the level during visiting hours was matched by the daily cycle of photosynthesis. The flutes were modeled on bone instruments excavated from a Neolithic site in central China. The data collected by the sensors (the instantaneous level of carbon dioxide, its averages and its evolution over time) were translated into sounds via an algorithm comprising six parameters (breath, noise, embouchure, fingering, portamento, mute). In a nutshell, Oxygen Flute is a computer-controlled interactive musical environment that makes audible the exchange of gases between visitors and the atmosphere in which they are immersed: one of the installation's explicit aims was to make us aware of the effects our mere presence produces on our environment. As Greg Niemeyer puts it: "Our globe is nothing but a much larger container." (1)
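To make the logic of such a mapping concrete, here is a minimal sketch of how a CO2 reading might be scaled onto a few of the six synthesis parameters named above. The value ranges, formulas and parameter choices are illustrative assumptions, not Niemeyer's actual algorithm.

```python
# Hypothetical sketch of a data-to-parameter mapping, in the spirit of
# Oxygen Flute: an instantaneous CO2 level (in ppm) is normalized and
# scaled onto three of the flute-model controls. All ranges are invented.

def co2_to_flute_params(ppm, baseline=400.0, peak=1200.0):
    """Scale a CO2 reading into normalized [0, 1] synthesis controls."""
    # Clamp the normalized level so out-of-range readings stay valid.
    level = max(0.0, min(1.0, (ppm - baseline) / (peak - baseline)))
    return {
        "breath": 0.2 + 0.8 * level,  # more CO2 -> stronger breath pressure
        "noise": 0.5 * level,         # breathier tone as the room fills up
        "mute": 1.0 - level,          # heavily damped when the air is fresh
    }
```

Run twice per second on sensor readings, a mapping of this kind is what lets the installation respond both to a visitor's breath and to the slow daily cycle of photosynthesis.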

Sonification vs. audification

Robert Barry is undoubtedly a conceptual artist, but the works he exhibits could not be more material. The waves I have described are indeed detectable, provided you have the right equipment - radio receivers, ultrasound-sensitive microphones and Geiger counters. These devices sonify and make audible what is not audible. However, it is important to distinguish ultrasound from radio waves and radioactivity: the former is sound, though inaudible to the human ear, while the latter are not sound at all. As a first approximation, we can say that the microphone audifies and that the radio receiver and Geiger counter sonify (the latter was one of the first measuring instruments to use sonification).
Christina Kubisch's augmented headphones sonify the electromagnetic waves that pass through them, whether of human origin (wi-fi, cell phones, electric fields, radar systems, street lighting, etc.) or not (waves from the ionosphere, such as those emitted by thunderstorms).

WAVE CATCHER - excerpt from Christina Kubisch on Vimeo.

Music for Solo Performer amplifies and transduces the very low-frequency waves (between 8 and 12 Hz) emitted by the composer-performer's brain, then passes their vibrations on to the instruments. The waves do not become audible as such, only through their mechanical effects, which is a way of preserving their inaudibility (and therefore their strangeness): Alvin Lucier sonifies but does not make this sonification directly audible. He also devoted two works to the electromagnetic waves produced by the earth's atmosphere: Whistlers (1966) and Sferics (1981). Both sonify signals captured by radio antennas. Whistlers are very low-frequency radio waves, usually produced by lightning, which travel through the ionosphere at the speed of light.
Oxygen Flute is based on the sonification of atmospheric data. The aim is not to make an inaudible wave audible, but to translate a set of data into sound. This requires an algorithm and fine-tuning (as we have seen). Greg Niemeyer considers himself a "data artist".


According to the most common definition, sonification is the use of non-verbal sound to convey information or make data perceptible (this is what we'll call sonification in the broad sense). However, the preceding examples invite us to distinguish between two forms: sonification (in the narrow sense) and audification. While the latter is a transduction, the former is more akin to translation.
Transduction can operate by degree (e.g. from an inaudible sound to an audible one, which presupposes a transducer device) or by kind (when one type of energy is converted into another, e.g. from an electromagnetic wave to a sound wave), but in all cases its operation is analogical, involving a proportional and continuous relationship between two quantities. (2)
Sonification, on the other hand, is the passage from one order into another: from digital data into audible sound. Its operation is symbolic in nature and presupposes a translation, and therefore a work of interpretation and, in most cases, of simplification.
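The distinction can be sketched in code. In audification, the data stream itself becomes the waveform, sample by sample (a proportional, analogical relation); in sonification, each datum is translated into a chosen musical parameter, here a pitch, which requires interpretive decisions (which parameter, which range). A minimal illustration, with invented data and ranges:

```python
def audify(samples, gain=1.0):
    """Audification: treat the raw data stream itself as a waveform.
    Each datum becomes one audio sample, only rescaled - a proportional,
    continuous (analogical) relation between the two quantities."""
    peak = max(abs(s) for s in samples) or 1.0
    return [gain * s / peak for s in samples]

def sonify(data, low=220.0, high=880.0):
    """Sonification: translate each datum into a symbolic parameter,
    here a pitch in Hz within a chosen range - an interpretive mapping
    in which the range and the parameter are the translator's choices."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0
    return [low + (d - lo) / span * (high - low) for d in data]
```

The choice of a two-octave range (220-880 Hz) is arbitrary here, which is precisely the point: sonification always involves such decisions, whereas audification merely rescales.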

American artist Andrea Polli prefers the term "geosonification" to describe the sonification of data ("data sonification") from the "natural world". (3) She sees this practice as a continuation of the soundscape, which involves recording and audification. One of Hildegard Westerkamp's most famous soundscapes, Kits Beach Soundwalk (1989), is based on the amplification of barely audible sounds, in this case those of feeding barnacles (small crustaceans that live on coastal rocks). Geosonification enables Andrea Polli to reconstitute, on a human scale, landscapes or phenomena that would otherwise be impossible to take in as a whole, such as a series of several hundred thunderstorms scattered along the East Coast of the United States (Atmospherics/Weather Works, 1999-2001, in collaboration with meteorologist Glenn Van Knowe) or the atmospheric conditions and climatic transformations at the North Pole between 2003 and 2006 (N., 2007, in collaboration with Joe Gilmore). In Atmospherics/Weather Works, she used the digital data streams generated by the storm models to construct the shapes and curves of the sounds, rather than the sounds themselves. One of the benefits of this parameterization was to make audible fine temporal structures that meteorologists had previously only been able to imagine. (4) For Andrea Polli, geosonification is also a way of building a link with an environment or situation that is difficult to grasp, and of creating the conditions for political awareness.

Music and black holes

Today, sonification has countless applications and uses, from astronomy and medicine to geology and the automobile. In 2022, NASA put online a sonification of the massive black hole at the center of the Perseus cluster, based on the ripples the black hole produces in the hot gas surrounding it. (5) One example among many. Astrophysicist Philippe Zarka, who has made a specialty of this, has entitled the page devoted to his sonifications on the Laboratoire d'études spatiales website "Les chants du Cosmos" ("Songs of the Cosmos"). (6) Even if he does not seek to musicalize the radio signals he converts into sound (his practice is as literal as possible), the musical paradigm is inseparable from our imaginary of the Cosmos. And we may very well prefer a musicalizing parameterization, as is the case, for example, in Oxygen Flute. But sonification in the broad sense (as translation and transduction) is not strictly speaking a musical process. Music does not sonify. It composes on the basis of an implicit, evolving order of scales, rhythms, meters, intensities and instrumental timbres. And while it goes without saying that music expresses and signifies, it does so from within this order, which it never ceases to reinvent and which is the condition of its practice. As Christian Accaoui has convincingly shown in a recent book, the regime of musical imitation, dominant in the classical age, is that of analogy, halfway between copy and representation. (7) What matters is not so much the precision of the sound painting as the clarity of the reference. Even when music imitates the sounds of the world - thunder, wind, animal cries - its mimesis remains vague and deliberately approximate. To imitate is to refer. To sonify is to translate or transduce.

Le Noir de l'Étoile

One of the clearest examples of this difference is Gérard Grisey's Le Noir de l'Étoile, for six percussionists and pulsar sounds. At its premiere on March 16, 1991 in Brussels, a sonification of pulsar 0359-54, whose radio signal was picked up live by the Nançay radio telescope, was projected into the concert hall over twelve loudspeakers arranged around the audience. The pulsation of this slow pulsar, like that of the pre-recorded Vela pulsar (both chosen for the slowness of their rotation: their sonification is a rhythm, not a frequency), remains until the end of the work a foreign element around which the percussion instruments tirelessly revolve. As Gérard Grisey writes in the program note about the pulsar sounds he discovered in 1985 at UC Berkeley: "What could I possibly do with them? (...) integrate them into a musical work without manipulating them; simply let them exist, as reference points within a music that would be, as it were, their setting or stage." The regime of our "modern" musical age is no longer that of analogy; it is that of pure music, that is, music identified with form and sound insofar as they have ceased to refer to anything other than themselves. And it is precisely because music no longer imitates that it must make present the thing itself, in this case the pulsar. Le Noir de l'Étoile does not compose a musical analogy of the pulsar but summons the pulsar in person and confronts it with music and its instruments, which it contaminates without ceasing to be itself, a stranger to the order that welcomes it.

Sound art

Music does not sonify, but as the examples at the beginning of this text show, sound art has made sonification in the broad sense one of its major processes. For, unlike music, sound art does not depend on an implicit order that pre-differentiates sounds (and discretizes the sound field), but on the objects - phenomena, moments, places, landscapes, organs, etc. - that it chooses to adopt and that it is able to use.
There is no such thing as pure sound art. It is always in relation to a non-sound or non-audible that it must, in one way or another, make present - if possible without erasing its difference. Its regime of signification is not mimetic, expressive or formalist (the three modes of musical signification); it is "transductive" or "sonifying" and, in this respect, inseparable from the age of recording and the paradigm shift that accompanied it: the fact, historically situated between the middle and end of the nineteenth century, that sound is no longer thought of in terms of its production, but in terms of its reception by the ear (which converts the mechanical waves that vibrate the eardrum into auditory perception), as an audible effect. (8) From now on, sound is not what is composed, played or articulated, but what is heard, which includes, potentially, everything that can be converted into sound: electromagnetic waves, places, storms, climate change, brain activity, the level of carbon dioxide in the air, and so on. The order of sound art is none other than the disorder of the world.

Bird & Renoult - Two Tower (extract 1) from Birdy on Vimeo.

Bastien Gallet

(1) See the following two pages: and
(2) I borrow this distinction from Douglas Kahn, Earth Sound Earth Signal. Energy and Earth Magnitude in the Arts, University of California Press, 2013, p. 55.
(3) See his article "Soundwalking, Sonification, and Activism", in The Routledge companion to sounding art, M. Cobussen, V. Meelberg and B. Truax (eds.), Routledge, 2016.
(4) See
(5) See
(6) See
(7) Christian Accaoui, La musique parle, la musique peint. Les voies de l'imitation et de la référence dans l'art des sons, tome 1: Histoire, Paris, Éditions du Conservatoire, 2023, p. 19-22.
(8) On this subject, see Jonathan Sterne, Une histoire de la modernité sonore, Maxime Boidy (trans.), La Découverte/Éd. de la Philharmonie, 2015.

Article photo © Hicham Berrada
Photos N Point © Andrea Polli

