An explosion of sound!

A gamma-ray burst is an incredible stellar event; in fact, it’s one of the most powerful explosions in the universe! It occurs when a star’s core collapses into a black hole and matter rushes outward at nearly the speed of light. In 2008, NASA recorded a gamma-ray burst with the greatest total energy, the fastest motions, and the highest-energy initial emissions ever seen!

“We were waiting for this one,” said Peter Michelson, the principal investigator on Fermi’s Large Area Telescope at Stanford University. “Burst emissions at these energies are still poorly understood, and Fermi is giving us the tools to understand them.”

Realising the sonic potential in this data, Sylvia Zhu began converting it to music. In this sonification she maps each photon to one of three instruments, depending on how likely it was to be associated with the burst; each photon’s frequency is mapped to a comparable sound frequency; and the playback rate is slowed down by a factor of five. The result (below) is simply astonishing.
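To make that mapping concrete, here’s a minimal Python sketch of the kind of scheme Zhu describes. The photon values, probability thresholds, and instrument names below are invented placeholders for illustration, not the actual Fermi data or her actual code:

```python
import numpy as np

# Hypothetical photon events: (arrival time in s, energy in MeV,
# probability of being associated with the burst).
photons = [
    (0.02, 35.0, 0.95),
    (0.31, 110.0, 0.60),
    (0.47, 8.0, 0.15),
]

def choose_instrument(prob):
    """Pick one of three instruments by how likely the photon belongs
    to the burst (the thresholds here are my own guesses)."""
    if prob > 0.9:
        return "piano"   # almost certainly a burst photon
    elif prob > 0.5:
        return "harp"    # probably a burst photon
    return "bells"       # likely background

def energy_to_pitch(energy_mev, low=220.0, high=1760.0):
    """Map photon energy logarithmically onto an audible range (Hz)."""
    e_min, e_max = 1.0, 1000.0  # assumed energy bounds in MeV
    t = np.clip(np.log(energy_mev / e_min) / np.log(e_max / e_min), 0, 1)
    return low * (high / low) ** t

SLOWDOWN = 5  # stretch real time by a factor of five, as in the piece

for t, e, p in photons:
    print(f"t={t * SLOWDOWN:.2f}s  {choose_instrument(p):>5}  "
          f"{energy_to_pitch(e):.0f} Hz")
```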

I think that this sonification not only accurately represents the data, but also sounds amazing! The choice of instrumentation and the mapping of the data make this a wonderful piece of music in its own right. What really makes this piece so brilliant, though, is that I genuinely feel I have a better perception of the data through this sonification than I do visually. For me, this is one of the finest sonifications I have come across so far in my research. The downside is that, being a record of a historic event, this is a fixed piece of music; unlike Wav4kst, which is constantly shifting with new data!

What does a face sound like?

Richard David James, better known as Aphex Twin, is an electronic musician who grew up in Cornwall and originally studied engineering. In 1999 he released a single titled ‘Windowlicker’. Describing its second track (commonly referred to as ‘Equation’), SputnikMusic said:

“Imagine giving a monkey some crack, and putting him in a room full of buttons where every button makes a different sound. That’s sort of what this sounds like, except with a beat behind it.”

Oddly, I mention this piece of music less because of how it sounds, and more because of how it looks. When played through a spectrogram, this track shows some very interesting shapes, most notably the composer’s own face! It would seem that Richard used a piece of software called MetaSynth to transform an image into audio and insert it at the end of the track. I find this to be a strange kind of sonification because its sound doesn’t relate to how we view an image. When we usually see an image, we see its entirety in one instant; in the case of this sonification, we are exposed to the image along its horizontal axis over a duration of time. This means that the cleverness of the sonification only becomes apparent to us if we visualise it through a spectrogram. So although this is interesting, it doesn’t really sound pleasant, and its effectiveness as sonification is arguable.
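For the curious, here’s a rough Python sketch of how an image can be ‘drawn’ into a spectrogram, in the spirit of what MetaSynth does (this is a toy reconstruction, not the software’s actual method). Each row of the image drives a sine wave whose loudness over time follows the pixel brightness:

```python
import wave
import numpy as np

# A stand-in 'image': a diagonal stripe instead of a face.
# Rows become frequencies, columns become time.
img = np.eye(64)                       # 64x64 greyscale, values in [0, 1]

sr = 44100
duration = 4.0                         # seconds the image is stretched over
n = int(sr * duration)
t = np.arange(n) / sr

rows, cols = img.shape
freqs = np.linspace(4000, 400, rows)   # top of the image = high pitch

# Index of the image column playing at each audio sample.
col_idx = np.minimum((t / duration * cols).astype(int), cols - 1)

audio = np.zeros(n)
for r in range(rows):
    amp = img[r, col_idx]              # brightness envelope for this row
    audio += amp * np.sin(2 * np.pi * freqs[r] * t)

audio /= np.abs(audio).max()           # normalise to [-1, 1]

with wave.open("spectrogram_image.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)                  # 16-bit samples
    f.setframerate(sr)
    f.writeframes((audio * 32767).astype(np.int16).tobytes())
```

Open the resulting file in any spectrogram viewer and the stripe reappears.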

What does space sound like?

Usually we consider outer space to be a quiet place. This is because sound is a mechanical wave and needs a physical medium (such as air) to propagate through in order to be heard. Space, being largely a vacuum, is therefore silent to our human ears.

Marty Quinn from the University of New Hampshire is a composer who has worked on many sonification projects, from water data radio streams to interactive solar data exhibits. I’m particularly interested in his relatively recent collaborations with NASA to sonify data from their Lunar Reconnaissance Orbiter in real time. Although this project is no longer active, recordings are still available, along with a very detailed explanation of how the project works.

“Our minds love music, so this offers a pleasurable way to interface with the data. It also provides accessibility for people with visual impairments.” – Marty Quinn

Marty’s work is an amazing demonstration of how parameter mapping sonification can be used to create not only pleasant compositions, but also ones of incredible complexity. This is achieved through the use of very high-quality synthesisers and very complex algorithms. One of my favourite features of this project is the mapping of the data to the type of musical scale employed, switching between five different scale types.
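As a toy illustration of that scale-switching idea (the scales, ranges, and selection rule are my own assumptions, not Quinn’s actual algorithm), one data stream can choose the scale while another is quantised onto its degrees:

```python
# Five scale types, each as semitone offsets within an octave.
SCALES = {
    "major":      [0, 2, 4, 5, 7, 9, 11],
    "minor":      [0, 2, 3, 5, 7, 8, 10],
    "pentatonic": [0, 2, 4, 7, 9],
    "whole_tone": [0, 2, 4, 6, 8, 10],
    "blues":      [0, 3, 5, 6, 7, 10],
}
NAMES = list(SCALES)

def pick_scale(selector, s_min=0.0, s_max=1.0):
    """A second data stream selects which of the five scales to use."""
    i = int((selector - s_min) / (s_max - s_min) * (len(NAMES) - 1))
    return NAMES[i]

def to_midi_note(value, scale_name, base_note=48, v_min=0.0, v_max=100.0):
    """Quantise a data value onto a scale degree above a base MIDI note."""
    scale = SCALES[scale_name]
    span = 2 * len(scale)              # two octaves of available degrees
    i = int((value - v_min) / (v_max - v_min) * (span - 1))
    octave, degree = divmod(i, len(scale))
    return base_note + 12 * octave + scale[degree]

print(to_midi_note(73.0, pick_scale(0.4)))   # e.g. MIDI note 63
```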

Sound Over Sight

Have you ever tried to hear someone talking in a room whilst other people are also talking in the background? Although this can seem difficult sometimes, our remarkable ability to pick out words and sounds from a wash of background noise is actually highly complicated, and yet we are barely even aware of it.

Imagine you are walking around with your eyes closed, paying attention to all the sounds your feet make. Without looking, did you walk on tile, carpet, or stone? Is the space around you echoing, or are you wearing something soft enough to dampen the sound? Is the space large or small? Inside or outside? We are constantly perceiving very detailed information about the world around us, just by using our ears.

Have you ever seen a blind person using audible ‘clicks’ to determine their surroundings? Some blind people claim that this echolocation is not only highly effective, but also in some ways superior to sight. One user of this technique, Daniel Kish, points out that people dependent on sight can only see what is directly in front of them, whilst he can hear in all directions.

Back in 3500 BC, it is said, Egyptian auditors were commissioned by the Pharaoh to compare separate reports of the same stockpiles to check that no one was stealing. Comparing the reports visually for hours on end would have demanded a great deal of focus and likely produced many mistakes. Instead, the auditors would sound out their reports at the same time; any discrepancy between the sounds could easily be heard, and the thieves would be spotted.

According to Scientific American, Bruce Walker (a professor of psychology and director of the Georgia Institute of Technology’s Sonification Lab) says:

“The auditory system is the best pattern-recognition device that we know of. If you’re looking through a data set and trying to understand what’s going on, it’s often easier and more efficient to listen to the sound of it rather than looking at a screen or a printed version.”

When we want to know the weather forecast, we often head straight to a webpage or news report with a visual display. Doing so might be quick and accurate, but perhaps, after reading this, you’ll reconsider. I’d like to think Wav4kst offers a much more immersive experience: perhaps the falling of notes as the temperature drops over time allows for greater engagement with, and understanding of, the forecast. I think it is important for us to seriously consider how we engage with data, and to explore the possibilities of sonic representation.

Alvin Lucier: Music for Solo Performer

Back in 1965, musical pioneer Alvin Lucier met the scientist Edmond Dewan, who was then conducting research on alpha brainwaves, and the two quickly began sharing ideas. Later that year Lucier debuted his brainwave piece, titled Music for Solo Performer.

The piece is centred around a solo performer who is attached to a device that measures alpha wave activity. It’s important to know that alpha waves are only produced when the eyes are closed, the mind is relaxed, and there is no physical exertion or activity; the solo performer is thus in a particularly unusual state for an entertainer. The alpha wave readings are connected to clever mechanical devices (they remind me of this breakfast machine) that strike various percussion instruments dotted around the room (although earlier renditions simply involved speaker cones resonating inside the percussion).

Graphic score for Music for Solo Performer. From: Kahn D. Earth Sound Earth Signal: Energies and Earth Magnitude in the Arts. Los Angeles: University of California Press; 2013. p. 87. Image courtesy of Alvin Lucier.

This piece of music has often been heralded as an artistic approach to sonification. However, alpha waves usually cycle between 8 and 13 times per second; heard directly, they would be far below the range of human hearing, and would offer only a very narrow scope of pitch to compose with. With that in mind, I find it interesting that this piece is so easily categorised as sonification, despite the huge number of changes that must be made to the data before it is heard. In some recordings Lucier and his assistant would multiply the frequency of the data by as much as 32 times (bringing 8–13 Hz up to roughly 256–416 Hz), which, if kept consistent, would still be an accurate representation. The issue I have with this piece is that these alterations are constantly adjusted, so that there is huge variance in the pitch and volume. Although I have no doubt that this makes for a much more interesting piece of music, I can’t help but feel that what we hear is more a composer toying with effects processors than the sonification of alpha wave data.
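A trivial sketch makes the consistency point (the alpha frequencies below are illustrative): a fixed multiplier shifts everything into the audible range while leaving the frequency ratios, and therefore the musical intervals, untouched.

```python
# Illustrative alpha-wave frequencies (Hz) and a consistent x32 shift.
alpha = [8.0, 10.0, 13.0]
audible = [f * 32 for f in alpha]      # 256.0, 320.0, 416.0 Hz

ratios_alpha = [alpha[i + 1] / alpha[i] for i in range(len(alpha) - 1)]
ratios_audible = [audible[i + 1] / audible[i] for i in range(len(audible) - 1)]

# The intervals (frequency ratios) survive the shift unchanged.
assert ratios_alpha == ratios_audible
print(audible)                         # [256.0, 320.0, 416.0]
```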

One of the great things about Wav4kst is that there is no pre-defined score. By using the technique of parameter mapping sonification, accurate representations of the data are audible in the music. The temperature values directly correlate to pitches, and each change in temperature adjusts the pitch by the same interval, without any ‘composed’ alterations. I think working in this way allows for a much more accurate representation of the weather data and, in this sense, makes the concept more interesting than Alvin Lucier’s.
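For illustration, a mapping in this spirit might look like the sketch below. This is a toy version, not Wav4kst’s actual code; the reference temperature and the one-semitone-per-degree interval are assumptions:

```python
A4 = 440.0
SEMITONES_PER_DEGREE = 1   # assumed: each 1 degree C moves the pitch one semitone
REFERENCE_TEMP_C = 15.0    # assumed temperature that sounds as A4

def temp_to_freq(temp_c):
    """Equal-interval mapping: every degree shifts pitch by the same amount."""
    semitones = (temp_c - REFERENCE_TEMP_C) * SEMITONES_PER_DEGREE
    return A4 * 2 ** (semitones / 12)

forecast = [12.0, 14.5, 17.0, 16.0]    # hypothetical hourly readings
print([round(temp_to_freq(t), 1) for t in forecast])
```

Because the mapping is fixed, every listener hears exactly the data, with no ‘composed’ intervention between the weather and the music.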