Zoom H2n Conversion Plugin

My colleague Brian Fallon created a First-Order Ambisonic Encoder Reaper plugin for the Zoom H2n portable recorder, which you can download from the link below, or directly from Brian's website, here.

H2n-FOA-Encoder-Package.zip

h2n-plugin
While the H2n is far from the best Ambisonic microphone available, it is certainly one of the most affordable and produces surprisingly usable results given its cost (although due to the geometry of the H2n's microphone capsules, it is horizontal only). Zoom released a firmware update for the H2n earlier this year which allows horizontal-only Ambi-X audio to be recorded directly on the recorder. However, it can sometimes be useful to record in the original 4-channel mode (so you have access to the original stereo tracks) and convert to Ambisonics later. In addition, if you made 4-channel recordings with the H2n prior to the release of this firmware update, then this plugin can also be used to convert these into Ambisonics.

Brian's plugin is for the DAW Reaper and can be used to convert these H2n 4-channel recordings into horizontal B-format Ambisonics, with a choice of output channel orders and normalization schemes (Furse-Malham, Ambi-X, etc.). The package includes a sample Reaper project and a manual with details on the recording and plugin setup.
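
For anyone curious about what such a conversion involves, the sketch below illustrates the basic sum-and-difference arithmetic for deriving a horizontal-only first-order signal from the H2n's two stereo pairs. This is not Brian's plugin: the file names are placeholders, the front/rear channel mapping and gains are assumptions, and the real plugin handles the correct mapping, channel ordering and normalization for you.

```python
import numpy as np
import soundfile as sf

# Placeholder input: the two stereo files (front pair and rear pair)
# produced by the H2n's 4-channel mode.
front, sr = sf.read("front_pair.wav")
rear, _ = sf.read("rear_pair.wav")
fl, fr = front.T
rl, rr = rear.T

w = 0.5 * (fl + fr + rl + rr)   # omnidirectional (pressure) component
x = (fl + fr) - (rl + rr)       # front-back figure-of-eight
y = (fl + rl) - (fr + rr)       # left-right figure-of-eight
z = np.zeros_like(w)            # the H2n captures no height information

sf.write("h2n_bformat.wav", np.stack([w, x, y, z], axis=1), sr)
```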

Note that if you own the older H2 recorder, which has a slightly different microphone arrangement, Daniel Courville's VST and AU plugins can be used for conversion to B-format in a similar fashion.

 

Soundscapes, VR, & 360 Video

 

Over the past few months we've been busy presenting our work at the AES and ISSTA conferences, through masterclasses, workshops, and public demonstrations such as Dublin Science Gallery's event, 'Probe: Research Uncovered' at Trinity College Dublin last month. In our research, we are continuing to evaluate different recording techniques and microphones, and we have also recently acquired a new 360 camera system (a GoPro Omni to be exact) with a much simpler and faster capture and video stitching process compared to our experimental rig (although monoscopic only, of course). We'll have more information on that camera system in the coming weeks.

As a composer, one of the things I find most fascinating about VR and 360 video is its relationship to soundscape composition. Composers have been making music from field recordings for many decades, from the electroacoustic nature photographs of Luc Ferrari, to the acoustic ecology of The World Soundscape Project (WSP) and composers such as Murray Schafer, Barry Truax, and Hildegard Westerkamp, to the music and documentary work of Chris Watson, to give just a few examples (the Ableton Blog has a nice article on the Art of Field Recording here).

Of course, in the world of VR and 360 video the soundscape serves an important functional role as a means to increase the sense of immersion and presence in the reproduced environment. In addition, the location of sounds can be used to direct visual attention towards notable elements in the scene. It has been said of cinema that “sound is half the picture” but this is perhaps even more true in VR!

The combination of these two areas is therefore deeply interesting to me, both in terms of how we might create musical soundtracks for 360 videos and VR games from the natural recorded soundscape and sound design, and in terms of how we might use 360 video for the presentation of soundscape compositions.

Although it may seem somewhat counter-intuitive, this ability to control and perhaps also remove the visual component can be used to focus attention on the audible soundscape in a potentially interesting way. While loudspeakers or headphones can provide an effective sense of envelopment within an audio scene, there is inevitably a conflict between the visual perception of the loudspeakers and/or reproduction environment and the recorded soundscape. In the context of 360 video, the composer has, in contrast, complete control over both the visual and audible scene, which opens up some interesting creative possibilities.

This type of environmental composition, which makes use of both 360 video and spatial soundscapes, is the next focus of this project and we should have new work online in the coming months. However, in the meantime I'd like to recommend an award-winning VR experience which has inspired my work in this area. Notes on Blindness is a documentary film based on the audio diaries of John Hull and his emotive descriptions of the sensory and psychological experience of losing his sight. The accompanying VR presentation utilizes spatial audio and sparse, dimly lit 3D animations to represent this experience of blindness in a highly evocative manner. Released for the Samsung platform earlier this year, the VR experience is now available as a free app for iOS or Android and is highly recommended.

 

Spatial Audio & 360 Video

The first 360 video from the concert is now online and, in a nice piece of timing, YouTube have just released a new Android app that can play back 360 videos with spatial audio. So, if you happen to own a high-spec Android smartphone like a Samsung Galaxy or Nexus (with Android v4.2 or higher), you can watch this video using a VR headset like Cardboard with matching spatial audio on headphones. Desktop browsers like Chrome, Firefox, and Opera (but not Safari), and the YouTube iOS app, will only play back a fixed stereo soundtrack for now, but this feature will presumably be added to these platforms in the near future.

This recording is of the first movement of From Within, From Without, as performed by Pedro López López, Trinity Orchestra, and Cue Saxophone Quartet in the Exam Hall, Trinity College Dublin, on April 8th, 2016. You can read more about the composition of this piece in an earlier post on this blog, and much gratitude to François Pitié for all his work on the video edit.

Apart from YouTube's Android app, 360 video players that support matching 360 audio are thin on the ground, at least for now. Samsung's Gear VR platform supports spatial audio in a similar manner to YouTube, although only if you have a Samsung smartphone and VR headset. Facebook's 360 video platform does not support 360 audio right now; however, the recent release of a free Spatial Audio Workstation for Facebook 360 suggests that this won't be the case for long. The Workstation was developed by Two Big Ears and includes audio plugins for various DAWs, a 360 video player which can be synchronised to a DAW for audio playback, and various other authoring tools for 360 audio and video (although only for OS X at the moment).

The mono video stitch was created using VideoStitch Studio 2, which worked reasonably well but struggled a little with the non-standard camera configuration. My colleague François Pitié is currently investigating alternative stitching techniques which may produce better results.

The spatial audio mix is a combination of a main ambisonic microphone and additional spot microphones, mixed into a four channel ambisonic audio file (B-format, 1st order, ACN channel ordering, SN3D normalization), as per YouTube's 360 audio specifications. As you can see in the video, we had three different ambisonic microphones to choose from: an MH Acoustics Eigenmike, a Core Sound TetraMic, and a Zoom H2n. We used the TetraMic in the end as this produced the best tonal quality with reasonably good spatial accuracy.

As might be expected given the distance of the microphones from the instruments, and the highly reverberant acoustic, all of the microphones produced quite spatially diffuse results, and the spot microphones were most certainly needed to really pull instruments into position. The Eigenmike was seriously considered, as this microphone did produce the best results in terms of directionality (which is unsurprising given its more complicated design). However, the tonal quality of the Eigenmike was noticeably inferior to the TetraMic, and as the spot mics could be used to add back some of this missing directionality, this proved to be the deciding factor in the end.

The Zoom H2n was in certain respects a backup to the other two microphones, as this inexpensive portable recorder cannot really compete with dedicated microphones such as the TetraMic. However, despite its low cost it does work surprisingly well as a horizontal-only ambisonic mic and was in fact used to capture the ambient sound in the opening part of the above video (our TetraMic picked up too much wind noise on that occasion, so the right type of wind shield for this mic is strongly recommended for exterior recordings). While we used our own software to convert the raw four channel recording from the Zoom into a B-format, 1st order ambisonic audio file (this will be released as a plugin in the coming months), there is now a firmware update for the recorder that allows this format to be recorded directly on the device. This means you can record audio with the H2n, add it to your 360 video and upload it to YouTube without any further processing beyond adding some metadata (more on this below). So, although far from perfect (e.g. no vertical component in the recording), this is definitely the cheapest and easiest way to record spatial audio for 360 video.

The raw four channel TetraMic recording had to first be calibrated and then converted into a B-format ambisonic signal using the provided VVMic for TetraMic VST plugin. However, it should be noted that this B-format signal, like that of most ambisonic microphones (apart from the Zoom H2n), uses the traditional Furse-Malham channel order and normalization. So, it must be converted to ACN channel ordering and SN3D normalization using another plugin, such as the AmbiX converter or Bruce Wiggins' plugin, as shown in the screenshot below.

Screen Shot 2016-05-25 at 18.42.10
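
For the curious, the first-order conversion that these plugins perform amounts to a simple channel reorder plus a gain change on W. The sketch below is only an illustration of that arithmetic (file names are placeholders); in practice the plugin chain shown above does this inside Reaper.

```python
import numpy as np
import soundfile as sf

# Illustration only: convert a first-order Furse-Malham (W, X, Y, Z) file
# into AmbiX (ACN channel order W, Y, Z, X with SN3D normalization),
# which is what YouTube's 360 audio specification expects.
fuma, sr = sf.read("tetramic_fuma.wav")      # placeholder file name
w, x, y, z = fuma.T

w_sn3d = w * np.sqrt(2.0)                    # FuMa W is carried 3 dB down; undo that
ambix = np.stack([w_sn3d, y, z, x], axis=1)  # reorder to ACN: W, Y, Z, X

sf.write("tetramic_ambix.wav", ambix, sr)
```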

In addition to these ambisonic microphones positioned at the camera, we also used a number of spot mics (AKG 414s), in particular for the brass and percussion on the sides and back of the hall. These were mixed into the main mic recording using an Ambi-X plugin to encode each mono recording into a four channel ambisonic audio signal and position it spatially, as shown below.

Screen Shot 2016-05-25 at 19.13.03
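
Under the hood, this kind of first-order encoding is just a set of gains derived from the desired direction. The snippet below is a minimal sketch of that idea; the function name, file names and angles are purely illustrative and this is not the plugin we actually used.

```python
import numpy as np
import soundfile as sf

def encode_mono_ambix(mono, azimuth_deg, elevation_deg=0.0):
    """Pan a mono signal into first-order AmbiX (ACN order, SN3D).
    Azimuth is measured anti-clockwise from straight ahead; distance
    cues, delays and reverb are ignored in this sketch."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono                               # SN3D: W carries the signal at unity gain
    x = mono * np.cos(az) * np.cos(el)
    y = mono * np.sin(az) * np.cos(el)
    z = mono * np.sin(el)
    return np.stack([w, y, z, x], axis=1)  # ACN channel order: W, Y, Z, X

spot, sr = sf.read("percussion_spot.wav")  # placeholder mono spot mic file
sf.write("percussion_ambix.wav", encode_mono_ambix(spot, 135.0), sr)
```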

As these spot microphones were positioned closer to the instruments than the main microphone, and were directly mixed into this recording in this way, they provide much of the perceived directionality in the final mix, with the main microphone providing a more diffuse room impression. This traditional close mics + room mic approach was needed in this particular case as it was a live performance with an audience and musicians distributed around the hall. However, this is quite different from how spatial audio recordings (such as for a 5.1 recording) are usually created. These types of recordings tend to emphasize the main microphone recording with minimal or even no use of spot microphones.

To do this, however, we have to be able to place our main microphone arrangement quite close to the musicians (above the conductor's head, for example) so that we can capture a good balance of the direct signal (which gives us a sense of direction) and the diffuse room reverberation (which gives us a sense of spaciousness and envelopment). Often this is achieved by splitting the main microphone configuration into two arrangements, one positioned quite close to the musicians, and another further away. However, this is much harder to do using a single microphone such as the TetraMic, particularly when an audience is present and the musicians are distributed all around the room. This is one of the things we will be exploring in our next recording, which will be of a quartet (including such fine musicians as Kate Ellis, Nick Roth, and Lina Andonovska), and without an audience. This will allow us to position the musicians much closer to the central recording position and so capture a better sense of directionality using the main microphone, with less use of spot microphones.

Google have released a template project for the DAW Reaper which demonstrates how to do all of the above, and also includes a binaural decoder and rotation plugin that simulates the decoding performed by the Android app. For installation instructions and download links for the decoder presets and Reaper Template, see this link. This can also be implemented using the same plugins in Adobe Premiere, as shown here. Bruce Wiggins has a nice analysis of the Head Related Transfer Functions used to convert the ambisonic mix to a binaural mix for headphones on his blog which you can read here.

Finally, once you've created a four channel ambisonic audio file for your 360 video, you then need to add some metadata so YouTube knows that this is a spatial audio signal, as shown here. My colleague François Pitié describes how to do this on his blog, as well as how to use the command-line converter FFmpeg to combine the final 360 video and audio files. That article also demonstrates how to use FFmpeg to prepare a 360 video file for direct playback from an Android phone using Google's Jump Inspector app.
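
As a rough illustration of that muxing step, something along these lines will attach a four-channel AmbiX WAV to a stitched video. File names, container and codec choices here are placeholders (see François' post for the actual commands), and the spatial metadata still needs to be injected afterwards before uploading.

```python
import subprocess

# Illustration only: copy the video stream untouched and attach the
# four-channel AmbiX mix as PCM audio. The YouTube spatial metadata is
# injected in a separate step after this.
subprocess.run([
    "ffmpeg",
    "-i", "stitched_360.mp4",      # placeholder stitched 360 video
    "-i", "ambix_mix.wav",         # placeholder four-channel AmbiX audio
    "-map", "0:v", "-map", "1:a",  # video from the first input, audio from the second
    "-c:v", "copy",
    "-c:a", "pcm_s16le",
    "concert_360.mov",
], check=True)
```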

 

After the Concert

So, we have all the footage transferred from the (many) SD cards and backed up; we have about 660 GB of audio and video material in total, so plenty to work with!

Here is a little timelapse video of François Pitié and Sean Dooney putting the final touches to our 360 camera rig before the concert.

We've just started working on stitching together and synchronizing our video footage from both the 360 camera rig and the three 3D pairs (16 GoPros in total). Although others have reported occasional problems with the GoPro wifi remote and the odd corrupted file, we didn't encounter any problems in that regard. We did find that the cameras would sometimes go immediately out of sync when started with the remote; however, this was always obvious and could be easily fixed by stopping and starting the recording again. Of course, the GoPro wifi remote only ensures synchronization within a few frames, and much more accurate time alignment is required to create the full 360 stitch. This synchronization is often done using the audio track captured by each individual GoPro, and while we also have this option, we did try a different method that looks promising. As well as the standard clapper board, we also used a camera flash to create a visual reference for the alignment measurements, and while much work needs to be done, this definitely looks like a viable approach which may allow for more accurate alignment than audio alone.
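
For reference, the audio-based approach usually boils down to finding the peak of a cross-correlation between the cameras' audio tracks. The sketch below shows that idea in its simplest form; the file names are placeholders and this is not our actual pipeline.

```python
import numpy as np
import soundfile as sf
from scipy.signal import correlate, correlation_lags

def offset_seconds(ref, other, sr):
    """Estimate the lag between two audio tracks from the peak of their
    cross-correlation. A positive result means the same event appears
    later in 'other' than in 'ref'."""
    corr = correlate(other, ref, mode="full", method="fft")
    lags = correlation_lags(len(other), len(ref), mode="full")
    return lags[np.argmax(corr)] / sr

# Placeholder per-camera audio tracks extracted from two GoPros
ref, sr = sf.read("gopro_01.wav")
other, _ = sf.read("gopro_02.wav")
ref = ref[:, 0] if ref.ndim > 1 else ref           # one channel per camera is enough
other = other[:, 0] if other.ndim > 1 else other
print(f"estimated offset: {offset_seconds(ref, other, sr):.3f} s")
```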

We'll be working on stitching the 360 video over the coming months, so we'll have more technical details on that in the near future. In terms of the audio, we recorded the concert using three different main microphones, namely an MH Acoustics Eigenmike, a Core Sound TetraMic, and a Zoom H2n, as well as a number of mono spot microphones. One of the aspects of this recording I'll be looking at in the coming months is the production process for combining these B-format recordings and spot microphones; however, we'll also be comparing the relative performance of these three microphones, particularly when combined with the matching 360 video.

In the meantime, here are some photos of the concert (with thanks to François Pitié).

DSC_4250

DSC_4258

DSC_4324

DSC_4247

DSC_4331

DSC_4333

DSC_4340

DSC_4353

You can also hear the third movement of From Within, From Without in the Spatial Music Collective's programme at the Ideopreneurial Entrephonics II festival, this coming Saturday (April 23rd) at the Freemasons' Hall in Dublin. Also featured on the programme will be works by Jonathan Nangle, Brian Bridges, Massimo Davi, and others, and there are lots of other interesting performances and events happening over the course of the festival too.

You can hear a short excerpt of this soundscape composition entitled Of Town and Gown below, although of course this has been reduced to stereo from the original mix for eight loudspeakers.

Here's the programme note:

Upon entering the campus of Trinity College Dublin it is striking how much the character of the ambient sound changes, as the din of traffic and pedestrians which dominates outside the college walls recedes into the background as you emerge from the narrow passageway of the front gate. This sense of liminality and of passing through a threshold into an entirely different space is the focus of this tape piece entitled Of Town and Gown. This soundscape composition is constructed from field recordings made around the outskirts of the campus, which are then manipulated, processed and combined with a sound from the heart of the university, namely the commons bell located in the campanile. In this way the piece explores the relationship between the university and the rest of the city (between town and gown) through the blending and relating of these different sounds from both inside and outside the college walls.

Finally, I would like to say thanks to a great many people who helped make the concert happen, in particular the wonderful performers of Trinity Orchestra, Cue Saxophone Quartet, Pedro López López, and Miriam Ingram, and also:

The Provost Patrick Prendergast, The ADAPT Centre, Science Foundation Ireland, Christina Reynolds, Brian Cass, Valerie Francis, Francis Boland, Naomi Harte, Dermot Furlong, Jenny Kirkwood and all the staff and students of the Music & Media Technologies programme, François Pitié, John Squires, Conor Nolan, Sean O’Callaghan, Hugh O’Dwyer, Luke Ferguson, Sean Dooney, Stephen Roddy, Aifric Dennison, Bill Coleman, Albert Baker, Jonathan Nangle, Stephen O’Brien, the Spatial Music Collective, Richard Duckworth and all the staff of the Music Department, Sarah Dunne, John Balfe, Sara Doherty, Tom Merriman, Michael Murray, Noel McCann, Tony Dalton, Paul Bolger, Liam Reid, & Ciaran O’Rourke.

 

From Within, From Without

So, due to the vagaries of Irish weather, tomorrow's performance of From Within, From Without will now take place in the wonderful location of the Exam Hall, in Front Square. This means that there are now some additional tickets available (which can be booked here), and a small number of tickets should also be available on the door.

exam-hall-interior-tcd

The concert kicks off at 7pm sharp, beginning with the musicians of Trinity Orchestra, Cue Saxophone Quartet, and Pedro López López.

Our 360 camera rig is ready to go, and we’re looking forward to seeing you there!

———————

From Within, From Without

Enda Bates

I. From Without, From Within

Trinity Orchestra / Cue Saxophone Quartet / Pedro López López

II. The Silent Sister

Miriam Ingram / Eight channel electronics

III. Of Town and Gown

Eight channel electronics

7pm, Friday, April 8th, 2016. The Exam Hall, Front Square, Trinity College Dublin.

360 Audio in Practice

To date, much of the discussion of 360 content has focused on the visual side of things and the entirely new hardware and software required for this new medium of 360 video. In contrast, surround sound has been in use for many decades in cinema, live performances and recordings of experimental and popular music, theater, gaming, and art installations. So rather than requiring the invention of entirely new technologies, we can instead adapt existing techniques for the specific demands of Virtual Reality (VR), namely:

  1. that sounds should appear from around, above and below the listener
  2. that (headtracked) headphones must be used instead of loudspeakers
  3. and that the system is portable, practical and reasonably simple to use

While a great many surround sound recording techniques have been developed over the years, for practicality's sake these have often tended to disregard the vertical position of sounds (installing loudspeakers in the ceiling and floor is often challenging!). So we end up with surround systems such as the long-defunct Quadraphonics (four loudspeakers arranged in a square), 5.1, 7.1, etc.

901px-5-1-surround-sound-svg
5.1 Surround Sound

The motivation for these developments, particularly in cinema surround sound, was often practical: clearer dialogue, a bigger dynamic range spanning loud explosions and quiet whispers without hiss or distortion, and increased fidelity. These developments are nicely summarized in the following documentary by Filmmaker IQ on the History of Sound at the Movies.

Of course, as John Hess points out, sound is more than simply a technical solution; it is "half the picture", and the ability to position sounds all around the listener is important from an artistic standpoint too. For this reason, cinema surround sound continues to develop, with the biggest recent change being the incorporation of overhead loudspeakers in popular new systems such as Auro 3D and Dolby Atmos. Alfonso Cuarón's Gravity from 2013 is an excellent example of a film which really took advantage of the capabilities of these new systems in many different ways, as discussed in this great interview with the film's sound designer Glenn Freemantle and re-recording mixer Skip Lievsay.

This ability to put the listener inside the sound scene is important for cinema, but it is an absolute necessity for VR. However, the types of microphone techniques that work for cinema may not be the most appropriate for VR, particularly if we want to change from a loudspeaker system to headphones. So-called dummy-head microphones have existed for many years, and when listened to on headphones these can do a pretty good job of simulating normal hearing in a way that is very different from normal stereo sound. When we use headphones, the sounds we hear usually seem to be positioned inside our heads, which, when you think about it, is very unnatural. In normal hearing, sounds are externalized, and this is captured to a certain extent when we use these binaural microphones containing two capsules positioned on either side of a dummy head. As we can see from the picture below, these microphones also try to replicate the folds and shape of the ear pinnae; however, this is no easy task as, like fingerprints, the particular shape of our ears is unique to us. As a consequence, reproducing sounds directly in front or behind is particularly challenging with binaural techniques, as this type of perception depends largely on the specific way sounds are filtered by our own unique set of ears (for more information on how spatial hearing and binaural sound works, see this page on my website).

dummy-microphone

Here are some examples of binaural sound recorded using a dummy-head microphone (these should be listened to on headphones).

First off, let's say hello.

Now let's move around the microphone.

And here are some examples of the many binaural recordings available online, namely a thunderstorm (source: Freesound.org) and a market (source: Wikicommons.org).

So, although not perfect, binaural will definitely be involved in the reproduction of 360 audio; however, this type of binaural microphone is perhaps not ideal for these types of recordings. You may have noticed in the previous examples that when you moved your head, all the sounds moved too, which again is highly unnatural. In real life a static sound stays at the same point in space as we rotate or move our head, and this needs to happen in virtual reality too. Actually tracking the position and rotation of the head is pretty simple, but manipulating the audio so that sounds stay in the same place as our head and attached headphones move is more challenging. Early 360 presentations (such as Beck's 360 recording of David Bowie's Sound & Vision, for example) attempted to solve this problem using dummy head microphones with multiple sets of ears, with somewhat monstrous looking results!

binaural_audio_recording_instrument

In effect, these microphones capture multiple, concurrent binaural recordings from different perspectives. On playback, the head-tracking system cross-fades between these different recordings as the listener’s head moves, so that sounds hold their position rather than rotating with the listener. While this can work, this solution is not without its problems. Firstly, these microphones are very idiosyncratic, non-standardized and quite bulky. More importantly, sounds which are located at positions in between the different angles optimally captured by the microphone may not be reproduced correctly, and smoothly rotating the sound to compensate for head movement is also difficult to achieve.
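
To make that crossfading idea concrete, the playback side of such a system might weight each of the discrete binaural recordings according to the listener's head yaw, blending the two nearest perspectives. The toy sketch below assumes four recordings at 0°, 90°, 180° and 270° and a simple linear crossfade; it is purely illustrative of the approach, not any particular product.

```python
import numpy as np

def crossfade_gains(yaw_deg, mic_angles_deg=(0.0, 90.0, 180.0, 270.0)):
    """Return a gain for each discrete binaural recording so that the
    two perspectives nearest the listener's yaw are blended and the
    others are silent (a simple linear crossfade)."""
    angles = np.asarray(mic_angles_deg)
    diff = np.abs((angles - yaw_deg + 180.0) % 360.0 - 180.0)  # angular distance
    spacing = 360.0 / len(angles)
    gains = np.clip(1.0 - diff / spacing, 0.0, None)
    return gains / gains.sum()          # keep the overall level constant

# e.g. head turned 30 degrees: mostly the 0-degree recording, some of the 90-degree one
print(crossfade_gains(30.0))
```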

For these reasons, these charmingly freaky looking microphones are increasingly being replaced with a different approach based on the audio format known as Ambisonics. First developed in the 1970s by Michael Gerzon and Peter Fellgett, among others, Ambisonics has been used by experimental composers and sound designers for many years, but without much in the way of widespread commercial use. However, this is now changing, as the microphone technique associated with this approach is very well suited to VR. The so-called Soundfield microphone can capture a three-dimensional soundfield using one compact arrangement of four microphone capsules in a standardized arrangement, which can later be decoded to different arrangements of loudspeakers, or indeed to binaural. In addition, the entire recorded soundfield can be smoothly rotated prior to this decoding, which makes head-tracking much easier to achieve. Finally, as this technique has been around for over four decades, it is well understood and lots of existing ambisonic hardware and software is available, often for free.

So how does it actually work? Well, to put it simply, in Ambisonics a 3D soundfield is described using four channels of audio labelled W, X, Y & Z, collectively referred to as a B-format signal. These four channels of audio correspond to the overall non-directional sound pressure level [W], and the front-to-back [X], side-to-side [Y], and up-to-down [Z] directional information.

wp-b-format

These four signals can be captured directly using an omni-directional microphone and three bi-directional, or figure-of-8, microphones. However, it is not really possible to mount four microphones at the same point in space, so instead the microphone actually contains four cardioid or sub-cardioid capsules mounted on the surface of a tetrahedron (soundfield mics are sometimes referred to as tetrahedral mics for this reason). The raw microphone recording (known as A-format) is then converted, in hardware or (more usually these days) in software, into B-format before further processing. Existing mono or stereo recordings can also be positioned in space and encoded into B-format using a hardware or software panner.
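
The textbook A-format to B-format conversion is just sums and differences of the four capsule signals. The sketch below shows that core arithmetic; real converters also apply per-capsule calibration and frequency-dependent correction filters, which are omitted here, and the capsule labels are the conventional ones rather than those of any specific product.

```python
import numpy as np

def a_to_b_format(lfu, rfd, lbd, rbu):
    """Convert tetrahedral capsule signals (left-front-up, right-front-down,
    left-back-down, right-back-up) into first-order B-format (W, X, Y, Z).
    Calibration and spatial correction filters are deliberately omitted."""
    w = lfu + rfd + lbd + rbu   # omnidirectional pressure
    x = lfu + rfd - lbd - rbu   # front-back
    y = lfu - rfd + lbd - rbu   # left-right
    z = lfu - rfd - lbd + rbu   # up-down
    return np.stack([w, x, y, z], axis=-1)
```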

It is important to note that these signals do not feed the loudspeakers directly but instead function as a description of a soundfield at a particular point in space. This means that a B-format ambisonic signal can be smoothly rotated around any axis (for head-tracking) and then decoded for different configurations of loudspeakers as needed (although in practice it works best with regular and symmetrical loudspeaker arrays). For VR, the individual loudspeaker signals are instead encoded in real time into binaural signals, which are then mixed together to produce the final headphone mix. This virtual loudspeaker approach has been the focus of considerable research in recent years and is a very efficient and effective way of implementing head-tracked 360 audio.
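
The rotation step that makes head-tracking cheap is itself very simple at first order: a yaw rotation only mixes X and Y, leaving W and Z untouched. The snippet below is a minimal sketch of that operation; the sign convention, and whether you rotate by the head yaw or its negative, depends on the coordinate conventions of the rest of the chain.

```python
import numpy as np

def rotate_bformat_yaw(w, x, y, z, angle_deg):
    """Rotate a first-order B-format signal about the vertical axis.
    For head tracking, the soundfield is typically rotated opposite to
    the measured head yaw so that sources stay fixed in the scene."""
    theta = np.radians(angle_deg)
    x_rot = x * np.cos(theta) - y * np.sin(theta)
    y_rot = x * np.sin(theta) + y * np.cos(theta)
    return w, x_rot, y_rot, z   # W and Z are unaffected by a yaw-only rotation
```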

Describing a soundfield in this way using a bare minimum of four channels is certainly efficient, but there's plenty of research to show that this efficiency comes at a cost. Errors or blurriness in the position of sounds can occur, and this is particularly true when the limitations of binaural are also taken into account. However, this is perhaps much less of an issue for VR, as in this type of presentation we can potentially also see the source of a particular sound as well as hear it.

All of this means that soundfield mics and "virtual reality microphones" are set to become synonymous, although the rate of development remains slow compared to 360 video, and indeed much of the 360 content currently available does not actually contain matching 360 audio. A number of microphones based around the traditional design are currently available, however, such as the original Soundfield line now produced by TSL Products, Core Sound's TetraMic, or the new Ambeo microphone from Sennheiser. Portable recorders containing multiple microphone capsules, such as the Zoom H2n, can also be modified or processed to produce a B-format signal (although horizontal only), and there are also a few more elaborate systems such as MH Acoustics' Eigenmike. We have been investigating the precise capabilities of these different microphones as part of the Trinity 360 project and the Spatial Audio over Virtual and Irregular Arrays research group led by Prof. Francis Boland. Over the summer we will be publishing the results of a series of experiments (shown below) which assessed a number of these microphones in terms of their directional accuracy and overall tonal quality and fidelity. We are also planning on recording the concert performance on April 8th using a number of these microphones, so we can see how well they function in an actual location recording, and when combined with matching 360 video. We'll publish the footage on this blog once we have it, and then you can judge the results yourself!

Picture1Untitled-1

 

Spectral Music & the Commencements Bell

With thanks to Michael Murray, Noel McCann and Tony Dalton.

Spectral music is based around the idea that "music is ultimately sound evolving in time". Often this involves the computer-aided analysis of a particular sound, a Bb on a clarinet for example, followed by the metaphorical re-synthesis of this sound using an orchestra. In the picture shown above, the image on the left is a spectrogram of a single strike of the commencements bell in the Campanile in Trinity College. On the right is an early draft of an orchestral chord (just synthesized using samples and MIDI for now), which attempts to instrumentally recreate this unique timbre. In an earlier post I mentioned call-and-response as one of the most fundamental forms of spatial music. So, as long as the temperamental gods of Irish weather are on our side, a call-and-response between this bell and the orchestra will open the 3rd and final movement of this new piece, entitled From Within, From Without.
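
For anyone who wants to try this kind of analysis themselves, the sketch below shows the general idea in its crudest form: take the spectrum of a single bell strike, pick out the strongest partials, and map them to the nearest equal-tempered notes as raw material for a chord. The file name and analysis settings are placeholders, and this is a generic illustration rather than the actual process used for the piece.

```python
import numpy as np
import soundfile as sf

audio, sr = sf.read("bell_strike.wav")      # placeholder recording of one strike
if audio.ndim > 1:
    audio = audio.mean(axis=1)              # mix to mono

# Magnitude spectrum of the whole strike
mag = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
freqs = np.fft.rfftfreq(len(audio), 1.0 / sr)

# Very crude peak picking: local maxima above 50 Hz, strongest eight kept
is_peak = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])
peaks = np.where(is_peak)[0] + 1
peaks = peaks[freqs[peaks] > 50.0]
strongest = peaks[np.argsort(mag[peaks])[::-1][:8]]

names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
for i in sorted(strongest, key=lambda i: freqs[i]):
    midi = 69 + 12 * np.log2(freqs[i] / 440.0)           # frequency -> MIDI note number
    note = names[int(round(midi)) % 12] + str(int(round(midi)) // 12 - 1)
    print(f"{freqs[i]:8.1f} Hz  ->  {note}  ({midi - round(midi):+.2f} semitones)")
```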

While the bell's spectrum informed the harmonic language of all three movements of this work, the rhythmic structure of the orchestral movement was inspired by the synthesis technique of granulation, and some other techniques developed by composers such as Earle Brown [1926-2002] and Henry Brant [1913-2008]. Brant was a prolific composer of spatial music, writing 76 spatial works (and 57 non-spatial works) over the course of his long career. For Brant, the only way to really exploit space was to ensure that each musician, at each distinct location in space, performed material that was as differentiated as possible from every other part. So, although the entrance of each musician or group of musicians might be cued, they would then proceed independently at their own speed, rhythm, and in their own key. This is very similar in lots of ways to the concept of blocks of music developed by Earle Brown, and later Henry Vega, in which the start and end point of each block are tightly synchronized, but the individual musical lines inside each block are left entirely unsynchronized. This technique results in an interestingly complex texture and neatly avoids the issue of maintaining synchronization between spatially distributed musicians. The opening of Brant's 1954 composition Millenium II illustrates this approach as ten trombones and ten trumpets, positioned along the side walls of the hall, enter one-by-one, each playing different melodies, in different keys.

Brant-MilleniumII

Fig. 1 Stage Layout for Henry Brant’s Millenium II (1954) [Harley, 1997]

While Brant's approach is very effective at highlighting the spatial distribution of the instruments, it does inevitably result in very dissonant harmonies and textures. With this new work I wanted to explore this type of approach, but with melodic lines that are rhythmically independent yet much closer in terms of harmonic language. As such, the independent lines overlap in ways that are sometimes consonant, and sometimes dissonant. In some respects this results in a texture that is reminiscent of granulation, the electronic processing technique in which many fragments (or grains) of the original sound are layered over each other, particularly when long grain durations are used. This is particularly prominent in the middle section of From Without, From Within, as five trumpets and two trombones, all of which are distributed around the audience, play very similar melodies that are deliberately desynchronized, resulting in a complex texture that shifts between consonance and dissonance and produces some interesting spatial effects.
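
For readers unfamiliar with granulation, the sketch below shows the electronic version of the idea in its most basic form: short windowed fragments of a source recording scattered and layered across an output buffer. All parameters and file names are illustrative, and the instrumental analogy above is of course much looser than this.

```python
import numpy as np
import soundfile as sf

def granulate(source, sr, grain_s=0.25, density=20.0, length_s=10.0, seed=0):
    """Layer many windowed fragments ('grains') of the source over each
    other at random positions in an output buffer."""
    rng = np.random.default_rng(seed)
    grain_len = int(grain_s * sr)
    window = np.hanning(grain_len)
    out = np.zeros(int(length_s * sr) + grain_len)
    for _ in range(int(density * length_s)):
        src = rng.integers(0, len(source) - grain_len)
        dst = rng.integers(0, len(out) - grain_len)
        out[dst:dst + grain_len] += source[src:src + grain_len] * window
    return out / np.max(np.abs(out))        # normalize to avoid clipping

bell, sr = sf.read("bell_strike.wav")       # placeholder source recording
if bell.ndim > 1:
    bell = bell.mean(axis=1)
sf.write("bell_granulated.wav", granulate(bell, sr), sr)
```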

If you'd like to know more about the specific details ("let's talk quartertones!") of this movement, and indeed the other two movements of this work, I'll be giving a free, public talk on the project as part of the Music at Trinity Series in the Long Room Hub, March 21st, at 6.15 pm.

Also, if you’d like to come to the performance on April 8th, tickets are now available from this link (this is a free event, but tickets are required to reserve a seat).

[Harley, 1997] Harley, M. A., "An American in Space: Henry Brant's 'Spatial Music'", American Music, Vol. 15(1), pp. 70–92, 1997.

Soundscapes of Town and Gown

One of the things I wanted to explore in this piece is the relationship between Trinity College and the rest of the city, both from a sociohistorical perspective, and in terms of the sonic attributes of these two quite different spaces. During its long history, Trinity's relationship with Dublin, and indeed with the rest of the country, has often been difficult, complex, and even outright combative at times. While questions of religion and national identity were a significant factor, tensions between liberal and conservative voices, both from within and without, have also played their part in this complex relationship between "town and gown". It is perhaps not surprising, therefore, that the walls that surround Trinity were sometimes perceived as a barrier meant to keep people out! In Mary Muldowney's fascinating book "Trinity and its Neighbours: An Oral History", many local residents employed by Trinity during the second half of the 20th century mention this perceived separateness of the university from the city (for many, their job interview was the first time they had been inside the college walls). As John McGahern puts it, "when I was young in this small country Trinity College was so far removed from our lives and expectations that it seems a complete elsewhere".

While problems certainly remain, these days the Trinity campus is much more of a public space, and indeed is viewed by many as an oasis of calm and relative quiet in the midst of the bustling traffic and noise of Dublin city center. On a purely sensory level, the walls which surround the college could now perhaps be viewed as more of a protective shell that provides some welcome respite from the incessant roar of the city. From my own perspective, I have always been struck by a powerful, liminal transition from one distinct space into another whenever I pass through the narrow passageway of the front gate. This distinct sense of two different sonic spaces, and the specific sounds that reach and stretch across this notional and physical barrier, is the primary inspiration for this new composition of spatial music.

Any outdoor performance in a city center will have to contend with the general ambiance of the city, police sirens and all! So, borrowing an (oblique) strategy from Brian Eno, I decided to turn this unavoidable intrusion into a specific feature of the first movement of the piece, and deliberately project these outside city sounds inside, using multiple loudspeakers placed around the audience. This soundscape composition is constructed from field recordings made in, around and outside the boundaries of the university grounds. In this way, sounds such as the traffic and roadworks on College Green, and the trains which bisect the north-eastern corner of the campus, will be projected inside Front Square and manipulated and transformed into music. For the more technical among you, these field recordings were made using an interesting technique suggested by Augustine Leudar, based around multiple stereo recorders positioned to match the eventual placement of the reproducing loudspeakers.

Trinity-FieldRecording-1
Field Recording from in front of the Campanile, December, 2015

 

But what about the other direction? What kind of sounds does Trinity produce which might instead project outward? Well, a 30-plus-piece orchestra of brass, wind, percussion and electric guitars will be part of that, but the buildings too make their own sounds. Most obviously, the bells of the Campanile, which are sounded at different times during the year, and for different functions. The Campanile actually houses three bells in total, including the Commons bell which softly rings at noon, and an even softer, little-known bell which is used to mark the death of significant figures within the university. However, the largest and loudest of these three bells, and the one which will play the biggest part in this piece, is the commencements bell, rung on numerous occasions each year to summon graduating students to the commencements ceremony. There's a beautiful, albeit somewhat mournful, tone to this bell, and I decided pretty quickly that this would inform the harmonic language and writing for the entire piece.

This particular topic will be looked at in more detail in the next post.

So What Exactly is 360 Audio and Video Anyway?

Well, it's pretty much exactly what it sounds like! Using special types of microphones and cameras you can record audio and video from all directions, at once. Surround sound recording has of course been around for a long time, but the ability to record matching 360 video is quite new. Perhaps the most familiar example of this is the Streetview mode of Google Maps; however, this type of technology is now being used to create 360 video as well as still images, and many new 360 cameras and recorders are starting to emerge. This includes both highly expensive camera rigs intended for film production, and much more affordable devices too (expect this to be built into many smartphones in coming years, although the question of where to put the selfie stick in a 360 image is yet to be resolved!).

The development of most of this new hardware is being driven by the emergence, or perhaps more accurately the re-emergence, of virtual reality (VR) in the past few years. While lots of fancy new VR headsets are set for release over the coming year, Google's cheap but surprisingly effective Cardboard headset has sparked a lot of interest, particularly for 360 video (as distinct from VR gaming). Although it is little more than folded cardboard and some plastic lenses (and requires a pretty high-spec smartphone), this cheap viewer certainly whets the appetite for the more sophisticated developments to come (the Paul McCartney live performance by JAUNT is well worth a look, as are the most recent Google Jump demos).

jaunt-vr-paul-mccartney-virtual-reality-concert

Of course, as is often the case, the development of the audio side of things has lagged behind somewhat and many of the current demos do not actually contain matching 360 audio (the Paul McCartney track is a notable exception). While the basic technology to create 360 audio has been around for decades, the precise way in which this material is recorded, produced and delivered for VR applications is still very much in a state of flux. Such a state is good news for researchers of course, and later posts will look in detail at some of our work in this area here in Trinity.

All of these technological developments should be of particular interest to composers of spatial music, as this new form of media lends itself very well to this type of music. In fact, you could argue (and I will!) that traditional concert presentations, with all of the musicians clustered in front, are very unsuitable for VR (how many times do you really want to turn around and look at the audience?). In contrast, a work of spatial music in which the musicians are placed all around the audience is very well suited to this type of presentation, in which the viewer can look around at will. In fact, many composers of spatial music have seated the audience in spiral or circular patterns to deliberately remove any emphasis on one direction over another (VR is perhaps even better in this regard as the viewer is not physically tied to a seat, or at least a 'virtual' seat anyway!).

universal_edition_large_40

This deliberate encouragement of different perspectives and viewpoints is a fundamental and indeed unavoidable aspect of spatial music which is very well matched to a virtual reality presentation (telling a story in VR is in contrast much more challenging).

So, given all of that, if we had a 360 camera and microphone rig, and we also had some 35-odd musicians, eight loudspeakers, a very large bell tower and a big open space to put them all in, what kind of spatial music can we create?

Well, that is the question, and an ongoing one! In later posts we’ll look at the logistics (which are many, and complex!) and setup of the orchestra for this piece, and we’ll talk about some of the music too (a little Henry Brant, and perhaps some spectral music too!).

So What Exactly is Spatial Music Anyway?

 

Spatial music is simply any form of music in which the placement or movement of sounds in space is a composed aspect of the work. While it is often associated with the development of electronic and electroacoustic music in the 20th century, spatial music is in fact much older. Call-and-response patterns can be found throughout history in many different cultures and musical traditions, and these are of course by definition a rudimentary form of spatial music. In Europe, the antiphonal call-and-response of medieval church music developed into the increasingly elaborate polyphonic choral music of composers such as Adrian Willaert, Andrea and Giovanni Gabrieli, and Orazio Benevoli. One notable example of this type of spatial music is Thomas Tallis' Spem in Alium, which was composed in c. 1570 for forty separate vocal parts divided among eight choirs (the Spatial Music Collective had lots of fun creating an electronic realization of this piece for eight loudspeakers at the 2009 Dublin Electronic Arts Festival!).

In the twentieth century, composers such as Charles Ives, Henry Brant and Karlheinz Stockhausen composed numerous works of spatial music involving multiple orchestras (Charles Ives – The Unanswered Question (1908), Karlheinz Stockhausen – Gruppen (1955-57)), large numbers of loudspeakers (Iannis Xenakis – Hibiki-hana-ma (1970)), and in one case, the entire city center of Amsterdam (Henry Brant – Fire on the Amstel (1984)). So what exactly is it that motivated these different composers to write this type of music?

Well, one of the advantages of spatially separating different groups of musicians or loudspeakers is that multiple, independent lines or musical layers can be more easily perceived and followed. Composers such as Charles Ives, Henry Brant and Karlheinz Stockhausen often used a spatial distribution of musicians or loudspeakers to facilitate both the performance and the audience’s perception of music containing many, simultaneous and complex layers of independent and sometimes quite dissonant material (described beautifully by John Cage as the “co-existence of dissimilars”). Of course this spatial separation is a practical necessity when musicians are performing entirely independent material at different tempos and in different keys, and these practical performance issues are another important aspect of this type of music (something which this blog will return to at a later date).

For electronic music, the use of more than two loudspeakers is also beneficial in all sorts of ways. Surrounding the listener with loudspeakers allows you to put the listener inside a recorded or synthesized sound scene in a way which cannot be easily achieved using simple stereo. In addition, the ability to dynamically move sounds around the listening space can be a hugely expressive aspect of electronic music, and can provide a really strong sense of physicality and gesture in the invisible or acousmatic world of music for loudspeakers alone. I can still remember the first time I encountered this type of music and being immediately struck by just how different it was to the electronic music I had encountered before. It's been a source of fascination ever since, and the focus of much of my work, both as a composer and an academic.

 

 

So spatial music is definitely not a new idea; what is new, however, is the ability to record matching audio and video from all directions using 360 audio/video hardware and virtual reality. Something we'll look at in the next post!