A New Camera Rig, Distance Processing Experiments, and “Vortex Cannons”

The Music & Media Technologies programme at Trinity College Dublin recently celebrated its 20th birthday with a concert in the Samuel Beckett Theatre, and we decided to use this opportunity to try out our new camera system and microphone: a GoPro Omni and a Sennheiser Ambeo.


The concert featured numerous performances by the Crash Ensemble of works by composers such as Miriam Ingram, Enda Bates, Natasa Paulberg, Neil O’Connor, Maura McDonnell, and Conor Walsh/Mark Hennessy. However, a couple of performances seemed especially well suited to 360 video, in particular The Sense Ensemble / Study #2 by George Higgs, for string quartet, vortex cannons, silent signing singer, and percussion.

George is currently pursuing a Ph.D. at Trinity College Dublin entitled ‘An Approach to Music Composition for the Deaf’, and here’s his programme note for the piece:

Music involves much more than hearing. All of our senses – arguably nine in number – are in fact collaborating in our musical experience as a kind of ‘sense ensemble’. This composition is the second in an experimental research series exploring approaches to music composition for deaf audiences; or more generally music that appeals to the multiple senses responsible for our appreciation of music. The performance features smoke ring cannons, two signing percussionists and string quartet. Many thanks to Neimhin Robinson (smoking signer), Dr Dermot Furlong (nonsmoking supervisor), Jessica Kennedy (choreographic consultant), and the Irish Research Council.

While the content of this performance was very well suited to the medium (those smoke cannons in particular), the conditions were highly challenging for the video shoot, and so this was a good test of the limitations of the GoPro Omni system.

The most notable feature of this rig is undoubtedly the synchronisation of the six GoPros, which so far has been very stable and trouble-free. Once the firmware is updated, all six cameras can be controlled from one master camera, which can also be paired with the standard GoPro remote control. If you purchase the rig-only Omni, note that the power pack and stitching software need to be purchased separately; however, we were just about able to snake our six USB power cables up along the tripod and into the cameras without too much difficulty.



To achieve this synchronisation, the cameras attach to the central brain of the rig; however, the extra space this requires means the cameras are not positioned as close together as physically possible. As a consequence, the rig and stitching software struggle when moving objects or people get too close to the camera, as can be seen in the above video when George and Neimhin approach the smoke machines to fill the vortex cannons.

Stitching is implemented using a dedicated GoPro Omni Importer app, and for simpler shots in which nothing is moving too close to the rig, this does a pretty good job. In general, however, at least some touching up of the stitch is required using a combination of Autopano Video Pro and Autopano Giga. Visible stitch lines on static objects or people are relatively easy to correct and simply require some adjustments with the masking tool in Autopano Giga. This tool allows markers to be added to the reference panorama image so that stitch lines avoid certain areas and noticeable artefacts are removed (or at least reduced).

For static objects this can usually be achieved relatively easily; however, it is definitely worth considering how you orientate the camera rig with that in mind. By default the Omni is mounted on one of the corners of the cube, but it could be worth adding an attachment to the tripod so the rig is mounted flat, depending on the particular setup for the shoot (we may have had better results here using that orientation).

The particularly challenging aspect of the stitch for George’s piece was the movement of the two performers, which required further processing using the timeline in Autopano Video. The process is similar: again we use the masking tool in Autopano Giga to selectively preserve specific areas of the reference panorama, but now we use the timeline in Autopano Video to move between different sets of markers as the person or object crosses a stitch line. This can be pretty effective as long as the objects are not too close, as the following tutorial from CV North America demonstrates. However, if the action is happening within a few metres of the rig, then stitching artefacts may be unavoidable, or at least extremely time-consuming to eliminate entirely (as can clearly be seen at times in the video of George’s piece).

The particular lighting needed for this piece also presented some challenges for the stitch. In order to light the smoke rings, two fairly powerful spotlights were directed over the audience and directly down onto the stage (and therefore also the camera rig), which resulted in exaggerated brightness and shine on the performers’ faces (and those white coats too!).

In contrast to the above, the stitching for the second video from this concert was much more straightforward. For this piece by Miriam Ingram, the musicians were all at a safe distance from the camera, so only a few small touch-ups were required, again just using the masking tool in Autopano Giga to select specific areas within the reference panorama.


The audio for both of these videos was recorded using a Sennheiser Ambeo microphone and Zoom H6 recorder, mounted just in front of and below the camera rig. We will be publishing some specific analysis of this microphone in the second part of our Comparing Ambisonic Microphones Study early in 2017; more informally, however, I’ve been very impressed by this microphone. The build quality feels very good, it outputs a high signal level that performs well with average-quality mic preamps such as those in the Zoom recorders, and the accompanying conversion plugin is very straightforward.

Although marketed as a “VR mic”, this is actually an almost identical design to the original Soundfield microphone, which has been in use for many decades. The photo below on the left shows the capsule arrangement within the Ambeo, which follows the same tetrahedral layout of four microphones as the original Soundfield microphone (shown on the right).
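Because the four capsules sit in this tetrahedral arrangement, the raw A-format capsule signals can be converted to first-order B-format (W, X, Y, Z) with a simple sum-and-difference matrix. Here is a minimal sketch in Python, assuming a capsule ordering of front-left-up, front-right-down, back-left-down, back-right-up, and ignoring the capsule-spacing equalisation that a real conversion plugin such as Sennheiser’s also applies:

```python
import numpy as np

# Assumed capsule order: FLU, FRD, BLD, BRU
# (front-left-up, front-right-down, back-left-down, back-right-up).
A_TO_B = 0.5 * np.array([
    [1.0,  1.0,  1.0,  1.0],   # W: omnidirectional sum
    [1.0,  1.0, -1.0, -1.0],   # X: front minus back
    [1.0, -1.0,  1.0, -1.0],   # Y: left minus right
    [1.0, -1.0, -1.0,  1.0],   # Z: up minus down
])

def a_to_b(a_format):
    """Convert A-format capsule signals, shape (4, n_samples),
    to B-format (W, X, Y, Z) of the same shape."""
    return A_TO_B @ a_format
```

For example, a signal arriving equally at all four capsules (i.e. diffuse or on-axis with none of the figure-of-eight components) ends up entirely in W, with X, Y, and Z cancelling to zero.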


As is often the case in 360 audio mixes, distance is an important factor to consider. For acoustic performances, recordings such as this can often sound very distant, generally because the microphone is placed alongside the camera rig, further from the performers than would be typical for an audio-only recording. This can result in a lack of directionality in the recording and require additional spot microphones. However, for this shoot we had the opposite problem due to the close miking and amplification of the musicians through the venue PA. As the mic and camera rig were positioned in the front row of seating, the PA loudspeakers were positioned very wide relative to the mic, resulting in an overly close sound compared to the visual distance. This was particularly noticeable for the string quartet in George’s piece, which initially sounded much too wide and close.

To correct this issue, some simple yet surprisingly effective distance processing was applied to the Ambeo recording, namely the addition of some early reflections to the W channel. This was very much a quick experiment using just the early-reflections component of a commercial reverb plugin, but as the results were pretty good it made it into the final mix. As a demonstration, the audio samples below contain static binaural mixes (at 30° and −90° respectively) of an excerpt of the piece. Each sample begins with two bars of unprocessed material, then two bars with the additional reflections, and so on for another four bars.
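As a rough illustration of the idea (not the plugin’s actual algorithm), adding early reflections to W amounts to mixing a handful of delayed, attenuated copies of the omni signal back into itself. The delay and gain values below are invented for the example, not those used in the mix:

```python
import numpy as np

def add_early_reflections(w, sample_rate,
                          delays_ms=(11.0, 17.0, 23.0, 31.0),
                          gains=(0.35, 0.30, 0.25, 0.20),
                          mix=0.5):
    """Mix a few discrete early reflections into the W channel.

    w           : 1-D array, the omni (W) channel of a B-format recording
    delays_ms   : reflection arrival times in milliseconds (illustrative)
    gains       : per-reflection attenuation (illustrative)
    mix         : overall wet level of the added reflections
    """
    out = w.copy()
    for d_ms, g in zip(delays_ms, gains):
        d = int(round(d_ms * 1e-3 * sample_rate))
        delayed = np.zeros_like(w)
        delayed[d:] = w[:-d] * g       # delayed, attenuated copy
        out += mix * delayed
    return out
```

In a real session you would tune the delays and gains by ear against the visual distance, which is exactly what makes the open questions below interesting.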

This type of distance processing, for both ambisonic recordings and encoded mono sources such as spot microphones, will be the focus of my research in the coming year, as there are still many unknowns in this whole area. For example, just how important is it that the pattern and timing of these early reflections match the physical dimensions and acoustics of the space? Alternatively, can better results be achieved using some general method, perhaps such as the distance panpot suggested back in 1992 by Peter Craven and Michael Gerzon [1]?
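For a flavour of what a distance panpot does, here is a crude broadband sketch: a mono source is encoded to horizontal first-order B-format with an overall 1/r level attenuation, plus a reduction of the directional (X, Y) gain relative to W as distance grows, so that distant sources sound less sharply localised. Gerzon’s actual design is frequency dependent; the `focus` term and 1/r law here are purely illustrative simplifications:

```python
import numpy as np

def distance_panpot(mono, azimuth_deg, distance, ref=1.0):
    """Encode a mono signal to horizontal B-format (W, X, Y) with a
    crude distance cue. `ref` is the reference distance at which the
    source is encoded at full level and full directivity."""
    az = np.radians(azimuth_deg)
    r = max(distance, ref)            # no boost inside the reference radius
    level = ref / r                   # overall 1/r attenuation
    focus = ref / r                   # directional gain shrinks with distance
    w = level * mono / np.sqrt(2.0)   # W carries the usual -3 dB convention
    x = level * focus * np.cos(az) * mono
    y = level * focus * np.sin(az) * mono
    return np.stack([w, x, y])
```

Even this naive version makes the trade-off clear: pushing a source away reduces both its level and its apparent directional sharpness, which is part of what the early-reflection approach above achieves by other means.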

There is also some evidence that the optimal number and distribution of early reflections for distance processing without excessive timbral coloration depends on the nature of the source signal, which suggests that a one-size-fits-all solution may not be the best approach. Let’s just say this is definitely a topic we’ll be returning to in 2017, and now that we have our hardware and workflow all sorted, expect a lot more 360 videos over the coming year.

[1] M. A. Gerzon, “The Design of Distance Panpots”, Proc. 92nd AES Convention, Vienna, 1992, preprint 3308.