Spatial Audio and Reverberation Systems for Live Events

How they have evolved over recent decades to today's state of the art

Text: Ron Bakker Photographs: Bernhard Lösener, Markus Thiel, Archive

The audio recording and sound reinforcement industry has been spending huge amounts of money on reverberation systems ever since the advent of the tape echo. Why is this? The answer lies in the profound impact reverberation has on our hearing experience.

A considerable part of the brain stem and the auditory cortex is dedicated to positioning incoming sounds. Direct sound, of course, as this allows humans to ascertain from which direction a sound comes – an evolutionary skill developed to anticipate approaching predators, so we can decide to fight or run. But indirect sound also has a huge impact: it tells the brain whether there are surrounding walls, floors and ceilings – in other words, whether you are protected from rain (the ceiling) and from wind and dangerous animals (the walls).

Psycho-acoustics of reverberation

Feeling protected is a strong positive emotion, and it is triggered by hearing reflections of sounds that you make yourself. An equally strong positive emotion is triggered when another person makes a sound and the reflections tell you that you are in the same room – in other words: you are together. These two emotions correspond to the two levels directly above the physiological base of Maslow’s hierarchy of human needs: safety and belonging. Apply this to music, which has its own strong positive effect on our well-being, and it becomes clear that acoustics is a massive factor in the enjoyment of music. It is also the reason we started to build halls of a suitable size and acoustic absorption to accommodate an orchestra and an audience and to generate the best-sounding reflections. As a consequence, music genres started to develop and adapt to the acoustics of these halls.

Last, but not least: good acoustics support the performer. The room ‘talks back’ and makes it easier to play, while hearing the other musicians in a duo, ensemble or orchestra creates an intimate togetherness that is a strong motivator to make beautiful music. Good acoustics thus have a double effect for the listener: the performance is better, and the sound is better – the listener is the overall winner.

Reverberation machines: from magnetic-mechanical to digital

Leo Beranek’s book “Concert Halls and Opera Houses” is dedicated to ranking 100 music performance spaces by their acoustics. Every 1st of January we can all enjoy the New Year’s Concert from the hall that orchestra conductors rank as number one: the Golden Hall of the Musikverein in Vienna. But what if one is not in the Golden Hall? For example, in an absorptive drama theatre or a small club, or when a piece is recorded with close miking that captures no room acoustics, possibly with electric or digital instruments that exist only as voltages or bits? Here is where technology comes to the rescue.

Since the beginning of music recording and broadcast, some studios placed loudspeakers and microphones in a reverberation chamber – something that is difficult in a live situation. As an alternative, mechanical plate and spring reverb units came into use. In the seventies, electro-magnetic effect machines arrived; one of the first commercially available, the Roland RE-201 Space Echo, used an endless tape loop to generate a decaying echo. Not very realistic as a reverb, but a start. The real breakthrough for recordings and live sound came when digital signal processors became commercially available, with the innovative yet expensive EMT 250 and Lexicon 224 pioneering the field in the late seventies. The more economical Lexicon PCM60 and Yamaha SPX90 followed in the mid-eighties and soon became standard tools in a live sound system rack. All these processors featured a feedback delay network (FDN): a set of digital delays with feedback loops which, carefully designed and tuned, can mimic a natural reverberation tail quite well.
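
To make the idea concrete, here is a minimal FDN sketch in Python. The four-line topology, Hadamard feedback matrix, delay lengths and gains are illustrative assumptions, not the design of any of the units mentioned above.

```python
import numpy as np

FS = 48000                          # sample rate (Hz)
DELAYS = [1031, 1327, 1523, 1871]   # mutually prime delay lengths (samples)
FEEDBACK = 0.85                     # loop gain, kept below 1 for stability

# Orthogonal 4x4 Hadamard matrix, scaled to preserve energy in the loop
H = 0.5 * np.array([[1,  1,  1,  1],
                    [1, -1,  1, -1],
                    [1,  1, -1, -1],
                    [1, -1, -1,  1]])

def fdn_reverb(x, tail_seconds=2.0):
    """Run mono input x through the FDN; returns dry signal plus tail."""
    n_out = len(x) + int(tail_seconds * FS)
    y = np.zeros(n_out)
    lines = [np.zeros(d) for d in DELAYS]    # circular delay-line buffers
    idx = [0] * len(DELAYS)                  # read/write positions
    for n in range(n_out):
        dry = x[n] if n < len(x) else 0.0
        outs = np.array([lines[i][idx[i]] for i in range(4)])  # line outputs
        fb = FEEDBACK * (H @ outs)           # mix outputs through the matrix
        for i in range(4):
            lines[i][idx[i]] = dry + fb[i]   # write input + feedback back
            idx[i] = (idx[i] + 1) % DELAYS[i]
        y[n] = dry + outs.sum() * 0.25       # tap the tail from the lines
    return y

# An impulse in gives a dense, exponentially decaying tail out
impulse = np.zeros(FS // 2)
impulse[0] = 1.0
tail = fdn_reverb(impulse)
```

Because the feedback matrix is orthogonal and the overall loop gain stays below 1, the tail decays smoothly; the mutually prime delay lengths keep the echoes from piling up into audible periodic patterns.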

The convolution revolution

The early machines featured a network with a limited number of feedback delays; later machines developed into much larger and more complex FDN configurations, made possible by the increase in the processing power of chips in the nineties. At the end of the nineties, processing power finally became big enough to do something that had happened a decade earlier in the recording world, where the speed and memory size of computer systems had made sampling possible: manipulating sound at the highest quality in real time. Applied to reverberation, this yields a technology called convolution: sampling the impulse response of an existing space, deriving the transfer function of the space from it, and multiplying it in real time with an audio stream. This is a heavy processing task, first realized in commercial “sampling reverb” hardware effect machines at the turn of the century by two of the planet’s audio technology giants: Sony (with the 777) and Yamaha (with the SREV1). It was a milestone achievement in hardware processing, but the ever-increasing processing power of computers rapidly overtook both designs. The first to exploit this was Audio Ease, a small company in the Netherlands, which realized in the late nineties that the AltiVec arithmetic core in the brand-new Motorola PowerPC G4 processor allowed convolution to be carried out by a commercially available and therefore relatively inexpensive Power Mac G4. Their product was a piece of software named Altiverb, paying homage to the AltiVec core that made it possible. This software “plug-in” technology was soon developed further by many other companies, to the point where there are almost no hardware reverberation units any more – only plug-ins that can be loaded into digital audio workstations and mixing consoles on any computing platform.
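
Offline, the idea reduces to a few lines of code. The sketch below assumes the impulse response is already available as an array (a synthetic stand-in is generated here); real-time engines typically use partitioned FFT convolution to keep latency low, which is omitted for brevity.

```python
import numpy as np
from scipy.signal import fftconvolve

def convolution_reverb(dry, ir, wet_mix=0.3):
    """Convolve a dry mono signal with a room impulse response."""
    wet = fftconvolve(dry, ir)                # FFT multiply under the hood
    wet /= np.max(np.abs(wet)) + 1e-12        # normalize to avoid clipping
    out = wet_mix * wet
    out[:len(dry)] += (1.0 - wet_mix) * dry   # tail extends past the dry end
    return out

# Stand-in for a sampled hall: exponentially decaying noise, 1.5 s long
fs = 48000
t = np.arange(int(1.5 * fs)) / fs
fake_ir = np.random.randn(len(t)) * np.exp(-3.0 * t)
processed = convolution_reverb(np.random.randn(fs), fake_ir)
```

With a real measured impulse response in place of the decaying noise, every echo and resonance of the sampled space is reproduced exactly, which is what made the technique so compelling.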

The spatial revolution

But this is by no means the end of the development of artificial reverberation. On the contrary, we have just started – and this has to do with another technology made possible by superfast computers: object-based mixing. This took off after the turn of the century with research by institutes such as Germany’s Fraunhofer Institute, soon joined by the world’s sound reinforcement manufacturers. Generally, object-based mixing is done by a dedicated computer – a renderer – running software that takes in many audio streams (objects) and mixes them to outputs (channels) according to metadata travelling with each stream: the coordinates at which the object should be perceived. The trick is that the software knows where all the speakers in the sound reinforcement system are; based on the object coordinates, an algorithm sends each individual audio stream to the appropriate speakers so that listeners perceive the object at its intended position.
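
As an illustration of the object-to-channel principle, here is a heavily simplified renderer sketch using distance-based amplitude panning over a hypothetical five-speaker layout. Production renderers use far more sophisticated algorithms (and know much more about the room), so treat this only as a demonstration of the idea.

```python
import numpy as np

# Hypothetical layout: three frontal and two surround speakers (x, y in metres)
SPEAKERS = np.array([[-4.0, 6.0], [0.0, 6.0], [4.0, 6.0],
                     [-5.0, -2.0], [5.0, -2.0]])

def render_gains(obj_pos, rolloff=2.0):
    """Per-speaker gains for one object position, normalized to unit power."""
    dist = np.linalg.norm(SPEAKERS - obj_pos, axis=1)
    g = 1.0 / (dist ** rolloff + 1e-6)        # nearer speakers get more level
    return g / np.sqrt(np.sum(g ** 2))        # constant power while panning

def render(objects, n_samples):
    """Mix a list of (audio, position) objects into one buffer per speaker."""
    buses = np.zeros((len(SPEAKERS), n_samples))
    for audio, pos in objects:
        for ch, g in enumerate(render_gains(pos)):
            buses[ch, :len(audio)] += g * audio
    return buses

# Two objects: one slightly left of centre stage, one hard right
fs = 48000
sig = np.random.randn(fs)
buses = render([(sig, np.array([-1.0, 4.0])), (sig, np.array([4.5, 0.0]))], fs)
```

The essential point survives even in this toy version: the mix is no longer a fixed set of channels but a function of object positions, recomputed whenever the metadata changes.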

This is a whole new chapter in sound reinforcement, adding “immersive rendering” as a new processing function between the mixing console and the speaker system. The result is breathtaking, lifting the quality of sound production to the next level. Until recently, immersive audio technology was quite expensive, affordable only for large, high-budget productions. Software and computer prices have now dropped to a level that makes the technology suitable for mid-size and even small productions, with the cost of the immersive processing and user interfaces becoming less significant than the cost of the loudspeakers involved.

The thing is… until now, the reverberation for a mono or stereo production has been designed for reproduction by a frontal system, with sounds coming from the front and the effect send to the reverberation unit being a mono signal most of the time. If we do the same in an immersive system, we spoil the quality: the reverberation stays the same when an object’s position changes – the magic is gone. To match the quality of object-based mixing, we have to introduce object-based reverberation: the reflections change when an object’s position changes. This is a dramatic change in reverberation processing: every single object in an immersive system has its own effect send, and it is not mono but immersive, with multiple independent reflection processing algorithms. And it makes a huge difference: the artificial reverberation can now do what a real concert hall does – every listener in the hall hears the correct reflections in relation to the position of the direct sound from each of the objects. Remember that objects represent sounds, and sounds represent someone playing an instrument or singing: human beings. Remember also that our brain stem and auditory cortex are constantly analysing direct sound and reflections to form perceptions like togetherness, clarity, warmth and safety, directly driving the limbic system that takes care of our emotions. The effect is intense.
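
A minimal sketch of the principle: a first-order image-source model in which each object’s early reflections are recomputed from its position, so moving the object moves its reflections too. The room size, wall absorption and listener position are illustrative assumptions, and a real system would add a dense late tail on top of these early reflections.

```python
import numpy as np

FS, C = 48000, 343.0              # sample rate (Hz), speed of sound (m/s)
ROOM = np.array([20.0, 30.0])     # rectangular room, width x depth (metres)
ABSORB = 0.3                      # wall absorption coefficient (illustrative)

def image_sources(obj):
    """First-order mirror images of an object across the four walls."""
    x, y = obj
    return [np.array([-x, y]), np.array([2 * ROOM[0] - x, y]),
            np.array([x, -y]), np.array([x, 2 * ROOM[1] - y])]

def early_reflections(audio, obj, listener):
    """Direct sound plus four reflections that move with the object."""
    out = np.zeros(len(audio) + FS // 2)
    d0 = np.linalg.norm(obj - listener)
    out[:len(audio)] += audio / max(d0, 1.0)           # direct path
    for img in image_sources(obj):
        d = np.linalg.norm(img - listener)
        delay = int(FS * d / C)                        # propagation delay
        gain = (1.0 - ABSORB) / max(d, 1.0)            # one wall bounce
        seg = out[delay:delay + len(audio)]
        seg += gain * audio[:len(seg)]                 # delayed, attenuated copy
    return out

# Move the object and the reflection pattern changes with it
sig = np.random.randn(FS // 10)
listener = np.array([10.0, 5.0])
near = early_reflections(sig, np.array([5.0, 10.0]), listener)
far = early_reflections(sig, np.array([15.0, 25.0]), listener)
```

Running this per object, per listener position, is exactly what makes object-based reverberation so much heavier than a single shared mono reverb send – and so much more convincing.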

Back to acoustics

Are we done yet? The answer is a definite no. The perception and enjoyment of reverberation is imprinted in our brain functions by eons of human development, and we have only just managed to get the technology to tap into it. Some manufacturers spend a lot of research and development on measuring, modelling and regenerating reflections exactly as they occur in a real concert hall; others focus on psycho-acoustic modelling, trying out algorithms with listening panels to figure out which work best. Most manufacturers do both, some also design sound reinforcement systems, and some also design musical instruments – completing the full circle. Speaking of which… what if a music performance includes both recorded and digital sound sources, which need sound reinforcement, as well as classical acoustic instruments and vocals for which amplification is not appropriate? This has been covered since the late eighties by manufacturers of acoustic enhancement systems, coincidentally using the same technologies as digital reverberation and object-based mixing: fast computers and a good understanding of acoustics and psycho-acoustics.

This technology is based on a method that does not use digital reverberation at all: regenerating reflections in an existing acoustic space with nothing more than microphones and loudspeakers, increasing the sound intensity of the diffuse field just as decreasing the acoustic absorption of the room’s walls and ceiling would. Invented by Philips in the late seventies, this method extends the existing diffuse reverberation field in an absolutely natural way. A decade later, Yamaha added digital reverberation processing to the concept, introducing the “hybrid regenerative” method: bending the rules of acoustics a little by using advanced processing, but still providing a stunningly natural result. This method is now used by most manufacturers of acoustic enhancement systems and is accepted by orchestras, their conductors and their audiences as being of high enough quality for many world-class opera houses and concert halls. The combination of object-based mixing and hybrid regenerative acoustic enhancement is a step further in creating exciting performances. As said, we are not done yet – there is still a world to discover.
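
The regenerative effect can be sketched with a toy model: reduce the room to a single feedback delay with natural round-trip gain R, and let the enhancement system add an electro-acoustic loop gain G on top. All values are illustrative; real systems use many microphone and loudspeaker channels precisely to raise the achievable enhancement while remaining stable and natural-sounding.

```python
import numpy as np

FS = 48000
D = 2400                 # round-trip delay of the modelled room (samples)
R = 0.7                  # natural round-trip gain of the room's surfaces
G = 0.2                  # added electro-acoustic loop gain (R + G < 1!)

def decay_time(loop_gain):
    """RT60 of a single feedback loop: time to decay by 60 dB."""
    db_per_trip = -20.0 * np.log10(loop_gain)
    return 60.0 / db_per_trip * D / FS

print(f"natural room   RT60 ~ {decay_time(R):.2f} s")
print(f"enhanced room  RT60 ~ {decay_time(R + G):.2f} s")
```

Even this crude model shows the lever the system pulls: raising the effective loop gain from 0.7 to 0.9 stretches the decay from roughly one second to over three, without any artificial tail being synthesized – the room itself does the reverberating.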

Ron Bakker is a marketing manager at Yamaha Music Europe and involved in the design and support of Yamaha’s SoundXR immersive systems.