Sound for theatre


2 December 2013

Using Ableton Live for sound cues in theatre productions.

Having just moved back up North with some free time for a couple of weeks, I volunteered my services as a sound engineer to a local theatre. I haven’t much experience working in straight theatre, so I thought this would be a good opportunity to learn something and help out a local amateur dramatics group at the same time. It was quite a learning process, so I am writing up the main things I learned; had I known then what I know now, I would have saved myself a lot of time and trouble.

The theatre was looking for extra help exactly when I contacted them, so I was immediately engaged on their latest production, which happened to require more sound than they would normally use: atmospheric sounds running in the background, incidental music, music during scene changes and sound FX. Much of the sound design had already been completed, so I was taking over a project mid-stream.

Their in-house system is Show Cue System on a Windows laptop, with the stereo headphone output feeding a small-format analogue mixer. The laptop is moved between the rehearsal space and the main auditorium, and is taken home to work on the sound design. I adopted this system to begin with and really liked SCS. It is very similar to Qlab, which I’ve used a few times, so I felt immediately at home with it. For straightforward projects, SCS and Qlab are perfect in their simplicity. But this kind of software has a flat, non-hierarchical cue list, which gets very messy when several layers of sound are fading in and out. After a few dozen cues had been prepared, the cue list was becoming ever more complicated and confusing. In the end I gave up on SCS and moved to Ableton Live, which I hoped would simplify inserting new cues into the cue list and moving sound clips between tracks, and generally provide a more sophisticated mixing platform.

Ableton Live allows similar cue-list triggering to SCS and Qlab, because it automatically selects the next scene when the current scene is triggered: during a performance, the sound operator just hits the enter key repeatedly, or whatever MIDI controller button is assigned to scene launch. So far so good. However, Live initially resists use in theatre due to the lack of fade-outs in Session View. Fade-ins are easy – just add a volume envelope to the clip. But fading out a clip on demand without stopping it seems impossible, at first. The solution is actually quite simple – give each audio track a sister track that takes audio from the first track and applies automation envelopes on demand, without stopping the clip.

The FX1 track in the above image contains one audio clip. This track has no audio output. FX1A is the automation track for FX1: its input is set to FX1’s output and monitoring is turned on, so audio from FX1 passes through it. The two clips in track FX1A are dummy clips of silence – there to provide somewhere to place automation envelopes such as track volume, panning and sends. The first clip in FX1A has just one envelope point, at 0dB on the track volume envelope; it is there to make sure the track volume is turned up on FX1A when that scene is launched. I gave this clip the name “^” to indicate that it just puts the fader up immediately. The second clip in FX1A is a fade-out. When scene “3B Applause off” is launched, clip “>” fades out to silence without stopping the audio clip in FX1, because the clip stop button has been removed from the second slot in the FX1 track.
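The fade-out drawn in the dummy clip is just a track-volume ramp over time. As a rough illustration – not Live’s internals, and the linear shape and timings are assumptions – the envelope can be modelled as a gain function:

```python
def fade_out_gain(t, start, duration):
    """Gain applied to audio passing through the automation track:
    unity before the fade starts, ramping linearly to silence over
    `duration` seconds, then staying at zero."""
    if t <= start:
        return 1.0
    if t >= start + duration:
        return 0.0
    return 1.0 - (t - start) / duration
```

Because the envelope lives on the pass-through track rather than on the clip itself, the clip in FX1 keeps playing underneath; only the monitored copy is attenuated.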

In this project, I had two FX tracks (each with a corresponding automation track) so that I could cross-fade between effects, or have more than one sound effect at the same time. I also had tracks for music, background music, atmosphere and radio. The audio output from the automation tracks then went to four further tracks that functioned as output mix busses, so that I could balance the different types of sounds on the mixing console during the performance.

Whilst an annoyance at first, having to separate fades from audio clips actually works very well – a bank of ‘preset’ fades can be copied to other clips quickly and it just takes a quick glance to see what is going on.

I used a Korg Kaoss Pad 3+ to trigger scenes, with buttons mapped to play scene, next scene, previous scene and stop all. This let me move through scenes easily when cues needed to be skipped or played again. Mapping a knob to scene select allowed very quick scrolling through scenes, which helped a lot during rehearsals, where there was a lot of jumping about. After loading the project in Live, I didn’t need to touch the computer again during the performance. I used a MIDI thru box to provide two MIDI outputs so that I could control both the main computer and a backup computer that followed the performance; should the main computer fail, I could unmute the backup computer’s tracks on the mixing console for near-seamless resumption of sound.
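The go/next/previous behaviour mapped onto the Kaoss Pad buttons amounts to a pointer walking down a list of scenes. Live provides this behaviour itself; the following is only a toy sketch of the logic, not Ableton’s API:

```python
class CueList:
    """Toy model of 'go'-style scene advancement (illustrative only)."""

    def __init__(self, scenes):
        self.scenes = scenes
        self.index = -1  # nothing launched yet

    def go(self):
        """Launch the next scene, as when the operator hits enter."""
        self.index = min(self.index + 1, len(self.scenes) - 1)
        return self.scenes[self.index]

    def back(self):
        """Step back one scene, for re-running a cue in rehearsal."""
        self.index = max(self.index - 1, 0)
        return self.scenes[self.index]
```

The scene names here are made up; the point is that the operator only ever needs one “go” control, with next/previous as an escape hatch when rehearsals jump about.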

Overall, I felt very safe running the sound this way. Live is completely stable, CPU-efficient and glitch-free, and inspires confidence when it simply must not crash. This system works very well – it’s quick to implement, straightforward to adjust and provides an immediate visual representation of the production.

The potential of using Live in this way is enormous – clips and live sound sources can be faded in/out, spatialised and passed through effects at will. A MIDI fader controller could be mapped to special tracks that take audio from different sources to provide a kind of DCA control, so that the operator can adjust the levels of whichever eight tracks are relevant at a particular time. Automation could be recorded on the fly. A timecode track could sync video. MIDI clips can be used to automate other software and hardware devices, including a digital mixing console. And now that Max4Live exists, Ableton Live can control, or be controlled by, almost any hardware or software.
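Automating an external digital console from a MIDI clip ultimately comes down to emitting Control Change messages. A sketch of the raw bytes involved – the channel and controller numbers here are console-specific assumptions, not taken from any particular desk:

```python
def midi_cc(channel, controller, value):
    """Build a raw 3-byte MIDI Control Change message: status byte
    0xB0 OR'd with the channel (0-15), then the controller number
    and value (both 0-127)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])
```

Sent from a MIDI clip launched with a scene, a stream of such messages could ride a console fader in step with the cue list.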

I normally use Max/MSP for any project too complex to implement in Qlab or Kontakt. But now that I have learned how to automate on demand in Ableton Live, it is hard to justify spending so much time programming Max to do the same, when I could spend that time concentrating on the quality of the sound rather than the quality of the programming.

When you are simultaneously listening to the actors, the sound effects and the stage manager, and looking at the stage, script, mixer and computer screen, you need whatever software you are using to be as clear as possible. It doesn’t have to be simple; it merely has to present only the information you need to see. The flexibility of Live’s user interface allows for this.
