Cherry Docs

My first official sound effects gig was on a drama called Cherry Docs, written by David Gow, directed by Damir Andrei. Cherry Docs was originally a stage play, and is about a liberal Jewish lawyer defending a neo-Nazi skinhead against a murder charge. Or rather, it’s about the journey these two men take together as they confront one another’s prejudices and their own. Or rather, it was about me learning how to make sound effects for a radio play.

Playwright David Gow

Because the truth is, I remember virtually nothing about Cherry Docs itself. I had to look up the plot. This has nothing to do with the quality of the play, which is quite well regarded. It has to do with the fact that we recorded it a long time ago, and as we were making it, I wasn’t thinking about the story as much as I was thinking about how sound effects could help tell that story.

I had been schooled in the basics of the craft. I knew to comb the script to figure out what sound effects were required. I knew to divvy them up into three categories: sound effects that I would perform live with the actors, sound effects that I would create and record separately, and sound effects that I would source from CDs.

Director Damir Andrei

On the first day of recording with the cast, my very first sound effect was lighting a match for the main character, a foul-mouthed, violent neo-Nazi skinhead played by Randy Hughson. Hughson’s character was supposed to be smoking a cigarette.

Why was I required to light the match? Couldn’t Randy have lit the match himself? For that matter, couldn’t Randy have performed all the sound effects himself? It’s true, Randy could have lit the match. But he probably wouldn’t have known where to light it in relation to the microphone. Lighting the match too close or too far away could have ruined a perfectly good take.

Also, lighting a match is simple, but it’s just one example. Sound effects sequences could be a lot more complex. Sometimes several sound effects were required during a single take. We preferred actors to concentrate on their performances rather than having to clink glasses, light matches, pretend to tromp around on snow and so on.

And then there was the business of how to create the sound effect to begin with. It wasn’t always exactly an intuitive process. Lighting a match is pretty straightforward. Lots of other sound effects aren’t. There are tricks, such as waving a thin stick in front of a mic to create the whoosh of an arrow, or touching a rag to a hot surface to create the sound of frying. We had an entire room full of bizarre contraptions and knick-knacks capable of making all sorts of weird sounds. Devices for making wind, doorbells, screen doors, the sound of someone getting hanged, or their head chopped off. It was useful to have someone around who knew where all these contraptions were, and how to make them work.

Actor Randy Hughson

Anyway, there I was, the alleged sound effects specialist about to perform my very first professional sound effect. On the first take, at the appropriate point in the script, I dutifully lit the match, and promptly dropped the lit match in Randy’s hair. Fortunately, I was able to blow the match out before any damage could be done, but I was mortified. Thank God Randy wasn’t actually the foul-mouthed, violent neo-Nazi skinhead he so effectively portrayed!

(I actually did see someone’s head burst into flames once. Fellow recording engineer Wayne Richards invited me to a party at his house at which he opted for candles over electric lighting. Joram Kalfa and I were in the kitchen talking to a young woman with long red hair when she stood too close to one of the candles. Her hair caught fire with a great whoosh. Within seconds her head was a great ball of flame. It was something to behold. Rather than admire it, quick-thinking Joram stepped forward, took a deep breath, and blew the woman’s head out as though it were a giant birthday cake candle. Her hair was slightly singed but she was fine.)

I mentioned that the main character of Cherry Docs was an intensely hostile neo-Nazi. This set the stage for a slightly surreal moment when Damir, the director, instructed the actors to “just take it down to stupid f***ing paki at the bottom of page twelve.” Everyone laughed at Damir’s apparent obliviousness to the extremely offensive nature of what he’d just said (reflecting sentiments which, I hasten to add, no one present endorsed).

Shortly after my inadvertent attempt to set Randy on fire, the fire alarm in the Broadcast Centre went off. This was a complete coincidence, having nothing to do with my incident with the match. Moments later, standing on John Street alongside the rest of the occupants of the Broadcast Centre waiting to get back inside, Randy turned to me and asked, “So how long have you been doing sound effects?”

I looked at my watch. “About fifteen minutes,” I said, much to the amusement of recording engineer Greg DeClute.

Back in the studio, I recorded as many sound effects as I could with Randy and the rest of the cast. Recording sound effects with the actors is usually a good idea. Not only does it ensure that the sound effects are recorded in the right ambient space, it enhances performances as actors respond to the sound effects in the moment. It also makes for less work in post.

Still, it wasn’t something I particularly enjoyed. I always felt slightly embarrassed doing sound effects with actors. Sometimes the sound effects felt silly, such as using a knife and fork to eat an invisible breakfast on an empty plate. Or I’d make a stupid mistake, such as almost setting Randy Hughson’s hair on fire. We had two dedicated sound effects specialists on staff, Matt Willcott and Anton Szabo, guys who actually knew what they were doing. Me, I was just a dilettante. I never forgot that. Still, whenever called upon to perform live sound effects, I always did the best that I could.

SFX in Studio 212

Once I was finished with the cast, I turned my attention to recording wild sound effects, a process called “foley” after Jack Donovan Foley, a pioneer in the field of film sound effects. Foley is the process of recording sound effects in isolation. They’re mixed into sound tracks afterwards. I was a lot more comfortable doing foley than performing sound effects with actors.

Foley can be recorded anywhere. I recorded most of the sound effects I needed for Cherry Docs on the floor of Studio 212. Over the years my colleagues and I recorded car doors, squeaky doors, jail cells, elevators, breaking plates, baths, showers, decapitations, hangings, sword fights, fist fights, even gunshots in various parts of 212. For Cherry Docs, some of the action took place in a car, so I spent one afternoon recording myself driving my Pontiac Sunbird, speeding up, slowing down, turning, using the windshield wipers, buckling the seatbelt, and so on. We often talked about preserving and cataloguing the sound effects we created ourselves, to save time on future productions, but nobody ever got around to it.

Any sound effects that I didn’t record with the cast or as foley I sourced from CD. We had quite an elaborate sound effects collection. Thousands if not tens of thousands of sound effects, collections from Canada, Britain, the US, with names like Sounds of a Different Realm, Evil FX, Hollywood Edge, Top Secret, Wacky World of Robots, Widgets and Gizmos, Star Trek, Sound Ideas, and so on. Despite the breadth of our collection, it didn’t have everything, which is why we often had to create our own sound effects.

While I was busy recording and gathering sound effects, recording engineer Greg DeClute created the dialogue edit, choosing all the best performances from the actors and making a single continuous dialogue track. When he finished this to the director’s satisfaction, he handed it over to me to do the sound effects assembly.

When it came time to do the sound effects assembly, I was always grateful that I’d already recorded as many sound effects as possible with the actors. Anything that I hadn’t recorded (the foley sound effects and anything sourced from CD) needed to be loaded into my workstation (in those days a Mac G4) and then placed on separate tracks using our digital audio editing software, Sonic Solutions (we would move to Pro Tools a few years later). The sound effects usually took up a lot of tracks, layered on top of one another. A scene with characters arguing in a car might include a track of them arguing, another track with the sound of their car, yet another of passing traffic, several spot tracks of blinkers, wipers, seatbelts and so on, and maybe a music track as well.

Once I finished the sound effects assembly it was time to mix the show. In those days we almost always mixed big shows in Studio 212 with the cast long gone and the studio floor mostly empty. Cherry Docs was no exception. Greg sat on the left and I sat on the right before the Neve Capricorn console in the control room. Damir, the director, sat behind us.

Mixes were usually a collaborative process, although that depended on the director. For Cherry Docs, we followed Damir’s direction, but everybody provided input into what sounded best. As the mix progressed, we moved around dialogue, sound effects and music that weren’t quite in the right places. We added electronic processing where needed, a little reverb here and there, for example. Greg equalized the dialogue track of a character who was supposed to sound like he was on a telephone. The Capricorn console remembered every move we made on the various faders and dials, and played it all back afterward just the way we mixed it.

Once we were happy with the mix, it was time to print it. We turned down the lights, launched the CD burner and DAT backups, pressed play on the console, sat back in our chairs and listened, hoping to God that we hadn’t made any mistakes. If we had, we stopped, fixed them, and started the print over again with a fresh CD.

I loved the Neve Capricorn, but it wasn’t perfect. Every now and then one of us would notice that it had fallen out of automation. When it did, we leapt out of our chairs cussing and swearing, trying to re-engage the automation before it missed any of our carefully programmed moves. If we caught it in time, we were fine. Usually, though, it was too late, and we were forced to start the print all over again.

Once the show was successfully printed, we turned up the lights and handed the finished CD and backup DATs to Damir, who (hopefully) checked it one more time before presenting the finished product for broadcast.

And Greg and I moved on to our next projects.

Tools of the Trade

I felt as though I had been tailor-made for Radio Drama. As though all my experience in radio from the age of sixteen, all the writing I had ever done, my stint in community theatre, my interest in music, all of it had conspired to prepare me for making radio plays. I had even written and produced a radio play before, as a student at Ryerson. Still, I had an awful lot to learn.

John McCarthy set about teaching me.

Up until this point, John had been an enigmatic figure to me, part of what I imagined to be an elite cadre of high-end recording engineers, well beyond anything I could ever aspire to be. Tall, bearded and bespectacled, from a distance he appeared aloof and serious. As I got to know him, I realized that he certainly wasn’t aloof, and although the jobs he occupied demanded a certain degree of seriousness and thoughtfulness—qualities that come naturally to John—you could not have a conversation with him without plenty of laughter.

A certain wizard

There is something about John that has always put me in mind of a certain wizard. With a staff in one hand and a conical hat on his head, he would not be entirely out of place in a Tolkien novel. It is his bearing, his comportment. Like Gandalf, John is a counsellor, an advisor, a mentor. He was responsible for the two most pivotal moments of my career: inviting me into the radio drama department, and ultimately promoting me into management. Although he has never performed any actual magic that I’m aware of, I’m fairly certain he could kick Sauron’s ass.

On my first day in the drama department, John sat me down in a suite called Dialogue Edit and launched a piece of high-end audio editing software called Sonic Solutions. I had used similar software before, two programs in particular: D-Cart, also used by the Australian Broadcasting Corporation at the time, and Dalet, a version of which we still use today, but Sonic Solutions was considerably more powerful than either of these.

John showed me the basics, and then made a special point of showing me hot-keys—keystroke combinations that I could use instead of a mouse. He told me cautionary tales of people who had relied on “mousing” only to wind up with carpal tunnel syndrome. I heeded his words and learned every possible hot-key combination. Not only did this make me a fast editor, I never suffered from carpal tunnel syndrome.

John gave me an edit of a radio play to practice on, an adaptation of Alias Grace by Margaret Atwood. I spent several hours replacing the existing sound effects with completely ludicrous ones, turning a serious dramatic work into something completely ridiculous. I was quite proud of the result.

“What have you done to my beautiful radio play?” John exclaimed in mock outrage when I played it back for him.

Once I was up to speed on Sonic Solutions, it was time to tackle the Neve Capricorn console in Studio 212. This was a rather more daunting task.

Recording engineer Greg DeClute spent a few days teaching the console to me and a handful of my colleagues. On the morning of the first day, Greg challenged us to get tone up on the board. The purpose of tone, you might recall, is to line up audio equipment and establish continuity. Getting tone up on the board is the first thing I always do when confronted with a new console. I had never failed to get tone up on a board before. It’s pretty easy to get tone up on analog consoles.

Naturally, nobody who didn’t already know how to do it could get tone up on the Capricorn. On a digital console like the Capricorn it’s not exactly an intuitive process. After showing us how, Greg told us about a producer who was asked by a writer what would happen if everyone showed up to a recording session except the recording engineer. Would the producer be able to operate the Capricorn and record the show?

“Of course,” the producer told the writer confidently.

The truth is he wouldn’t have stood a chance. With all due respect to the producer in question, without training, he wouldn’t even have been able to get tone up.

I wasn’t sure I was up to the task myself. Did I have the kind of brain capable of adequately understanding something as complicated as a Neve Capricorn in an environment as complex as Studio 212?

This was nineteen ninety-nine, the year before my children were born. After taking Greg’s course, I had the freedom to come in on weekends to experiment. My goal was to make sure that I was able to record from every possible source, play it back through Sonic Solutions, route tracks through the various outboard processing gear, and mix it all using the Capricorn’s automation. This was the bare minimum I needed to know to make a radio play.

During his course, Greg had encouraged us to learn more than the bare minimum. “Be super-users,” he told us. “Seek to understand as much as possible about the gear you’re using. Don’t run to someone else for help every time you run into trouble. Figure it out for yourselves. Be the one that other people run to.”

Those are his exact words.

(No they’re not. It was a long time ago. And Greg doesn’t use words like “seek.” But it was something like that.)

I also needed to master Studio 212 itself. I needed to understand how to accurately translate the written word into sonic reality; how to get the most out of the acoustic spaces available to me. Doing so wasn’t necessarily straightforward.

On a conventional radio show, you position a microphone in front of the host and guests and make sure their levels are good. Sometimes it’s a little more involved, such as when you want to have a band in the studio or someone wants to cook something or practice Tai Chi live on air (I’ve dealt with both). Everything has to sound “on mic” all the time. This is presentational radio, where radio shows present content to listeners in a straightforward, unambiguous manner.

Radio drama, on the other hand, is representational. Much of what goes into a radio play represents something other than what it actually is. The trick is convincing listeners to accept the reality that is being represented. Actors represent characters that they’re not. Sounds represent sounds that they’re not—for instance, squeezing a box of corn starch wrapped in duct tape to represent a character walking on snow.

Few people I know actually think in terms of presentational versus representational radio. It’s not necessary to be conscious of the distinction unless you happen to be mixing the two, in which case you risk confusing your listeners, the way Orson Welles inadvertently did with his live broadcast of The War of the Worlds. When you move into the realm of representational radio it’s usually a good idea to let your listeners know that you’re doing so, though if done responsibly it can be fun to blur the line. The show This is That, currently airing on Radio One and Two, is a good example of this.

The challenge for those working in representational radio is how to make listeners believe that what they’re hearing is what you want them to think they’re hearing. For instance, take the sound of a nobleman getting his head chopped off by a guillotine. How do you create that sound without actually chopping off someone’s head? Even if you did chop off someone’s head (which I would advise against), listeners might not understand what they’re hearing without visual cues to make it clear what’s going on. It might be necessary to produce a sound that conveys the idea of someone getting their head chopped off that sounds even more like someone getting their head chopped off than the sound of someone actually getting their head chopped off, if you catch my drift.

I once recorded a scene from Romeo and Juliet with a novice director. Juliet was supposed to be on the balcony with Romeo on the ground. The director suggested that we place Juliet on a chair to convey that she was higher than Romeo. I explained to the director that height wouldn’t “read” on the radio. Placing Juliet on a chair wouldn’t convey to the listening audience that she was on a balcony. Listeners at home wouldn’t be able to see that she was higher.

What we needed to do was record the scene from Romeo’s point of view, with that actor close to the microphone, and place the actor playing Juliet an appropriate distance away from the microphone. Not so far away that the actor couldn’t be heard, but far enough away to convey the idea that the two characters were a fair distance apart. That Juliet was on a balcony would be clear from the context of the play. We just needed to nudge listeners’ perceptions in that direction. “Theatre of the mind” would do the rest.

I don’t mean to suggest that any of this is rocket science. But I did need to understand it all before I could get to work.
