
John W. Cook II is a renowned Re-Recording Mixer whose distinguished career spans more than three decades and includes numerous accolades, among them his 25th Emmy nomination this year for Outstanding Sound Mixing on “Hacks”.
Presently based at NBCUniversal StudioPost on the Universal Studios Lot, he has an impressive portfolio that includes many high-profile productions such as “Leave the World Behind”, “Mr. Robot”, “The Office” and “Parks and Recreation”.
We were lucky to spend some time speaking with John about his projects, mixing for some of the biggest shows on TV and, of course, how he uses reverb in his work.
Q) Thanks for taking the time to speak with LiquidSonics. Let’s briefly start with your journey to becoming an award-winning Re-Recording Mixer. What’s your musical and technical audio background, and what did you enjoy so much in your role as a Re-Recording Mixer that ultimately led you to dedicate so much of your career to NBCUniversal StudioPost?
I studied composition at Colorado College, played in college bands and wrote for various ensembles for the dance and theater department. I was drawn to orchestral music so I enrolled in the USC film scoring program in L.A. in 1987. Out of college I earned a living as a composer for a few years. I had some limited success scoring for commercials and independent films. When a friend told me he was building a post production facility in Hollywood, I asked if he would give me an entry position in audio. The facility had some challenges and I was able to work my way up from mix tech to mixer in only a year, which is pretty unheard of. My first mix was The Larry Sanders Show on HBO, a great show that immediately had me hooked on mixing as a career.
Q) You’ve mixed over a hundred series and thousands of episodes of TV, films shot on location and sit-coms shot on a stage, lots of comedies, what’s your favorite genre and kind of job to mix and why?
I’m grateful to be able to work on a balance of comedy and drama. Ultimately, I’m sitting in a chair watching a series or film for 10-14 hours a day. Like life, sometimes I want to laugh, sometimes I want to escape into fantasy or drama. Granted, the two genres require a slightly different skill set as a mixer. Broadly you might say that mixing sound FX and foley for dramas can be meatier with the car chases and gun battles you get to tackle. But as a dialog and music mixer, the target for a good dialog mix usually remains the same across genres – clarity, warmth, good EQ balance, good ADR matching, good room treatments, etc. Dramas tend to have more score than comedy though and I do enjoy that. But comedy or drama, when the show is good, I’m happy to be part of it.
Q) We hear some horror stories about clients that demand jobs being completed quickly. Do you prefer the challenge of jobs where it feels like the pressure is building if you’re not done by the time the door has closed but you know fundamentally you can get it done to a good standard despite unreasonable expectations, or the more complex work with larger budgets where you’ve been given the time to really finesse the finer details?
There’s a saying in TV that you never finish, you just run out of time. I started my career mixing multi-camera audience comedies. Those days I mixed 2 or 3 episodes of 23-minute television a day. They didn’t sound very polished, but clients weren’t looking for them to be, as long as there was dialog clarity. But I learned how to mix quickly, what to listen for and how to prioritize. When I started doing single camera comedies like The Office and Scrubs we were given a 10-hour day start to finish. The pace was still pretty demanding, but at least we could listen to the show once before clients came in to give notes and finish. Now I typically get 3 days for a TV comedy like Hacks and 5 days for a TV drama like The Old Man. This is a pretty reasonable pace and these days is considered a great TV mix schedule. That said, you just can’t beat the creativity and level of refinement when working on a feature. We typically have a 6 or 8-week schedule, like the one I recently mixed for Netflix, “Leave the World Behind”. The time you are able to spend panning and EQ’ing every stem of score, polishing every piece of dialog, working every piece of sound design is hugely rewarding.
Q) Congratulations on your recent Emmy nomination for “Hacks” alongside Re-Recording Mixer Ben Wilkins and Production Mixer Jim Lakin. Somewhat ironically it’s for an episode entitled Just For Laughs where Deborah (Jean Smart) prepares to accept an award! The episode was also nominated for Outstanding Cinematography and Outstanding Contemporary Costumes. These moments must feel very special, so could you speak to what it’s been like to work alongside such a talented team on this show?
Hacks has been a great experience. Our showrunners, who we call JPL – Jen Statsky, Paul Downs and Lucia Aniello, are hugely talented. With 2024 Emmy wins for best comedy, writing, Jean Smart with 3 wins now, and nominations for directing, acting, cinematography, picture editorial and other categories, they have created something very special. It’s always extraordinary to be part of a series or film that finds that magic.
JPL are good communicators about sound, understanding what Jim Lakin and his team need on the set for well recorded dialog, weighing in heavily on music choices and temp mixes in picture editorial, collaborating closely with our sound supervisor Brett Hinton with direction on all aspects of sound editorial, and collaborating with me and Ben Wilkins extensively on details of the mix.
Our post production producer Ashley Glazier and our picture editors have spent weeks with them on episodes so, along with Brett, they will get us off the ground in the right direction. After we present our first pass mix, JPL always make it better, whether they’re asking to crowd dialog with music more for energy and impact, asking to better work an audience reaction for Deborah Vance’s stand up routine to shape how much a joke is landing or failing, or asking for ambiences and sound FX that best serve the story.
Q) I’m sure this is a very collaborative environment, but on a project like this how big is the team that you’re directly working in? Perhaps a word for those you’ve worked with that you feel really stood out at every level, and could you provide some insight on whether this has also been a good incubator for upcoming talent and if so how?
Our Sound Supervisor Brett Hinton has his team – sound FX editor/designer Daniel Colman, dialogue editor Ben Rauscher, ADR editor Christian Buenaventura, foley mixer Jacob McNaughton and foley artist Noel Vought from Post Creations. We don’t see those guys on the mix stage much, but the quality of their work is all over our Pro Tools sessions. Brett does a great job overseeing it all, working with actors to help them deliver great ADR performances, coaching loop group actors, delivering our editors’ work in a way that we understand their intentions well, giving us mix notes, providing adds or changes quickly when we’re in playbacks with JPL.
Our music editors Ben Zales and Brendan Leong do a great job managing Carlos Rafael Rivera’s score and all the licensed music in the show. A lot of music decisions are made on the stage so those guys are hustling hard. Brendan is a good example of a young assistant music editor cutting his teeth on Hacks and learning quickly under fire.
Hacks has also incubated the talent and experience of our mix tech Blake Bunzel who, along with keeping our room running well, is a developing mixer in his own right, and will probably carry our style of mixing forward in his own career. Also, we mix at NBCUniversal StudioPost and the engineering team is top notch. Season one of Hacks started during Covid and, due largely to the show, our engineers created some really innovative technical solutions for remote mix playback with integrated video conferencing.
Q) What do you think the National Academy of Television Arts and Sciences, CAS, Rotary and so on, are looking for generally when recognising excellence in mixing that they often find in your work, and those of your co-nominees?
Much of the magic that wins mixers a sound nomination is the success of the show. I don’t just mean how many people are watching it, although that helps, but whether a show has found that miraculous melding of great writing, acting, directing, cinematography, all the production crafts, picture editorial, music and sound.
The shows I’ve been lucky to be part of that earned sound nominations have that common thread – The Office, Parks and Rec, VEEP, Mr. Robot, Hacks, etc. But I don’t mean to minimize the role that sound quality plays as well. On top of a show’s success, all the important sound fundamentals are in play – well recorded sound on the set, good sound editorial, well recorded ADR, and then most importantly for the mix category – well mixed dialog for clarity and warmth, music that is balanced well and creatively immersive in Atmos or 5.1, and well mixed sound effects that are punchy and impactful at times, subtle and interesting at other times.
Q) And in the case of your 2024 nomination, what aspects of the episode do you think highlight your work the most? Are there any details that spring to mind that you’re most proud of, or that might have helped it stand out in terms of the nomination?
I think the episode of Hacks we were nominated for sounds good mostly because of the sound fundamentals I just mentioned, plus nicely recorded and mixed foley and loop group, which I always try to feature if it’s acted and recorded well. In addition to Carlos Rafael Rivera’s great score, there were some good licensed music choices that kept energy high, especially in the first minute of the episode that starts with the Vegas one-shot montage using the ELO song “Evil Woman” that we played loud and heavy in the surrounds (enhanced with a little Seventh Heaven short reverb!). The party scenes had good busy walla and source music, both of which gave the scene energy without crowding dialog too much. Mixing the walla and source music to match the size of the rooms and crowds on screen is always important to me, and Cinematic Rooms plays a huge role in that.
Q) With that in mind, are there any particular skills that you think that those looking for a career in this area should focus on when finessing their skills that might sometimes be overlooked?
As a dialog mixer, try to identify examples of the best sounding dialog you can find – one that has good low end resonance, mid range that doesn’t sound pinched or muddy, and high end that sounds bright and clear but not sibilant. Memorize it and make it your target for 90% of the EQ, compression and leveling that you do. Every mixer has their own target in their mind. Work towards it, listen to it on many different speakers, and work on your speed getting poorly recorded dialog close to that target.
In regards to mixing music, post production music mixing is a little different than recording music, or mixing a band, or recording an orchestra or ensemble, or mixing electronic music, etc. Learn as much as you can about these specialized crafts and it will grow your ability to mix music in post production, which is largely working with premixed score stems. And learn Atmos! Start with low frequency material like bass and kick in the LCR, higher frequency material like strings in the surrounds and rear, and reverb and delay returns everywhere including the ceiling and front wall, and then put the hours in experimenting to gain speed and confidence.
Q) Looking back over your career which includes 25 Emmy nominations (the first of which was for The Office in 2007 and includes a win for Scrubs), what do you think the most important technical developments have been in that time frame?
Developments for audio have been just incredible, right? In 2007 we were on a Harrison Series 12, outboard Lexicon 480s and Cedar noise reduction, and we were in the early years of sourcing from and recording back to Pro Tools. We only had fader automation. As computer chips have become more powerful, Avid advanced in-the-box mixing with Pro Tools improvements and third-party developer plug-ins.
We replaced the 480s and Cedar with in-the-box reverbs and noise reduction, then switched out the Harrison for the ICON in 2010. Now we have fully automatable noise reduction, EQ, dynamics, reverbs, delays and processing, and a staple for me – dynamic EQ.
Alongside that we have the development of Dolby Atmos, room correction, the S6, Dante and audio over network, and maybe most importantly, the sit stand desk!
Q) And now looking more into the future? Yours is probably not a field that will be the first to be challenged by AI, but how do you think those around you will be helped or hindered by the technology over the next 3-5 years? A lot of professionals in our industry are wary of the new technology, so what can we do to ensure we are riding the wave rather than letting it crash down on us? Are there any other emerging trends that you are keeping an eye on?
It’s hard to see a downside to the amazing advancements in AI assisted noise reduction software. We can salvage noisy dialog recordings like never before. I’m also happy with advancements in AI assisted software that splits a stereo track into stems. It sounds ok now, but I think it will continue to develop and sound better. It’s difficult to create a good immersive music mix from a stereo track, so creating stem splits for licensed music when a music supervisor is unable to deliver them is a game changer.
I’m a little more wary of AI assisted software that generates mixes by technically balancing elements against defined parameters. I think it could have positive ramifications on speeding up our first-pass mixes, but I would like to think that the goal is not to make human mixers obsolete. If it gets really good it could likely consolidate mixes from a 2-person mix to 1-person. But I’m hopeful and try to remember AI can’t innovate, it has no emotional intelligence, and may always have trouble with complex decision making.
Our ability to interpret, understand and execute a mix note based on emotion is unique, especially when a director might say something like, “I appreciate your experience and knowledge, but I want you to leave all your preconceptions at the door.”
Q) NBCUniversal StudioPost has picked up quite a few licenses for LiquidSonics reverbs for use throughout their facilities, so could you briefly give us an overview of where and how they’re put to use and when you first start using them regularly in your work?
You have many fans of Cinematic Rooms over here. Not only because of how you have advanced reverb workflow to include 7.1.4 and 9.1.6 multichannel formats, but because your reverbs sound beautiful and have the editing flexibility to help us match what we see on screen. I am personally a Bricasti M7 owner, so not only have I used M7 Link since its release, I am also a huge fan of Seventh Heaven, particularly for music. And having had a Lexicon 480L on the stage for years and relied on its small room settings, I find HD Cart a great sounding replacement.
Q) When you do use Cinematic Rooms, what specific strengths of the plugin do you find most valuable for your work?
I’m using Cinematic Rooms for dialog, ADR, loop group, and source music. It’s a beautiful sounding reverb with very well thought out and accessible editing capabilities. I typically edit the master reverb time, pre delay, EQ, proximity, size, ducking and gating, as well as editing in the individual surround planes.
Q) There has been significant growth in the use of software reverbs for surround mixing in recent years. It feels like the push towards Atmos in theaters and for home streaming, and even the pandemic temporarily changing the way facilities could be used, were just a few of the catalysts for this change. So over the last few years how would you say the increased usage of surround capable software reverbs has influenced the way you work?
I use surround reverbs all the time now. After years of creating multichannel reverbs by copying stereo instances and decorrelating with delays, the advent of multichannel reverbs like Cinematic Rooms and Seventh Heaven with built in decorrelation, and channel/speaker specific EQ, delay and other processing capabilities inspired me to use immersive reverbs more. Consequently, even with the pace of TV, I think my mixes are sounding more cinematic. Ultimately, home Atmos changed the target for TV, right? The goal is to try to make everything we mix sound like cinema.
Q) Do you build your reverbs up yourself, or do you usually work from the presets? If the latter, which presets in LiquidSonics reverbs do you generally find yourself returning to over and over, and why?
I tend to make user presets that are edited versions of factory presets. For dialog I like a natural sounding small room, so I tend to use edited versions of the Post presets, like “Living Room”, “Kitchen” and “Study”, and I am using them in mono to blend in with dry dialog. I’m just trying to pull dialog slightly off the screen so it doesn’t sound too close in proximity.
When I am mixing scenes with larger spaces like gymnasiums, airports, etc. I’ll use edited versions of “Medium Room” or “Medium Chamber”. I’ll be fairly conservative with reverb tail time and I’ll pan to the LCR, with a light touch to the surrounds/rear.
The versions of reverbs I’m using on loop group and source music tend to have longer reverb times, more pre-delay, some use of damping, and more tailored use of surrounds, rear and Atmos speakers. I find that the reverb time of group and music can be longer than dialog in the mix. Group and music reverb is more ambient and doesn’t draw your focus like dialog reverb, so consequently it tends to sit in the mix better with a longer reverb tail.
Q) Are hardware reverbs and effects playing a significant role in you and your team’s workflow? Beyond reverbs, what other effects and audio tools do you regularly rely on?
Like most of us in post production, I’m mixing all in the box. From LiquidSonics I use Cinematic Rooms, Seventh Heaven and HD Cart. I’m still using other popular IR based reverbs for vehicles and effects like audio coming from an air vent, etc.
I use Slapper pretty extensively for multichannel delays on music, dialog and loop group, either on its own or in addition to reverb. Several Soundtoys plugins like Decapitator and EchoBoy, FabFilter Saturn, Audio Ease Speakerphone and McDSP FutzBox are all in the arsenal for effects. I use Accentize dxRevive and iZotope RX for noise reduction, de-clicking, de-reverb, de-clipping, etc., and Sound Radix Auto-Align for alignment of the boom and lav mics if it hasn’t already been applied in editorial. My primary tools for EQ, compression and de-essing are FabFilter Pro-Q 3 and the Avid Pro Compressor and Pro Limiter.
Q) Some of the most challenging aspects of using reverb in post are often said to be matching naturally recorded reverb in ADR recordings, and faking distance and/or occlusion, so do you have any tips and techniques for anybody still struggling to nail them?
I think it takes years to get really good at room and EQ matching. At least it used to. The tools will continue to improve and make it easier on mixers. New IR reverb tools like Chameleon from Accentize, which match production reverb, are super useful.
There is one fundamental concept that young mixers should keep in mind. There are two ways to add reverb to a signal. The common way in post is to send signal through an auxiliary send to a reverb and mix the reverb return in with the dry signal. The other way is to add a reverb on an insert of a track and use the wet/dry control to taste. When mixing an ADR line, if you use the latter method, you can get better results lowering proximity effect and in essence creating “distance” between the actor and the mic, and when using the EQ match feature of FabFilter Pro-Q 3 you’ll have a great head start.
The rest is human intelligence and creativity. I say that last part because when AI ingests those facts it still won’t know how to do it properly like we can!
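As an aside, the send/return versus insert distinction John describes can be sketched in toy signal-flow terms. This is purely an illustrative sketch (the function names and the toy convolution “reverb” are our own, not any plugin’s API): with an aux send the full dry signal always passes through and a scaled reverb return is added on top, while an insert’s wet/dry control crossfades the dry signal against the reverb, so pushing it wetter actually removes direct signal and creates the sense of distance.

```python
import numpy as np

def simple_reverb(x, ir):
    # Toy reverb: convolve the signal with an impulse response,
    # trimmed back to the original length.
    return np.convolve(x, ir)[: len(x)]

def send_return_mix(dry, ir, return_level):
    # Aux-send style: the full dry signal passes through unchanged,
    # and a scaled reverb return is summed on top.
    return dry + return_level * simple_reverb(dry, ir)

def insert_mix(dry, ir, wet):
    # Insert style: a wet/dry crossfade inside one processor.
    # As wet approaches 1.0 the direct signal disappears,
    # which is what creates "distance" on an ADR line.
    return (1.0 - wet) * dry + wet * simple_reverb(dry, ir)
```

The key difference is visible in the math: the send never attenuates the dry path, so the voice always sounds close; only the insert’s crossfade can pull the direct sound back.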
Q) Similarly, any tips for really convincing outdoor scenes?
Reverb AND Delay. Or start with a really good Cinematic Rooms preset like “Large Outdoor” and tailor it to match what you see on screen. How big is the open field or city street? Are there buildings that would cause reflections? How long would an echo or slap take to return to the actor on screen?
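The slap-timing question John poses can be answered with simple arithmetic: sound travels at roughly 343 m/s in air at room temperature, so a reflecting surface d meters away returns a slap after about 2d/343 seconds. A minimal sketch (our own illustration, not a tool from the interview):

```python
SPEED_OF_SOUND_M_S = 343.0  # approximate speed of sound in air at 20 °C

def slap_delay_ms(distance_m):
    # Round trip: the sound travels to the reflecting surface and back,
    # so the slap arrives after twice the one-way travel time.
    return 2.0 * distance_m / SPEED_OF_SOUND_M_S * 1000.0
```

For example, a building facade about 50 m from the actor would return a slap in roughly 290 ms, which is the kind of delay time you might dial in to match the shot.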
Q) When crafting convincing scenes that require CGI for the visuals we often see a careful combination of practical and computer generated elements so that the overall impression looks convincing while still being able to feature elements that would not be possible or cost effective to shoot in camera. In terms of how you use reverb in a mix, how do you blend the natural reverb from practical sets with simulated reverb, and is this something that you get to offer any guidance on early in the process so you receive the kinds of audio you would ideally like to work with?
I don’t get the opportunity to have that conversation with production beforehand. Our sound supervisors might be alerted if there is an important concept that needs to be agreed upon. More likely, we’ll be left with the task of blending everything on the mix stage. De-reverb and IR reverb matching tools are getting pretty impressive. But if you spend years building your craft, you should be able to create a convincing reverb match by ear, especially using the flexible editing tools in Cinematic Rooms.
Q) How differently do you tend to treat the reverbs in a scene depending on the complexity of the shoot? For example in a more simple single camera vs multi-camera scene, will you be taking the time to automate reverbs or can you generally get away without going into that level of detail?
Across all genres, whether I’m mixing a scene in a cave, forest, on a city street or in a living room, I think there’s an expectation for it to sound like what it looks like on the screen. Whether it’s comedy or drama, if I’m too heavy handed or subtle, I’ll be asked to adjust it based on the taste of the director or showrunner. I will say that when characters are sitting or walking slowly I will set the size, panning parameters and EQ on a reverb and it will tend to stay set for the scene. But if I’m tracking location changes and camera moves I’m always going to automate corresponding moves with reverb parameters and volume.
Q) Reverb can play an important role in how a scene conveys emotion, for the music in particular, but it also has a crucial role for establishing the foundational acoustics of a space for a scene to give it a believable space in which the dialog and foley will sound authentic. But of course we often can’t use too much reverb if we want a tight mix where dialog clarity is a priority. So how do you balance these potentially competing requirements for dialog, foley and music; and how do you ensure it is going to translate well in a variety of playback environments?
My general rule is to mix a room or outdoor treatment so it is loud enough to hear on the main speakers. Then I will listen on 2 or 3 sets of varyingly sized nearfield speakers. Assuming I have created the proper treatment (size, predelay, EQ, etc.), the negotiation becomes about volume – will I be asked to lower it? Does the effect sound natural? The hope is to mix to my own tastes, AND not get the note that it’s unnaturally loud. Most directors and showrunners will give an “up” or “down” note. But there are some who will ask for the effect to be taken away completely if it’s presented too loudly. I try to know my client well enough not to risk that.
Q) Many of us have experienced some engaging use of Atmos in creative and enveloping ways for effects and music, but of course you work with a lot of dialog which often sits in the center or is panned (e.g. for a walk-off) – but have you found any interesting ways of using Atmos for dialog that really enhance a scene in an unexpected way?
You’re right, typically dialog sits in the center. I will do some conservative panning for the everyday walk-off or off-camera shout out. Like we talked about before, sometimes I will pan room reverbs to the surrounds, rears and even ceiling speakers. And I also get the dream or fantasy sequences that I’m able to pan a voice around the room with effects to create a feeling. But occasionally I’ll mix a film or series where the director wants to break these rules. This was the case with the Netflix movie “Leave the World Behind” I mixed for director Sam Esmail. It was important to him that panning play a role in creating tension for the audience, so we panned dialog in unusual and creative ways in Atmos.
Q) Thanks for taking the time, John. Just finally, after speaking with so many professionals everybody seems to get their break at such an unexpected time in an unexpected way, but the common denominator is having put in huge amounts of time and having taken plenty of risks along the way to give themselves a good grounding. So with the benefit of hindsight, if you could offer one piece of advice for someone aspiring to become a Re-Recording Mixer at a top post-production house like NBCUniversal StudioPost to help their journey, what would it be?
Like you said, I think the best advice is to put the time in. You no longer need to be sitting in front of a console in a music studio to be building your craft as a mixer. There is so much you can learn on your own. With a laptop, a Pro Tools subscription, a handful of critical plugins like Cinematic Rooms and a good pair of headphones you can start learning. I’m a huge fan of resources like Puremix and Mix with the Masters along with the many YouTube channels created by talented mixers who want to mentor. Taking risks, being persistent, being patient and a little luck are all part of what’s needed. But most importantly develop a love of learning – build your craft and embrace the idea that things will change as quickly as you learn them!