Artist interview

Mikko Raita on Mixing the Cinematic Score of Oscar-Winning “Flow”

9th March 2025

Mikko Raita is a Finnish audio engineer known for his immersive sound mixing on film scores and music projects. 

His recent portfolio includes mixing the score for the multi-award-winning independent animated film Flow (2024, Dream Well Studio), which recently won the Oscar for Best Animated Feature Film. The film contains no dialogue, so the score plays a huge role in communicating director and co-composer Gints Zilbalodis’ narrative and creative vision through music. 

Dolby Atmos is of course the industry standard for big-budget releases, but it is playing a growing role in indie film production as well. We caught up with Mikko to learn more about the technical process behind mixing Flow in Atmos, the creative decisions involved, and how he navigated the challenges of delivering a top-tier immersive experience on a lower-budget project.

Post-production and film score engineers at every level will appreciate his insights on workflow, the use of advanced reverb tools, and the growing role Atmos can play in score production even when a film is not presented in Atmos.


Flow (2024) Courtesy of Gints Zilbalodis’ Dream Well Studio

Q) Hi Mikko, thanks for taking some time to speak with LiquidSonics. First, could you give us a little background about your musical journey, early inspirations and ambitions, and share a little about your recent work in film and music mixing?

A) I started as a guitar player – first classical guitar in a music school, then switching to electric and going through all of the blues, grunge, metal, prog and even rock-jazz fusion phases. But I also started my studio journey when I was 12 years old, recording my own noodlings and song snippets with a Fostex cassette 4-track that I had. I also had an Akai 12-bit reverb unit and a drum machine in the form of my MS-DOS home computer running the Scream Tracker 3 software, playing 8-bit samples in the Amiga tracker style. 

Later, at the music-focused Sibelius High School, I got access to a proper reel-to-reel (and later ADAT) 16-track studio. Around that same time I faced the decision of whether I wanted to try to make music a profession, and whether that should be through focusing on guitar or on studio work. I ended up choosing the latter, continuing to study at the newly formed Music Technology department at the Sibelius Academy music university, now part of the University of the Arts Helsinki. 

At the music university, I got quite free access to a studio running Pro Tools 4.3 and TDM hardware with few other users, which resulted in me recording and mixing more than attending the required classes – but it did mean that in just a few years, I was doing album work professionally in mostly pop and rock and sometimes metal genres, maybe even having a bit of a “wunderkind” status for a while. We were a group of young peers, the first Finnish mixer/recordist generation with the help of online phpBB-based audio forums like DUC, Recpit and GS, which gave us access to much more information in the early phases of our careers than older generations of local engineers had. I also ended up building my still-current Helsinki studio, Studio Kekkonen, in 2006 with Julius Mauranen and Janne Riionheimo from these circles, after freelancing in other studios for a few years.

But as things go, after some years the music business changed its focus away from the band-style music I was specialising in at the time. Luckily I had already focused on recording acoustic ensembles as well, especially in jazz and various folk and world music genres, so that became my next focus. I’m still active in this field, working with artists like Verneri Pohjola, Aki Rissanen and Alexi Tuomarila, with Dolby Atmos Music releases now out with all of them. 

I had also gotten a glimpse into the classical world and classical ensemble recording through my university studies, so when the call came to mix my first orchestral film score in 5.1, with a real live Eastern European orchestra in 2016, I was somewhat prepared for the task at hand, having also worked on some 5.1 music releases and stereo film/TV work by then. It seems I had the right idea on how to do it, as film scores soon became a big part of my work. I was also the first engineer in Finland to mix music released in Dolby Atmos, for 2018’s feature Happier Times, Grump with composers Juri and Miska Seppä, as well as the first engineer to have a Dolby Atmos Music album released, with Dantchev Domain’s Lions We Are. I have worked on several scores with composer Panu Aaltio, including the US drama feature 5000 Blankets, the children’s superhero film Super Furball Saves the Future with the Tampere Philharmonic Orchestra and the soon-to-be-released Little Siberia, which is the first Finnish feature film produced for Netflix.

In recent years, one of my most active collaborations has been with composer Lauri Porra, with whom I have mixed, recorded and edited several projects. These include the scores for the most watched film in the Nordics in 2024, Stormskerry Maja with the Lahti Symphony Orchestra and the most streamed TV series ever for our national broadcaster YLE, Queen of Fucking Everything. I also edited and mixed his epic crossover orchestral concept album Matter and Time with the Vantaa Orchestra featuring narration by Sir Stephen Fry. All of these have Dolby Atmos Music releases available as well. At the end of last year I worked on Porra’s Ääniä, Finland’s “Official Soundtrack”, commissioned by the state. I also mixed, edited and mastered part of conductor Dalia Stasevska’s Dalia’s Mixtape album in stereo and Dolby Atmos, performed by the BBC Symphony Orchestra.

Besides mixing I have also been active in teaching, giving lectures on Dolby Atmos and score work for both professional groups like our music producers’ association Aux Ry members, and sound design teams at the Finnish National Opera and the Helsinki City Theatre, as well as in various schools, most recently for Aalto University and Metropolia University of Applied Sciences film sound and music production students.

Q) How did you connect with the creative team behind Flow, which of course led to you being chosen to mix Gints Zilbalodis’ and Rihards Zalupe’s score for this wonderful independent film?

A) Rihards Zalupe (pictured) asked me for the score mix. I have mixed several of his scores in the past, like The Pagan King, Dawn of War and 1906, so we already had a solid working relationship by that point, and I already knew that he was a stellar composer. We originally connected for The Pagan King in 2017 through some mutual acquaintances (Finnish producer Jonas Olsson and the Latvian band DAGAMBA, whom I also mixed later on) when that higher-profile Latvian film was searching for a music mixer who could accomplish a Hollywood-ready sound.

For more info about the Flow composition itself I’d highly recommend checking out Rihards’ recent interview in Collider where he talks in detail about how he and Gints approached it. 

It’s really interesting how they selected instruments to match the personalities and behaviours of the animals on screen to help with crafting the themes and sounds. A lot of thought went into little details like this because, in the film, the animals make familiar animal noises but don’t actually speak, so many facets are communicated in non-verbal ways.

Q) It’s refreshing for a film about animals to have the courage to choose to stay true to the non-verbal nature of the animal world, and extremely unusual to have a film that features no dialogue at all. That presents lots of new opportunities as it gives the music unusually broad latitude for communicating the creative vision through the score at all stages of its creation. Could you talk more about the challenges and opportunities that presented for you on the mix, and how you worked with Gints and Rihards to tell so much of the story through music? 

A) One thing I instinctively gravitated towards was a quite surround-heavy mix style to complement the amazing emotional core of the film and the music. I tried to make the music really occupy the space in the film theatre – in the absence of dialogue, so much space was available, also in emotional terms. The score features a lot of very grand and sweeping synth and string orchestra textures and slow melodies that come and go in a wave-like fashion, and also lots of minimalism-tinged pitched percussion for some of the themes, and those all felt really at home being quite wide and enveloping! 

The synths were stereo tracks, but I used a combination of up-mixers and decorrelated 9.1.6 reverbs instead of just panning the tracks into the room to create an immersive and enveloping soundfield that is also very stereo downmix friendly. I also extended the live strings hall reverb with some 9.1.6 tail, as the recording space was of moderate size, with a lush sound but not a super long RT60 naturally.
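Decorrelation is what makes this approach downmix-friendly: fully correlated channels pile up when summed, while decorrelated ones do not. A minimal NumPy sketch (illustrative only, not from Mikko's sessions) demonstrates the level difference when two channels are folded together:

```python
# Illustrative sketch: why decorrelated reverb channels fold down gracefully.
# Two identical (fully correlated) channels sum with ~+6 dB of build-up,
# while two decorrelated channels sum with only ~+3 dB, which is the
# well-behaved case a downmix stage expects.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

def rms_db(x):
    """RMS level of a signal in dB (relative, unreferenced)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

tail = rng.standard_normal(n)          # one reverb-tail-like noise signal
decorrelated = rng.standard_normal(n)  # an independent (decorrelated) tail

corr_gain_db = rms_db(tail + tail) - rms_db(tail)            # same signal twice
decorr_gain_db = rms_db(tail + decorrelated) - rms_db(tail)  # independent signals

print(f"correlated sum build-up:   {corr_gain_db:.2f} dB")   # ~ +6.02 dB
print(f"decorrelated sum build-up: {decorr_gain_db:.2f} dB") # ~ +3.01 dB
```

The ~3 dB difference is why a mix built on decorrelated 9.1.6 reverbs can collapse to stereo without the reverb level jumping unexpectedly.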

My main contact in the creative process was Rihards, as Gints was very busy overseeing the finalization of all the other aspects of the film at the same time, including the delightful sound design by sound supervisor Gurwal Coïc-Gallas and the team. Rihards was also intimately familiar with Gints’ vision for the whole film and had worked on the whole score, so he was the “main guardian” for the music as well as providing me the tracks to be mixed, but Gints did also listen to my mix versions before they formulated their notes and I did revisions. It was not a super heavy-handed process and they mostly focused on some particular sounds and key moments in their feedback – luckily they were quite happy with the broad strokes of my mix from the start!

Q) Can you share anything about where and with whom the score was recorded – what was the general acoustic aesthetic of the space that you were working with? I’m sure you will have been given some direction on whether the composers and director were looking for that big Hollywood vibe or something more intimate or nuanced – so was the reverb only needed to gently finesse the sound of the room or does it play a more dramatic role in defining the space we hear in the final mix? 

A) It is a hybrid score featuring synths, composer Rihards Zalupe’s own percussion and woodwind soloist recordings as well as Sinfonietta Riga musicians, a triple-tracked 35-piece string section recorded expertly by Normunds Šnē, in consultation with me on how to capture the hall with Dolby Atmos in mind. This meant that for the live strings I had “real” Atmos ambience mic tracks to work with from the moderately reverberant scoring stage (a former small/midsize church), but the percussion and woodwind soloist recordings were mostly stereo or even mono in a dry smaller room, so using ample reverb was a key factor in unifying these diverse sources and enhancing the emotional intent of the music.

The composers’ rough production mixes also employed super-long demo reverbs on some of the elements, which they asked me to replicate, or at least approximate. I still mixed from dry tracks and just had a demo mix and plugin settings as references for their reverbs, as we agreed that using dedicated surround reverbs would be vastly superior to the stereo reverbs they were using. At the same time I toned the reverb times down a little on some of the parts for more clarity and control, as they had used a super-long blanket preset, but I still had some reverbs in the 10+ second range – quite demanding in 9.1.6!

Q) There are a lot of differing approaches to mic positioning and set-up for Atmos recording, it being a relatively new field – could you elaborate on your technique and the equipment used?

A) Normunds made the calls onsite as the recordist, but for the room he went quite closely with the setup I suggested. It’s based on an LCR Decca Tree with outriggers panned to the wides, and omni mics for the ear-level surround delivery channels, placed several metres (3-4 m or even more, depending on hall size and aesthetic aims) behind the main mics for the side surrounds, and a similar amount further back again for the rear channels.

Cardioid mics pointing upwards were used for the 6x height mics, raised 1 m above the ear level, with each height pair positioned on top of the ear-level mics, for a resultant “mic per speaker” 9.1.6 soundstage. I used a similar approach on the Stormskerry Maja recordings in Lahti’s Sibelius Hall, and I think it works especially well for cinematic reproduction, providing a defined frontal image and clear time-of-arrival differentiation (“size”) for the surrounds – although it is definitely dependent on the quality of the hall as well. In a smaller studio or for different aesthetics I might very well do (and have done) something different, like a tree with mics closer to each other (similar to the 2L Cube).
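The time-of-arrival differentiation Mikko describes falls directly out of those spacings. A quick sketch of the arithmetic (speed of sound and the spacings are assumed from the layout described above):

```python
# Hypothetical arithmetic sketch: converting the described mic spacings into
# the arrival-time offsets that give the surrounds their sense of "size".
# Each metre of spacing adds roughly 2.9 ms of delay relative to the main tree.
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degrees C

def arrival_delay_ms(distance_m):
    """Time in milliseconds for sound to travel distance_m metres."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# Assumed spacings from the described layout: 3-4 m back to the side
# surrounds, a similar amount again to the rears, and 1 m up to the heights.
for label, d in [("side surrounds (3 m)", 3.0),
                 ("side surrounds (4 m)", 4.0),
                 ("rear surrounds (~7 m)", 7.0),
                 ("heights (+1 m)", 1.0)]:
    print(f"{label}: ~{arrival_delay_ms(d):.1f} ms")
```

So a 3-4 m spacing yields roughly 9-12 ms of natural delay into the surround mics, comfortably past the point where the ear fuses the arrivals into a single event, which is what creates the perceived depth.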

Q) Does the final score mix feature a blend of orchestral recording and virtual instruments? If so, which halls were those libraries recorded in, and how did you use reverbs to blend those together into a cohesive space for the mix?

A) Again, there is more about this in Rihards’ Collider interview, but in short we had a thirty-five-piece orchestra and 3 separate passes, sometimes doubling / expanding the same musical idea and sometimes orchestrated to play separate things, but the end result was a really huge sound in the big musical moments.

The score uses virtual instrument string elements as well, from Orchestral Tools, but not really in the common way of doubling or enhancing the live players all the time.

Instead, the VI strings were mostly of a special effects variety, often treated with additional effects, so the blending of the live orchestra vs the virtual elements did not need to match “realistically”. I did however employ quite a bit of 9.1.6 reverb on the virtual elements to match the real 9.1.6 mic-based live strings mix, along with some up-mixers.

Q) More generally can you give us any insight into your typical mixing template for Atmos with regards to how you are using reverbs? When and why do you lean into a LiquidSonics reverb, which LiquidSonics plug-ins do you use, and do you have any favourite presets that you reach for time and again?

A) I actually employ very few templates, except for a main Atmos master section with static objects and a 9.1.6 mix buss, linked bed+object processing master out tracks, downmix tracks with dedicated outputs to my monitor controller, and rough mix / mix print tracks with another dedicated output for easy referencing of the live mix against downmixes and previous printed mix versions or the rough mix. I select these on my monitor controller – so 2 sources at 9.1.6 (live Pro Tools and print tracks) and one at stereo (stereo downmix or binaural) – and often switch between additional print source and rough mix tracks via VCA mutes on my controller.

Instead of a larger template, I usually build somewhat unique session logistics for each project as they go. Pro Tools is so fast these days with features like Route to Track / New Track and a one-click + typing search function for plug-ins and routings that building the session as I work makes a lot of sense, as my projects vary so much from each other in their composition and track width & stem delivery requirements. In a workflow like this, it’s important to set the session default output to a “dummy” (not connected) path though, so every routing decision needs to be checked separately and things get routed to the correct print busses.

As far as LiquidSonics reverb use goes, Flow was quite typical in that Cinematic Rooms Pro was my most used reverb. I very often just grab the default preset and start tweaking parameters, although I might go preset-hopping in special situations. In Atmos and surround, my favourite things to tweak are the plane customisation options, and I often create a “room position” and enhanced realism in the reverb by modifying reverb time, levels and even predelay or EQ across the directional planes. I also use the different Surround Propagation modes depending on the situation, especially if I’m using CRP just to sweeten and lengthen an existing reverberant space, often running the plugin in a true 9.1.6-to-9.1.6 configuration in those cases.

In addition to CRP, I reach for all of the 9.1.6-capable LiquidSonics reverbs, really. My 2nd most used is Seventh Heaven Professional, and I love its initial preset “Large Chamber”! That one was also used on some snare drum on Flow (stereo, panned to the wides), but I often use various Halls on it as well. Lustrous Plates Surround is definitely the 3rd – I have a real EMT 140 plate, but it gets turned on extremely rarely these days, especially as so much of my work is in surround. Tai Chi and HD Cart are also used situationally, for the specific colours they can provide. On Flow, all but one stem had Cinematic Rooms Pro as their main reverb; in some places we also have a little FabFilter Pro-R 2, which was inherited from the production demo reverb.

Q) What matters most when working under tight time and budget constraints? How do you make sure you can mix efficiently and deliver a top-notch result despite the constraints?

A) Stable and solid operation and a fast and intuitive user interface are two really important factors for efficiency – but on the other hand a good and usable sound right out of the box via general high quality and a usable starting preset (or a self-customized one) is of course worth its weight in gold. In that department I have been very happy with LiquidSonics.

Q) The 9.1.6 monitoring environment at Studio Kekkonen must be where you’re making the majority of big mix decisions, but nowadays binaural is also a key part of the Atmos creative workflow. It can enable new possibilities for collaboration and additional insight into the end-user listening experience since spatial audio is becoming an increasingly common consumption target. Can you speak about how you made use of binaural during the process and whether this has an impact on your perception of the reverb in a mix? Of course binaural is also trying to create an environment for the listener, so did you find you needed to make any adjustments or allowances for that?

A) On Flow, we actually employed both Apple Spatial Audio and the Dolby Atmos Binaural re-render for the composers’ listening, as the mix was done totally remotely. I had met and worked with Rihards in person before, but I only met Gints at the Riga premiere! Dolby Atmos provided a perfect vehicle in this regard, as I was mixing surround-heavy, so the spatial and immersive aspects were a big part of the mix aesthetic. I delivered both an Atmos .mp4 file using the DD+JOC codec derived from the Dolby Atmos Renderer and binaural re-renders, along with a regular 2.0 stereo re-render, to the composers, and they commented on how much it helped them hear the “in-room” and spatial tonality of the mixes. 

I think Gints mostly listened to the Apple Spatial Audio version on Apple devices, while Rihards listened to all of the versions. In fact, this workflow was such a success that sound supervisor Gurwal told me he adopted the same idea (it is also possible to add video to the Atmos .mp4 exports), as they too were working remotely with Gints for some of the time, residing in different countries for part of the sound post-production period. 

As far as reverb perception in binaural, in this case I didn’t find that to be a concern, as the main mix environment we aimed for was still a cinema, and binaural was used more as a tool to approximate that environment. On the other hand, on a Dolby Atmos Music targeted release this could definitely be a different case.

Q) While Flow leveraged Atmos for the mix, it was ultimately delivered in 7.1 surround and the score has been released in stereo. Did knowing the final format would be 7.1 influence your mixing process in Atmos, and did Atmos enable you to efficiently target different delivery formats? In practice, was there a lot of adjustment needed to ensure that nothing important was lost when folding the Atmos mix down to 7.1, or did the Atmos mix translate fairly naturally? 

A) When we started the score mix, the final delivery format for either the film or the soundtrack was not yet set in stone, so I opted to build the mix around Atmos and my preferred 9.1.6-width mix layout, which I have found works very well for cinematic reproduction, using a combination of a 7.1.2 bed and static objects for the wides and the top rear & front. However, quite soon into the process came the decision to deliver the film in 7.1, but by that stage I had already deployed the binaural reference files, so it made sense for me to continue running the mix in Atmos while also building switchable 7.1 paths into the session. 

In the end, I did all my stem deliveries in 7.1 only, but kept a local workable 9.1.6-based Atmos print active as well. I ended up employing Cargo Cult’s Spanner plugin for the 7.1 stem downmixes, but employing their Atmos presets that are accurate to what an “official” Atmos 7.1 re-render would do with the Auto downmix coefficients. The heights and wides that I was missing from 9.1.6 folded down beautifully to 7.1, also because of the well decorrelated reverb choices I employed. I could have also used the Dolby Atmos Renderer for multiple simultaneous 7.1 stem group re-renders at the same time, but I decided on this Pro Tools bus-based workflow, as I was already using bus processing on the stem print outputs, and preferred the efficiency of switching between the printing/monitoring widths via VCA mutes straight in my Pro Tools session. The re-render method would have been necessary if I had been using any non-bussed discrete objects, however.
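As a rough illustration of what such a fold-down does, here is a hedged sketch of collapsing the extra 9.1.6 channels into a 7.1 bed. The channel names, the routing and the -3 dB gains are generic assumptions for illustration only, not Spanner's presets or Dolby's actual downmix coefficient tables:

```python
# Illustrative 9.1.6 -> 7.1 fold-down sketch. All coefficients and routing
# here are assumptions (a generic -3 dB pairwise fold), NOT the official
# Dolby Atmos downmix tables or Cargo Cult Spanner's presets.
G = 10 ** (-3 / 20)  # assumed -3 dB fold-down gain, ~0.707

def fold_916_to_71(ch):
    """ch: dict mapping channel name -> a sample value (or any number).

    Keeps the 8 ear-level 7.1 channels and folds the wides into the
    front L/R and the three height pairs into the nearest ear-level pair.
    Channel naming (Lw, Ltf, Ltm, Ltr, ...) is a hypothetical convention.
    """
    out = {k: ch[k] for k in ("L", "R", "C", "LFE", "Lss", "Rss", "Lrs", "Rrs")}
    out["L"] += G * (ch["Lw"] + ch["Ltf"])    # wide + top-front into front
    out["R"] += G * (ch["Rw"] + ch["Rtf"])
    out["Lss"] += G * ch["Ltm"]               # top-middle into side surrounds
    out["Rss"] += G * ch["Rtm"]
    out["Lrs"] += G * ch["Ltr"]               # top-rear into rear surrounds
    out["Rrs"] += G * ch["Rtr"]
    return out
```

The point Mikko makes about decorrelated reverbs applies directly here: because the folded-in height and wide channels carry decorrelated tails rather than copies of the bed, the summation above adds spaciousness without phasey build-up.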

The Atmos version of the score sounds even more spacious and enveloping, but the 7.1 does retain the “core” of the mix very well. In my experience, 7.1 is definitely a huge step up from 5.1, already includes lots of the upsides of Atmos, and retains the Atmos mix tonality and intention quite well. Of course, having the “native” Atmos version of the score mix session readily available, stored in the state we delivered it in, is a very good thing, as any potential future move to Atmos for subsequent distribution runs is now possible, especially in light of the film’s massive success.

Q) Did you find yourself making use of Atmos creatively, for instance employing motion via objects, or was it more simply for enhancing the depth and immersion available on the sound stage since we have much greater precision in breadth and depth of position plus height?

A) As I got the 7.1 info relatively early in the mix process, I refrained from using many Atmos-specific mix techniques that would involve discrete moving objects. As far as moving things via panners overall, this was not that type of a mix, even though I have done those kinds of Atmos mixes as well! But as guided by the nature of the score, this was all about enhancing the immersion, envelopment and emotions of the music.

I am always looking for interesting motion across the soundstage when mixing, but it does not necessarily have to be panner moves – spatial movement via dry sound location and reverb delay, or pre-delayed reverb, can be just as effective. It’s also good to keep in mind that, as a score mixer, one of my most important jobs is to provide a pleasing surround experience without inducing unnecessary head movement – i.e. the focus should always stay on the story and the screen, and the mix should not detract from that focus, at least in the absence of a clear and intentional motivation or directorial decision to enhance the film and its story by doing so. This is in my mind one of the key aspects that separates Atmos and surround score mixing from music mixing for an Atmos Music release.

Q) Surround reverb development presents particular challenges as providing a clean and transparent fold-down is critical to avoid unexpected changes in level or undesired phase artefacts. Ensuring a consistent tail density and acoustic presentation for the early reflections is a priority at LiquidSonics – how did you find the reverbs performed in this critical area?

A) I have very much appreciated the evenness and build-up-free downmixing of LiquidSonics reverbs. Nearly every project I work on needs to work in multiple reproduction widths, and I would argue that any Atmos mix and mix tool should be designed with this in mind, given the format’s aspiration to be agnostic of playback system width.

Q) Reverb can be one of the most challenging aspects of a mix to get right. What advice would you give to up-and-coming engineers who are struggling to get their reverbs to sit well in a mix, especially in immersive formats like Atmos?

A) I would definitely suggest investing in and learning good true surround reverbs like the LiquidSonics line-up – but also learning when not to use them, or when not to use all the channels available. 

Sometimes it can be more powerful to “segment” a mix so that not everything is “washed out”, especially when talking about non-acoustic or non-faux-acoustic genres. What if a dry sound was in the front, and it had reverb only in the rear and top rear, while other elements filled in the sides? That being said, I love the sense of being in a real acoustic space which good “real” multichannel reverbs can provide when used to their full potential, and I do employ that reverb aesthetic very often as a starting point.

As far as learning general reverb use goes, I would suggest the same thing – having good reverbs really helps. In the beginning phases of my career, I always thought that I was slightly lacking in my reverb use skills, and partly because of that I often went for a more in-your-face, compressed mix. But I have come to think that some of that might have been due to a lack of quality tools, as I rarely had access to something like a Lexicon 480L or a similar “really good” reverb, which back then was still confined mostly to physical units. 

These days I would argue that plugins have completely bridged that gap, which does make sense as the “magic boxes” were also just running code, so now is a great time for making good sounds! I would also suggest really learning the common parameters of algorithmic reverbs like predelay, density, HF damping etc, as being knowledgeable enough to tweak the parameters to suit the material at hand is a very powerful mix tool.

Q) Can you explain some of the differences when it comes to handing off the score mix to the next phase of the post production process for an indie production? Presumably no localisation was required on Flow given the lack of any dialogue, but what dub house stages will a score mix typically go through on an indie production and how are costs kept to a minimum? Do you know if any LiquidSonics reverbs were used during that process for Flow?

A) The smaller European film industries work, I think, quite similarly to each other in this regard – at least our local Finnish way of doing things was very similar to what was done with Flow in France and Belgium. I sent stereo and binaural full-mix versions of the score for review by the composers, and once we were further into the process I sent 7.1 full mixes to Gurwal for referencing as well. 

By the time we were finished, I exported 7x 7.1 stems for Gurwal and final mixer Philippe Charbonnel, who employed Gurwal’s premix with my full mix as a starting point for the stems-based final mix. Philippe mentioned using some situational sweetening reverbs on the score, but after I compared the final 7.1 DCP printmaster and stems to what I had sent, I was outright surprised how little the “broad strokes” sound and balances had changed from the music mix phase – this is probably because Rihards and Gints had designed the music so carefully in conjunction with creating the picture and things were already very well thought out by the time I received the tracks. This also meant that there really was not much music editing to speak of after the score mix, and the picture didn’t have re-cuts either at this stage. 

I think the biggest change that was done to the score, or the most apparent to me, was on the Dog Chase cue, where the music had double bass “barking dogs” that were toned down from the stems in the film mix to make room for real barking dogs – the original version can be heard on the soundtrack release, however. But I need to stress that this was unusual also from a European indie film-making perspective; we do get our share of re-cuts and music editing on usual projects, especially these days, but it always comes down to each director’s way of working as well. As far as final mix reverbs go, Philippe did mention using Cinematic Rooms on some sound elements, but also employing a wide array of other tools he has learned to love over his career.

Q) That’s interesting – did Philippe mention any specific scenes he used the reverb for? Also, do you know what other reverbs and effects he was using? I know TC 6000s are still really popular in some post houses, for instance, and Altiverb is typically a reliable partner for many professionals, especially now that it supports Apple Silicon and Atmos too – there’s usually a really interesting and eclectic mix of hardware and software in many professional workflows.

A) Philippe mentioned using the VSS3 TC Electronic plugin, especially inspired by his love of the 6k unit, and Exponential Audio’s PhoenixVerb on some of the music to enhance specific spots, to match the overall sound design there. On other elements he also mentioned using Altiverb, Slapper and also Cinematic Rooms, specifically at least in the falling boat towards the end of the film.

Q) Coming back to your own workflow, let’s talk a little about the studio set-up you’re running. What does the hardware rig you’re using right now look like? Of course reverbs are often some of the more demanding plugins on memory and processor, so how many channels of reverb are you able to squeeze out of it on a typical mix? 

A) I have switched over to mixing on Windows 11, actually, after 2 decades on Apple computers. At the time I made the switch (a bit over 3 years ago), Apple was not selling a powerful enough computer at any price, and even after the introduction of the Ultra Studios – however good Apple Silicon is, and I know it is very good, especially in overall power use / thermal performance – the value proposition on the Intel side is just much more sensible. 

At the time of mixing Flow, I had upgraded my first studio PC from 2021 originally built with the i9-12900K CPU to run the then-latest Intel i9-14900K model, but I recently built a completely new machine with the new Intel Core Ultra 9 285K chip, leaving the old and still monstrously powerful machine as a spare. 

On the current machine, I can run over 50 instances of Cinematic Rooms Pro at 9.1.6 output width at 48 kHz, so it is usually more than enough, but I have definitely struggled with this on my older systems on some more massive projects, especially when running higher sampling rates and delivering a large number of stems which all need their dedicated reverbs in Pro Tools. I often need to work at 96 kHz or even 192 kHz due to client/label demands as well, although I personally think 96 kHz is very much “enough” and often my own starting point. However, I still think 48 kHz can be quite acceptable as well, if plugin oversampling is available on all critical nonlinear processing.

I have also found that memory performance can affect reverb plugin counts quite a lot. The new machine is running ultra-fast, cutting edge 8400MHz CU-DIMM DDR5 RAM in XMP which has a significant measurable effect over the “stock” recommended Intel RAM speeds. (It is actually interesting, as in more “general” DSP efficiency comparisons using compressors etc, the RAM speed has not had such a significant effect, but for reverbs – both convolution-based and algorithmic ones – it seems to indeed matter a lot.)

Q) Well, this makes sense: reverb is incredibly memory-hungry, so the faster we can access it the better. Apple Silicon also has incredible memory bandwidth, so we are really hitting our stride in what we can achieve in native plugin development on the reverb side these days. What about the hardware side of your studio?

A) Yes! And don’t get me started on Apple Silicon vs DDR5 memory performance; Apple Silicon is indeed amazing, but the competition isn’t as clear-cut as some marketing claims would have us believe, especially for what a single core can address. All in all, it’s a great time for in-the-box performance on both main platforms!

As for audio hardware, I’m still using 2x Avid HDX cards, but these days mostly just as I/O as far as mixing goes, using the Pro Tools Hybrid Engine. I connect to an Avid MTRX Studio, which acts as a central hub and monitor controller combined with the DAD MOM controller unit, with auxiliary I/O for rack gear etc. via an Avid HD I/O and some older Apogee AD16X/DA16X’s with the X-HD card. I’m using all of the analog I/O on the MTRX Studio: 16 outs for the main 9.1.6 Atmos speakers and 2 outs for a nearfield stereo/mono set, with the 16 inputs used to monitor HDMI-based consumer Atmos via an Emotiva XMC-2 AV preamp that has 16 line-level XLR outs.

The MTRX Studio’s SPQ monitor-correction EQ is also fully utilised, so all 16 of my speaker channels have corrective EQ on them (switching over to the nearfield stereo speakers when they are in use), as well as delay and level correction. I tune the system with a combination of Sonarworks’ SoundID Multichannel, which can export SPQ, level and delay settings to the MTRX Studio, and manual adjustments based on REW measurements.

For the 9.1.6 monitoring, I employ 9x of my trusty Dynaudio BM15A’s, with 6x “vintage” BM6 passives and 3x Dynaudio subs ganged to the one free output, as well as a passive stereo BM10 set and a mono Avantone MixCube. For headphones I prefer the Sennheiser HD650’s, with low-end extension via the MTRX Studio SPQ EQ based on their SoundID profile. I mix on 3x Avid S1 faders and a Dock, with 2x Avid Control Android tablets and the Avid Control desktop app on my main 2560×1600 monitor, tilted just above the central S1’s to allow the center speaker to be heard fairly uncoloured. I really enjoy the ergonomics of this setup!

Q) Looking back on the Flow project as a whole, what lessons did you learn that you’ll carry into future film and music mixes? Conversely, is there anything you might do differently next time?

A) I learned immensely about filmmaking just by working on the music. The film and the music are so well thought out that many of the cues, and the way they were crafted, offered new revelations; a delight, really.

I wouldn’t do anything differently, though if I had known what a massive success it was destined to be, I might have obsessed over some details even more, despite the mix budget not being massive. I did realise it was really good while working on it, but maybe not *quite* how good, also as a unified cinematic experience (I was mostly focusing on the individual music scenes, after all)…

But hey, no one can say if that kind of additional finessing would have made for a better mix or film, and I do think we did okay already! In the end I’m really happy and proud of the work we did, and of the score mix as a snapshot of my mixing about a year ago.

Q) You must all be absolutely thrilled with the Oscars win – this is the latest in a hugely successful winning streak across many film industry award ceremonies, including Best Motion Picture (Animated) at the Golden Globes, Best Animated Feature at the European Film Awards, Best Animation at the César Awards, Best International Film at the Independent Spirit Awards and Best Animated Independent Feature at the Annie Awards. What does being part of a team that has achieved so much recognition from your peers in the industry represent for you personally, beyond professional achievement?

A) It is absolutely amazing! From the start of my career my aspirations have been to be the best that I can at the craft, not just “good locally”, and also to take on international projects and even find international success, however that is measured. At the same time I have wanted to live and raise a family in my Finnish hometown (rightly idyllic to some), with access to my family’s seaside “mökki” summer cabin and closeness to nature in both, so thus far I have had to be realistic about the kinds of projects and successes I could expect. However, the world has changed quite a lot since the start of my career, and things like remote working during and after COVID, along with the successes of independent, non-hub-based projects like Flow, are probably a big part of that change. I’m definitely looking forward to the new opportunities and contacts that the uplift from Flow and all the awards and recognition we have received might bring about!

Q) It’s really inspiring to see the success you’ve had with Flow – so for anyone else with a passion for film score who isn’t in one of the prime locations like London or LA, would you have any advice on how to break into the industry via independent film?

A) I think passion for film and music, but also for sound itself, is vital, as is associating with like-minded people, both as a student of the craft and especially when transitioning to working professional. Obviously network, network, network, doubly so if you are not working in a natural hub location. I have already been extremely lucky to work with some truly inspiring composers, musicians and filmmakers myself, but I think one key to that has been the conscious decision to treat each project as the most important one up to that point.

A huge thanks to Mikko Raita for sharing his experience working on the incredible Oscar-winning movie Flow, which is available to watch now on Amazon Prime, Apple TV+ and Max.

Mikko Raita’s recent work roster:
YouTube | Spotify

Mikko Raita’s web and social links:
mikkoraita.com | IMDB | Facebook | IG | LinkedIn | Bluesky

Mikko Raita on Studio Kekkonen:
studiokekkonen.com

Photography credits:
Mikko Raita photography: Sakari Röyskö
Orchestral recording photography: Gints Zilbalodis
Rihards Zalupe photography: Janis Porietis

Flow still shots:
Copyright Dream Well Studio