
Video games present a unique challenge and opportunity for composers. Every player’s journey is different, requiring music that can seamlessly adapt to a constantly changing environment, pacing, and player actions.
Unlike the strictly linear progression of film and television, where music can be carefully synchronised to predetermined visuals, games often demand a dynamic score, and that brings its own set of technical and musical challenges and opportunities.
In our latest interview, we delve into the intricacies of this dynamic medium with Emperia Sound and Music co-founder Jeff Rona, whose work on titles like The Callisto Protocol and Marvel vs Capcom: Infinite showcases the unique approach and innovative spirit needed for game audio composition.
Q) Firstly, thanks for taking the time to talk with LiquidSonics about your work. Let’s begin with how Emperia Sound & Music came to be. What’s your background and that of co-founder Cody Matthew Johnson? How did you meet, and what made you decide to set up a business focused on interactive media? How did you set about growing the team to where it is today, delivering music for some of the biggest AAA games in the industry?
And thank you so much for inviting me here. Our team at Emperia Sound and Music and I are big admirers of your work, and it’s become a part of our workflow here.
My partner Cody Matthew Johnson and I met nearly 10 years ago when Cody moved to Los Angeles to finish his studies in music production and composing. I’d already been busy for many years scoring film and television, and one or two video games at that point. He came on board to run the studio here and assist in the production of a number of film and TV scores. During that time it became very apparent that Cody’s first musical love was in video games. I had already done a couple of bigger games, such as God of War 3, some additional music for Far Cry 4, Marvel vs Capcom: Infinite, and a handful of smaller games.
Around that time, Capcom reached out to me about scoring two major battle themes for the upcoming Devil May Cry 5. One would be more electronic, and the other more heavy metal. I knew at that point that Cody could absolutely slay a heavy rock theme like this, and I recommended him to Capcom to write that theme while I did the other. Once Capcom heard some of his work, they fully agreed. Devil May Cry 5 went on to be a smash hit, and over time we felt there were opportunities to be had in starting our own team, with Cody as the lead Audio Director and composer while I continued to compose and help build out the team. From that was born Emperia Sound and Music!
Since then our team has grown: we’ve developed partnerships with a number of major and smaller game companies, and we’ve scored a substantial number of games, from AAA console titles to small mobile games. We’re also writing and producing a lot of songs for games, and we’ve already won several prestigious awards. We’ve also been providing sound design, trailer and promo music, and production to some of our “dev” clients. More recently we’ve started a video game music-focused record label – Emperia Records. All of this comes from our desire to create the best possible game player (and music listener) experiences. Our team is made up of people who love games, are knowledgeable and ardent fans of game soundtracks, and whose compositional and production skills are absolutely exceptional.
Q) What do you find are the biggest challenges when composing and delivering music for interactive media when comparing to the work you have done for more traditional TV and film clients? Which of those are the most enjoyable to tackle, and are any of them a little frustrating?
We can look at the challenges three different ways.
First is taking your compositional and production skills and learning to write music that is fully “interactive”, meaning it is entirely in the control of the player while still feeling completely natural and musical. That means a very different approach to writing music.
The second challenge is production quality. Delivering music for video games is actually far more technical than delivering music for film and television. While both require substantial use of “stems”, the needs of delivering music that is “in-game” can become extraordinarily complicated at times.
And the third challenge is understanding and adapting to a unique workflow. When you work on a film or episodic TV series, it is typically a race to the finish line. You are up against the clock, and it can be relentless. While video games still have challenging deadlines, you are often put in the position of working in small batches, each one getting revisions for artistic reasons. Then, after copious testing by the developers, you may be asked to do rewrites based solely on the way the music feels at that point in the game, which you often aren’t able to see while you are actually writing.
So there is far more back-and-forth in the development of a video game score than in a film or TV score. When a developer comes back and says “we love this music, but it doesn’t work here because…”, it can be a little frustrating, but you have to trust the process and accept that a particular style of production, or a particular complexity or simplicity in a musical choice, doesn’t fit.
Q) Back in the days of MIDI, iMUSE (Interactive Music Streaming Engine) was used to great effect in classic LucasArts games. It was able to provide some truly engaging musical experiences in Monkey Island 2, Star Wars Tie Fighter and the recently remastered Star Wars Dark Forces by synchronizing music with the visual action using a collection of MIDI passages that were designed with dynamic transitions in mind. In the modern era we have generally moved on from MIDI in games, so how do you go about creating different themes and cues for a complex soundtrack that responds organically to the environments and action on screen now that music is mostly pre-recorded and mixed?
You are very correct: the use of synthesis and MIDI inside of a video game is a thing of the past. Video game scores are fully audio and fully interactive, and they rely on the sophistication of the current generation of “middleware” audio engines, which we not only rely on but have learned how to make the best use of to create the most engaging and immersive experience for the players.
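As a loose illustration of what such middleware does under the hood (this sketch is hypothetical and not drawn from any particular engine such as Wwise or FMOD), one core technique of interactive scoring is horizontal re-sequencing: a requested cue change is not executed immediately, but is deferred to the next bar boundary so the transition still lands musically:

```python
# Hypothetical sketch of horizontal re-sequencing: cue changes are
# scheduled on bar boundaries so transitions land musically.

def next_bar_boundary(now_s: float, bpm: float, beats_per_bar: int = 4) -> float:
    """Return the time (in seconds) of the next bar line at or after now_s."""
    bar_len = beats_per_bar * 60.0 / bpm
    boundary = int(now_s // bar_len) * bar_len
    if boundary < now_s:
        boundary += bar_len
    return boundary

class CueScheduler:
    """Queues a requested cue and only switches at the next bar line."""
    def __init__(self, bpm: float, beats_per_bar: int = 4):
        self.bpm = bpm
        self.beats_per_bar = beats_per_bar
        self.current = "explore"   # illustrative cue names
        self.pending = None
        self.switch_at = None

    def request(self, cue: str, now_s: float) -> float:
        """Ask for a new cue; returns the time it will actually start."""
        self.pending = cue
        self.switch_at = next_bar_boundary(now_s, self.bpm, self.beats_per_bar)
        return self.switch_at

    def update(self, now_s: float) -> str:
        """Called every frame; performs the deferred switch when due."""
        if self.pending is not None and now_s >= self.switch_at:
            self.current, self.pending = self.pending, None
        return self.current
```

For example, at 120 BPM a 4/4 bar lasts two seconds, so a combat cue requested at 3.1 s would be held until the 4.0 s bar line. Real middleware adds transition segments, stingers, and crossfades on top of this basic idea.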
Q) How closely do you collaborate with sound designers to ensure a cohesive audio experience? Can you describe the process of integrating music and sound effects so they sound like they are rooted in the same world?
We have sound designers in-house, and we have also collaborated with sound designers from clients and marketing teams (for trailers). We are pretty careful to avoid sounds in the score that seem like they could be Sound Design. We wouldn’t use a gunshot for a snare drum! So, by keeping the music very differentiated from the Sound Design, it integrates much more easily. The flipside is also true. When we do Sound Design, we make sure nothing sounds like it could be part of a score. We avoid material that sounds too tonal or musical.
Q) You have taken on a diverse range of projects in multiple genres including driving, fighting, shooting and dancing games. What do the initial stages of the creative process look like, and where do you take your inspiration from?
People often don’t understand just how eclectic music for video games can be. Just like in film and television, there is really a broad range of styles and genres that game developers seek.
We get approached by various clients asking us to work on a certain project. Along with that request comes some kind of “brief”, which is an example of what they’re looking for. Sometimes it’s a YouTube link, or a link to some album, video, or previous video game soundtrack. It starts with understanding why they liked that piece. What is it they are trying to achieve in the new project that makes them interested in that piece? From there we can start to draw certain conclusions as to what we are doing and why.
Within our team, we have writers and producers who can hit the mark on most everything that comes our way. We are very lucky in that way. Over the years, we’ve had a shot at a really wide and eclectic range of styles, so now, when people ask to hear something we’ve done in an unusual or highly specific genre, we have pieces in our catalog that we can refer to.
Q) In terms of hours of content needed, what is the range of requirements you see from a small to large game, and how long will it usually take to produce it? Do you find your delivery timescales vary much from those in TV/film?
We’ve worked on projects that require anywhere from 2 minutes to 2 hours of music! Some projects are looking for a single source for an entire score. Those projects can vary in scale, but they need to cover a lot of ground, from menus to boss fights.
However, there are a number of developers who are very used to the idea of hiring different sources for specific individual pieces. In those cases, we may get a request for a two-minute piece, but once we deliver that and they are satisfied, they will come back and ask us for another two- or three-minute piece, and so we work in these small batches. But it really varies, and we’ve been lucky to be the exclusive or predominant creator of many, many scores.
Q) From a technical standpoint, what tools are you regularly using in terms of DAWs and plug-ins, and are you using any classic effects hardware and synths in your studio?
Most of our team works in Logic Pro, although we have a couple of Cubase users. I use Logic myself. We are sometimes required to finish projects, especially sound design work, in Pro Tools.
We are all plug-in junkies, working with plug-ins from every major music and audio company as well as smaller experimental developers. When something interesting comes along, we are often first in line to give it a test drive and kick the tires. The only hardware we routinely rely on is our audio signal path for recording live musicians and singers. I have a number of vintage synthesizers, which I use on occasion, but none on a regular basis. I do have things set up so that any time I want a hardware instrument in my sequence, I can call it up right away. None of us are using hardware EQs or compressors for mixing, though we do use amps and pedals for recording guitars or bass.
Q) Do you all work from the same location, or is work from home a big part of how you go about your daily business? If so, how do you manage project collaborations at various stages from the initial composition right up to the final mix?
We have members of our team in Los Angeles, New York, Paris, and Austin, and we occasionally work with freelancers in other parts of the world. We record orchestras in Los Angeles, Nashville, Bulgaria, Prague, Budapest, and London. Generally, we do those sessions remotely, though we have on several occasions gone to be there in person. As a result, we are rarely in the same room together.
Naturally, we love to see each other, but we’ve become really comfortable with our day-to-day virtual working style. We have daily meetings, and we use Zoom, Slack, Google Docs, Dropbox, and other technologies that allow us to interact and share our projects at every stage.
Q) For orchestral music on larger projects, are you typically just using sample libraries for mock-ups or do they play a major role in the final delivery as well?
While everything starts in our DAWs, we do, in fact, record an orchestra on a number of occasions. When we do, we routinely keep the samples in the mix and blend them with the live orchestra. Not for soloists, but just to thicken up the ensemble itself. We also often leave percussion and a few other instruments sampled rather than replaced by live players, because of the style of music we do and our preference for things to be very accurate and rhythmically tight. But yes, samples are an important part of our final music delivery.
Q) Can you explain how you use reverb to bring sample libraries together for the final mix? For example we hear from a lot of composers that Cinematic Rooms is essential for their sample library based workflows because it has a wonderful ability to unify the sound of different spaces when libraries have been recorded in different studios (e.g. Abbey Road and AIR Lyndhurst) because it is so transparent.
It can be somewhat challenging to match samples from different libraries in order to get a realistic sound. I feel it is detrimental to the organic nature of an orchestra to rely on close mic’ing in order to try to limit or eliminate the sound of the hall where they were recorded. A critical part of any orchestral sample, whether it’s a section or a soloist, comes from the interaction of that instrument or instruments with the space it is occupying. A close mic’ed orchestra can tend to sound small, and a little fake or electronic. So while we do sometimes add a little more of the close mics to the default settings, the reverb choices we make are a valuable way to get all the sounds of the orchestra to blend together more organically.
We will typically engage several reverbs: some for strings, some for brass, and within those we will use different settings for more staccato or pizzicato sounds versus more legato instruments. The differences are subtle to the listener, but they give us the most clarity and sense of space in the final mix. We are also careful not to overdo it and make any of the instruments sound soupy, or as if they exist in a different space.
Q) What proportion of your work is recorded by a live orchestra, and how does this affect how you use reverb?
We work on such a broad range of projects, from very high budget to very low budget. As you would expect, low budget games cannot afford the cost of recording and mixing a live orchestra. But when you move to AAA projects, the use of orchestra is much more prevalent. In all cases, our use of orchestral samples is actually an important aspect of the score, even with a live orchestra.
We are all very used to the sound of a hybrid between live and sampled. With lower budget projects that are all samples, our goal is to have the same high-level of musicality and realism, as though the orchestra and other instruments were live.
Q) Could you talk about how reverb plays a role in the composition and mix phase? Before you started using LiquidSonics reverbs what were you using, and how have LiquidSonics reverbs made a difference to how you approach your work and a mix?
Reverb is probably the most careful and critical choice we make in our sound design and mixing work. I view reverb as a creative tool, and not just a re-creation of an acoustic space. When it comes to creating electronic music, the careful application of effects, especially reverb, is integral to the emotional content of the sound and the part itself. My reverb choices range from highly algorithmic, like Eventide’s Blackhole, to very naturalistic reverbs, such as those from LiquidSonics.
Q) When working with LiquidSonics reverbs, do you have any presets that you return to frequently, and why?
While we do some tweaking for our own tastes, we often start with the Cinematic Rooms – Studio Hall. It has a great sound for opening the mix up in Atmos. It really fills the room without being too forward. We also use Cinematic Rooms – Score Stage. It’s a warmer and fuller space in Atmos to give more legato sounds some added depth.
Q) When it comes to the final mix of the music, are you working primarily in stereo, 5.1/7.1 surround, or Atmos? What are the delivery formats of the music typically for games?
While larger console games are in surround formats, and mobile games remain in stereo, our main means of delivery has been and continues to be multiple stems. The stems are still typically each in stereo, even for games that end up in some form of immersive or surround audio. On a few occasions, we have delivered an orchestral stem in 5.0 or 7.0 surround.
Unlike the use of stems in film and television whose main function is to allow for tweaks to the mix on the final mix or dub stage, stems for video games are there to provide the game’s audio engine the ability to change the music in countless ways for the sake of interactivity, and also to avoid the music ever sounding like it is looping, which it very much is.
So we put different musical “energy levels” or musical elements on different stems, and the game developers program the software to bring these levels or elements in or out, or raise or lower them in real time to create different moods or energy levels. Additionally, melodies and rhythms are kept separated for the same reason, so they can come in and out at opportune moments in gameplay. Our use of reverb or other audio effects is done with a very critical ear to make sure that things still sound good even when fairly radically remixed. Since each stem has its own reverb, we want to make sure that the final music sounds good if all the stems are playing, some of the stems are playing, or only a handful are playing.
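That layering approach can be sketched in a few lines. This is a hypothetical illustration, not from any shipped title: the stem names and the intensity mapping are invented, and a single 0–1 intensity value fades each stem in over its own slice of the range, using equal-power curves so the combined level stays reasonably stable as layers enter and exit:

```python
import math

# Hypothetical vertical-layering mixer: one intensity value (0..1)
# fades stems in over successive ranges. Stem names and ranges are
# illustrative only.
STEM_RANGES = {
    "pads":       (0.0, 0.25),   # first layer to fade in
    "rhythm":     (0.25, 0.5),
    "melody":     (0.5, 0.75),
    "percussion": (0.75, 1.0),   # only at full intensity
}

def stem_gains(intensity: float) -> dict:
    """Map a 0..1 intensity to a linear gain per stem."""
    intensity = max(0.0, min(1.0, intensity))
    gains = {}
    for stem, (lo, hi) in STEM_RANGES.items():
        # Fraction of this stem's fade range that intensity has covered.
        frac = max(0.0, min(1.0, (intensity - lo) / (hi - lo)))
        # Equal-power fade curve: gain = sin(frac * pi/2).
        gains[stem] = math.sin(frac * math.pi / 2)
    return gains
```

At half intensity this yields the pads and rhythm stems at full level with melody and percussion silent; at full intensity every stem plays. A real game engine would apply these gains per audio frame, smoothed over time to avoid zipper noise.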
Q) What are some of the biggest challenges facing video game music composers today, and how do you see the role of music composition changing over the next 3-5 years with rapid advances in AI?
That’s a question for my crystal ball, which is in the shop at the moment!
At this point, I haven’t heard of any video games being released with AI generated music, but I assume there are some – and it will only grow. The current tools for generative music are still not very applicable to the needs of serious video game developers, who require stems, carefully designed variations of every theme, and the ability to give very specific feedback and notes to the composer to improve on the initial compositions.
As those tools become more available, and that may still be a while, human composers (that is such a weird phrase, but you know what I mean) are still the primary source of music for games. The challenges for composers are the same as the challenges for anyone in most any creative field – the competition is high and getting higher.
If you don’t have something remarkable to show for what you do, then somebody else is likely to get the project. Otherwise, it’s just a very competitive field with a lot of people clamouring to score games. At the same time, many game companies are slowing down production after it went a bit wild during Covid, when everyone bought a game console.
Q) Music and games can both be hard businesses to get your start in, so what tips or advice would you give to somebody at the beginning of their career looking to get into composing music for video games?
First and foremost, you really need to understand and know the world of video game music.
Being a “gamer” is a step in the right direction, but if you’re really interested in scoring games, you really need to know the major game franchises, their musical choices, who the top writers are in the various genres you hear in video games, and then have a really good understanding of how one composes music specifically for interactive media.
Having a technical awareness and comfort in using middleware engines is paramount. There are some games that don’t require it, but if you want to be seen as a valuable resource by video game developers, you need to speak their language, know the field, and deliver music at the highest creative and technical quality in a format that works perfectly for them. Understand that video games are a unique language and format – distinct from film and television.
With all of that in mind, assemble a demo reel that is focused on video game music. Keep in mind that video games cover a very wide range of musical styles and are not simply the highly dramatic music you hear in first-person-shooters. It’s a far broader range, and you want to show your understanding and awareness of that range. And again, you need to make your music at the highest level of production quality possible. No excuses! Even the best musical ideas don’t come across to a listener if the quality of your arrangements, orchestral mockups, mixing, and mastering don’t hold up to the same degree.
From there I would suggest attending one or more of the several video game developer conferences (not the ones you get dressed up for!) that take place around the world, which many developers attend. Some feature panels and speakers on the topic of video game music, and it’s a wonderful opportunity to attend those and learn from more experienced composers.
That, and a good dose of luck, and you may just have a real shot at it!