Artist interview

In Conversation With Classical Recording Engineer & Producer Morten Lindberg

15th July 2024

Morten Lindberg is a visionary Norwegian recording engineer and music producer specialising in classical, traditional folk and jazz music. His passion for capturing the essence of natural acoustic environments has led him to embrace a holistic approach to audio recording.

During the thriving 1990s classical music scene in Scandinavia, Morten founded his own production company (Lindberg Lyd), collaborating with renowned labels like Universal Music, Naxos, and EMI. However, as the industry shifted, he faced a critical choice – shut down or create something new. Opting for the latter, he founded 2L, a music label that focused initially on stereo recordings, then surround sound and, later, immersive audio. This pioneering spirit extended to his partnerships with artists and composers, fostering joint ownership of master recordings and eschewing traditional, restrictive contracts.

In his professional capacity, Morten wears two hats – audio engineer and producer – taking on the former role before a recording session, then transitioning into the producer’s role as the session progresses, a process demanding great energy and focus.

As a producer and engineer, he held the record for most Grammy nominations without a win, with twenty-eight through to 2019 (many in the category of Best Surround Sound Album), until he won his first Grammy for Best Immersive Audio Album (Lux) in 2020.

We were fortunate to spend some time with Morten to discuss his holistic recording techniques, why and how he incorporates LiquidSonics reverbs into his work, and what’s on his audio bucket list.

Let’s talk about reverb. You’ve always had a passion for acoustic environments, where does that come from?

I think it comes from being a player myself. There’s a very rich tradition in Norway for symphonic wind bands and, from as early as first grade in school, there’s an option of joining a band and learning to play an instrument. I think that whatever you do with an acoustic instrument, it forces you to relate to an acoustic space and that’s something I also brought with me into the engineering side. I was a trumpet player and the fun of it, when it comes to the connection with reverb, was that I used my trumpet very heavily while tuning my new mastering room. It comes in very handy when you come to the phase where you’re placing the panels in the room, where you choose where to absorb and where to diffuse. I just took my trumpet and walked around the room, turned around, played in different directions, and just listened to the response of the room from the playing. The trumpet comes in very handy because it’s quite directional in the high-frequency register.

So, you were playing trumpet in your school band but at some point you thought, I’m really into spaces. How did that manifest itself?

It was connected to the music, going to concerts in a lot of different venues. Then, of course, the conscious level of it came when I picked up a pair of microphones and did the first recordings. That’s when it evolved from a subconscious into a conscious experience.

I use artificial reverb as an extension of the acoustic space that I’ve recorded in because my approach to recording music is very much that of the classic balance engineer. I work with a main array of microphones and I spend most of the time finding the best possible positioning of the sound sources – the ensemble, the soloist, the balance between them. I then make sure to find the exact balance between direct sound, reflected early sound, and the reverb, the diffuse fields of the room, so that balance is already set in the raw recording. Even my colleagues think what I do is so puristic that I wouldn’t use an artificial reverb, but when we’ve been through the editing and are ready for the final post production, I always add a little bit of reverb to it.

That initial experience of the space and the earlier reflections are usually already there as I want them, so what I want from a reverb is that tail, that extension. I’m very conscious that there’s nothing I can do about the length of the reverb in the room I’m working in. The placement of the microphones can only do something about the balance between direct sound and the reverb. The error that many engineers make, if they work in a traditional studio and build their spaces entirely artificially, is that when they go into acoustic spaces they tend to chase a length of reverb which isn’t really there, and what they end up with is too great a distance and too little intimacy in the balance. I am very conscious when placing microphones that it is not about the length – I can adjust that afterwards in post – it’s the balance. I always try to just make it a live extension to the reverb because I find that most acoustic spaces have a very lush, rich reverb to them but they can have a little bit of an abrupt ending, to my taste. That’s where you can really match the reverb to the music, especially with the kind of music I work on, which has a lot of space in between the notes where you can actually hear the low-level reverb. So, working with that tail of the reverb is really important for me.

Historically, orchestral recordings would often involve multiple takes of the piece, edited afterwards to find the best possible performance – is that your approach as well?

Oh yeah definitely. A typical classical album can have up to 2,000 edits. It could be down to note by note, in extreme measures, although very rarely. That’s why what I do is so different, I work only with main array microphones.

If you’re going to do a 7.1 album, there are eight mics in the room. If you’re going to do a 9.1.6 album, then there are 16 mics in the room, and that’s why my relationship to the development of reverbs is quite active, because with this kind of array we have evolved from stereo to 5.1 to 5.1.4 and then into Atmos over 20 years.
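The mic counts he quotes follow directly from the layout notation: in an x.y.z name like 9.1.6, the three numbers are the main layer, the LFE channel and the height layer, and his arrays use one microphone per channel. A minimal sketch of that arithmetic (the function name is just for illustration):

```python
def channel_count(layout: str) -> int:
    """Total channel count (and, in a one-mic-per-channel array, mic count)
    for a layout string like '7.1' or '9.1.6':
    main layer . LFE . optional height layer."""
    return sum(int(part) for part in layout.split("."))

for layout in ("7.1", "5.1.4", "9.1.6"):
    print(layout, "->", channel_count(layout), "mics")  # 8, 10 and 16
```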

With these arrays, you not only have an immersive pickup, you have a true envelopment because your time difference is actually correct from all directions and developing all through the frequency range.

I need a reverb that can actually hang on to that and develop that further.

What is the ideal position? Does the ideal position change on each recording depending on what you’re doing?

It depends really on what kind of music we are aiming to create, but there is a go-to starting point if there are no other factors pointing me in a totally different direction. If there aren’t, then the point a conductor would choose for his own podium is a very good starting point. A sonically conscious conductor, when they meet a new orchestra or establish a relationship with one, tends to position their podium differently in relation to the orchestra than other conductors do. So, some conductors will insist on having their podium all the way on the edge of the stage, whereas you will see other conductors pull 2-3 metres into the orchestra. Then, of course, every conductor encounters different limitations on a stage, because you have very large stages, and then you have some stages like the Musikverein in Vienna which is really wide, but so shallow. There’s probably only eight metres from the edge of the podium to the back wall where the double basses are.

So, there are many different stages that the conductor faces, but it tends to be a good starting point for an enveloping experience. And, of course, I use all the axes to adjust and find the placement of the microphone array, so I move forward, backward, and I use the height very actively. Also, the angling of the microphones; even though these microphones are omnidirectional, they still have a very clear on-axis response. So, if shooting straight on at a section of the orchestra makes it a little too shrill, then I just lift the angle and point above them and then the sound just runs off, without using any EQ or anything.

Everybody raves about height in Atmos but how are you using height?

I’m using it for room – quite an active and full-sounding room. I don’t shield the height microphones to avoid direct sound at all. I use omnis in the heights as well, but it means that the direct sound from any source on the floor will arrive at the height microphones at a later time, and that travel-time difference is enough to give a very clear sense of grounding, or height perception.
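The height cue he describes is simple geometry: a height mic mounted above a floor-level mic sees a slightly longer path from any source on the floor, and the resulting arrival-time difference carries the elevation information. A rough sketch of that arithmetic, with purely illustrative distances and a nominal speed of sound:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def height_mic_delay_ms(horizontal_m: float, height_m: float) -> float:
    """Extra arrival time (ms) at a height mic mounted height_m above a
    floor-level mic, for a source horizontal_m away on the floor plane."""
    floor_path = horizontal_m
    height_path = math.hypot(horizontal_m, height_m)  # diagonal to height mic
    return (height_path - floor_path) / SPEED_OF_SOUND * 1000.0

# Illustrative figures only: a source 5 m from the array, height layer 3 m up.
print(round(height_mic_delay_ms(5.0, 3.0), 2))  # ≈ 2.42 ms
```

Even a couple of milliseconds of extra path delay like this is well within what listeners can register, which is consistent with his point that no shielding of the height mics is needed for the layers to read as grounded and elevated.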

Are rears giving you the delay and stuff like that?

With this kind of array, for a lot of the recordings I had the orchestra in a full circle, 360 degrees, so you could have the full brass section there in the rear. The thing is, whatever you do with an orchestral layout, it must make sense to the music, because if you use surround placement just for effect, then it will be distracting. So, whatever strange layout I might create, it has to come from a logic in the score, because if any of the musicians in the orchestra do not immediately recognise why they are placed where they are, why they are placed differently, then they will work against you for the whole session.

Quite early on I had this great experience in the Watford Colosseum with the Philharmonia Orchestra. In the score, the tuba part was just a doubling of the double basses in the string section so the tuba was simply a voicing and a sonic flavour to the double bass line so I placed the tuba player so he was surrounded by double basses. When the tuba player arrived, the stage manager pointed him to his seat and he was quite grumpy, questioning why he was with the double basses. Thirty seconds in, I just caught his eye and he gave a thumbs up because he immediately got it, and that’s how it really needs to work.

Let’s touch on reverb again. What was your first experience of LiquidSonics’ reverbs?

For the task that I’ve just described, for some reason I find that algorithmic reverbs are easier for me to shape when I’m just after that tail so I end up in most cases with Cinematic Rooms. But, even though I aim for capturing the perfect balance in one array, there are of course situations where, for example, the harmonica doesn’t really balance out with the full symphony orchestra, so I have a spot mic for one instrument or the other. In those cases where I tend to rely more on building the whole space with the earlier reflections and the full reverb for a spot microphone, I usually end up with Seventh Heaven.

I find that when building from the ground up, when you have a point source, Seventh Heaven has a little more substance to it than I can get with Cinematic Rooms, but when I’m working on the other side, with just the tail, I don’t need that because it will interfere.

Are there any particular presets you reach for each time?

Nope, I build from scratch. What I usually do is just go to any of the larger spaces; that’s basically where I start working on it. And of course, with the usage we just talked about, I would look for totally different presets as a starting point when it comes to Seventh Heaven or Cinematic Rooms. I’m after very different stuff there. In Seventh Heaven, I might look for more of a smaller hall, whereas in Cinematic Rooms I’m looking for that lush tail in a larger space.

Do you find yourself using EQ on the reverb at all?

I never use an additional EQ on the reverb. I do those adjustments in the reverb itself or, even more so, I shape what I put into the reverb – I might use EQ before the plugin. I work on the Pyramix workstation and, for my own projects, I work with extremely high sample rates. I also do mastering for other labels, but for my own work it’s a high sample rate, and there’s a limit to what kind of plugins will actually be transparent at those sample rates.

You’d be surprised how bad some plugins are at dithering their process. I do a lot of tests with any new tools I use. Over the years, I’ve got to work with Bob Stuart from the Meridian Audio crew, and he helps me test any new tools to see if they are actually transparent within Pyramix and not leaving any low-level distortion or anything in the signal path. In the last couple of years, I’ve had the pleasure of working with George Massenburg on his transition from the hardware into the plugin world and I absolutely love his EQ. It’s totally transparent and that’s my go-to EQ on this.

Is there a piece of music that you’ve yet to record that is on your bucket list?

That’s a difficult one because there’s lots of music which I would love to do. Of course, the nature of our work today is to work with contemporary composers and I just love that. I haven’t had the opportunity to work with too much of the really romantic repertoire.

I’ve done a lot of Baroque music, but I see some wonderful opportunities with large-scale romantic music which haven’t been taken so far.

One of the challenges is that the orchestral world today is a bit stressed – they’re hard pushed on the financial side, on resources – so for them to actually dig into a classical piece and do an extraordinary recording is a rare event. I would really love to do some of the large-scale romantic repertoire in this way.

Do you have a favourite?

I’ve had the pleasure of doing Mozart’s violin concertos in a fully enveloping surround recording with Trondheimsolistene and Marianne Thorsen. What I’ve found is that the further we go into this immersive recording style, the better our stereo comes out as well. So this, which I regard as a beautiful surround recording, was selected by Stereophile Magazine as a ‘record to die for’ in stereo. But if I want to pick some of the core classical repertoire, I wouldn’t go down the Wagner route; I’d probably end up along the lines of Bruckner.