Amphion, Atmos & Anime: International mix/mastering engineer Gregory Germain | Spatial Audio project
The ease of working with Amphion speakers in a stereo mix translates intuitively to working with Amphion speakers in spatial audio for a global animation blockbuster
Tokyo, Japan – We interviewed CEO and mix/mastering engineer Gregory Germain of Sonic Synergies Engineering, based in the Tokyo “megalopolis” – the world’s most populous city – following Apple Music’s Dolby Atmos release of “Uta’s Songs One Piece Film Red” on January 25, 2023. Gregory and his team handled both the film score mix (5.1 surround) and the spatial audio mix (9.1.4) for this Japanese animated “fantasy-musical action-adventure” film by Goro Taniguchi – Japan’s highest-grossing film of 2022 and the fifth-highest-grossing film in Japanese history. The film was released on every continent, a measure of the continuing worldwide fascination with Japanese anime. In addition, the songs of the character “Uta” are voiced by chart-topping J-Pop artist “Ado”, with the film’s theme song “New Genesis” topping Apple Music’s Global Top 100 as well as the Billboard Japan and Oricon charts for 2022. The following interview shines a light on the Amphion-equipped Onkio Haus “Studio #7” spatial audio room; discusses the process used in this Dolby Atmos production; and provides insights into the immersive audio industry from a leading international engineer.
Amphion interviewed you very early in 2021 – not long after you opened Sonic Synergies Engineering and some four months before Apple Music’s big “Spatial Audio with Dolby Atmos” announcement. In these two years, how has your company progressed? And how would you describe the relative demand for your stereo, surround, and immersive audio mix/mastering services?
Our team continues to grow, with three members in Japan and others outside the country. Besides music and post-production projects, we’ve been working intensively on mixing for the Dolby Atmos format. This is a new technology in the music studio context, and very fresh for all of us, so I wanted to be sure I understood it fully. I received technical certification from Avid (Avid Certified Professional: Pro Tools | Dolby Atmos) and I’m also certified by Universal Music Group for Atmos mixing. We produce around one or two albums a month, as it’s still a small market in Japan – I would say 10% of our entire workload. The cost of an entire Atmos mixing and mastering production is still pretty high, so only the big labels are able to afford it.
Toei Company’s animated movie “One Piece Film Red” premiered in August 2022, was the highest-grossing film in Japan that year, and was distributed worldwide by the end of 2022. For decades, film-goers have been exposed to Dolby Atmos sonic experiences in “theatres”. But how did you re-work the mixing/mastering for a purely “music” application? What was your approach?
For the movie, we were in charge of the entire score mix in 5.1 surround. Most anime movies in Japan are still mixed in 5.1, as Atmos is still a small part of the entire catalog. The movie was also screened in an Atmos theater, but I believe that version was up-mixed from my 5.1 stems. In the score mix, we had to deal with two different types of content – the background music from the score and the songs sung by Ado/Uta. It was a completely different approach, because usually you have dialogue in between the music. However, the movie’s concept was really to focus on the songs, so we had to mix them like we mix a record, but in 5.1. For the Atmos version of the eight songs, we had to start fresh, as most of the songs in the movie were short versions. The stems were completely different, so I decided to ignore the 5.1 mix entirely and start from scratch. Also, the loudness and the entire spatial experience in Atmos are so different from 5.1 that it just couldn’t fit. For this project, I worked closely with Sebastien Ginesy (Fader Crafters), Takeda Shoshi (Onkio Haus), and Julie Pailhes.
The mix/mastering of “Uta’s Songs One Piece Film Red” for Apple Music was completed in the “9.1.4” facility of Studio #7 at the prestigious Onkio Haus Studios in central Tokyo. Why was this studio chosen? What do you particularly like about this space? What is the gear list? What were the various roles of the team members you managed? How many hours were needed to create this eight-song compilation?
I chose Onkio Haus because they have an Amphion Atmos setup, and Amphion are my favorite speakers. I’m used to working with Amphion in stereo, so for me it was easy to mix with speakers I know intimately. I also love the tuning of the room: it feels very natural, open, and less boomy compared to other Atmos rooms I’ve worked in. It’s a ‘detail’, but it’s also the only studio I know where you can A/B between the Atmos mix and the stereo mix with a single button on the monitor controller – which is very important when working in Atmos, as you need to keep the vibe of the original sound while expanding it without sounding too gimmicky.
The way our team works is in two steps, as we don’t have our own room yet, i.e. I make a pre-mix before going to the main Atmos studio. We use a small 5.1 room just for the pre-mix, and we down-mix the entire project to 5.1 and binaural first. My assistant gets the stems from the different mixes, cleans everything up, and makes sure it’s converted to our own template for Atmos mixing. After I receive his sessions, I start to assign all the tracks depending on the stems/arrangement, make some broad panning moves, and then try some automation ideas. I then pass the sessions back to my assistant, and he makes other small tweaks and handles QC (quality control) tasks while I’m working on other songs, so we can be productive and mix many more songs in one day.
There are a lot of requirements in delivering an Atmos mix, so we usually double-check everything. We use a Google Sheets template we created specifically for Atmos, listing all of the label’s requirements, to ensure we have everything right. Then we move to the Atmos studio at Onkio Haus, where I make the final mix on the big setup. After we’re done, there is another QC pass with the local assistant, and then I move on to mastering. For this project, I used the Dolby Atmos Album Assembler, but the mastering was part of the mixing itself, so it was only a few EQ moves and some level adjustments. In total, I think it took around five days to complete the album for final delivery.
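For readers curious what such a requirements checklist can look like in practice, here is a minimal sketch of an automated delivery-QC pass. The spec values, field names, and the `qc_issues` helper are all illustrative assumptions, not Gregory's actual template or any label's real requirements, which vary per project.

```python
# Hypothetical label delivery spec for an Atmos master (illustrative values only).
REQUIRED_SPEC = {
    "sample_rate": 48000,      # Atmos masters are delivered at 48 kHz
    "bit_depth": 24,
    "max_true_peak_db": -1.0,  # true-peak ceiling in dBTP
    "target_lufs": -18.0,      # example integrated-loudness target
    "lufs_tolerance": 1.0,
}

def qc_issues(master_meta: dict) -> list:
    """Return a list of human-readable QC failures (empty list = pass)."""
    issues = []
    if master_meta["sample_rate"] != REQUIRED_SPEC["sample_rate"]:
        issues.append("sample rate %d != 48000" % master_meta["sample_rate"])
    if master_meta["bit_depth"] != REQUIRED_SPEC["bit_depth"]:
        issues.append("bit depth %d != 24" % master_meta["bit_depth"])
    if master_meta["true_peak_db"] > REQUIRED_SPEC["max_true_peak_db"]:
        issues.append("true peak %.1f dBTP over ceiling" % master_meta["true_peak_db"])
    drift = abs(master_meta["integrated_lufs"] - REQUIRED_SPEC["target_lufs"])
    if drift > REQUIRED_SPEC["lufs_tolerance"]:
        issues.append("loudness %.1f LUFS outside tolerance" % master_meta["integrated_lufs"])
    return issues

# Metadata for one (hypothetical) song master, as measured before delivery:
song = {"sample_rate": 48000, "bit_depth": 24,
        "true_peak_db": -1.4, "integrated_lufs": -18.3}
print(qc_issues(song))  # an empty list means this master passes the check
```

Running every stem and master through a scripted gate like this before the label's own QC is one way to avoid the costly reprints Gregory describes later in the interview.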
You have mixed/mastered singles and albums in various Dolby Atmos studios in Japan now. What makes the Onkio Haus Studio #7 experience so different? What sets it apart from other immersive audio spaces you have worked in?
It just sounds right. Many studios use DSP and active speakers, which can be useful in specific situations such as working in an untreated room, but in a good studio I don’t think you need it. I always prefer the sound of an ear-tuned room over a DSP-tuned room, unless DSP is used on top of the ear tuning to fill in the last little dead spots. Besides the acoustics, I think they really nailed the entire mixing process and make it easy – in the way the desk is set up, for example. Also, I’ve found the TAC System monitor controller to be the best piece of gear for mixing in Atmos. You have access to many functions of the renderer and can switch between different formats at your fingertips – which is good for us but also good for the client. We can flip very easily to binaural with different headphones, including the ones from Apple, so it’s easier to get an idea of how it will sound on Apple’s platform. The room itself is also very cozy and looks absolutely beautiful.
More generally now, what are the main challenges faced by engineers wanting to enter the immersive audio business? Along with ROI (return on investment), what are some of the hurdles studios and producers face – both technical and artistic? What will boost demand? What would inspire producers to adopt immersive audio? What would improve workflow?
Well, technically it’s a totally different approach from mixing in stereo, and even from legacy surround formats (5.1 and 7.1). The biggest challenge with Atmos is that it’s not easy to know how a mix will sound when it is streamed on the Apple Music platform, so there is a lot of guesswork. The way phase behaves in this format is very tricky, and it sounds completely different between speakers and headphones. Many people believe that because the end-user listens on headphones, mixing on headphones from the start is the way to go, but as soon as you use headphones you’re in binaural, which is another sub-format of Atmos. Atmos is a virtualized audio format with automatic fold-down, adapting to wherever it’s played back, so there is technically no way to hear how it sounds unless you try all the different virtualized formats one by one – which is impossible for now with just a switch. This is not the case with stereo, because stereo is what it is, and there are no drastic sonic changes besides compression to MP3 or ALAC.
Another thing is that the tech behind the virtualization is changing all the time, and the engineers behind the servers at Apple can update the engine without any notice, so you cannot really trust what you hear in the cans. It means the songs you delivered may change over time depending on the algorithm, e.g. a song I mixed last year may sound completely different if Apple changes its binaural virtualization tech. At the end of the day, what you hear on the speakers is the only real representation of the mix, so it’s very important to mix on speakers along with the other tools – but this is pricey. Investing in a 12-speaker setup is a big deal, and there is also a lot of tuning to be done. Atmos is machine-power hungry, so you need a big, fat computer to handle it – and I’m not even talking about the huge amount of data (original sessions, stems, Atmos masters, ADM, etc.)!
With stereo, you have far less QC and export time – it’s much simpler. For Atmos, you really need to take care of every detail, otherwise the label’s QC can refuse your mix for whatever reason and you’ll need to reprint an entire song for just a small loudness issue. The way I see it, the demand, for now, is created only by Apple. That may change if another big platform such as Spotify decides to stream an immersive audio format. I don’t think the end user dictates the trend; the big tech companies will.
Regarding producers, I don’t think it’s necessary to produce in the Atmos format from the beginning of a project, because it’s just too complex and not flexible yet. I believe stereo will remain the main vessel for music expression, but Atmos or another immersive format will live alongside it. However, I think for “live” sound it could be interesting to make music specifically for this format; the exception is those making music in Logic, where the renderer is completely native. But the workflow and the format itself are still experimental, and the end user doesn’t get a true indication or representation of what we can really hear in a room like Studio #7. They receive a compressed file with only 16 objects – otherwise it would be technically very hard to stream to the entire planet, considering an Atmos ADM master file can be a huge 2 GB file, including audio and complex metadata.
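The 2 GB figure is easy to sanity-check with back-of-the-envelope arithmetic, since an ADM BWF master carries uncompressed PCM: size is roughly channels × sample rate × bytes per sample × duration. The channel count and song length below are illustrative assumptions, not figures from this project.

```python
def adm_size_bytes(channels: int, sample_rate: int, bit_depth: int, seconds: int) -> int:
    """Raw PCM payload: channels x rate x bytes-per-sample x duration (metadata excluded)."""
    return channels * sample_rate * (bit_depth // 8) * seconds

# Assume roughly 60 active channels of 48 kHz / 24-bit audio for a 4-minute song:
size = adm_size_bytes(channels=60, sample_rate=48000, bit_depth=24, seconds=240)
print("%.2f GB" % (size / 1e9))  # about 2 GB, before any metadata
```

A full 128-channel Atmos session of the same length would come out over twice that, which illustrates why streaming platforms fold the mix down to a compressed, 16-object delivery instead of shipping the master.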
I’m really excited about what Apple and Dolby might engineer in the future – getting the listener closer to what we can hear in the studio – as it’s sonically really beautiful. That, I believe, would be the key to completely revolutionizing the music experience. It can only happen with faster Internet, bigger servers, and more efficient virtualization technology. We have Atmos in cars now, and if it’s done well it could be a great new experience for the end-user. As for myself and my team, we plan to grow our team and expand our Atmos mixing services even further. So stay tuned.
Read more about Gregory Germain: https://www.gregory-germain.com/
Read more about Sonic Synergies: https://sonic-synergies.com/
Hear more from Gregory Germain via Linktree: https://linktr.ee/mixedbygreg
Learn more about Onkio Haus – Studio #7: https://www.onkio.co.jp/music/7st.html
Learn more about Amphion Immersive Audio Solutions: https://amphion.fi/studio-products/beautifully-immersive/
Apple Spatial – Uta’s Songs One Piece Film Red: https://music.apple.com/jp/album/utas-songs-one-piece-film-red/1636446787?l=en
Learn more about One Piece Film Red: “ONE PIECE FILM RED” OFFICIAL SITE (onepiece-film.jp)
Japan Pro-Audio Distributor https://www.mixwave.co.jp