How the Game Audio Market Is Changing
It is hard to keep up with all the changes on the market. That is why I set out to write a series of articles about audio technologies, broken down by audio segment, to help both seasoned experts and newcomers taking their first steps understand our market better. This article is the first in the series, and it covers game audio technologies and how the game audio market is evolving.
The world of game audio is huge, and its impact stretches far beyond entertainment.
In healthcare, for instance, it powers VR therapy for people with PTSD and autism support tools designed to aid cognitive and emotional development. So it plays a vital role not only in entertaining people but also in building solutions that genuinely help them.
The gaming industry is one of the largest entertainment industries in the world, generating more revenue than the film and music industries combined. The global gaming market is estimated to reach 503.14 billion U.S. dollars annually in 2025, up from 396 billion U.S. dollars in 2023. And audio is a big part of this success. It doesn’t just support gameplay – it enhances it, shapes its tone and deepens player immersion in ways visuals alone never could. But while the importance of sound remains constant, the way game audio is made is changing rapidly.
New Tools, New Workflows
Before any sounds are made, there’s a planning phase. In pre-production, teams select the tools and software that will shape the game’s audio pipeline. This typically includes a digital audio workstation like Reaper, Nuendo or Pro Tools for composing and editing. Middleware solutions like Wwise or FMOD are chosen to bridge the creative and technical sides, allowing sound to be implemented smoothly within engines like Unity or Unreal Engine. These tools make it possible to create dynamic audio systems, where sounds are triggered by specific events in the game, like footsteps that change with speed or music that reacts to the character’s health or location.
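To make the idea of event-driven audio concrete, here is a minimal sketch using the FMOD Studio C++ API. The event path ("event:/Player/Footstep") and the "Speed" parameter are made up for illustration; in a real project they would match whatever the sound designer authored in FMOD Studio, and the relevant banks would already be loaded on an initialized system.

```cpp
#include <fmod_studio.hpp>

// Minimal sketch: trigger a footstep event and drive a game parameter that the
// sound designer mapped (inside FMOD Studio) to pitch, sample selection, etc.
// Assumes studioSystem is initialized and the relevant banks are loaded.
void PlayFootstep(FMOD::Studio::System* studioSystem, float playerSpeed)
{
    FMOD::Studio::EventDescription* description = nullptr;
    studioSystem->getEvent("event:/Player/Footstep", &description);  // hypothetical event path

    FMOD::Studio::EventInstance* instance = nullptr;
    description->createInstance(&instance);

    instance->setParameterByName("Speed", playerSpeed);  // hypothetical parameter name
    instance->start();
    instance->release();  // instance is freed automatically once it stops
}
```

The same pattern applies to music that follows health or location: gameplay code only pushes values into parameters, and the middleware decides how the mix responds.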
However, in recent years, game engines themselves have started to offer increasingly strong native audio tools that, in some cases, reduce the need for traditional middleware, especially in smaller projects.
In Unreal Engine, two innovations stand out. The Quartz Audio System, introduced back in Unreal Engine 4.26, provides low-latency audio timing and precise, sample-accurate synchronization. It’s particularly useful for rhythm-based gameplay, where timing is critical. While it doesn’t replace full middleware solutions, it allows developers to keep more of their workflow inside the engine.
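The core idea behind Quartz is quantized triggering: instead of playing a sound the moment gameplay asks for it, the request is scheduled onto a musical clock and rendered exactly on the next beat or bar. The sketch below illustrates that scheduling logic in plain C++; it is not the actual Quartz API, just the concept.

```cpp
#include <cmath>

// Conceptual sketch only (not the Quartz API): a musical clock that snaps
// trigger requests to the next beat boundary.
struct MusicClock
{
    double bpm = 120.0;
    double startTimeSeconds = 0.0;  // when the clock was started

    // Absolute time (in seconds) of the next beat at or after 'now'.
    double NextBeatTime(double now) const
    {
        const double secondsPerBeat = 60.0 / bpm;
        const double beatsElapsed = std::ceil((now - startTimeSeconds) / secondsPerBeat);
        return startTimeSeconds + beatsElapsed * secondsPerBeat;
    }
};

// Gameplay code asks for a sound "on the next beat"; the audio engine then
// renders it sample-accurately at the scheduled time.
void TriggerQuantized(const MusicClock& clock, double now,
                      void (*scheduleSoundAt)(double playTimeSeconds))
{
    scheduleSoundAt(clock.NextBeatTime(now));
}
```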
Even more transformative is MetaSounds – Unreal’s node-based, procedural audio system. MetaSounds offers control over dynamic DSP effects and behavior-driven audio, allowing designers to build complex sound logic without relying on external tools. For many developers, this brings work that previously required middleware or custom code directly inside the engine.

Pic 1 – Unreal Engine’s Audio System (pic by Amir)
Unity has been moving in the same direction. Its DSP Graph system allows for the creation of custom real-time audio effect chains, supporting more flexible and responsive sound design. And while it’s not audio-specific, Unity’s Muse AI integration introduces tools that aim to simplify parts of the workflow, speeding up how sound is implemented in games.
The pros of these native tools include cost savings (no need for middleware licensing), streamlined workflows within the engine and increased flexibility for dynamic sound design. However, their limitations include challenges with handling complex multi-track audio or 3D spatialization in large-scale projects. Middleware remains essential for projects requiring advanced features like hierarchical sound management or complex music systems.
For smaller projects or startups, relying on native engine tools can reduce costs and simplify development pipelines. For indie teams or prototypes, that means faster iteration and lower technical overhead. However, developers must carefully evaluate whether these tools meet their specific needs or whether investing in middleware is necessary to achieve the desired results. As game engines continue to enhance their audio capabilities, reliance on middleware may decrease over time, but middleware will likely remain the standard for high-budget productions with complex audio requirements.
Sound designers also build their own tools to simplify their work. For example, Karol Andrzejewski, sound designer at Flying Wild Hog, created a tool for quick, simultaneous editing of multiple attenuations. It allows rapid iteration, making testing more efficient and speeding up decision-making. He also created Reverb Mixing Helper, which improves the reverb mixing and monitoring workflow in Wwise.
Middleware Is Still Evolving
Despite these new engine-native options, dedicated middleware like Wwise and FMOD still plays a central role in many professional pipelines, especially in large-scale productions where sound complexity is high.
These tools aren’t standing still either. In its 2024.1 release, Wwise introduced improved live editing features, allowing real-time changes to parameters (volume, filters, RTPCs and so on) during gameplay without needing to rebuild SoundBanks or restart the game. This gives audio teams more time to test and iterate, a crucial part of modern interactive audio design.
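On the game side, this workflow is mostly about exposing parameters. Below is a rough sketch of what that looks like with the Wwise SDK; the event name, the RTPC name and the game object ID are placeholders, and the game object would be registered elsewhere with AK::SoundEngine::RegisterGameObj.

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Placeholder game object; assumed to be registered elsewhere via RegisterGameObj().
static const AkGameObjectID kPlayerObject = 100;

void OnCombatStart()
{
    // Event name is whatever the designer authored in the Wwise project.
    AK::SoundEngine::PostEvent("Play_Combat_Music", kPlayerObject);
}

void OnPlayerHealthChanged(float healthPercent)
{
    // The RTPC drives filters, layer volumes, etc. as defined in the authoring
    // tool – and the designer can retune that mapping live while the game runs.
    AK::SoundEngine::SetRTPCValue("Player_Health", healthPercent, kPlayerObject);
}
```

Because the mapping lives in the Wwise project rather than in code, designers can iterate on the mix without waiting for a programmer or a rebuild.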

Pic 2 – Audiokinetic’s Wwise software (pic by Audiokinetic)
Wwise also offers spatial audio rendering, providing precise positioning and environmental effects that enhance immersion. Its object-based audio pipeline supports advanced spatialization techniques, including binaural rendering for headphones and adaptable mixing across various channel configurations. Wwise also integrates acoustic simulation technologies such as ray tracing for obstruction, occlusion and diffraction effects, ensuring realistic sound propagation in complex environments. While Unreal and Unity now include some live editing and profiling tools, middleware like Wwise still offers more advanced control, broader cross-platform compatibility and deeper flexibility – especially for managing complex, layered audio systems in large games.
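As a rough idea of how a game feeds that spatial pipeline, the sketch below updates an emitter’s position and its obstruction/occlusion values through the Wwise SDK. Exact signatures can vary between SDK versions, and the obstruction/occlusion values themselves are assumed to come from the game (for example, ray casts against level geometry).

```cpp
#include <AK/SoundEngine/Common/AkSoundEngine.h>

// Sketch: push spatial data for one emitter into Wwise each frame.
// obstruction/occlusion range from 0.0 (clear path) to 1.0 (fully blocked).
void UpdateEmitterSpatialData(AkGameObjectID emitter, AkGameObjectID listener,
                              float x, float y, float z,
                              float obstruction, float occlusion)
{
    AkSoundPosition position;
    position.SetPosition(x, y, z);
    position.SetOrientation(0.0f, 0.0f, 1.0f,   // front vector
                            0.0f, 1.0f, 0.0f);  // top vector
    AK::SoundEngine::SetPosition(emitter, position);

    AK::SoundEngine::SetObjectObstructionAndOcclusion(emitter, listener,
                                                      obstruction, occlusion);
}
```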
Another thing worth mentioning is Strata – Audiokinetic’s advanced sound effects library integrated with Wwise, offering multitrack sound design capabilities.
“The magic is that we distribute the multitrack that’s been used to create the sound (…) It is rare that you capture a sound in nature and play it directly. Typically, for anything you hear in a movie or a game, there are a lot of sounds assembled together, mixed and processed. When you have a rendered file, it is quite difficult to isolate a layer – a lot of the time it is not possible at all. Any variable a game can have can influence the mix dynamically, at run time, in how this sound should be played. When you have all the layers independent, it makes that really easy to do,” explained Simon Ashby, co-founder and Head of Product at Audiokinetic, in our podcast.
So, while some projects may now rely more heavily on native engine tools, middleware remains essential where precision, performance and detailed control are needed.

Pic 3 – Audiokinetic’s Strata (pic from our podcast)
How Game Audio Is Evolving
As games have evolved, so have their audio applications. Spatial audio continues to grow, used in major titles with technologies like Dolby Atmos, Windows Sonic and Ambisonics. But another shift is happening in how sound is designed to adapt and evolve in real time.
Procedural audio tools like MetaSounds and Quartz are part of this change. They allow sound to react to player behavior, game logic and environmental conditions. Instead of triggering static files, games can now generate and modulate sound on the fly, depending on what’s happening on screen. In Just Cause 4, the whoosh sound when passing NPC vehicles was procedurally generated using FMOD’s runtime synthesis, blending white and brown noise. By adjusting the mix based on factors like vehicle speed and distance, the sound dynamically responded to in-game actions and the environment.
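A very rough sketch of that noise-blending idea is below. This is not the actual Just Cause 4 implementation – just a plain C++ illustration of blending white and brown noise, with the mix driven by a normalized speed value and the gain by distance.

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// White noise sample in [-1, 1].
float WhiteNoiseSample()
{
    return 2.0f * (std::rand() / static_cast<float>(RAND_MAX)) - 1.0f;
}

// Render a short whoosh buffer. speedNorm and distanceNorm are 0..1 values
// supplied by gameplay code (relative vehicle speed, distance to the player).
std::vector<float> RenderWhoosh(int numSamples, float speedNorm, float distanceNorm)
{
    const float whiteMix = std::clamp(speedNorm, 0.0f, 1.0f);        // faster = harsher
    const float gain     = 1.0f - std::clamp(distanceNorm, 0.0f, 1.0f);

    std::vector<float> out(numSamples);
    float brown = 0.0f;
    for (int i = 0; i < numSamples; ++i)
    {
        const float white = WhiteNoiseSample();
        brown = 0.995f * brown + 0.02f * white;   // leaky integrator ≈ brown noise
        out[i] = gain * (whiteMix * white + (1.0f - whiteMix) * 3.0f * brown);
    }
    return out;
}
```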
This is especially powerful when combined with adaptive sound techniques. In The Last of Us Part II, for example, ambient sounds change depending on the character’s emotional state or movement, shifting the atmosphere of a scene without needing extra dialogue or animation.
As more games pursue this kind of responsiveness, procedural and adaptive tools are becoming less of a niche and more of an expectation.

Pic 4 – From the game “The Last of Us Part II” (pic by Mat Ombler)
Audio Accessibility in Games
Audio accessibility ensures that players with hearing impairments can fully engage with games through alternative cues and customizable features. Developers use techniques like visual sound indicators, controller vibrations and detailed subtitles to replace or supplement audio information, all without compromising immersion. Nowadays, these solutions are becoming common, not just in AAA games but also in smaller projects.
Unity and Unreal now support toolkits that make implementation easier. For instance, Unity’s Accessible Audio Toolkit (AAT) includes compass-based sound direction indicators to help players orient themselves in space. Wwise and FMOD add another layer by spatializing sound, making it possible to trigger directional icons or radar-style interfaces that reflect the audio landscape in real time.
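The math behind a compass-style indicator is simple: project the sound emitter into the listener’s frame of reference and convert the result into an angle for the UI. The sketch below does this in 2D (top-down) for brevity; the names and the Vec2 type are illustrative.

```cpp
#include <cmath>

constexpr float kPi = 3.14159265f;

struct Vec2 { float x, y; };

// Angle of the sound source relative to where the player is facing:
// 0 = straight ahead, positive = to the right, wrapped into [-pi, pi]
// so the UI can place an icon on a circular compass.
float SoundIndicatorAngle(Vec2 listenerPos, Vec2 listenerForward, Vec2 emitterPos)
{
    const Vec2  toEmitter{ emitterPos.x - listenerPos.x, emitterPos.y - listenerPos.y };
    const float emitterAngle = std::atan2(toEmitter.y, toEmitter.x);
    const float facingAngle  = std::atan2(listenerForward.y, listenerForward.x);

    float relative = emitterAngle - facingAngle;
    while (relative >  kPi) relative -= 2.0f * kPi;
    while (relative < -kPi) relative += 2.0f * kPi;
    return relative;
}
```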
Haptic feedback also plays a big role. SDKs for platforms like PlayStation’s DualSense or Xbox controllers allow designers to assign vibrations to in-game actions like off-screen explosions or incoming threats. These tactile cues are often combined with on-screen alerts or subtitle enhancements to make sure nothing is missed.
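Here is a hypothetical sketch of that idea: deriving a rumble strength from an audio event so that players who cannot hear it still feel it. SetControllerRumble stands in for whatever platform haptics API the game actually uses.

```cpp
#include <algorithm>

// Placeholder for the platform-specific haptics call (DualSense, Xbox, etc.);
// assumed to be implemented elsewhere.
void SetControllerRumble(float lowFrequency, float highFrequency);

// Map an off-screen audio event to a tactile cue.
void OnAudioEventForHaptics(float loudness01, float distanceMeters, bool onScreen)
{
    if (onScreen)
        return;  // visible events already have visual feedback

    const float falloff  = 1.0f / (1.0f + 0.1f * distanceMeters);
    const float strength = std::clamp(loudness01 * falloff, 0.0f, 1.0f);

    // Low-frequency motor for rumbling explosions, a lighter high-frequency
    // component for sharper cues like gunshots.
    SetControllerRumble(strength, 0.5f * strength);
}
```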
We can also see new startups like Inclusivity Forge emerging, specifically targeting the challenges developers face in implementing accessibility. For game creators, embracing accessibility isn’t just about inclusion; it’s a significant market opportunity, potentially reaching the 25% of gamers who report accessibility issues. Inclusivity Forge aims to make tapping into this market vastly simpler and more cost-effective. They are building plugins for Unity and Unreal Engine that allow developers to integrate sophisticated audio accessibility features, such as sonification and 3D navigation aids, often with just a few lines of code needed beyond adding the plugin itself.
“We want to empower developers by removing the traditional cost and complexity barriers associated with accessibility. Our tools handle the heavy lifting, allowing studios of all sizes to reach a broader audience and enhance their game’s impact,” says Tomasz Żernicki, co-founder of Inclusivity Forge.
They are currently inviting developers to sign up for the beta version of their plugins and be among the first to experience this streamlined approach to inclusive game design.
Overall, accessibility is not only an ethical and design consideration but increasingly a legal one. The European Accessibility Act, which takes effect this year (2025), will require digital products, including games, to meet specific accessibility standards. By embedding accessibility into core design, startups can avoid expensive fixes later and reach a broader market.
What It Means for Engineers
Beyond technology, there’s another quiet shift happening. The role of the sound designer is evolving. As engines offer more built-in audio tools, and as AI starts to automate routine implementation tasks, the skill set for audio professionals is getting wider.
Sound designers are now expected to understand technical workflows, scripting basics and runtime systems, while still delivering high-quality creative output. The line between composer, implementer and audio programmer is becoming increasingly blurred.
At the same time, the barrier to entry is getting lower. More accessible tools mean that smaller teams and solo developers can now create rich, interactive audio without full-scale middleware setups. That opens the door to new voices and more experimentation in game sound, a trend already visible in how smaller games approach audio design.
Summary
What I’ve described is only part of the picture. Music composition, AI, voiceover recording, audio accessibility and final mixing are each changing in their own ways, and they all deserve their own focus. For example, we interviewed Max Shafer, co-founder of Just 4 Noice – an AI startup whose tool can create unique one-shot samples from a single prompt. We cannot fully anticipate how these changes will impact audio in games. For now, we see a move toward more real-time, procedural and neural sound, with tools becoming integrated into the engines themselves.
I also recommend taking a look at Nemisindo, a spin-out company from the Centre for Digital Music at Queen Mary University of London. It was founded by Professor Joshua Reiss, a former President of the AES and co-founder of LANDR, Waveshaper and RoEx. Nemisindo offers sound design services based on its procedural audio technology. Its online platform provides a browser-based sound effect synthesis framework with many synthesis models and selected post-processing tools, letting you create your own sounds from scratch. Each of these models can generate sound in real time and can be shaped by manipulating various parameters.
AI will play a crucial role in this shift, giving sound the ability to change based on what is happening in real time. Middleware is still evolving, but engine-native tools are starting to catch up, especially for indie developers and smaller teams. And as all these tools evolve, the line between sound designer and audio programmer is getting thinner, changing not just how we build sound, but how we think about it.
If you found useful information in this article and want to stay tuned for the rest of the “Audio Technologies” series, new job openings, product tips and upcoming events with industry product leaders and audio directors, sign up for our newsletter.
If you would like to meet us in person and talk about the future of game audio and more, we will be hosting a panel at the AES Europe Convention in Warsaw, Poland, in May.