How Nicolas Titeux improved his video game sound design workflow using Weaponiser
Nicolas Titeux is a sound designer, audio engineer and composer from France. His work can be heard in film, animation and video games. Learn more about Nicolas on his website and see more coverage of his process on his YouTube channel.
Designing Sound for Video Games
In a video game, as in an animated film, I often have to create every sound from scratch: Foley, FX, ambiences and music. I always use gameplay video captures when creating my sound effects. The game team usually provides a set of specifications and a list of required sounds with the in-game events to which they relate.

Here is my typical workflow when I work on video games:
There are two specific obstacles in video game sound design. The first is the repetitiveness of the sounds. Unlike a film, a video game has a long lifespan: many player actions (and their sound effects) will be repeated again and again, and you have to take this into account during the creation process. Repetition can make the soundtrack irritating and totally discredit certain sounds, such as footsteps (a perfectly repeated footstep sounds like a machine gun). We use middleware like Wwise or FMOD to randomise sounds, ensuring they never play back the same way twice.

The second obstacle is that you cannot predict which sounds will be triggered simultaneously, so it is difficult to assess how different sounds will work together without testing them first.

An FMOD session. Each track triggers randomised sounds.
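Conceptually, what those middleware tracks do can be sketched in a few lines. This is a toy model, not the Wwise or FMOD API – the class and parameter names are my own – but it shows the two ingredients that defeat repetitiveness: never picking the same variation twice in a row, and applying a small random pitch and volume offset to every trigger.

```python
import random

class RandomContainer:
    """Toy model of a middleware 'random container' (names invented
    for illustration): picks a sound variation while avoiding
    immediate repeats, and applies small random pitch/volume
    offsets so no two triggers sound identical."""

    def __init__(self, variations, pitch_range=2.0, volume_range=3.0):
        self.variations = list(variations)   # e.g. rendered WAV files
        self.pitch_range = pitch_range       # +/- semitones
        self.volume_range = volume_range     # +/- dB
        self.last = None

    def trigger(self):
        # A perfectly repeated footstep sounds like a machine gun,
        # so exclude whatever played last time.
        candidates = [v for v in self.variations if v != self.last]
        choice = random.choice(candidates)
        self.last = choice
        return {
            "sample": choice,
            "pitch": random.uniform(-self.pitch_range, self.pitch_range),
            "volume_db": random.uniform(-self.volume_range, self.volume_range),
        }

# Five footstep recordings are enough to keep the pattern unpredictable.
footsteps = RandomContainer([f"footstep_{i:02d}.wav" for i in range(1, 6)])
events = [footsteps.trigger() for _ in range(8)]
```

In a real engine the returned pitch and volume offsets would be applied by the audio runtime at playback time; here they are just data, which is all the sketch needs to make the idea concrete.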
Testing Sounds and Iterations for Video Games
Testing is a very important aspect of video game sound design – far more important than in a linear medium like film. Take, for example, a recent project I worked on: a mobile fighting game called Big Helmet Heroes: Journey.
It’s a pretty ambitious game where two characters go head-to-head with weapons and armour made from different materials: Gold, Steel, Wood, Pearl and Obsidian. No material is stronger than another, but each has unique strengths and weaknesses, just like in Rock Paper Scissors – this is called a non-transitive system.
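A non-transitive system is easy to picture in code. The cycle below is hypothetical – the real matchups in Big Helmet Heroes: Journey are not described here – but it shows how "beats" can fail to chain through the five materials, which is exactly the rock-paper-scissors property.

```python
# Hypothetical "beats" cycle over the five materials (illustrative
# only; not the game's actual balance table).
BEATS = {
    "Gold": "Steel",
    "Steel": "Wood",
    "Wood": "Pearl",
    "Pearl": "Obsidian",
    "Obsidian": "Gold",
}

def winner(a, b):
    """Return the winning material, or None for a neutral matchup."""
    if BEATS[a] == b:
        return a
    if BEATS[b] == a:
        return b
    return None

# Non-transitive: Gold beats Steel and Steel beats Wood,
# yet Gold does not beat Wood.
```

Because the relation is a cycle, every material beats exactly one other and loses to exactly one other, so no material dominates overall – which is why each one needed its own distinctive sound identity rather than a "stronger" or "weaker" treatment.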
To achieve this, I had to find something specific for each material. Most of my work involved creating the sound textures of these different materials. The graphics and animations are highly elaborate, so the team wanted sounds that were top of the line for a mobile game.

I made the impact sounds of weapons, armour, character movement, special attacks, knockouts etc. for each of the five materials. I also created the character voices. In the end there are around a hundred different sound events in the game.

But on this project I didn’t have time to do the integration part myself. If you look at the initial workflow diagram, you can see that I cannot run proper tests without the integration step, because neither Pro Tools nor FMOD allows me to simulate randomised sounds in this situation: Pro Tools has video but no randomisation, and FMOD has randomisation but no video – there is no way to link them together.

After designing sounds in Pro Tools and programming and randomising them in FMOD, I decided to export a batch of them and manually edit them to simulate a game sequence in Pro Tools. I quickly realised that I was wasting a lot of time, because editing the final sounds was long, boring and above all repetitive. I looked for a way to automate this task. I first tried using samplers like Kontakt, but that requires some programming, which also takes a lot of time.
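The manual step being automated here can be described as: given the event times read off a gameplay capture and a pool of exported variations per event, pick a random variation for each event and lay it on a timeline. A minimal sketch of that idea – the event names, timings and file names below are invented for illustration, not taken from the actual project:

```python
import random

# Hypothetical pools of exported variations per event type.
pools = {
    "sword_impact_steel": [f"impact_steel_{i:02d}.wav" for i in range(1, 7)],
    "footstep_wood": [f"step_wood_{i:02d}.wav" for i in range(1, 5)],
    "grunt": [f"grunt_{i:02d}.wav" for i in range(1, 4)],
}

def simulate_sequence(events):
    """Given (time_in_seconds, event_name) pairs spotted against a
    gameplay capture, pick a random variation for each event --
    producing the edit list that would otherwise be assembled by
    hand, clip by clip, in the DAW."""
    edit_list = []
    for time, name in sorted(events):
        clip = random.choice(pools[name])
        edit_list.append((round(time, 3), clip))
    return edit_list

# A few seconds of a fight, timed against the video capture.
events = [
    (0.50, "footstep_wood"),
    (0.95, "footstep_wood"),
    (1.40, "sword_impact_steel"),
    (1.45, "grunt"),
]
edl = simulate_sequence(events)
```

Each run produces a different edit list from the same event timings, which is the behaviour being tested: does the randomised material still hold up against the picture?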
The Solution – Create Randomisation Using Weaponiser
Weaponiser turned out to be the perfect solution for making the editing process almost instantaneous and speeding up testing. Weaponiser allows randomisation across multiple sound layers – essentially what video game middleware does, but with Pro Tools and video support. The editing process is quite simple:
- First, I create an instrument track in Pro Tools with an instance of Weaponiser.
- I import the sounds I want to test into Weaponiser with a simple drag-and-drop.
- I record MIDI events in sync with the picture using a MIDI controller, such as a keyboard or a pad, recording at half speed for extra precision.
- I repeat the process for the other sound layers.
Once that process is done, I have a testing session ready for any future iteration of all the sound effects heard in the game sequence. I just need to bounce my session as a video file and send it to my clients for discussion.

In the case of this game, there were so many different animations for each token – and therefore so many combinations – that I made a lot of versions of each sound before finding the perfect balance. Weaponiser was really helpful in automating the long and boring editing task. Here’s a video of the process: