Voices between worlds
This article explores how characterised voices are created between worlds through sound design. It looks at the considerations and reflections of Ben Burtt, the "father of modern sound design", and evaluates approaches to new technology that blend vocals to bring the on-screen virtual world and the real world together.
Ben Burtt says that "creating voices is one of the hardest tasks as a sound designer; our audience is very sensitive about analysing voices because we do it all day long, every day."
He discusses how the sound effects of physical objects can be made as expressive as a voice through physical tools like a Jew's harp, where vocal punctuation is added to the sound of the instrument, as well as combs and bespoke kazoo straws and pipes.
Modulating the voice through a secondary source in this way is very effective at combining vocal punctuation with a physical sound source.
Techniques like these are evident in Disney's early work with vocalising characters, where the voice drives the sound effect's behaviour.
Notably, in the 1940s Disney used the Sonovox in animation and film: a sound-effect device developed by Gilbert M. Wright in 1939 and manufactured by Wright-Sonovox, an affiliate of Free & Peters of Chicago. Davies and Acker (2019) draw attention to the specifics of the device, which consists of two earphone-sized loudspeakers placed against the outside of the throat.
The vocalist then applies a technique that modifies the sounds by moving the tongue and lips, so that the articulated speech sounds are collected from the naturally raspier bass region of the larynx. A distinctive croaking, raspy sound is heard.
Disney used the device successfully in film, radio and animation, such as The Reluctant Dragon (1941), as shown above in the image.
The Sonovox became a distinctive sound and was used to characterise the voice of Casey Junior the train in Walt Disney's animated film Dumbo (1941), amongst a plethora of other applications.
This is arguably one of the most innovative devices made for modifying the human voice, and Ben Burtt passionately discusses this pivotal invention during his sound design interview.
He explains how the voice of EVE is a modern example of the same technique, now applied with a vocoder. He demonstrates that a recorded vocal can be linked to a tone so that the vocal drives the tone through the vocoder.
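To make that principle concrete, here is a minimal sketch in Python (not Ben's actual software, and a deliberate simplification of a full vocoder): the amplitude envelope of the voice is tracked and used to modulate a carrier tone, so the vocal "drives" the tone. The function names and smoothing value are my own assumptions.

```python
import math

def envelope_follower(signal, smoothing=0.99):
    """Track the amplitude envelope of a signal with a one-pole smoother."""
    env, out = 0.0, []
    for s in signal:
        # Rectify, then smooth: the envelope rises and falls with vocal energy.
        env = smoothing * env + (1.0 - smoothing) * abs(s)
        out.append(env)
    return out

def voice_driven_tone(voice, carrier_freq=220.0, sample_rate=44100):
    """Multiply a carrier tone by the voice's envelope: silence in the
    vocal silences the tone, loud vocal moments make the tone loud."""
    env = envelope_follower(voice)
    return [e * math.sin(2 * math.pi * carrier_freq * n / sample_rate)
            for n, e in enumerate(env)]
```

A real vocoder does this per frequency band rather than on the whole signal, which is what gives the carrier the voice's articulation as well as its loudness.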
We'll be exploring modern ways of working with vocoders in more technical detail later, but understanding the principal components will really help when looking into how to make a conversant characterisation tool in Pure Data and in games.
Crucially, during Ben's discussion of the vocal parts he details how the software he's using can analyse the component parts of the audio waveform, highlighting the peaks and transient detail of the wave along with the gaps between sound and noise.
A threshold would be a good way to approximate such a calculation when working with numbers, although that is just thinking out loud about ways to analyse audio content inside Pure Data. This will need further exploration.
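As a rough sketch of that threshold idea before porting it into Pure Data (where the vanilla [threshold~] object does something comparable), here is an assumed Python version that reports where the level first rises above a chosen threshold, as a crude transient detector:

```python
def detect_transients(samples, threshold=0.5):
    """Return sample indices where the level crosses the threshold
    from below, i.e. the onsets of loud events."""
    hits, above = [], False
    for i, s in enumerate(samples):
        level = abs(s)
        if level >= threshold and not above:
            hits.append(i)   # rising edge: a new transient begins here
            above = True
        elif level < threshold:
            above = False    # re-arm once the level falls back down
    return hits

# e.g. detect_transients([0.0, 0.1, 0.8, 0.9, 0.2, 0.7], 0.5) → [2, 5]
```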
The frequency content, pitch and volume will make excellent parameters to control, and Ben Burtt himself highlights this manipulable information in the wave during the WALL-E bonus interview.
Ground Control To Major Tom!
MIDI control, and control over the content of the wave to synthesise with the vocal/audio file, certainly requires interfaces that enable smooth adjustments in range and dynamics.
Ben uses a Wacom tablet as well as a MIDI keyboard controller for working with the vocoder in his dedicated software. These are really strong interfacing elements to consider when developing a tool for characterising voices inside Pure Data, as the control mapping looks and sounds very smooth.
Below are images of Ben manipulating the recorded vocal files through MIDI controllers.
Audio information: detecting transients, pitch, volume and tonal structure (frequency content).
This vocoder blends the voice with a sound, so that when a MIDI keyboard note is performed the voice jointly controls the tone/sample/synth.
This is a must-have feature of a sound design tool, so what other creative audio technology can be used to modulate and shift sound?
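The keyboard side of that behaviour rests on the standard MIDI note-to-frequency conversion (equal temperament, A4 = note 69 = 440 Hz), which Pure Data exposes as the [mtof] object. A quick sketch of the same formula:

```python
def mtof(note):
    """MIDI note number to frequency in Hz, equal temperament with
    A4 (MIDI note 69) tuned to 440 Hz - the conversion Pd's [mtof] does."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

# e.g. mtof(69) → 440.0 (A4), mtof(81) → 880.0 (one octave up)
```

The performed key then sets the carrier frequency, while the voice shapes that carrier's character.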
Accelerometer data? What about body sensors? I can already sense ideas and links forming in my brain...
The pen Ben uses on the Wacom enables much more flexibility over the sonic content of the file, as its different degrees of freedom can be mapped to different control parameters, giving greater flexibility over the sound. At the moment, however, this blog post is only just touching the surface.
Next we will delve further into the data types and data streams that the equipment shown can send and receive through Pure Data/Max MSP, and think creatively by taking inspiration from this superb sound design.
Understanding how to create personalities through voice and synthesis, while keeping the interfacing as easy as possible, has to be a first principle of the design.
To fully bring a character's voice to life, how can we explore the techniques applied in the making of WALL-E inside Pure Data?
Davies, H. and Acker, A. (2019). Sonovox. Grove Music Online. Available from: http://www.oxfordmusiconline.com/grovemusic/abstract/10.1093/gmo/9781561592630.001.0001/omo-9781561592630-e-4002294814 [Accessed 22 Mar. 2019].