Wednesday, November 14, 2012

Noise: Creating a Synthesizer for Retro Sound Effects – Audio Processors

This entry is part 3 of 3 in the series Noise: Creating a Synthesizer for Retro Sound Effects

This is the final part in our series of tutorials about creating a synthesizer-based audio engine that can be used to generate sounds for retro-styled games. The audio engine can generate all of the sounds at runtime without the need for any external dependencies such as MP3 files or WAV files. In this tutorial, we’ll add support for audio processors, and code a delay processor which can add a decaying echo effect to our sounds.

If you have not already read the first tutorial or the second tutorial in this series, you should do that before continuing.

The programming language used in this tutorial is ActionScript 3.0 but the techniques and concepts used can easily be translated into any other programming language that provides a low-level sound API.

You should make sure you have Flash Player 11.4 or higher installed for your browser if you want to use the interactive examples in this tutorial.


Audio Processor Demo

In this final tutorial we will be adding audio processors to the core engine and creating a simple delay processor. The following demonstration shows the delay processor in action:

Only one sound is being played in that demonstration, but the frequency of the sound is being randomised, and the audio samples generated by the engine are being pushed through a delay processor, which produces the decaying echo effect.


AudioProcessor Class

The first thing we need to do is create a base class for the audio processors:

  package noise {
      public class AudioProcessor {
          //
          public var enabled:Boolean = true;
          //
          public function AudioProcessor() {
              if( Object(this).constructor == AudioProcessor ) {
                  throw new Error( "AudioProcessor class must be extended" );
              }
          }
          //
          internal function process( samples:Vector.<Number> ):void {}
      }
  }

As you can see, the class is very simple; it contains an internal process() method that is invoked by the AudioEngine class whenever any samples need to be processed, and a public enabled property that can be used to turn the processor on and off.
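The guard in the constructor can look odd at first: Object(this).constructor resolves to the class of the instance actually being constructed, so the error is thrown only when AudioProcessor itself is instantiated directly. As an illustration only (not part of the engine), here is the same trick sketched in TypeScript, with a hypothetical PassThrough subclass:

```typescript
// Illustration only: the same "must be extended" guard in TypeScript.
// Inside the constructor, this.constructor is the class actually being
// instantiated, so the check only trips when AudioProcessor itself is
// constructed directly.
class AudioProcessor {
    public enabled: boolean = true;

    constructor() {
        if (this.constructor === AudioProcessor) {
            throw new Error("AudioProcessor class must be extended");
        }
    }

    process(samples: number[]): void {
        // subclasses override this and modify the samples in place
    }
}

// A hypothetical do-nothing subclass: constructing it is fine.
class PassThrough extends AudioProcessor {}
```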


AudioDelay Class

The AudioDelay class is the class that actually creates the audio delay, and it extends the AudioProcessor class. Here is the basic empty class that we will work with:

  package noise {
      public class AudioDelay extends AudioProcessor {
          //
          public function AudioDelay( time:Number = 0.5 ) {
              this.time = time;
          }
      }
  }

The time argument passed to the class constructor is the time (in seconds) of the delay tap – that is, the amount of time between successive echoes.

Now let’s add the private properties:

  private var m_buffer:Vector.<Number> = new Vector.<Number>();
  private var m_bufferSize:int  = 0;
  private var m_bufferIndex:int = 0;
  private var m_time:Number = 0.0;
  private var m_gain:Number = 0.8;

The m_buffer vector is basically a feedback loop: it contains all of the audio samples passed to the process method, and those samples are modified (in this case reduced in amplitude) continuously as the m_bufferIndex passes through the buffer. This will make sense when we get to the process() method.

The m_bufferSize and m_bufferIndex properties are used to keep track of the buffer’s state. The m_time property is the time of the delay tap, in seconds. The m_gain property is a multiplier that is used to reduce the amplitude of the buffered audio samples over time.
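To make "reduce the amplitude over time" concrete: every trip around the feedback loop multiplies the buffered samples by the gain, so the n-th echo of a sound arrives at roughly gain^n times its original amplitude. A quick TypeScript sketch of that arithmetic, using the default gain of 0.8:

```typescript
// With a gain of 0.8, each echo is 80% as loud as the one before it.
const gain = 0.8;
const repeats = [1, 2, 3, 4].map(n => Math.pow(gain, n));
// roughly [0.8, 0.64, 0.512, 0.4096] - the echoes decay geometrically,
// approaching (but never quite reaching) silence
```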

This class only has one method, and that is the internal process() method which overrides the process() method in the AudioProcessor class:

  internal override function process( samples:Vector.<Number> ):void {
      var i:int    = 0;
      var n:int    = samples.length;
      var v:Number = 0.0;
      //
      while( i < n ) {
          v  = m_buffer[m_bufferIndex]; // grab a buffered sample
          v *= m_gain;                  // reduce the amplitude
          v += samples[i];              // add the fresh sample
          //
          m_buffer[m_bufferIndex] = v;
          m_bufferIndex++;
          //
          if( m_bufferIndex == m_bufferSize ) {
              m_bufferIndex = 0;
          }
          //
          samples[i] = v;
          i++;
      }
  }
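To see why this loop produces echoes, here is a standalone TypeScript sketch of the same algorithm; makeDelay and the tiny 4-sample buffer are my own choices to keep the example readable, not names from the engine:

```typescript
// Standalone sketch of the feedback-delay loop: a 4-sample circular
// buffer with a gain of 0.5, fed one impulse followed by silence.
function makeDelay(bufferSize: number, gain: number) {
    const buffer = new Array<number>(bufferSize).fill(0);
    let index = 0;
    return (samples: number[]): void => {
        for (let i = 0; i < samples.length; i++) {
            let v = buffer[index]; // grab a buffered sample
            v *= gain;             // reduce the amplitude
            v += samples[i];       // add the fresh sample
            buffer[index] = v;     // write it back into the feedback loop
            index = (index + 1) % bufferSize;
            samples[i] = v;
        }
    };
}

const process = makeDelay(4, 0.5);
const blockA = [1, 0, 0, 0]; // an impulse
const blockB = [0, 0, 0, 0]; // silence
const blockC = [0, 0, 0, 0]; // more silence
process(blockA); // blockA[0] is now 1.0  (the dry impulse)
process(blockB); // blockB[0] is now 0.5  (first echo, one tap later)
process(blockC); // blockC[0] is now 0.25 (second echo, quieter again)
```

Each block of fresh samples mixes with whatever is already in the circular buffer, and the mixed result is written back, so every sound keeps re-emerging one buffer length later at a fraction of its previous amplitude.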

Finally, we need to add the getters/setters for the private m_time and m_gain properties:

  public function get time():Number {
      return m_time;
  }
  public function set time( value:Number ):void {
      // clamp the time to the range 0.0001 - 8.0
      value = value < 0.0001 ? 0.0001 : value > 8.0 ? 8.0 : value;
      // no need to modify the buffer size if the time has not changed
      if( m_time == value ) {
          return;
      }
      // set the time
      m_time = value;
      // update the buffer size
      m_bufferSize    = Math.floor( 44100 * m_time );
      m_buffer.length = m_bufferSize;
      // keep the read/write position inside the resized buffer
      if( m_bufferIndex >= m_bufferSize ) {
          m_bufferIndex = 0;
      }
  }
  public function get gain():Number {
      return m_gain;
  }
  public function set gain( value:Number ):void {
      // clamp the gain to the range 0.0 - 1.0
      m_gain = value < 0.0 ? 0.0 : value > 1.0 ? 1.0 : value;
  }
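As a sanity check on the setter's arithmetic: the engine runs at 44,100 samples per second, so the buffer must hold 44100 × time samples. A TypeScript sketch of the same clamp-and-resize calculation (bufferSizeForTime is a hypothetical helper, not part of the engine):

```typescript
const SAMPLE_RATE = 44100; // the engine's fixed sample rate, in hertz

function bufferSizeForTime(time: number): number {
    // clamp the time to the range 0.0001 - 8.0, as the setter does
    const clamped = time < 0.0001 ? 0.0001 : time > 8.0 ? 8.0 : time;
    return Math.floor(SAMPLE_RATE * clamped);
}

bufferSizeForTime(0.5); // 22050 samples: a half-second delay tap
bufferSizeForTime(9.0); // clamps to 8 seconds: 352800 samples
```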

Believe it or not, that is the AudioDelay class completed. Audio delays are actually very easy once you understand how the feedback loop (the m_buffer property) works.


Updating the AudioEngine Class

The final thing we need to do is update the AudioEngine class so audio processors can be added to it. First of all let’s add a vector to store the audio processor instances:

  static private var m_processorList:Vector.<AudioProcessor> = new Vector.<AudioProcessor>();

To actually add and remove processors to and from the AudioEngine class we will use two public methods:

AudioEngine.addProcessor()

  static public function addProcessor( processor:AudioProcessor ):void {
      if( m_processorList.indexOf( processor ) == -1 ) {
          m_processorList.push( processor );
      }
  }

AudioEngine.removeProcessor()

  static public function removeProcessor( processor:AudioProcessor ):void {
      var i:int = m_processorList.indexOf( processor );
      if( i != -1 ) {
          m_processorList.splice( i, 1 );
      }
  }

Easy enough – all those methods are doing is adding and removing AudioProcessor instances to or from the m_processorList vector.

The last method that we will add rolls through the list of audio processors and, if the processor is enabled, passes audio samples to the processor’s process() method:

  static private function processSamples():void {
      var i:int = 0;
      var n:int = m_processorList.length;
      //
      while( i < n ) {
          if( m_processorList[i].enabled ) {
              m_processorList[i].process( m_sampleList );
          }
          i++;
      }
  }
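Because every processor receives the same m_sampleList vector, the processors form an in-place chain: each enabled one transforms the output of the one before it, in the order they were added, and disabled processors are skipped. A minimal TypeScript sketch of that behaviour (the Processor type and the two literal processors are illustrative only):

```typescript
// Each enabled processor transforms the same sample buffer in place,
// in the order the processors were added to the list.
type Processor = { enabled: boolean; process(samples: number[]): void };

const processorList: Processor[] = [];
const sampleList = [1, 1, 1];

// a processor that doubles every sample
processorList.push({
    enabled: true,
    process: samples => { for (let i = 0; i < samples.length; i++) samples[i] *= 2; }
});
// a muting processor that is currently switched off
processorList.push({
    enabled: false,
    process: samples => { for (let i = 0; i < samples.length; i++) samples[i] = 0; }
});

for (const p of processorList) {
    if (p.enabled) p.process(sampleList);
}
// sampleList is now [2, 2, 2]: the doubler ran, the disabled mute did not
```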

Now it is time to add the final piece: a single line of code that needs to be added to the private onSampleData() method in the AudioEngine class:

  if( m_soundChannel == null ) {
      while( i < n ) {
          b.writeFloat( 0.0 );
          b.writeFloat( 0.0 );
          i++;
      }
      return;
  }
  //
  generateSamples();
  processSamples();
  //
  while( i < n ) {
      s = m_sampleList[i] * m_amplitude;
      b.writeFloat( s );
      b.writeFloat( s );
      m_sampleList[i] = 0.0;
      i++;
  }

The processSamples() call is the line that needs to be added to the class; it simply invokes the processSamples() method that we previously added.


Conclusion

That, as they say, is that. In the first tutorial we took a look at various waveforms and how sound waves are stored digitally, then we constructed the core audio engine code in the second tutorial, and now we have wrapped things up with the addition of audio processors.

There is a lot more that could be done with this code, or to a variation of this code, but the important thing to bear in mind is the amount of work an audio engine has to do at runtime. If you push an audio engine too far (and that is easy to do) then the overall performance of your game may suffer as a consequence – even if you move an audio engine into its own thread (or ActionScript 3.0 worker) it will still happily bite chunks out of the CPU if you are not careful.

However, a lot of professional and not-so-professional games do a lot of audio processing at runtime because having dynamic sound effects and music in a game can add a lot to the overall experience, and it can draw the player deeper into the game world. The audio engine we put together in this series of tutorials could just as easily work with regular (non-generated) sound effect samples loaded from files: essentially all digital audio is a sequence of samples in its most basic form.

One final thing to think about: audio is a very important aspect of game design. It is just as important and powerful as the visual side of things, and it is not something that should be thrown together or bolted onto a game at the last minute of development if you really care about the production quality of your games. Take your time with the audio design for your games and you will reap the rewards.

I hope you enjoyed this series of tutorials and can take something positive away from it: even if you just think about the audio in your games a little more from now on then I have done my job.

All of the audio engine source code is available in the source download.

Have fun!


