Digital Audio Synthesis

Digital audio synthesis is a very broad topic, with a great deal of theory, mathematics, engineering, and history behind it. Unfortunately, most of it is beyond the scope of what can be covered in this book. What we will do is look at some basic examples of how we can harness a few built-in classes on Android to create audio from scratch.

As you probably know, sound is formed by repetitive changes of pressure in air (or another medium) in the form of a wave. Certain frequencies of these oscillations, otherwise known as sound waves, are audible, meaning our ears are sensitive to that number of repetitions in a period of time. This range runs from roughly 20 Hz (20 cycles per second), a very low sound such as a rumble, up to about 20 kHz (20,000 cycles per second), a very high-pitched sound; under ideal conditions, some listeners can perceive tones as low as 12 Hz.

To create audio, we need to cause the air to vibrate at the frequency desired for the sound we want. In a digital audio system, this is generally done with a speaker driven by an analog electrical signal. Such systems contain a chip or board called a digital-to-analog converter (DAC), which takes in data in the form of a series of numbers representing audio samples and converts it into an electrical voltage, which the speaker translates into sound.
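
To make the idea of samples concrete, each number in that series is just a signed amplitude. Here is a small illustration, a general property of 16-bit PCM rather than anything Android-specific, of how a sample maps onto a fraction of the DAC's full-scale output:

// A signed 16-bit PCM sample ranges from -32768 to 32767. Dividing by
// the maximum magnitude gives the fraction of the DAC's full-scale
// voltage that the sample represents.
short sample = 16384;
float fractionOfFullScale = sample / (float) Short.MAX_VALUE; // roughly 0.5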

In order to synthesize audio, we simply need to generate the audio samples ourselves and feed them to the appropriate mechanism. In the case of Android, that mechanism is the AudioTrack class.

As we learned in the last chapter, the AudioTrack class allows us to play raw audio samples (such as those captured by the AudioRecord class).

Playing a Synthesized Sound

Here is a quick example showing how to construct an AudioTrack object and pass in data for it to play. For a full discussion of the parameters used to construct the AudioTrack object, please see the “Raw Audio Playback with AudioTrack” section of Chapter 7.

This example uses an inner class, AudioSynthesisTask, which extends AsyncTask. AsyncTask defines a method called doInBackground, which runs any code placed inside it in a thread separate from the activity's main thread. This keeps the activity and its UI responsive; otherwise, the loop that feeds the write method of our AudioTrack object would tie them up.

package com.apress.proandroidmedia.ch08.audiosynthesis;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.os.AsyncTask;
import android.os.Bundle;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;

public class AudioSynthesis extends Activity implements OnClickListener {

    Button startSound;
    Button endSound;

    AudioSynthesisTask audioSynth;

    // volatile: written by the UI thread, read by the synthesis thread
    volatile boolean keepGoing = false;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        startSound = (Button) this.findViewById(R.id.StartSound);
        startSound.setOnClickListener(this);

        endSound = (Button) this.findViewById(R.id.EndSound);
        endSound.setOnClickListener(this);

        endSound.setEnabled(false);
    }

    @Override
    public void onPause() {
        super.onPause();
        keepGoing = false;

        endSound.setEnabled(false);
        startSound.setEnabled(true);
    }

    public void onClick(View v) {
        if (v == startSound) {
            keepGoing = true;

            audioSynth = new AudioSynthesisTask();
            audioSynth.execute();

            endSound.setEnabled(true);
            startSound.setEnabled(false);
        } else if (v == endSound) {
            keepGoing = false;

            endSound.setEnabled(false);
            startSound.setEnabled(true);
        }
    }

    private class AudioSynthesisTask extends AsyncTask<Void, Void, Void>
    {
        @Override
        protected Void doInBackground(Void... params) {
            final int SAMPLE_RATE = 11025;

            int minSize = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT);
  
            AudioTrack audioTrack = new AudioTrack(
                    AudioManager.STREAM_MUSIC, SAMPLE_RATE,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    minSize,
                    AudioTrack.MODE_STREAM);

            audioTrack.play();

            // One full cycle of a waveform described by 25 samples. At the
            // 11,025 Hz sample rate, writing this buffer repeatedly produces
            // a tone of 11025 / 25 = 441 Hz.
            short[] buffer = {
                    8130,15752,22389,27625,31134,32695,32210,29711,25354,19410,12253,
                    4329,-3865,-11818,-19032,-25055,-29511,-32121,-32722,-31276,-27874,
                    -22728,-16160,-8582,-466
            };

            while (keepGoing) {
                audioTrack.write(buffer, 0, buffer.length);
            }

            // Stop playback and release the AudioTrack's native resources.
            audioTrack.stop();
            audioTrack.release();

            return null;
        }
    }
}

Here is the layout XML in use by the preceding activity.

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    >
    <Button android:layout_width="wrap_content" android:layout_height="wrap_content"
        android:id="@+id/StartSound" android:text="Start Sound"></Button>
    <Button android:layout_width="wrap_content" android:layout_height="wrap_content"
        android:id="@+id/EndSound" android:text="End Sound"></Button>
</LinearLayout>

The key to the foregoing code is the array of shorts. These are the audio samples that are continuously passed to the AudioTrack object through its write method. In this case, the samples rise from 8,130 up to a peak of 32,695, fall to a trough of -32,722, and return to -466. If we plotted these values on a graph, the samples taken together would trace out a waveform. Since sound is created by oscillating pressure, and each sample represents a pressure value, a set of samples that describes a waveform is exactly what we need to create sound. Varying this waveform allows us to create different kinds of audio. The following set of samples describes a shorter waveform, only ten samples, and therefore represents a higher-frequency sound, one with more oscillations per second. At a fixed sample rate, low-frequency sounds have waveforms that span many more samples.

short[] buffer = {
    8130,15752,32695,12253,4329,
    -3865,-19032,-32722,-16160,-466
};
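
To make the relationship concrete, the frequency of the tone is the sample rate divided by the number of samples in one cycle of the waveform. At the 11,025 Hz sample rate used above, the 25-sample waveform repeats 11025 / 25 = 441 times per second, a 441 Hz tone, while this ten-sample waveform repeats 11025 / 10 = 1,102.5 times per second.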

Generating Samples

Using a little bit of math, we can generate these samples algorithmically. For instance, we can reproduce the classic sine wave. This example produces a sine wave at 440 Hz.
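
The underlying formula is sample[i] = Short.MAX_VALUE × sin(angle), where the angle advances by 2π × frequency / sample rate with each successive sample. Keeping a running angle across buffers makes the wave continuous from one write to the next.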

package com.apress.proandroidmedia.ch08.audiosynthesis;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;
import android.view.View;
import android.view.View.OnClickListener;
import android.widget.Button;

public class AudioSynthesis extends Activity implements OnClickListener {

    Button startSound;
    Button endSound;

    AudioSynthesisTask audioSynth;

    // volatile: written by the UI thread, read by the synthesis thread
    volatile boolean keepGoing = false;

    float synth_frequency = 440; // 440 Hz, concert A (the A above middle C)

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);

        startSound = (Button) this.findViewById(R.id.StartSound);
        startSound.setOnClickListener(this);

        endSound = (Button) this.findViewById(R.id.EndSound);
        endSound.setOnClickListener(this);

        endSound.setEnabled(false);
    }

    @Override
    public void onPause() {
        super.onPause();
        keepGoing = false;

        endSound.setEnabled(false);
        startSound.setEnabled(true);
    }

    public void onClick(View v) {
        if (v == startSound) {
            keepGoing = true;

            audioSynth = new AudioSynthesisTask();
            audioSynth.execute();

            endSound.setEnabled(true);
            startSound.setEnabled(false);
        } else if (v == endSound) {
            keepGoing = false;

            endSound.setEnabled(false);
            startSound.setEnabled(true);
        }
    }

    private class AudioSynthesisTask extends AsyncTask<Void, Void, Void>
    {
        @Override
        protected Void doInBackground(Void... params) {
            final int SAMPLE_RATE = 11025;

            int minSize = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT);
  
            AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                    SAMPLE_RATE,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    minSize,
                    AudioTrack.MODE_STREAM);

            audioTrack.play();

            short[] buffer = new short[minSize];

            // Amount the phase angle advances per sample to produce
            // the desired frequency at this sample rate.
            float angular_frequency =
                (float)(2*Math.PI) * synth_frequency / SAMPLE_RATE;
            float angle = 0;

            while (keepGoing) {
                for (int i = 0; i < buffer.length; i++) {
                    buffer[i] = (short)(Short.MAX_VALUE * ((float) Math.sin(angle)));
                    angle += angular_frequency;

                    // Keep the angle small so float precision doesn't
                    // degrade over a long playback session.
                    if (angle > 2 * Math.PI) {
                        angle -= (float)(2 * Math.PI);
                    }
                }
                audioTrack.write(buffer, 0, buffer.length);
            }

            // Stop playback and release the AudioTrack's native resources.
            audioTrack.stop();
            audioTrack.release();

            return null;
        }
    }
}

Here is the layout XML file for the foregoing activity:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    >

    <Button android:layout_width="wrap_content" android:layout_height="wrap_content"
        android:id="@+id/StartSound" android:text="Start Sound"></Button>
    <Button android:layout_width="wrap_content" android:layout_height="wrap_content"
        android:id="@+id/EndSound" android:text="End Sound"></Button>
</LinearLayout>

Changing the synth_frequency would allow us to reproduce any other frequency we would like. Of course, changing the function used to generate the values would change the character of the sound as well. You may want to try clamping each sample to Short.MAX_VALUE or Short.MIN_VALUE to produce a quick and dirty square wave.
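
Here is a minimal sketch of that square wave idea. It reuses the buffer, angle, angular_frequency, and audioTrack variables from the foregoing example; only the loop changes, clamping each sample based on the sine's sign:

// Quick and dirty square wave: rather than scaling the sine value,
// clamp each sample to one extreme or the other based on its sign.
while (keepGoing) {
    for (int i = 0; i < buffer.length; i++) {
        buffer[i] = (Math.sin(angle) >= 0) ? Short.MAX_VALUE : Short.MIN_VALUE;
        angle += angular_frequency;
    }
    audioTrack.write(buffer, 0, buffer.length);
}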

Of course, this just scratches the surface of what can be done with audio synthesis on Android. Given that AudioTrack allows us to play raw PCM samples, almost any technique used to generate digital audio can be employed on Android, taking processor speed and memory limitations into account.
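
As one small illustration of that flexibility, two sine waves can be mixed into a single buffer by summing them, a simple form of additive synthesis. This sketch is our own variation on the foregoing loop; angle1/angle2 and angular_frequency1/angular_frequency2 are assumed to be set up just like angle and angular_frequency above, one pair per frequency:

// Mix two sine waves by summing them; halving each contribution keeps
// the combined sample within the 16-bit range and avoids clipping.
for (int i = 0; i < buffer.length; i++) {
    float mixed = 0.5f * (float) Math.sin(angle1)
                + 0.5f * (float) Math.sin(angle2);
    buffer[i] = (short)(Short.MAX_VALUE * mixed);
    angle1 += angular_frequency1;
    angle2 += angular_frequency2;
}
audioTrack.write(buffer, 0, buffer.length);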

What follows is an example application that combines the finger-tracking techniques from Chapter 4 with the foregoing audio generation code. In this application, we'll generate audio and choose its frequency based upon the location of the user's finger along the x axis of the touchscreen.

package com.apress.proandroidmedia.ch08.fingersynthesis;

import android.app.Activity;
import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.os.AsyncTask;
import android.os.Bundle;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;
import android.view.View.OnTouchListener;

Our activity will implement OnTouchListener so that we can track the touch locations.

public class FingerSynthesis extends Activity implements OnTouchListener {

Just as in the previous example, we'll use an AsyncTask to provide a thread for generating and playing the audio samples.

    AudioSynthesisTask audioSynth;

We need a base audio frequency that will be played when the finger is at the 0 position on the x axis. This will be the lowest frequency played.

    static final float BASE_FREQUENCY = 440;

We'll be varying the synth_frequency float as the finger moves. When we start the app, we'll set it to the BASE_FREQUENCY.

    volatile float synth_frequency = BASE_FREQUENCY;

We'll use the play Boolean to determine whether or not we should actually be playing audio. It will be controlled by the touch events. Both play and synth_frequency are declared volatile because they are written by the UI thread and read by the audio synthesis thread.

    volatile boolean play = false;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        setContentView(R.layout.main);

In our layout, we have only one item, a LinearLayout with the ID MainView. We'll get a reference to it and register our activity as its OnTouchListener. This way, our activity's onTouch method will be called whenever the user touches the screen.

        View mainView = this.findViewById(R.id.MainView);
        mainView.setOnTouchListener(this);

        audioSynth = new AudioSynthesisTask();
        audioSynth.execute();
    }

    @Override
    public void onPause() {
        super.onPause();
        play = false;

        // Cancel the synthesis task so its loop can exit, then close
        // the activity, since we don't restart the task in onResume.
        audioSynth.cancel(true);
        finish();
    }

Our onTouch method, called when the user touches, stops touching, or drags a finger on the screen, will set the play Boolean to true or false depending on the action of the user. This will control whether audio samples are generated. It will also track the location of the user's finger on the x axis of the touchscreen and adjust the synth_frequency variable accordingly.

    public boolean onTouch(View v, MotionEvent event) {
        int action = event.getAction();
        switch (action)
        {
            // DOWN and MOVE share the same handling: play a tone whose
            // frequency is derived from the finger's x position.
            case MotionEvent.ACTION_DOWN:
            case MotionEvent.ACTION_MOVE:
                play = true;
                synth_frequency = event.getX() + BASE_FREQUENCY;
                Log.v("FREQUENCY", "" + synth_frequency);
                break;
            // UP and CANCEL both end the gesture, so both stop playback.
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                play = false;
                break;
            default:
                break;
        }
        return true;
    }

    private class AudioSynthesisTask extends AsyncTask<Void, Void, Void>
    {
        @Override
        protected Void doInBackground(Void... params) {
            final int SAMPLE_RATE = 11025;

            int minSize = AudioTrack.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT);
  
            AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
                    SAMPLE_RATE,
                    AudioFormat.CHANNEL_CONFIGURATION_MONO,
                    AudioFormat.ENCODING_PCM_16BIT,
                    minSize,
                    AudioTrack.MODE_STREAM);

            audioTrack.play();

            short[] buffer = new short[minSize];

            float angle = 0;

Finally, in the AudioSynthesisTask, in the loop that generates the audio, we'll check the play Boolean and do the calculations to generate the audio samples based on the synth_frequency variable, which we are changing based upon the user's finger position.

            while (!isCancelled()) {

                if (play)
                {
                    for (int i = 0; i < buffer.length; i++)
                    {
                        // Recompute the angular increment every sample so
                        // frequency changes from onTouch take effect at once.
                        float angular_frequency =
                           (float)(2*Math.PI) * synth_frequency / SAMPLE_RATE;

                        buffer[i] =
                           (short)(Short.MAX_VALUE * ((float) Math.sin(angle)));
                        angle += angular_frequency;

                        // Keep the angle small so float precision doesn't
                        // degrade over a long session.
                        if (angle > 2 * Math.PI) {
                            angle -= (float)(2 * Math.PI);
                        }
                    }
                    audioTrack.write(buffer, 0, buffer.length);
                } else {
                    try {
                        Thread.sleep(50);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }

            // Stop playback and release the AudioTrack's native resources
            // once the task has been cancelled (see onPause).
            audioTrack.stop();
            audioTrack.release();

            return null;
        }
    }
}

Here is the layout XML:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:id="@+id/MainView"
    >
</LinearLayout>

This example shows some of the power and flexibility of the AudioTrack class. Since we are generating the audio algorithmically, we can use just about any method we would like to determine its features (in this example, its pitch or frequency).
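
For example, the finger's y position could control volume in the same way its x position controls pitch. In this hypothetical variation (ours, not part of the example above), a volatile float field named amplitude, between 0.0 and 1.0, would be set in onTouch and applied in the synthesis loop:

// In onTouch, map the finger's vertical position to a 0.0-1.0 volume,
// where v is the View passed into onTouch.
amplitude = 1.0f - (event.getY() / v.getHeight());

// In the synthesis loop, scale each sample by the current amplitude.
buffer[i] = (short)(Short.MAX_VALUE * amplitude * (float) Math.sin(angle));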
