Raw audio processing

In some scenarios, raw audio captured through the microphone must be processed to achieve the desired functionality or to enhance the user experience. Voice SDK enables you to pre-process and post-process the captured audio to implement custom playback effects.

Understand the tech

You can use the raw data processing functionality in Voice SDK to process the audio feed according to your particular scenario. This feature enables you to pre-process the captured signal before sending it to the encoder, or to post-process the decoded signal before playback. To implement raw audio data processing in your app, take the following steps:

  1. Set up an audio frame observer.
  2. Register the audio frame observer before joining a channel.
  3. Set the format of audio frames captured by each callback.
  4. Implement callbacks in the frame observers to process raw audio data.
  5. Unregister the frame observers before leaving a channel.

The figure below shows the workflow you need to implement to process raw audio data in your app.

Figure: Process raw audio workflow

Prerequisites

To follow this procedure you must have implemented the SDK quickstart project for Voice Calling.

Project setup

To create the environment necessary to integrate raw audio processing in your app, open the SDK quickstart Voice Calling project you created previously.

Implement raw data processing

When a user captures or receives audio data, the data is available to the app for processing before it is played. This section shows how to retrieve this data and process it, step-by-step.

Handle the system logic

This section describes the steps required to use the relevant libraries, declare the necessary variables, and set up audio processing.

Import the required Android and Agora libraries

To integrate Voice SDK frame observer libraries into your app, add the following statements after the last import statement in /app/java/com.example.<projectname>/MainActivity.


import io.agora.rtc2.IAudioFrameObserver;
import java.nio.ByteBuffer;

Process raw audio data

To register and use an audio frame observer in your app, take the following steps:

  1. Set up the audio frame observer

    The IAudioFrameObserver gives you access to each audio frame after it is captured or before it is played back. To set up the IAudioFrameObserver, add the following lines to the MainActivity class after the variable declarations:


    private final IAudioFrameObserver iAudioFrameObserver = new IAudioFrameObserver() {
        @Override
        public boolean onRecordAudioFrame(String channelId, int type, int samplesPerChannel,
                int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) {
            // Gets the captured audio frame.
            // Add code here to process the recorded audio.
            return false;
        }

        @Override
        public boolean onPlaybackAudioFrame(String channelId, int type, int samplesPerChannel,
                int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) {
            // Gets the audio frame for playback.
            // Add code here to process the playback audio.
            return false;
        }

        @Override
        public boolean onMixedAudioFrame(String channelId, int type, int samplesPerChannel,
                int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) {
            // Retrieves the mixed captured and playback audio frame.
            return false;
        }

        @Override
        public boolean onPlaybackAudioFrameBeforeMixing(String channelId, int userId, int type, int samplesPerChannel,
                int bytesPerSample, int channels, int samplesPerSec, ByteBuffer buffer, long renderTimeMs, int avsync_type) {
            // Retrieves the audio frame of a specified user before mixing.
            return false;
        }
    };
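
    As a starting point for your own processing, the sketch below shows one way to modify a frame in place. It assumes the 16-bit little-endian PCM format configured in the next step; the helper name processAudioBuffer is illustrative, not part of the SDK. Call it from onRecordAudioFrame or onPlaybackAudioFrame with the callback's buffer to halve the signal amplitude:

    private void processAudioBuffer(ByteBuffer buffer) {
        // Interpret the raw bytes as 16-bit little-endian PCM samples.
        buffer.order(java.nio.ByteOrder.LITTLE_ENDIAN);
        for (int i = 0; i + 1 < buffer.limit(); i += 2) {
            // Halve the amplitude of each sample and write it back in place.
            short sample = buffer.getShort(i);
            buffer.putShort(i, (short) (sample / 2));
        }
    }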

  2. Register the audio frame observer

    To receive the callbacks declared in IAudioFrameObserver, you must register the audio frame observer with the Agora Engine before joining a channel. To specify the format of the audio frames captured by each IAudioFrameObserver callback, use the setRecordingAudioFrameParameters, setMixedAudioFrameParameters, and setPlaybackAudioFrameParameters methods. To do this, add the following lines before agoraEngine.joinChannel in the joinChannel() method:


    agoraEngine.registerAudioFrameObserver(iAudioFrameObserver);

    // Set the format of the captured raw audio data.
    int SAMPLE_RATE = 16000, SAMPLE_NUM_OF_CHANNEL = 1, SAMPLES_PER_CALL = 1024;

    agoraEngine.setRecordingAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL,
            Constants.RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
    agoraEngine.setPlaybackAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL,
            Constants.RAW_AUDIO_FRAME_OP_MODE_READ_WRITE, SAMPLES_PER_CALL);
    agoraEngine.setMixedAudioFrameParameters(SAMPLE_RATE, SAMPLE_NUM_OF_CHANNEL, SAMPLES_PER_CALL);
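
    With these settings, and given that raw audio frames are 16-bit PCM, each callback delivers 1024 mono samples at 16 kHz: a 2048-byte buffer roughly every 64 ms (1024 / 16000 s). The READ_WRITE operation mode lets your callbacks modify the buffer in place.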

  3. Unregister the audio observer when you leave a channel

    When you leave a channel, unregister the frame observer by calling registerAudioFrameObserver again with a null argument. To do this, add the following line to the joinLeaveChannel(View view) method before agoraEngine.leaveChannel():


    agoraEngine.registerAudioFrameObserver(null);

Test your implementation

To ensure that you have implemented raw data processing into your app:

  1. Generate a temporary token in Agora Console.

  2. In your browser, navigate to the Agora web demo and update App ID, Channel, and Token with the values for your temporary token, then click Join.

  3. In Android Studio, open app/java/com.example.<projectname>/MainActivity, and update appId, channelName, and token with the values for your temporary token.

  4. Edit the iAudioFrameObserver definition to add code that processes the raw audio data you receive in the following callbacks:

    • onRecordAudioFrame: Gets the captured audio frame data

    • onPlaybackAudioFrame: Gets the audio frame for playback

  5. Connect a physical Android device to your development device.

  6. In Android Studio, click Run app. A moment later, you see the project installed on your device.

    If this is the first time you run the project, grant microphone access to your app.

  7. Press Join to hear the processed audio feed from the web app.

Reference

This section contains information that completes the information on this page, or points you to documentation that explains other aspects of this product.

Voice Calling