
FaceUnity

The FaceUnity AR Filter extension uses 3D vision, 3D graphics, and deep learning technologies to provide the industry's leading AR portrait video effects for Video SDK. It enables you to add diverse special effects that provide full coverage of the human face, including:

  • Basic beauty: Skin beautification, whitening, and rosiness.
  • Advanced beauty: Video beauty effects for face shape, face, and skin.
  • Number detection: Detecting the number of faces, human bodies, or gestures.

This page shows you how to integrate and use the FaceUnity AR Filter extension in your app.

Understand the tech

To quickly integrate FaceUnity's AR filter capabilities in your app, you set extension properties using the key and value parameters in Video SDK. When you call setExtensionProperty or setExtensionPropertyWithVendor method of Video SDK and pass in a pair of key and value parameters, it is equivalent to calling the corresponding FaceUnity API. The key is named after the FaceUnity API method, and value wraps the parameters required for that method in JSON. Currently, the extension encapsulates part of the APIs of the FaceUnity Nama SDK. For details, see the FaceUnity key-value overview.
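As a minimal sketch of this mapping, the following standalone snippet builds the JSON value for the documented fuIsTracking key by hand (so it stays dependency-free); the commented engine call assumes an initialized RtcEngine named mRtcEngine, as in the integration steps below.

```java
// Sketch: one key-value pair and the equivalent extension call.
// The JSON value is built by hand here so the snippet is self-contained.
public class KeyValueExample {

    // Value for the fuIsTracking key: {"enable": true|false}
    static String buildIsTrackingValue(boolean enable) {
        return "{\"enable\":" + enable + "}";
    }

    public static void main(String[] args) {
        String value = buildIsTrackingValue(true);
        System.out.println(value); // {"enable":true}
        // With the Video SDK, passing this pair is equivalent to calling
        // FaceUnity's fuIsTracking API:
        // mRtcEngine.setExtensionProperty("FaceUnity", "Effect", "fuIsTracking", value);
    }
}
```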

Prerequisites

The development environment requirements are as follows:

  • Android Studio 4.1 or higher.
  • Android SDK API Level 24 or higher.
  • A mobile device that runs Android 4.1 or higher.
  • An Agora account and project.

  • A computer with Internet access.

    Ensure that no firewall is blocking your network communication.

Project setup

Open the SDK quickstart for Video Calling project you created earlier.

Get the FaceUnity extension

To get the FaceUnity extension, take the following steps:

  1. Download the FaceUnity AR Filter package for your platform from the Extensions Marketplace.

  2. Download the FaceUnity resource package.

  3. Contact Agora for activation and obtain the authpack certificate file.

Add the extension to your project

  1. Unzip the FaceUnity AR Filter package, and save all .aar files to the /app/libs folder in your Android project.

  2. Save the certificate file authpack.java to the folder where the app module is located. For example, if the package name is io.agora.rte.extension.faceunity.example, the certificate file should be saved to /app/src/main/java/io/agora/rte/extension/faceunity/example.

  3. Save the model and prop files you need from the resource package to the /app/src/main/assets folder in your project. For details of files provided in the resource pack, see Resource package structure.

  4. In the app/build.gradle file, add the following line under dependencies:


    implementation fileTree(dir: "libs", include: ["*.jar", "*.aar"])

  5. To use the required classes, ensure that the following libraries are included in the list of import statements in your Android activity:


    import io.agora.rtc2.Constants;
    import io.agora.rtc2.IMediaExtensionObserver;
    import io.agora.rtc2.IRtcEngineEventHandler;
    import io.agora.rtc2.RtcEngine;
    import io.agora.rtc2.RtcEngineConfig;
    import io.agora.rtc2.video.VideoCanvas;

    import io.agora.rte.extension.faceunity.ExtensionManager;

Integrate the extension

This section describes the call sequence you implement to use FaceUnity features in your app.

  1. Enable the extension

    When initializing RtcEngine, call enableExtension before other APIs (including enableVideo and joinChannel) to enable the extension.


    private void enableExtension(boolean enabled) {
        // Initialize ExtensionManager before calling enableExtension
        ExtensionManager.getInstance(mRtcEngine).initialize(this);
        mRtcEngine.enableExtension("FaceUnity", "Effect", enabled);
    }

  2. Initialize the extension

    After receiving the onExtensionStarted callback, call setExtensionProperty, and pass in the corresponding key and value pair:


    private void initExtension() {
        // Initialization: pass the certificate data to authenticate
        try {
            JSONObject jsonObject = new JSONObject();
            JSONArray jsonArray = new JSONArray();
            for (byte it : authpack.A()) {
                jsonArray.put(it);
            }
            jsonObject.put("authdata", jsonArray);
            setExtensionProperty("fuSetup", jsonObject.toString());
        } catch (JSONException e) {
            Log.e(TAG, e.toString());
        }

        // Load the AI model
        File modelDir = new File(getExternalFilesDir("assets"),
                "face_unity/model/ai_face_processor.bundle");
        try {
            JSONObject jsonObject = new JSONObject();
            jsonObject.put("data", modelDir.getAbsolutePath());
            jsonObject.put("type", 1 << 10);
            setExtensionProperty("fuLoadAIModelFromPackage", jsonObject.toString());
        } catch (JSONException e) {
            Log.e(TAG, e.toString());
        }
    }

    // Helper that fills in the provider and extension names, so callers
    // only supply the key and value when calling setExtensionProperty
    private void setExtensionProperty(String key, String property) {
        mRtcEngine.setExtensionProperty("FaceUnity", "Effect", key, property);
    }

  3. Configure beauty effects and body recognition

    Call setExtensionProperty and pass in the corresponding keys and values.

    You can implement the following functions:

    • Load props and adjust beautification intensity
    • Recognize and track human faces, gestures, and bodies

    You can call the method as needed. For a full list of keys and values, see the FaceUnity key-value overview.
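As an illustrative sketch, the snippet below builds the JSON values for the fuCreateItemFromPackage and fuItemSetParam keys described in the key-value overview. The bundle path and the filter_level variable name are hypothetical placeholders; check your resource package and the FaceUnity documentation for real names.

```java
// Sketch: building the values for fuCreateItemFromPackage and fuItemSetParam.
// The bundle path and variable name below are hypothetical placeholders.
public class BeautyExample {

    // fuCreateItemFromPackage takes a "data" field: the prop package path.
    static String createItemValue(String bundlePath) {
        return "{\"data\":\"" + bundlePath + "\"}";
    }

    // fuItemSetParam takes the same path as "obj_handle", plus the variable
    // name and the value to set.
    static String setParamValue(String bundlePath, String name, double value) {
        return "{\"obj_handle\":\"" + bundlePath + "\","
                + "\"name\":\"" + name + "\","
                + "\"value\":" + value + "}";
    }

    public static void main(String[] args) {
        String path = "/path/to/face_beautification.bundle"; // hypothetical
        System.out.println(createItemValue(path));
        System.out.println(setParamValue(path, "filter_level", 0.8));
        // With the helper from the initialization step, these become:
        // setExtensionProperty("fuCreateItemFromPackage", createItemValue(path));
        // setExtensionProperty("fuItemSetParam", setParamValue(path, "filter_level", 0.8));
    }
}
```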

  4. Release the resources

    When you do not need the extension, follow these steps to release the resources:

    1. Call setExtensionProperty and pass in the fuDestroyLibData key.
    2. After receiving the fuDestroyLibData callback, call destroy to destroy the Agora Engine.
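The two steps above can be sketched as follows. The confirmation check is testable on its own; the commented engine calls assume the mRtcEngine and the IMediaExtensionObserver from the earlier steps, and passing "{}" as the value is an assumption since fuDestroyLibData takes no parameters.

```java
// Sketch of the release sequence described above.
public class ReleaseExample {

    // Step 2's check: onEvent delivers provider and key strings; once
    // fuDestroyLibData is confirmed, it is safe to destroy the engine.
    static boolean isDestroyConfirmed(String provider, String key) {
        return "FaceUnity".equals(provider) && "fuDestroyLibData".equals(key);
    }

    public static void main(String[] args) {
        // Step 1 (requires the engine; the key takes no parameters):
        // mRtcEngine.setExtensionProperty("FaceUnity", "Effect", "fuDestroyLibData", "{}");

        // Step 2, inside your IMediaExtensionObserver callback handler:
        if (isDestroyConfirmed("FaceUnity", "fuDestroyLibData")) {
            System.out.println("safe to call RtcEngine.destroy()");
        }
    }
}
```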

Sample project

The complete Android FaceUnity sample code and project is available on GitHub.

Run the sample project

  1. Clone the repository:


    git clone https://github.com/AgoraIO-Community/AgoraMarketPlace.git

  2. On the Extensions Marketplace Downloads page, download the Android package of FaceUnity AR Filter. Unzip the package, and save all .aar files to the FaceUnity/android/app/libs path.

  3. Contact Agora to get the certificate file and resource package.

  4. Save the certificate file authpack.java to FaceUnity/android/app/src/main/java/io/agora/rte/extension/faceunity/example.

  5. Save the required model and prop files from the resource package to FaceUnity/android/app/src/main/assets/face_unity under the project folder.

  6. Open the sample project FaceUnity/android in Android Studio.

  7. Sync the project with Gradle files.

  8. Open the FaceUnity/android/app/src/main/java/io/agora/rte/extension/faceunity/example/Config.java file, and replace <YOUR_APP_ID> with your App ID. To get an App ID, see Getting Started with Agora.


    public interface Config {
        String mAppId = "<YOUR_APP_ID>";
        String mToken = null;
    }

  9. Connect an Android device (not an emulator), and run the project.

Reference

This section contains information that completes the information on this page, or points you to documentation that explains other aspects of this product.

FaceUnity key-value overview

This section lists the FaceUnity APIs you can use with Agora SDK.

The key corresponds to the name of the FaceUnity API, and the value corresponds to the parameters of the API. In this section, if the value is the same as the parameters of the FaceUnity API, the link leads to the FaceUnity documentation. If the value is different from the parameters of the FaceUnity API, the link leads to a section on this page.

Method keys

Method keys refer to the keys you pass in when calling the setExtensionProperty or setExtensionPropertyWithVendor method of the Video SDK.

Initialization

  • fuSetup: Initializes the extension and authenticates the user. Must be executed before other keys.
  • fuLoadAIModelFromPackage: Preloads AI capabilities.
  • fuReleaseAIModel: Frees up resources occupied by AI capabilities.

Prop package loading

  • fuCreateItemFromPackage: Loads a prop package.
  • fuLoadTongueModel: Loads tongue detection data.
  • fuItemSetParam: Modifies or sets the value of a variable in the prop package.

Destruction

  • fuDestroyItem: Destroys a specified item.
  • fuDestroyAllItems: Destroys all loaded items and releases all occupied resources.
  • fuOnDeviceLost: Resets the system's GL state. Use this key when the OpenGL context is released or destroyed by external resources.
  • fuDestroyLibData: Frees up the memory allocated to the face tracking module after calling fuSetup.

System functions

  • fuBindItems: Binds resource items to a target item.
  • fuUnbindItems: Unbinds resource items from a target item.
  • fuIsTracking: Sets whether to get the number of faces being tracked.
  • fuSetMaxFaces: Sets the maximum number of tracked faces.
  • fuSetDefaultRotationMode: Sets the default human face orientation.

Algorithm functions

  • fuFaceProcessorSetMinFaceRatio: Sets the distance of face detection.
  • fuSetTrackFaceAIType: Sets the fuTrackFace algorithm type.
  • fuSetFaceProcessorFov: Sets the fov (equivalent to focal length) of the FaceProcessor algorithm module.
  • fuHumanProcessorReset: Resets the state of the HumanProcessor algorithm module.
  • fuHumanProcessorSetMaxHumans: Sets the number of bodies tracked by the HumanProcessor algorithm module.
  • fuHumanProcessorGetNumResults: Sets whether to get the number of bodies tracked by the HumanProcessor algorithm module.
  • fuHumanProcessorSetFov: Sets the fov (equivalent to focal length) used by the HumanProcessor algorithm module to track 3D key points on human bodies.
  • fuHandDetectorGetResultNumHands: Sets whether to get the number of gestures tracked by the HandGesture algorithm module. Note that ai_gesture.bundle needs to be loaded.

Method key description

fuSetup

  • authdata: The path to the certificate file.

fuLoadAIModelFromPackage

  • data: String. The path of the AI capability model file ai_xxx.bundle. Such model files are located in the assets/AI_Model directory of the resource package.
  • type: Int. The AI capability type corresponding to the bundle file. Possible values are listed in the enum FUAITYPE.

fuCreateItemFromPackage

  • data: String. The path to the prop package you want to load. A prop package usually has the suffix .bundle.

fuLoadTongueModel

  • data: String. The path of the tongue model data tongue.bundle.

fuItemSetParam

  • obj_handle: String. The path of the prop package passed in when calling fuCreateItemFromPackage.
  • name: String. The name of the variable to set in the prop package.
  • value: Object. The variable value to be set.

For details on the variable names and values in the prop package, refer to the FaceUnity documentation.

fuDestroyItem

  • item: String. The path of the prop package passed in when calling fuCreateItemFromPackage.

fuBindItems

  • obj_handle: String. The path of the target item.
  • p_items: String array. The paths to the resource items you want to bind.

fuUnbindItems

  • obj_handle: String. The path of the target item.
  • p_items: String array. The paths to the resource items you want to unbind.

fuIsTracking

  • enable: Bool. Whether to get the number of faces being tracked. If set to true, you can receive the fuIsTracking callback.

fuHumanProcessorGetNumResults

  • enable: Bool. Whether to get the number of human bodies tracked by the HumanProcessor algorithm module. If set to true, you can receive the fuHumanProcessorGetNumResults callback.

fuHandDetectorGetResultNumHands

  • enable: Bool. Whether to get the number of gestures tracked by the HandGesture algorithm module. If set to true, you can receive the fuHandDetectorGetResultNumHands callback.

Callback keys

Callback keys refer to the keys returned in the onEvent callback of the Agora SDK.

  • fuIsTracking: Returns the number of faces being tracked.
  • fuHumanProcessorGetNumResults: Returns the number of human bodies tracked by the HumanProcessor algorithm module.
  • fuHandDetectorGetResultNumHands: Returns the number of gestures tracked by the HandGesture algorithm module.
  • fuDestroyLibData: Reports that the memory allocated to the face tracking module after calling fuSetup is released.

Callback key description

fuIsTracking

  • faces: Int. The number of faces being tracked.

fuHumanProcessorGetNumResults

  • people: Int. The number of bodies being tracked.

fuHandDetectorGetResultNumHands

  • hands: Int. The number of gestures being tracked.
fuDestroyLibData

Its value contains no parameters.
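To consume these callbacks, parse the value string delivered to onEvent. The following is a dependency-free sketch with a naive extractor; a real app would use org.json (JSONObject), as the integration code above already does.

```java
// Sketch: extracting the integer payload from a callback value string such
// as {"faces":2} (fuIsTracking) or {"people":1} (fuHumanProcessorGetNumResults).
public class CallbackValueExample {

    // Naive extractor for the flat one-field payloads listed above.
    static int extractInt(String json, String field) {
        int start = json.indexOf("\"" + field + "\":") + field.length() + 3;
        int end = start;
        while (end < json.length() && Character.isDigit(json.charAt(end))) {
            end++;
        }
        return Integer.parseInt(json.substring(start, end));
    }

    public static void main(String[] args) {
        System.out.println(extractInt("{\"faces\":2}", "faces")); // 2
        System.out.println(extractInt("{\"hands\":1}", "hands")); // 1
    }
}
```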

API reference