How to integrate the face detection function of Huawei Machine Learning Service (ML Kit)

Time: 2021-8-29

You take a beautiful selfie and find that your face is not thin enough, your eyes are not big enough, and your expression is not cute enough… Wouldn't it be great if you could slim your face with one tap and add cute stickers?

Or your children watch the iPad screen for too long, or hold their eyes too close to it, and you cannot keep an eye on them at every moment. Wouldn't an app that enables parental control be convenient? The face detection function of Huawei Machine Learning Service (ML Kit) can easily help you solve both problems!

The face detection function of Huawei Machine Learning Service can detect up to 855 facial key points and return the coordinates of the face contour, eyebrows, eyes, nose, mouth and ears, as well as the face deflection angle. After integrating the face detection service, developers can quickly build face-beautification applications based on this information, or add fun sticker elements to the face to make pictures more entertaining. In addition, the face detection service can recognize facial features, including whether the eyes are open, whether the person is wearing glasses or a hat, gender, age, and whether there is a beard. It can also recognize up to seven facial expressions: smiling, neutral, angry, disgusted, frightened, sad and surprised.

Development practice of “thin face and big eyes”

1. Development preparation

For detailed preparation steps, please refer to the Huawei Developer Alliance documentation.

Here are the key development steps.

1.1 Configure the Maven repository address in the project-level build.gradle

buildscript {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.3.1.300'
    }
}
allprojects {
    repositories {
        ...
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

1.2 Add configurations to the file header

After integrating the SDK, add the following configurations to the file header:

apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'

1.3 Configure SDK dependencies in the app-level build.gradle

dependencies {
    //Import basic SDK
    implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
    //Introduce face contour + key point detection model package
    implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'
    //Introduction of expression detection model package
    implementation 'com.huawei.hms:ml-computer-vision-face-emotion-model:2.0.1.300'
    //Introduce feature detection model package
    implementation 'com.huawei.hms:ml-computer-vision-face-feature-model:2.0.1.300'
}

1.4 Add the following statement to the AndroidManifest.xml file so that the machine learning model is updated automatically

<manifest
    ...
    <meta-data
        android:name="com.huawei.hms.ml.DEPENDENCY" 
        android:value= "face"/>
    ...
</manifest>
 
1.5 Apply for camera permission
<uses-permission android:name="android.permission.CAMERA" />
<uses-feature android:name="android.hardware.camera" />

2. Code development

2.1 Create a face analyzer with the default parameter configuration

analyzer = MLAnalyzerFactory.getInstance().getFaceAnalyzer();
 
2.2 Create an MLFrame object through android.graphics.Bitmap for the analyzer to detect images
MLFrame frame = MLFrame.fromBitmap(bitmap);
 
2.3 Call the asyncAnalyseFrame method to perform face detection
Task<List<MLFace>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLFace>>() {
     @Override
     public void onSuccess(List<MLFace> faces) {
         //The detection is successful, and the key point information of the face is obtained.
     }
 }).addOnFailureListener(new OnFailureListener() {
     @Override
     public void onFailure(Exception e) {
         //Detection failed.
    }
 });

2.4 Use progress bars to apply different degrees of the big-eye and thin-face effects. Call the magicEye method and the smallFaceMesh method to implement the big-eye algorithm and the thin-face algorithm respectively

private SeekBar.OnSeekBarChangeListener onSeekBarChangeListener = new SeekBar.OnSeekBarChangeListener() {
    @Override
    public void onProgressChanged(SeekBar seekBar, int progress, boolean fromUser) {
        switch (seekBar.getId()) {
            case R.id.seekbareye: // When the big-eye progress bar changes
                ...
                break;
            case R.id.seekbarface: // When the thin-face progress bar changes
                ...
                break;
        }
    }
};
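The switch above only dispatches the two progress bars; the demo's actual deformation methods are not shown. As a minimal sketch, a hypothetical helper (`DeformMapper`, not part of the HMS SDK or the demo) could map the progress value to a deformation strength like this:

```java
public class DeformMapper {
    // Map a SeekBar progress (0..100) to a deformation strength in [0, maxStrength].
    // An exponent > 1 makes low progress values change the face more gently.
    public static float progressToStrength(int progress, float maxStrength) {
        float t = Math.max(0, Math.min(100, progress)) / 100f; // clamp and normalize
        return (float) Math.pow(t, 1.5) * maxStrength;
    }
}
```

The resulting strength would then be fed into whatever mesh-deformation routine the big-eye or thin-face algorithm uses.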
2.5 Release the analyzer after detection is complete
try {
    if (analyzer != null) {
        analyzer.stop();
    }
} catch (IOException e) {
    Log.e(TAG, "e=" + e.getMessage());
}

Demo effect


Development practice of “fun and cute stickers”

Preparation before development

Add the Huawei Maven repository in the project-level gradle

Open the Android Studio project-level build.gradle file


Add the following Maven addresses incrementally:

buildscript {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}
allprojects {
    repositories {
        maven { url 'https://developer.huawei.com/repo/' }
    }
}

Add SDK dependencies to the app-level build.gradle


// Face detection SDK.
 implementation 'com.huawei.hms:ml-computer-vision-face:2.0.1.300'
 // Face detection model.
 implementation 'com.huawei.hms:ml-computer-vision-face-shape-point-model:2.0.1.300'

Apply for the camera, network access and storage permissions in the AndroidManifest.xml file

<!-- Camera permission -->
<uses-feature android:name="android.hardware.camera" />
<uses-permission android:name="android.permission.CAMERA" />
<!-- Write permission -->
<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<!-- Read permission -->
<uses-permission android:name="android.permission.READ_EXTERNAL_STORAGE" />

Key steps of code development

Set face detector

MLFaceAnalyzerSetting detectorOptions;
 detectorOptions = new MLFaceAnalyzerSetting.Factory()
        .setFeatureType(MLFaceAnalyzerSetting.TYPE_UNSUPPORT_FEATURES)
        .setShapeType(MLFaceAnalyzerSetting.TYPE_SHAPES)
        .allowTracing(MLFaceAnalyzerSetting.MODE_TRACING_FAST)
        .create();
 detector = MLAnalyzerFactory.getInstance().getFaceAnalyzer(detectorOptions);

Here, we obtain the camera frame data through the camera callback, get the face contour points by calling the face detector, and write them into FacePointEngine for the sticker filter to use.

@Override
 public void onPreviewFrame(final byte[] imgData, final Camera camera) {
    int width = mPreviewWidth;
    int height = mPreviewHeight;
 
    long startTime = System.currentTimeMillis();
    //Set the front and rear shooting directions to be consistent
    if (isFrontCamera()){
        mOrientation = 0;
    }else {
        mOrientation = 2;
    }
    MLFrame.Property property =
            new MLFrame.Property.Creator()
                    .setFormatType(ImageFormat.NV21)
                    .setWidth(width)
                    .setHeight(height)
                    .setQuadrant(mOrientation)
                    .create();
 
    ByteBuffer data = ByteBuffer.wrap(imgData);
    //Call face detection interface
    SparseArray<MLFace> faces = detector.analyseFrame(MLFrame.fromByteBuffer(data,property));
    //Judge whether face information is obtained
    if(faces.size()>0){
        MLFace mLFace = faces.get(0);
        EGLFace EGLFace = FacePointEngine.getInstance().getOneFace(0);
        EGLFace.pitch = mLFace.getRotationAngleX();
        EGLFace.yaw = mLFace.getRotationAngleY();
        EGLFace.roll = mLFace.getRotationAngleZ() - 90;
        if (isFrontCamera())
            EGLFace.roll = -EGLFace.roll;
        if (EGLFace.vertexPoints == null) {
            EGLFace.vertexPoints = new PointF[131];
        }
        int index = 0;
        //Obtain the contour point coordinates of a person and convert them to the floating-point value in the OpenGL normalized coordinate system
        for (MLFaceShape contour : mLFace.getFaceShapeList()) {
            if (contour == null) {
                continue;
            }
            List<MLPosition> points = contour.getPoints();
 
            for (int i = 0; i < points.size(); i++) {
                MLPosition point = points.get(i);
                float x = ( point.getY() / height) * 2 - 1;
                float y = ( point.getX() / width ) * 2 - 1;
                if (isFrontCamera())
                    x = -x;
                PointF Point = new PointF(x,y);
                EGLFace.vertexPoints[index] = Point;
                index++;
            }
        }
        //Insert face object
        FacePointEngine.getInstance().putOneFace(0, EGLFace);
        //Set the number of faces
        FacePointEngine.getInstance().setFaceSize(faces != null ? faces.size() : 0);
    }else{
        FacePointEngine.getInstance().clearAll();
    }
    long endTime = System.currentTimeMillis();
    Log.d("TAG","Face detect time: " + String.valueOf(endTime - startTime));
 }
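The coordinate conversion inside the loop above can be isolated into a small pure helper. This sketch (hypothetical class name `NdcMapper`, not from the demo) reproduces the same math: because the camera frame is rotated 90°, the camera x/y axes are swapped, and the x axis is mirrored for the front camera:

```java
public class NdcMapper {
    // Convert a point from the rotated camera frame (x along frame width, y along
    // frame height) to OpenGL normalized device coordinates in [-1, 1].
    public static float[] toNdc(float px, float py, int width, int height, boolean frontCamera) {
        float x = (py / height) * 2f - 1f; // camera y maps to GL x (90° rotation)
        float y = (px / width) * 2f - 1f;  // camera x maps to GL y
        if (frontCamera) {
            x = -x;                        // mirror horizontally for the front camera
        }
        return new float[]{x, y};
    }
}
```

For example, the center of a 640×480 frame maps to (0, 0), and the frame origin maps to (-1, -1) on the rear camera but (1, -1) on the mirrored front camera.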

The face contour points returned by the ML kit interface are shown in the figure:

Teach you how to integrate Huawei machine learning service (ML Kit) face detection function

Before introducing how to design stickers, let's first look at the sticker JSON data definition.

public class FaceStickerJson {

    public int[] centerIndexList;   // List of center-coordinate indexes; the center may be computed from several key points
    public float offsetX;           // X-axis offset in pixels relative to the sticker's center coordinate
    public float offsetY;           // Y-axis offset in pixels relative to the sticker's center coordinate
    public float baseScale;         // Base zoom of the sticker
    public int startIndex;          // Start index on the face, used to calculate the face width
    public int endIndex;            // End index on the face, used to calculate the face width
    public int width;               // Sticker width
    public int height;              // Sticker height
    public int frames;              // Number of sticker frames
    public int action;              // Action; 0 means default display, used for handling sticker actions
    public String stickerName;      // Sticker name, which marks the sticker folder and the PNG file names
    public int duration;            // Display interval of each sticker frame
    public boolean stickerLooping;  // Whether the sticker is rendered in a loop
    public int maxCount;            // Maximum number of sticker renderings
 ...
 }
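The `frames`, `duration` and `stickerLooping` fields drive the sticker's frame animation. A hedged sketch (hypothetical helper, not part of the demo) of picking the current PNG frame from the elapsed time:

```java
public class StickerFrameClock {
    // Pick which sticker frame to show at a given elapsed time.
    // durationMs is the display interval per frame; if looping is false the
    // animation stops on the last frame.
    public static int frameIndex(long elapsedMs, int frames, int durationMs, boolean looping) {
        if (frames <= 1 || durationMs <= 0) {
            return 0;
        }
        long step = elapsedMs / durationMs;
        if (looping) {
            return (int) (step % frames);
        }
        return (int) Math.min(step, frames - 1);
    }
}
```

With `frames = 2` and `duration = 100`, a looping sticker alternates between frame 0 and frame 1 every 100 ms.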

We create a JSON file for a cat-ear sticker: through the face index we find point 84 at the center of the eyebrows and point 85 at the tip of the nose, attach the ear and nose stickers to them respectively, and then put the JSON file and the images in the assets directory.

{
    "stickerList": [{
        "type": "sticker",
        "centerIndexList": [84],
        "offsetX": 0.0,
        "offsetY": 0.0,
        "baseScale": 1.3024,
        "startIndex": 11,
        "endIndex": 28,
        "width": 495,
        "height": 120,
        "frames": 2,
        "action": 0,
        "stickerName": "nose",
        "duration": 100,
        "stickerLooping": 1,
        "maxcount": 5
    }, {
    "type": "sticker",
        "centerIndexList": [83],
        "offsetX": 0.0,
        "offsetY": -1.1834,
        "baseScale": 1.3453,
        "startIndex": 11,
        "endIndex": 28,
        "width": 454,
        "height": 150,
        "frames": 2,
        "action": 0,
        "stickerName": "ear",
        "duration": 100,
        "stickerLooping": 1,
        "maxcount": 5
    }]
 }
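In this JSON, `startIndex` and `endIndex` (11 and 28 here) reference two face-contour points whose distance approximates the face width, and `baseScale` sizes the sticker relative to it. A hypothetical helper (not from the demo) illustrating that calculation:

```java
public class StickerScaler {
    // Estimate the rendered sticker width from the distance between the two
    // face-contour points referenced by startIndex/endIndex, scaled by baseScale.
    public static float stickerWidth(float startX, float startY,
                                     float endX, float endY, float baseScale) {
        float dx = endX - startX;
        float dy = endY - startY;
        float faceWidth = (float) Math.sqrt(dx * dx + dy * dy);
        return faceWidth * baseScale;
    }
}
```

Deriving the size from the detected face width (rather than a fixed pixel size) keeps the sticker proportional as the face moves closer to or farther from the camera.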

Next we render the sticker texture using GLSurfaceView, which is simpler than TextureView. First, instantiate the sticker filter in onSurfaceCreated, pass in the sticker path, and open the camera.

@Override
 public void onSurfaceCreated(GL10 gl, EGLConfig config) {
 
    GLES30.glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    mTextures = new int[1];
    mTextures[0] = OpenGLUtils.createOESTexture();
    mSurfaceTexture = new SurfaceTexture(mTextures[0]);
    mSurfaceTexture.setOnFrameAvailableListener(this);
 
    //Input samplerexternaloes into the texture
    cameraFilter = new CameraFilter(this.context);
 
    //Set the face sticker path under the assets directory
    String folderPath ="cat";
    stickerFilter = new FaceStickerFilter(this.context,folderPath);
 
    //Create screen filter object
    screenFilter = new BaseFilter(this.context);
 
    facePointsFilter = new FacePointsFilter(this.context);
    mEGLCamera.openCamera();
 }

Then initialize the sticker filter in onSurfaceChanged.

@Override
 public void onSurfaceChanged(GL10 gl, int width, int height) {
    Log.d(TAG, "onSurfaceChanged. width: " + width + ", height: " + height);
    int previewWidth = mEGLCamera.getPreviewWidth();
    int previewHeight = mEGLCamera.getPreviewHeight();
    if (width > height) {
        setAspectRatio(previewWidth, previewHeight);
    } else {
        setAspectRatio(previewHeight, previewWidth);
    }
    //Set the size of the screen, create a framebuffer, and set the display size
    cameraFilter.onInputSizeChanged(previewWidth, previewHeight);
    cameraFilter.initFrameBuffer(previewWidth, previewHeight);
    cameraFilter.onDisplaySizeChanged(width, height);
 
    stickerFilter.onInputSizeChanged(previewHeight, previewWidth);
    stickerFilter.initFrameBuffer(previewHeight, previewWidth);
    stickerFilter.onDisplaySizeChanged(width, height);
 
    screenFilter.onInputSizeChanged(previewWidth, previewHeight);
    screenFilter.initFrameBuffer(previewWidth, previewHeight);
    screenFilter.onDisplaySizeChanged(width, height);
 
    facePointsFilter.onInputSizeChanged(previewHeight, previewWidth);
    facePointsFilter.onDisplaySizeChanged(width, height);
    mEGLCamera.startPreview(mSurfaceTexture);
 }
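Note how onSurfaceChanged swaps the preview width and height when the surface is in portrait, because camera preview sizes are reported landscape-first. That branch can be expressed as a small standalone helper (hypothetical, mirroring the code above):

```java
public class AspectHelper {
    // Return the preview size in the orientation of the surface: landscape-first
    // camera sizes are swapped when the surface is taller than it is wide.
    public static int[] orientedPreviewSize(int surfaceWidth, int surfaceHeight,
                                            int previewWidth, int previewHeight) {
        if (surfaceWidth > surfaceHeight) {
            return new int[]{previewWidth, previewHeight}; // landscape surface
        }
        return new int[]{previewHeight, previewWidth};     // portrait surface: swap
    }
}
```

Without this swap, a 1280×720 preview stretched onto a 1080×1920 portrait surface would appear distorted.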

Finally, the sticker is drawn to the screen in onDrawFrame.

@Override
 public void onDrawFrame(GL10 gl) {
    int textureId;
    //Clear screen and depth cache
    GLES30.glClear(GLES30.GL_COLOR_BUFFER_BIT | GLES30.GL_DEPTH_BUFFER_BIT);
    //Update to get a map
    mSurfaceTexture.updateTexImage();
    //Get the surface texture conversion matrix
    mSurfaceTexture.getTransformMatrix(mMatrix);
    //Set camera display conversion matrix
    cameraFilter.setTextureTransformMatrix(mMatrix);
 
    //Paint camera textures
    textureId = cameraFilter.drawFrameBuffer(mTextures[0],mVertexBuffer,mTextureBuffer);
    //Paint sticker texture
    textureId = stickerFilter.drawFrameBuffer(textureId,mVertexBuffer,mTextureBuffer);
    //Draw to screen
    screenFilter.drawFrame(textureId , mDisplayVertexBuffer, mDisplayTextureBuffer);
    if(drawFacePoints){
        facePointsFilter.drawFrame(textureId, mDisplayVertexBuffer, mDisplayTextureBuffer);
    }
 }

With that, the stickers are drawn onto the face.

Demo effect


For more details, see the official website and development guides of the Huawei Developer Alliance.

To join developer discussions, go to Reddit: https://www.reddit.com/r/HuaweiDevelopers/

To download the demo and sample code, go to GitHub: https://github.com/HMS-Core

For help with integration problems, go to Stack Overflow: https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest