Android: multiple ways to achieve a circular camera preview (with sample code)

Time: 2020-11-04

The final effect is shown in the picture below:

1、 Rounded-corner preview control

Set a ViewOutlineProvider for the control:


public RoundTextureView(Context context, AttributeSet attrs) {
  super(context, attrs);
  //Clip the view's content to a rounded-rect outline
  setOutlineProvider(new ViewOutlineProvider() {
    @Override
    public void getOutline(View view, Outline outline) {
      Rect rect = new Rect(0, 0, view.getMeasuredWidth(), view.getMeasuredHeight());
      outline.setRoundRect(rect, radius);
    }
  });
  setClipToOutline(true);
}

Modify the corner radius and refresh as needed:


public void setRadius(int radius) {
  this.radius = radius;
}

public void turnRound() {
  invalidateOutline();
}

The rounded corners shown by the control update according to the radius value that is set. When the control is square and the radius is half the side length, a circle is displayed.
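As a minimal usage sketch (assuming the RoundTextureView has already been laid out as a square; the view reference here is illustrative):

roundTextureView.post(() -> {
  //Half the side length of a square view yields a circle
  roundTextureView.setRadius(roundTextureView.getWidth() / 2);
  roundTextureView.turnRound();
});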

2、 Achieving a square preview

1. The device supports a 1:1 preview size

This article first introduces a simple but limited approach: adjust both the camera preview size and the preview control to a 1:1 aspect ratio.
Android devices generally support multiple preview sizes. Take the Samsung Tab S3 as an example.

When using the Camera API, the supported preview sizes are as follows:


2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1920x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1280x720
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1440x1080
2019-08-02 13:16:08.669 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1088x1088
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 1056x864
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 960x720
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 720x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 640x480
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 352x288
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 320x240
2019-08-02 13:16:08.670 16407-16407/com.wsy.glcamerademo I/CameraHelper: supportedPreviewSize: 176x144

The 1:1 preview size is 1088×1088.

When using the Camera2 API, the supported preview sizes (actually the picture sizes) are as follows:


2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x3096
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 4128x2322
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x2448
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3264x1836
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 3024x3024
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2976x2976
2019-08-02 13:19:24.980 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2880x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2592x1944
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1920
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1440
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2560x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2160x2160
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1536
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 2048x1152
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1936x1936
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1920x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1440x1080
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x960
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 1280x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 960x720
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 720x480
2019-08-02 13:19:24.981 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 640x480
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 320x240
2019-08-02 13:19:24.982 16768-16768/com.wsy.glcamerademo I/Camera2Helper: getBestSupportedSize: 176x144

The 1:1 preview sizes are 3024×3024, 2976×2976, 2160×2160, and 1936×1936.
As long as we select a 1:1 preview size and set the preview control to a square, we get a square preview;
setting the corner radius of the preview control to half its side length then yields a circular preview.
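A minimal selection sketch with the Camera API (the helper name findSquarePreviewSize is hypothetical, not part of the demo):

//Pick a 1:1 preview size, or return null if the device offers none
private Camera.Size findSquarePreviewSize(Camera.Parameters parameters) {
  for (Camera.Size size : parameters.getSupportedPreviewSizes()) {
    if (size.width == size.height) {
      return size; //e.g. 1088x1088 on the Samsung Tab S3
    }
  }
  return null; //no 1:1 size: fall back to the approach in the next section
}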

2. The device does not support a 1:1 preview size

Drawbacks of selecting a 1:1 preview size

Resolution limitations
As mentioned above, we can select a 1:1 preview size, but this is very restrictive:
the range of choices is small, and if the camera does not support a 1:1 preview size at all, the scheme is not feasible.

Resource consumption
Taking the Samsung Tab S3 as an example: with the Camera2 API, the square preview sizes the device supports are very large, which consumes more system resources during image processing and related operations.

Handling the case where a 1:1 preview size is not supported

Add a 1:1 ViewGroup
Put the TextureView into the ViewGroup
Set the TextureView's margins so that only the central square area is displayed

Schematic diagram

Sample code

//Keep the preview control's aspect ratio consistent with the preview size to avoid stretching
{
  FrameLayout.LayoutParams textureViewLayoutParams = (FrameLayout.LayoutParams) textureView.getLayoutParams();
  int newHeight = 0;
  int newWidth = textureViewLayoutParams.width;
  //Landscape
  if (displayOrientation % 180 == 0) {
    newHeight = textureViewLayoutParams.width * previewSize.height / previewSize.width;
  }
  //Portrait
  else {
    newHeight = textureViewLayoutParams.width * previewSize.width / previewSize.height;
  }
  //When the preview is not square, add a ViewGroup to limit the visible area of the view
  if (newHeight != textureViewLayoutParams.height) {
    insertFrameLayout = new RoundFrameLayout(CoverByParentCameraActivity.this);
    int sideLength = Math.min(newWidth, newHeight);
    FrameLayout.LayoutParams layoutParams = new FrameLayout.LayoutParams(sideLength, sideLength);
    insertFrameLayout.setLayoutParams(layoutParams);
    FrameLayout parentView = (FrameLayout) textureView.getParent();
    parentView.removeView(textureView);
    parentView.addView(insertFrameLayout);

    insertFrameLayout.addView(textureView);
    FrameLayout.LayoutParams newTextureViewLayoutParams = new FrameLayout.LayoutParams(newWidth, newHeight);
    //Landscape: a negative left margin centers the wide preview horizontally
    if (displayOrientation % 180 == 0) {
      newTextureViewLayoutParams.leftMargin = ((newHeight - newWidth) / 2);
    }
    //Portrait: a negative top margin centers the tall preview vertically
    else {
      newTextureViewLayoutParams.topMargin = -(newHeight - newWidth) / 2;
    }
    textureView.setLayoutParams(newTextureViewLayoutParams);
  }
}

3、 Using GLSurfaceView for a more customizable preview

With the method above we can achieve square and circular previews, but it only applies to the native camera. What if our data source is not the native camera? Next we introduce a scheme that displays NV21 data with a GLSurfaceView, drawing the preview data entirely ourselves.

1. GLSurfaceView usage flow

Flow of rendering YUV data with OpenGL

The key part is writing the Renderer, which is introduced below:


/**
 * A generic renderer interface.
 * <p>
 * The renderer is responsible for making OpenGL calls to render a frame.
 * <p>
 * GLSurfaceView clients typically create their own classes that implement
 * this interface, and then call {@link GLSurfaceView#setRenderer} to
 * register the renderer with the GLSurfaceView.
 * <p>
 *
 * <div>
 * <h3>Developer Guides</h3>
 * <p>For more information about how to use OpenGL, read the
 * <a href="{@docRoot}guide/topics/graphics/opengl.html">OpenGL</a> developer guide.</p>
 * </div>
 *
 * <h3>Threading</h3>
 * The renderer will be called on a separate thread, so that rendering
 * performance is decoupled from the UI thread. Clients typically need to
 * communicate with the renderer from the UI thread, because that's where
 * input events are received. Clients can communicate using any of the
 * standard Java techniques for cross-thread communication, or they can
 * use the {@link GLSurfaceView#queueEvent(Runnable)} convenience method.
 * <p>
 * <h3>EGL Context Lost</h3>
 * There are situations where the EGL rendering context will be lost. This
 * typically happens when device wakes up after going to sleep. When
 * the EGL context is lost, all OpenGL resources (such as textures) that are
 * associated with that context will be automatically deleted. In order to
 * keep rendering correctly, a renderer must recreate any lost resources
 * that it still needs. The {@link #onSurfaceCreated(GL10, EGLConfig)} method
 * is a convenient place to do this.
 *
 *
 * @see #setRenderer(Renderer)
 */
public interface Renderer {
  /**
   * Called when the surface is created or recreated.
   * <p>
   * Called when the rendering thread
   * starts and whenever the EGL context is lost. The EGL context will typically
   * be lost when the Android device awakes after going to sleep.
   * <p>
   * Since this method is called at the beginning of rendering, as well as
   * every time the EGL context is lost, this method is a convenient place to put
   * code to create resources that need to be created when the rendering
   * starts, and that need to be recreated when the EGL context is lost.
   * Textures are an example of a resource that you might want to create
   * here.
   * <p>
   * Note that when the EGL context is lost, all OpenGL resources associated
   * with that context will be automatically deleted. You do not need to call
   * the corresponding "glDelete" methods such as glDeleteTextures to
   * manually delete these lost resources.
   * <p>
   * @param gl the GL interface. Use <code>instanceof</code> to
   * test if the interface supports GL11 or higher interfaces.
   * @param config the EGLConfig of the created surface. Can be used
   * to create matching pbuffers.
   */
  void onSurfaceCreated(GL10 gl, EGLConfig config);

  /**
   * Called when the surface changed size.
   * <p>
   * Called after the surface is created and whenever
   * the OpenGL ES surface size changes.
   * <p>
   * Typically you will set your viewport here. If your camera
   * is fixed then you could also set your projection matrix here:
   * <pre>
   * void onSurfaceChanged(GL10 gl, int width, int height) {
   *   gl.glViewport(0, 0, width, height);
   *   // for a fixed camera, set the projection too
   *   float ratio = (float) width / height;
   *   gl.glMatrixMode(GL10.GL_PROJECTION);
   *   gl.glLoadIdentity();
   *   gl.glFrustumf(-ratio, ratio, -1, 1, 1, 10);
   * }
   * </pre>
   * @param gl the GL interface. Use <code>instanceof</code> to
   * test if the interface supports GL11 or higher interfaces.
   * @param width
   * @param height
   */
  void onSurfaceChanged(GL10 gl, int width, int height);

  /**
   * Called to draw the current frame.
   * <p>
   * This method is responsible for drawing the current frame.
   * <p>
   * The implementation of this method typically looks like this:
   * <pre>
   * void onDrawFrame(GL10 gl) {
   *   gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
   *   //... other gl calls to render the scene ...
   * }
   * </pre>
   * @param gl the GL interface. Use <code>instanceof</code> to
   * test if the interface supports GL11 or higher interfaces.
   */
  void onDrawFrame(GL10 gl);
}



void onSurfaceCreated(GL10 gl, EGLConfig config)
Called when the surface is created or recreated.

void onSurfaceChanged(GL10 gl, int width, int height)
Called when the size of the surface changes.

void onDrawFrame(GL10 gl)
The drawing is done here. When renderMode is set to RENDERMODE_CONTINUOUSLY, this function executes continuously;
when it is set to RENDERMODE_WHEN_DIRTY, it executes only after creation completes and whenever requestRender() is called. We generally choose RENDERMODE_WHEN_DIRTY to avoid overdraw.

Generally we implement a Renderer ourselves and set it on the GLSurfaceView; writing the Renderer is the core step of the whole process. The following flow chart covers the initialization done in onSurfaceCreated(GL10 gl, EGLConfig config) and the drawing done in onDrawFrame(GL10 gl).

Renderer for rendering YUV data
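In code, registering the renderer looks roughly like this (a minimal sketch; the view id gl_surface_view and the YUVRenderer class name are assumptions for illustration):

GLSurfaceView glSurfaceView = findViewById(R.id.gl_surface_view); //hypothetical layout id
//The shaders used below require OpenGL ES 2.0
glSurfaceView.setEGLContextClientVersion(2);
//YUVRenderer stands for our Renderer implementation
glSurfaceView.setRenderer(new YUVRenderer());
//Must be called after setRenderer; draw only when requestRender() is called
glSurfaceView.setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);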

2. Specific implementation

Coordinate systems

Android view coordinate system

OpenGL world coordinate system

As shown in the figures, unlike the Android view coordinate system, the OpenGL coordinate system is a Cartesian coordinate system.
The Android view coordinate system has its origin at the top-left corner, with x increasing to the right and y increasing downward;
the OpenGL coordinate system has its origin at the center, with x increasing to the right and y increasing upward.

Writing the shaders

/**
 * Vertex shader
 */
private static String VERTEX_SHADER =
    "  attribute vec4 attr_position;\n" +
        "  attribute vec2 attr_tc;\n" +
        "  varying vec2 tc;\n" +
        "  void main() {\n" +
        "    gl_Position = attr_position;\n" +
        "    tc = attr_tc;\n" +
        "  }";

/**
 * Fragment shader
 */
private static String FRAG_SHADER =
    "  varying vec2 tc;\n" +
        "  uniform sampler2D ySampler;\n" +
        "  uniform sampler2D uSampler;\n" +
        "  uniform sampler2D vSampler;\n" +
        //Column-major YUV -> RGB matrix, matching the conversion matrix derived below
        "  const mat3 convertMat = mat3( 1.0, 1.0, 1.0, 0.0, -0.344, 1.772, 1.402, -0.714, 0.0);\n" +
        "  void main()\n" +
        "  {\n" +
        "    vec3 yuv;\n" +
        "    yuv.x = texture2D(ySampler, tc).r;\n" +
        "    yuv.y = texture2D(uSampler, tc).r - 0.5;\n" +
        "    yuv.z = texture2D(vSampler, tc).r - 0.5;\n" +
        "    gl_FragColor = vec4(convertMat * yuv, 1.0);\n" +
        "  }";

Built-in variables


gl_Position

In VERTEX_SHADER, gl_Position represents the spatial coordinates to draw at. Since we are drawing in 2D, we directly pass in the four OpenGL vertex coordinates: lower left (-1, -1), lower right (1, -1), upper left (-1, 1), and upper right (1, 1), i.e. {-1, -1, 1, -1, -1, 1, 1, 1}.


gl_FragColor

In FRAG_SHADER, gl_FragColor represents the color of a single fragment.

Explanation of other variables


ySampler, uSampler, vSampler

These represent the Y, U, and V texture samplers respectively.


convertMat

According to the following formula:


R = Y + 1.402 (V - 128)
G = Y - 0.34414 (U - 128) - 0.71414 (V - 128)
B = Y + 1.772 (U - 128)

We can derive the following YUV-to-RGB conversion matrix:


1.0,    1.0,    1.0,
0.0,   -0.344,  1.772,
1.402, -0.714,  0.0

Note that the shader samples Y, U, and V as floats in [0, 1], so the (U - 128) and (V - 128) offsets become u - 0.5 and v - 0.5 there.

Explanation of some types and functions


vec3, vec4

They represent a three-component vector and a four-component vector respectively.


vec4 texture2D(sampler2D sampler, vec2 coord)

Samples the texture bound to sampler at coordinate coord and returns the color value:
texture2D(ySampler, tc).r gives the Y value,
texture2D(uSampler, tc).r gives the U value,
texture2D(vSampler, tc).r gives the V value.
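For reference, the coordinates above suggest the shape of the GLUtil constants used in the initialization below. This is a sketch under that assumption, showing only the unrotated, unmirrored case; the demo's actual tables (especially the rotated and mirrored variants) may differ:

public class GLUtil {
  //Two components (x, y) per vertex and (s, t) per texture coordinate
  public static final int COUNT_PER_SQUARE_VERTICE = 2;
  public static final int COUNT_PER_COORD_VERTICES = 2;

  //Full-screen quad in OpenGL coordinates: bottom left, bottom right, top left, top right
  public static final float[] SQUARE_VERTICES = {
      -1.0f, -1.0f,
      1.0f, -1.0f,
      -1.0f, 1.0f,
      1.0f, 1.0f
  };

  //Matching texture coordinates for the unrotated, unmirrored case
  public static final float[] COORD_VERTICES = {
      1.0f, 1.0f,
      1.0f, 0.0f,
      0.0f, 1.0f,
      0.0f, 0.0f
  };
}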

Initialization in Java code

Create the ByteBuffers that hold the Y, U, and V texture data according to the image width and height,
and select the corresponding texture coordinates according to whether the image is mirrored and its rotation angle:

public void init(boolean isMirror, int rotateDegree, int frameWidth, int frameHeight) {
  if (this.frameWidth == frameWidth
      && this.frameHeight == frameHeight
      && this.rotateDegree == rotateDegree
      && this.isMirror == isMirror) {
    return;
  }
  dataInput = false;
  this.frameWidth = frameWidth;
  this.frameHeight = frameHeight;
  this.rotateDegree = rotateDegree;
  this.isMirror = isMirror;
  yArray = new byte[this.frameWidth * this.frameHeight];
  uArray = new byte[this.frameWidth * this.frameHeight / 4];
  vArray = new byte[this.frameWidth * this.frameHeight / 4];

  int yFrameSize = this.frameHeight * this.frameWidth;
  int uvFrameSize = yFrameSize >> 2;
  yBuf = ByteBuffer.allocateDirect(yFrameSize);
  yBuf.order(ByteOrder.nativeOrder()).position(0);

  uBuf = ByteBuffer.allocateDirect(uvFrameSize);
  uBuf.order(ByteOrder.nativeOrder()).position(0);

  vBuf = ByteBuffer.allocateDirect(uvFrameSize);
  vBuf.order(ByteOrder.nativeOrder()).position(0);
  //Vertex coordinates
  squareVertices = ByteBuffer
      .allocateDirect(GLUtil.SQUARE_VERTICES.length * FLOAT_SIZE_BYTES)
      .order(ByteOrder.nativeOrder())
      .asFloatBuffer();
  squareVertices.put(GLUtil.SQUARE_VERTICES).position(0);
  //Texture coordinates, chosen according to mirroring and rotation
  if (isMirror) {
    switch (rotateDegree) {
      case 0:
        coordVertice = GLUtil.MIRROR_COORD_VERTICES;
        break;
      case 90:
        coordVertice = GLUtil.ROTATE_90_MIRROR_COORD_VERTICES;
        break;
      case 180:
        coordVertice = GLUtil.ROTATE_180_MIRROR_COORD_VERTICES;
        break;
      case 270:
        coordVertice = GLUtil.ROTATE_270_MIRROR_COORD_VERTICES;
        break;
      default:
        break;
    }
  } else {
    switch (rotateDegree) {
      case 0:
        coordVertice = GLUtil.COORD_VERTICES;
        break;
      case 90:
        coordVertice = GLUtil.ROTATE_90_COORD_VERTICES;
        break;
      case 180:
        coordVertice = GLUtil.ROTATE_180_COORD_VERTICES;
        break;
      case 270:
        coordVertice = GLUtil.ROTATE_270_COORD_VERTICES;
        break;
      default:
        break;
    }
  }
  coordVertices = ByteBuffer
      .allocateDirect(coordVertice.length * FLOAT_SIZE_BYTES)
      .order(ByteOrder.nativeOrder())
      .asFloatBuffer();
  coordVertices.put(coordVertice).position(0);
}

Initializes the renderer when the surface is created

private void initRenderer() {
  rendererReady = false;
  createGLProgram();

  //Enable texture
  GLES20.glEnable(GLES20.GL_TEXTURE_2D);
  //Create texture
  createTexture(frameWidth, frameHeight, GLES20.GL_LUMINANCE, yTexture);
  createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, uTexture);
  createTexture(frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE, vTexture);

  rendererReady = true;
}

Here, createGLProgram creates the OpenGL program and associates the variables in the shader code:

private void createGLProgram() {
 int programHandleMain = GLUtil.createShaderProgram();
 if (programHandleMain != -1) {
   //Using the shader program
   GLES20.glUseProgram(programHandleMain);
   //Get vertex shader variables
   int glPosition = GLES20.glGetAttribLocation(programHandleMain, "attr_position");
   int textureCoord = GLES20.glGetAttribLocation(programHandleMain, "attr_tc");

   //Get fragment shader variables
   int ySampler = GLES20.glGetUniformLocation(programHandleMain, "ySampler");
   int uSampler = GLES20.glGetUniformLocation(programHandleMain, "uSampler");
   int vSampler = GLES20.glGetUniformLocation(programHandleMain, "vSampler");

   //Assign values to the uniform variables
   /**
    * GLES20.GL_TEXTURE0 is bound to ySampler
    * GLES20.GL_TEXTURE1 is bound to uSampler
    * GLES20.GL_TEXTURE2 is bound to vSampler
    *
    * In other words, the second parameter of glUniform1i is the texture unit index
    */
   GLES20.glUniform1i(ySampler, 0);
   GLES20.glUniform1i(uSampler, 1);
   GLES20.glUniform1i(vSampler, 2);

   GLES20.glEnableVertexAttribArray(glPosition);
   GLES20.glEnableVertexAttribArray(textureCoord);

   /**
    *Set vertex shader data
    */
   squareVertices.position(0);
   GLES20.glVertexAttribPointer(glPosition, GLUtil.COUNT_PER_SQUARE_VERTICE, GLES20.GL_FLOAT, false, 8, squareVertices);
   coordVertices.position(0);
   GLES20.glVertexAttribPointer(textureCoord, GLUtil.COUNT_PER_COORD_VERTICES, GLES20.GL_FLOAT, false, 8, coordVertices);
 }
}
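GLUtil.createShaderProgram() itself is not listed in this article. A minimal sketch of what such a helper typically does, compiling the two shader sources above and linking them into a program (an assumed implementation, not necessarily the demo's):

public static int createShaderProgram() {
  int vertexShader = loadShader(GLES20.GL_VERTEX_SHADER, VERTEX_SHADER);
  int fragmentShader = loadShader(GLES20.GL_FRAGMENT_SHADER, FRAG_SHADER);
  if (vertexShader == 0 || fragmentShader == 0) {
    return -1; //compilation failed; the caller above checks for -1
  }
  int program = GLES20.glCreateProgram();
  GLES20.glAttachShader(program, vertexShader);
  GLES20.glAttachShader(program, fragmentShader);
  GLES20.glLinkProgram(program);
  int[] linkStatus = new int[1];
  GLES20.glGetProgramiv(program, GLES20.GL_LINK_STATUS, linkStatus, 0);
  return linkStatus[0] == GLES20.GL_TRUE ? program : -1;
}

private static int loadShader(int type, String source) {
  int shader = GLES20.glCreateShader(type);
  GLES20.glShaderSource(shader, source);
  GLES20.glCompileShader(shader);
  int[] compiled = new int[1];
  GLES20.glGetShaderiv(shader, GLES20.GL_COMPILE_STATUS, compiled, 0);
  return compiled[0] == GLES20.GL_TRUE ? shader : 0;
}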

Here, createTexture creates a texture with the given width, height, and format:

private void createTexture(int width, int height, int format, int[] textureId) {
   //Create texture
   GLES20.glGenTextures(1, textureId, 0);
   //Bind texture
   GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureId[0]);
   /**
    * {@link GLES20#GL_TEXTURE_WRAP_S}: texture wrap mode in the horizontal direction
    * {@link GLES20#GL_TEXTURE_WRAP_T}: texture wrap mode in the vertical direction
    *
    * {@link GLES20#GL_REPEAT}: repeat
    * {@link GLES20#GL_MIRRORED_REPEAT}: mirrored repeat
    * {@link GLES20#GL_CLAMP_TO_EDGE}: clamp to the edge
    *
    * For example, with {@link GLES20#GL_REPEAT}:
    *
    *   squareVertices     coordVertices
    *   -1.0f, -1.0f,      1.0f, 1.0f,
    *    1.0f, -1.0f,      1.0f, 0.0f,  -> same as the TextureView preview
    *   -1.0f,  1.0f,      0.0f, 1.0f,
    *    1.0f,  1.0f,      0.0f, 0.0f
    *
    *   squareVertices     coordVertices
    *   -1.0f, -1.0f,      2.0f, 2.0f,
    *    1.0f, -1.0f,      2.0f, 0.0f,  -> four identical previews (bottom left, bottom right, top left, top right) compared with the TextureView preview
    *   -1.0f,  1.0f,      0.0f, 2.0f,
    *    1.0f,  1.0f,      0.0f, 0.0f
    */
   GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_REPEAT);
   GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_REPEAT);
   /**
    * {@link GLES20#GL_TEXTURE_MIN_FILTER}: used when the displayed texture is smaller than the loaded texture
    * {@link GLES20#GL_TEXTURE_MAG_FILTER}: used when the displayed texture is larger than the loaded texture
    *
    * {@link GLES20#GL_NEAREST}: use the color of the nearest texel as the color of the pixel to draw
    * {@link GLES20#GL_LINEAR}: compute the pixel color as a weighted average of the nearest texels
    */
   GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_NEAREST);
   GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D, GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);
   GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, format, width, height, 0, format, GLES20.GL_UNSIGNED_BYTE, null);
 }

Calling from Java code

When the data source delivers a frame, crop it and pass it in:

@Override
public void onPreview(final byte[] nv21, Camera camera) {
  //Crop out the specified image area
  ImageUtil.cropNV21(nv21, this.squareNV21, previewSize.width, previewSize.height, cropRect);
  //Refresh the GLSurfaceView
  roundCameraGLSurfaceView.refreshFrameNV21(this.squareNV21);
}
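The preparation of cropRect and squareNV21 is not shown in this article. One plausible setup computes a centered square crop and keeps every boundary even, as NV21 requires (field names are assumed from the snippet above):

//Hypothetical setup: centered square crop region and its output buffer
int cropSide = Math.min(previewSize.width, previewSize.height) & ~1; //even side length
int left = ((previewSize.width - cropSide) / 2) & ~1;  //even left boundary
int top = ((previewSize.height - cropSide) / 2) & ~1;  //even top boundary
cropRect = new Rect(left, top, left + cropSide, top + cropSide);
squareNV21 = new byte[cropSide * cropSide * 3 / 2];    //NV21 uses 1.5 bytes per pixel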

NV21 cropping code. Note that the crop boundaries should be even so that the Y rows and the interleaved VU rows stay aligned:

/**
 * Crop NV21 data
 *
 * @param originNV21 original NV21 data
 * @param cropNV21   buffer for the cropped NV21 result; memory must be allocated in advance
 * @param width      width of the original data
 * @param height     height of the original data
 * @param left       left boundary of the crop region in the original data
 * @param top        top boundary of the crop region in the original data
 * @param right      right boundary of the crop region in the original data
 * @param bottom     bottom boundary of the crop region in the original data
 */
public static void cropNV21(byte[] originNV21, byte[] cropNV21, int width, int height, int left, int top, int right, int bottom) {
 int halfWidth = width / 2;
 int cropImageWidth = right - left;
 int cropImageHeight = bottom - top;

 //Start of the Y data for the first cropped row in the original data
 int originalYLineStart = top * width;
 int targetYIndex = 0;

 //Start of the UV data for the first cropped row in the original data
 int originalUVLineStart = width * height + top * halfWidth;

 //Start of the UV data in the target buffer
 int targetUVIndex = cropImageWidth * cropImageHeight;

 for (int i = top; i < bottom; i++) {
   System.arraycopy(originNV21, originalYLineStart + left, cropNV21, targetYIndex, cropImageWidth);
   originalYLineStart += width;
   targetYIndex += cropImageWidth;
   if ((i & 1) == 0) {
     System.arraycopy(originNV21, originalUVLineStart + left, cropNV21, targetUVIndex, cropImageWidth);
     originalUVLineStart += width;
     targetUVIndex += cropImageWidth;
   }
 }
}

Pass to glsurafceview and refresh the frame data

/**
*Incoming nv21 refresh frame
*
*@ param data nv21 data
*/
public void refreshFrameNV21(byte[] data) {
 if (rendererReady) {
   yBuf.clear();
   uBuf.clear();
   vBuf.clear();
   putNV21(data, frameWidth, frameHeight);
   dataInput = true;
   requestRender();
 }
}

Putnv21 is used to extract y, u and V data from nv21

/**
*Take out the Y, u and V components of nv21 data
*
*@ param SRC nv21 frame data
*@ param width width
*@ param height height
*/
private void putNV21(byte[] src, int width, int height) {

 int ySize = width * height;
 int frameSize = ySize * 3 / 2;

 //Copy the Y plane
 System.arraycopy(src, 0, yArray, 0, ySize);

 int k = 0;

 //Extract the interleaved V and U values (NV21 stores V first)
 int index = ySize;
 while (index < frameSize) {
   vArray[k] = src[index++];
   uArray[k++] = src[index++];
 }
 yBuf.put(yArray).position(0);
 uBuf.put(uArray).position(0);
 vBuf.put(vArray).position(0);
}
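For reference, NV21 stores the full-resolution Y plane first, followed by a half-height plane of interleaved V and U bytes, which is why the loop above reads V before U:

//NV21 layout of a 4x2 frame (illustrative):
//  Y Y Y Y
//  Y Y Y Y
//  V U V U   <- one interleaved (V, U) pair per 2x2 block of Y pixels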

After requestRender() is called, onDrawFrame is invoked, where the three textures are bound to their data and drawn:

@Override
public void onDrawFrame(GL10 gl) {
  //Activate, bind, and fill each texture
  if (dataInput) {
    //y
    GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, yTexture[0]);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
        0,
        0,
        0,
        frameWidth,
        frameHeight,
        GLES20.GL_LUMINANCE,
        GLES20.GL_UNSIGNED_BYTE,
        yBuf);

    //u
    GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, uTexture[0]);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
        0,
        0,
        0,
        frameWidth >> 1,
        frameHeight >> 1,
        GLES20.GL_LUMINANCE,
        GLES20.GL_UNSIGNED_BYTE,
        uBuf);

    //v
    GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
    GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, vTexture[0]);
    GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D,
        0,
        0,
        0,
        frameWidth >> 1,
        frameHeight >> 1,
        GLES20.GL_LUMINANCE,
        GLES20.GL_UNSIGNED_BYTE,
        vBuf);
    //Draw once the data binding is complete
    GLES20.glDrawArrays(GLES20.GL_TRIANGLE_STRIP, 0, 4);
  }
}

This completes the drawing.

4、 Adding a border

Sometimes a circular preview alone is not enough; we may also want to add a border around the camera preview.

Border effect

In the same way, we can dynamically change the border values and redraw.
The relevant code in the custom border view is as follows:


@Override
protected void onDraw(Canvas canvas) {
  super.onDraw(canvas);
  if (paint == null) {
    paint = new Paint();
    paint.setStyle(Paint.Style.STROKE);
    paint.setAntiAlias(true);
    SweepGradient sweepGradient = new SweepGradient(((float) getWidth() / 2), ((float) getHeight() / 2),
        new int[]{Color.GREEN, Color.CYAN, Color.BLUE, Color.CYAN, Color.GREEN}, null);
    paint.setShader(sweepGradient);
  }
  drawBorder(canvas, 6);
}


private void drawBorder(Canvas canvas, int rectThickness) {
  if (canvas == null) {
    return;
  }
  paint.setStrokeWidth(rectThickness);
  Path drawPath = new Path();
  drawPath.addRoundRect(new RectF(0, 0, getWidth(), getHeight()), radius, radius, Path.Direction.CW);
  canvas.drawPath(drawPath, paint);
}

public void turnRound() {
  invalidate();
}

public void setRadius(int radius) {
  this.radius = radius;
}
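To combine the border with the circular preview, give both views the same radius and refresh them together; a sketch assuming both views are squares of the same size (the view references are illustrative):

int radius = roundTextureView.getWidth() / 2; //half the side length -> circle
roundTextureView.setRadius(radius);
roundTextureView.turnRound();
roundBorderView.setRadius(radius);
roundBorderView.turnRound();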

5、 Complete demo code:

https://github.com/wangshengyang1996/GLCameraDemo

The demo:

Uses the Camera API and the Camera2 API, selecting the supported preview size closest to a square
Uses the Camera API and dynamically adds a parent control to achieve a square preview
Uses the Camera API to obtain preview data and displays it with OpenGL

Finally, we recommend a free offline face recognition SDK for Android that combines well with the techniques in this article: https://ai.arcsoft.com.cn/product/arcface.html
