OpenGL (II): OpenGL ES video recording


OpenGL (I) Fundamentals of OpenGL ES rendering
OpenGL (II) GLSurfaceView video recording
OpenGL (III) Filter application
OpenGL (IV) Stickers and skin-smoothing theory

Regarding video recording, I covered the theory earlier in RTMP (I): theory of screen recording and live streaming.

We use OpenGL ES to draw camera data to the phone screen. If we need to apply beauty filters or other processing to the image, that work is done in the shader. So how do we get the data out of the shader and encode it into a video? Put another way: normally we encode with a software encoding library or with MediaCodec, and given a byte[] array of image data we could encode it as in the previous course. But now our image is processed inside the shader, and there is no byte[].

Let's analyze how to turn our processed (beautified) image into an MP4 file.

By default, what we draw in GLSurfaceView is displayed on the screen. In practice, however, many scenarios do not need on-screen rendering at all.


The ScreenFilter we encapsulated earlier previews the camera data: it reads the shader files and draws onto the canvas.
ScreenFilter's awkward situation:

  1. Beauty effect enabled: use sampler2D;
  2. Effect disabled: use samplerExternalOES.

First of all, be clear that OpenGL is procedural and works as a state machine.

Note: we must manually release the resources we create.


FBO (Frame Buffer Object) enables off-screen rendering.
By using an FBO, OpenGL can redirect its render output to the FBO instead of the framebuffer of the GLSurfaceView window:

  1. CameraFilter uses samplerExternalOES and outputs to an FBO;
  2. Subsequent processing (e.g. beauty) uses sampler2D and outputs to an FBO;
  3. ScreenFilter uses sampler2D and outputs to the display window.

A review of MediaCodec

Before solving the problem above, let's review the MediaCodec used in our earlier course. MediaCodec is the Java encoding/decoding API provided by the Android SDK.

MediaCodec maintains two internal buffer queues: an input queue and an output queue. We only need to submit the byte data to be encoded via queueInputBuffer, and retrieve the encoded byte data from the output queue via dequeueOutputBuffer.
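The two-queue flow above can be sketched conceptually. This is a plain-Java model of the idea only, not the real MediaCodec API; `TwoQueueModel` and its `encode` placeholder are invented for illustration (real encoding happens inside the codec):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Conceptual model of MediaCodec's two buffer queues (NOT the real API):
// the client pushes raw frames into an input queue and pulls encoded
// frames from an output queue.
class TwoQueueModel {
    private final Queue<byte[]> inputQueue = new ArrayDeque<>();
    private final Queue<byte[]> outputQueue = new ArrayDeque<>();

    // Analogous to queueInputBuffer: hand a raw frame to the codec
    void queueInput(byte[] rawFrame) {
        inputQueue.add(rawFrame);
    }

    // The codec internally drains the input queue and fills the output queue
    void process() {
        byte[] raw;
        while ((raw = inputQueue.poll()) != null) {
            outputQueue.add(encode(raw));
        }
    }

    // Analogous to dequeueOutputBuffer: take an encoded frame, or null if none
    byte[] dequeueOutput() {
        return outputQueue.poll();
    }

    // Placeholder "encoder": the real compression happens inside MediaCodec
    private byte[] encode(byte[] raw) {
        return raw.clone();
    }
}
```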


As described at the beginning, we can't get the raw byte array of the image. What should we do?

Surface surface = mMediaCodec.createInputSurface();

With the call above we obtain a Surface created by the encoder. Anything drawn onto this Surface is encoded automatically. So the problem becomes: how do we draw the image processed by OpenGL onto this Surface?
The answer: with EGL.

EGL environment construction

If OpenGL ES is the brush, the Android Surface is the canvas: we use OpenGL ES to draw images onto the Surface. However, OpenGL can't draw directly on an Android Surface; it needs an EGLSurface to associate with the Android Surface.
See the official EGLSurface documentation.

GLSurfaceView creates an EGL context for us internally, but that context is bound to the Surface GLSurfaceView uses for display, which renders to the actual phone screen. We need OpenGL ES to draw the image into an EGLSurface associated with mMediaCodec.createInputSurface(). The relationship looks like this:

(Figure: how EGL gets the image)

An EGLSurface can be an off-screen buffer allocated by EGL or a window allocated by the operating system. It can be bound to a Surface, letting OpenGL draw to the EGLSurface, which is equivalent to drawing on the Surface.

We need to build an EGL environment in a separate thread, create an EGLSurface, and bind it to MediaCodec's Surface in order to draw images onto the Surface to be encoded.

Take a look at the GLSurfaceView source code:


NativeWindow is the Surface.

We build the EGL environment on a dedicated thread to draw images to the Surface to be encoded. The whole process:

//Create the OpenGL environment on a dedicated thread
        HandlerThread handlerThread = new HandlerThread("codec-gl");
        handlerThread.start();
        mHandler = new Handler(handlerThread.getLooper());
        mHandler.post(new Runnable() {
            @Override
            public void run() {
                //Create the EGL environment
                eglEnv = new EGLEnv(mContext, mGlContext, mSurface, mWidth, mHeight);
                isStart = true;
            }
        });

You can refer to the GLSurfaceView source code.

1. Create display

//Obtain the default display as OpenGL's drawing target
        mEglDisplay = EGL14.eglGetDisplay(EGL14.EGL_DEFAULT_DISPLAY);
        if (mEglDisplay == EGL14.EGL_NO_DISPLAY) {
            throw new RuntimeException("eglGetDisplay failed");
        }

        //Initialize the display
        int[] version = new int[2];
        if (!EGL14.eglInitialize(mEglDisplay, version, 0, version, 1)) {
            throw new RuntimeException("eglInitialize failed");
        }
  2. Configure the display
//Configure attribute options
        int[] configAttribs = {
                EGL14.EGL_RED_SIZE, 8,   // bits of red in the color buffer
                EGL14.EGL_GREEN_SIZE, 8, // bits of green in the color buffer
                EGL14.EGL_BLUE_SIZE, 8,  // bits of blue in the color buffer
                EGL14.EGL_ALPHA_SIZE, 8, // bits of alpha in the color buffer
                EGL14.EGL_RENDERABLE_TYPE, EGL14.EGL_OPENGL_ES2_BIT, // OpenGL ES 2.0
                EGL14.EGL_NONE
        };
        int[] numConfigs = new int[1];
        EGLConfig[] configs = new EGLConfig[1];
        //Let EGL choose a configuration matching the attributes
        if (!EGL14.eglChooseConfig(mEglDisplay, configAttribs, 0, configs, 0, configs.length,
                numConfigs, 0)) {
            throw new RuntimeException("EGL error " + EGL14.eglGetError());
        }
        mEglConfig = configs[0];
  3. Create the EGL context
        //EGL context: sharing with the GLSurfaceView's EGLContext (mGlContext)
        //is what lets us access the processed image texture here.
        int[] context_attrib_list = {
                EGL14.EGL_CONTEXT_CLIENT_VERSION, 2,
                EGL14.EGL_NONE
        };
        mEglContext = EGL14.eglCreateContext(mEglDisplay, mEglConfig, mGlContext,
                context_attrib_list, 0);
        if (mEglContext == EGL14.EGL_NO_CONTEXT) {
            throw new RuntimeException("EGL error " + EGL14.eglGetError());
        }
  4. Create the EGLSurface (associated with the Surface)
        //Create the EGLSurface bound to MediaCodec's input Surface
        int[] surface_attrib_list = {
                EGL14.EGL_NONE
        };
        mEglSurface = EGL14.eglCreateWindowSurface(mEglDisplay, mEglConfig, surface,
                surface_attrib_list, 0);
        if (mEglSurface == null || mEglSurface == EGL14.EGL_NO_SURFACE) {
            throw new RuntimeException("EGL error " + EGL14.eglGetError());
        }

        //Bind the display, surface and context to the current thread
        if (!EGL14.eglMakeCurrent(mEglDisplay, mEglSurface, mEglSurface, mEglContext)) {
            throw new RuntimeException("EGL error " + EGL14.eglGetError());
        }
  5. Draw
public void draw(int textureId, long timestamp) {
        //Render the shared texture onto the EGLSurface (e.g. via the filter's onDraw)
        //EGLSurface is double-buffered: set the frame's presentation time, then swap
        EGLExt.eglPresentationTimeANDROID(mEglDisplay, mEglSurface, timestamp);
        EGL14.eglSwapBuffers(mEglDisplay, mEglSurface);
}

As long as we draw into the EGLSurface, MediaCodec can encode the drawn image, because the EGLSurface is associated with MediaCodec's Surface.
However, once encoding is done, the encoded data still needs to be muxed (encapsulated) to produce an MP4 file.
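One detail of muxing: each encoded frame must carry a monotonically increasing presentation timestamp in microseconds (this is what the `timestamp` passed to `draw()` above feeds). For a constant frame rate the conversion is simple; `PtsHelper` below is a hypothetical helper, not part of the original source:

```java
// Hypothetical helper: convert a frame index at a given frame rate into a
// presentation timestamp in microseconds, as the encoder/muxer expect.
class PtsHelper {
    static long presentationTimeUs(long frameIndex, int fps) {
        // 1 second = 1_000_000 microseconds
        return frameIndex * 1_000_000L / fps;
    }
}
```

For example, at 30 fps the 30th frame lands exactly at the one-second mark.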

(Figure: OpenGL camera acquisition)

Double buffering, in plain terms, means two canvases: while canvas 1 is displayed, canvas 2 is being drawn; then they swap, so canvas 2 is displayed while canvas 1 is drawn in the background, and so on.
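The swap described above can be modeled in a few lines of plain Java. This is a conceptual sketch only (the `DoubleBuffer` class is invented for illustration; in EGL the swap is `eglSwapBuffers`):

```java
// Minimal simulation of double buffering: the "front" buffer is on screen
// while the "back" buffer is being drawn to; swap() plays the role of
// eglSwapBuffers.
class DoubleBuffer {
    private int[] front = new int[4]; // currently displayed
    private int[] back  = new int[4]; // currently being drawn

    // Drawing always targets the back buffer, never the visible one
    void drawPixel(int i, int color) {
        back[i] = color;
    }

    // Exchange the two buffers, making the freshly drawn frame visible
    void swap() {
        int[] tmp = front;
        front = back;
        back = tmp;
    }

    int displayedPixel(int i) {
        return front[i];
    }
}
```

Nothing drawn to the back buffer is visible until `swap()` is called, which is exactly why a frame only reaches the encoder after `eglSwapBuffers`.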

For the complete recording implementation, refer to the source code.