How TikTok and QQ video calls achieve video echo display

Time:2020-3-22


First of all, why does this article exist?
In 2014, Lenovo built a short-video app called "Magic Show" that was basically what TikTok is today. But because the "Magic Show" app was born inside Lenovo, a hardware-first company, it was never destined to grow into a towering tree there, and it ended after only one version.
At that time I designed and implemented the video echo module of the "Magic Show" app, and that is where this article comes from.
Many years later I am reorganizing it, because the technique is still not out of date; on the contrary, it is widely used.

This article was originally titled "YUV420 to RGB in OpenGL ES", a purely technical name. While reorganizing it, I realized that the title alone doesn't tell readers what the technique is actually for, so I changed it to something more eye-catching.

The "YUV420 to RGB in OpenGL ES" technique is mainly about echoing video images efficiently while saving bandwidth.

  • Why efficient?
    Because it is implemented directly with OpenGL ES, it bypasses Android's layer-by-layer encapsulation;
    moreover, OpenGL is itself a graphics interface and is naturally efficient.
  • Why does it save bandwidth?
    Because YUV420, the data format used for network transmission, is a lossy format. Thanks to the characteristics of the format, however, the color loss has almost no visible effect on the displayed image, so it is widely used in video call scenarios.

Below, the "YUV420 to RGB in OpenGL ES" technique is described in the following aspects:

  • First, understand the concept of "grayscale"
  • The YUV data format
  • YUV444 and YUV420
  • YUV420 to RGB
  • YUV420p to RGB in OpenGL ES

1、 First, understand the concept of “grayscale”

Let's first understand the concept of the gray value Y. Have you ever seen an old black-and-white TV?
The image on an old black-and-white TV has only one channel, Y, so what it displays is a grayscale image. Later, color TV adopted the YUV signal, which remained compatible with old black-and-white TVs while displaying color images on new color sets. (Our predecessors were impressive...)

  • Definition of grayscale
  • Calculation formula between the gray value and RGB
  • Implementing a "color image to grayscale image" shader

1.1 Definition of a grayscale image:

The range between white and black is divided, according to a logarithmic relationship, into a number of levels called gray levels; the gray scale is typically divided into 256 levels.

1.2 Calculation formula between the gray value Y and RGB:

Y = 0.299*R + 0.587*G + 0.114*B

When a TV station broadcasts a signal, the RGB data is converted into Y data, and an old black-and-white TV can display the image as soon as it receives the Y signal.
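As a quick sanity check of that formula, here is a minimal Java sketch (the class and method names are mine, purely for illustration):

```java
public class Gray {
    // Gray value per the formula Y = 0.299*R + 0.587*G + 0.114*B,
    // with R, G, B in the 0..255 range.
    public static int rgbToGray(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }

    public static void main(String[] args) {
        // The coefficients sum to exactly 1.0, so white maps to white
        // and black maps to black.
        System.out.println(rgbToGray(255, 255, 255)); // 255
        System.out.println(rgbToGray(0, 0, 0));       // 0
    }
}
```

Note that the three coefficients sum to 1.0, so a neutral gray input keeps its brightness after conversion.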

1.3 Implementing a "color image to grayscale image" shader

Here is a concrete implementation: in OpenGL ES, how do we turn a color texture image into a grayscale image with a fragment shader?

The effect is as follows:

Color image

Grayscale image after conversion

Of course, the conversion uses the RGB-to-Y formula above. Next, let's look at the concrete fragment shader implementation:

//Fragment shader
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D sTexture;

void main() {
    //Read the RGB color of the current fragment from the texture
    vec4 color = texture2D(sTexture, vTextureCoord);
    //Compute the gray value with the formula
    float col = color.r*0.299 + color.g*0.587 + color.b*0.114;
    //Write the resulting gray value Y back to all three RGB channels
    color.r = col;
    color.g = col;
    color.b = col;
    //Output the fragment color
    gl_FragColor = color;
}

I added comments at the key points of the shader implementation.
Readers who know GLSL syntax can study the code above carefully;
readers who don't can just skim it (GLSL is a C-like language, so anyone who has learned C should be able to follow it).

2、 YUV data format

Above we covered how a grayscale image works; here we introduce the YUV data format.
It is divided into the following parts:

  • YUV definition
  • YUV and RGB conversion formulas
  • Advantages of YUV
  • YUV444 and YUV420

2.1 YUV

YUV is defined as follows:

Y: the gray value (luminance);
UV: used to specify the color of a pixel (chrominance).

It doesn't matter if UV still seems a little confusing; let's keep reading.

2.2 Conversion formulas between YUV and RGB

// RGB to YUV
Y= 0.299*R + 0.587*G + 0.114*B
U= -0.147*R - 0.289*G + 0.436*B = 0.492*(B- Y)
V= 0.615*R - 0.515*G - 0.100*B = 0.877*(R- Y)
//############################################
// YUV to RGB
R = Y + 1.140*V
G = Y - 0.394*U - 0.581*V
B = Y + 2.032*U
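To see that these two formula sets really are inverses (up to rounding of the coefficients), here is a small Java sketch of the round trip; the class and method names are mine:

```java
public class YuvRgb {
    // RGB -> YUV, per the formulas above (components normalized to 0..1)
    public static double[] rgbToYuv(double r, double g, double b) {
        double y = 0.299 * r + 0.587 * g + 0.114 * b;
        double u = 0.492 * (b - y);
        double v = 0.877 * (r - y);
        return new double[] { y, u, v };
    }

    // YUV -> RGB, per the formulas above
    public static double[] yuvToRgb(double y, double u, double v) {
        double r = y + 1.140 * v;
        double g = y - 0.394 * u - 0.581 * v;
        double b = y + 2.032 * u;
        return new double[] { r, g, b };
    }

    public static void main(String[] args) {
        // Converting a pixel to YUV and back reproduces it up to a small
        // rounding error, because the published coefficients are truncated.
        double[] yuv = rgbToYuv(0.8, 0.4, 0.2);
        double[] rgb = yuvToRgb(yuv[0], yuv[1], yuv[2]);
        System.out.printf("%.3f %.3f %.3f%n", rgb[0], rgb[1], rgb[2]);
    }
}
```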

2.3 advantages of YUV:

  • The transmitted signal is backward compatible with old black-and-white TVs (YUV was adopted to optimize the transmission of color video signals while keeping them displayable on black-and-white sets)
  • YUV420 takes up less bandwidth (we mentioned this earlier; the specific reason is explained in detail below)

Compared with transmitting RGB video signals, the biggest advantage of YUV420 is that it needs only a fraction of the bandwidth (introduced below).

2.4 YUV444 and YUV420

The YUV data we have been discussing is actually the YUV420 format. YUV420 is lossy in the UV color channels, while YUV444 is a lossless format.
So why talk about the YUV444 format here?
Because it prepares us for restoring YUV420 data to RGB later.

First, the YUV444 data format:

  • YUV444:
    One pixel corresponds to one Y, one U, and one V (Y, U, and V in one-to-one correspondence).
    The YUV444 data format is as follows:

YUV444 data format

In YUV444 the Y, U, and V channels correspond one to one, which is easy to understand. Below is the YUV420 data format, in which the UV data is lossy.

  • YUV420:
    One pixel corresponds to one Y;
    every four pixels share one U and one V.

The specific data format is as follows:

YUV420 data format

As the figure above shows, the UV color channels are lossy, which is exactly why YUV420 takes up less bandwidth:

a. Y, U, and V no longer correspond one to one, so the image loses some color information;
b. that is why it takes up less bandwidth;
c. for the same network transmission, it also consumes less traffic;
d. but it has little visible effect on the color of the image.

Because it uses less traffic and barely affects color display, it is widely used in video call, video echo, and similar scenarios.
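A quick back-of-the-envelope calculation in Java makes the saving concrete (method names are mine): an RGB frame stores 3 bytes per pixel, while YUV420 stores 1 byte of Y per pixel plus one U and one V byte per 2x2 block.

```java
public class FrameSize {
    // RGB24: 3 bytes per pixel
    public static int rgbBytes(int w, int h) {
        return w * h * 3;
    }

    // YUV420: 1 byte of Y per pixel, plus one U and one V byte
    // for every 4 pixels -> w*h * 3/2 bytes in total
    public static int yuv420Bytes(int w, int h) {
        return w * h + (w * h) / 4 + (w * h) / 4;
    }

    public static void main(String[] args) {
        // A 1280x720 frame: YUV420 needs exactly half the bytes of RGB.
        System.out.println(rgbBytes(1280, 720));    // 2764800
        System.out.println(yuv420Bytes(1280, 720)); // 1382400
    }
}
```

So for the same video stream, YUV420 halves the traffic compared to raw RGB before any compression is even applied.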

3、 YUV420 to RGB

Phew, the basics are finally done. Now for our core technique: YUV420 to RGB

  • Step 1: convert YUV420 to YUV444;
  • Step 2: convert YUV444 to RGB.

Why convert YUV420 to YUV444 first?
Because the UV channel data of YUV420 is discarded during transmission. When rendering, we first restore the missing UV color data, and then convert the resulting YUV444 to RGB.

First, YUV420 to YUV444.

3.1 YUV420 to YUV444

To convert YUV420 to YUV444, the vacant "?" entries in the U and V planes of the YUV420 figure above need to be filled in.

Using the U and V samples that do exist in the YUV420 data, the vacant positions are filled in by interpolation. The interpolation formulas are as follows (it is recommended to keep the YUV420 data format diagram at hand, otherwise this is easy to mix up):

u01 = (u00 + u02) / 2; // use the existing u00 and u02 to calculate u01
u10 = (u00 + u20) / 2; // use the existing u00 and u20 to calculate u10
u11 = (u00 + u02 + u20 + u22) / 4; // use the existing u00, u02, u20, u22 to calculate u11

//######################
v01 = (v00 + v02) / 2; // use the existing v00 and v02 to calculate v01
v10 = (v00 + v20) / 2; // use the existing v00 and v20 to calculate v10
v11 = (v00 + v02 + v20 + v22) / 4; // use the existing v00, v02, v20, v22 to calculate v11

After applying the formulas above, YUV420 has been converted to YUV444 (the data is now complete). Next: how to convert YUV444 to RGB.
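For illustration, the interpolation above can be sketched on the CPU as a 2x chroma-plane upsampler in Java (class and method names are mine; in the real renderer described later, the GPU's linear texture sampling does this work for free):

```java
public class Upsample {
    // Upsamples a subsampled chroma plane (e.g. the U plane of YUV420)
    // to full resolution using the averaging formulas above. Known
    // samples sit at even (row, col) positions of the output; indices
    // are clamped at the edges.
    public static int[][] upsample2x(int[][] small) {
        int h = small.length, w = small[0].length;
        int[][] full = new int[h * 2][w * 2];
        for (int y = 0; y < h * 2; y++) {
            for (int x = 0; x < w * 2; x++) {
                int y0 = y / 2, x0 = x / 2;
                int y1 = Math.min(y0 + (y % 2), h - 1); // next known row, clamped
                int x1 = Math.min(x0 + (x % 2), w - 1); // next known column, clamped
                // Even/even: the known sample itself. Even/odd or odd/even:
                // average of 2 neighbors. Odd/odd: average of 4 neighbors.
                // Thanks to the clamping, all cases collapse into one formula.
                full[y][x] = (small[y0][x0] + small[y0][x1]
                            + small[y1][x0] + small[y1][x1]) / 4;
            }
        }
        return full;
    }
}
```

The same routine works for both the U and the V plane.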

3.2 YUV444 to RGB

YUV444 to RGB has a ready-made formula that we can use directly. The YUV-to-RGB formula:

R = Y + 1.140*V
G = Y - 0.394*U - 0.581*V
B = Y + 2.032*U

That is the formula; so what does the concrete code implementation look like?

Note:
Sections 1, 2, and 3 above introduced the basic principles of YUV to RGB; the following is the concrete implementation.

4、 YUV420p to RGB in OpenGL ES

This section presents the concrete technical implementation, but first we still need to introduce two data formats (hey, I know you're tired. I'm tired too, but it has to be said):

  • The YUV420p data format
  • The YUV420sp data format (YUV420sp to RGB is not covered here)
  • YUV420p to RGB

4.1 The YUV420p data format

The YUV420p data format is laid out as follows (a single byte[]):

YUV420p

4/6 of the data is Y; 1/6 is U; 1/6 is V.
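From this layout the offsets of the three planes inside the byte[] follow directly. A small Java sketch (names are mine) that matches the buffer.position(...) calls in the binding code later in this article:

```java
public class Yuv420pLayout {
    // Y plane: w*h bytes, starting at offset 0
    public static int yOffset(int w, int h) { return 0; }

    // U plane: (w/2)*(h/2) bytes, right after the Y plane
    public static int uOffset(int w, int h) { return w * h; }

    // V plane: (w/2)*(h/2) bytes, right after the U plane
    public static int vOffset(int w, int h) { return w * h * 5 / 4; }

    // Total frame size: w*h * 3/2 bytes
    public static int totalBytes(int w, int h) { return w * h * 3 / 2; }
}
```

These are exactly the positions used below when slicing byte[] frameData into the three textures.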

4.2 The YUV420sp data format (YUV420sp to RGB is not covered here)

The YUV420sp data format is shown in the following figure (a single byte[]):

YUV420sp

Here too, 4/6 of the data is Y; 1/6 is U and 1/6 is V, with the U and V bytes stored interleaved.

4.3 YUV420p to RGB

The general approach is:

  • Take the Y, U, and V planes out of the YUV420 data and generate three texture maps from them
  • Using the fact that the fragment shader runs once per fragment, convert the YUV420 data to YUV444 data
  • From the YUV444 data, sample the one-to-one corresponding Y, U, and V values
  • Finally, convert YUV444 to RGB
  • Done

Here are three texture map renderings of YUV:

The three YUV texture maps

YUV420 to YUV444

How do we fill in the missing UV color data in the YUV420 data?

Here is a neat trick:
when OpenGL ES generates the textures, linear texture sampling is used. Linear sampling interpolates the missing "?" color values in the U and V textures automatically. In this way we obtain one-to-one corresponding YUV444 data.

The corresponding Java code is as follows:

/**
 * Uploads one YUV420p frame (byte[] frameData) as three textures: Y, U, and V.
 *
 * @param frameWidth  frame width in pixels
 * @param frameHeight frame height in pixels
 * @param frameData   raw YUV420p frame data
 * @param textureY    texture id for the Y plane
 * @param textureU    texture id for the U plane
 * @param textureV    texture id for the V plane
 * @param isUpdate    whether to update existing textures (true) or create them (false)
 */
    public static boolean bindYUV420pTexture(int frameWidth, int frameHeight,
            byte frameData[], int textureY, int textureU, int textureV,
            boolean isUpdate) {

        if (frameData == null || frameData.length == 0) {
            return false;
        }
        Log.d(TAG, "----bindYUV420pTexture-----");

        if (isUpdate == false) {

            /**
             *Data buffer
             */
            // Y
            ByteBuffer buffer = LeBuffer.byteToBuffer(frameData);
            // GLES20.glActiveTexture(GLES20.GL_TEXTURE0);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureY);

            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);// GL_LINEAR_MIPMAP_NEAREST
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

            /**
             * glTexImage2D parameters:
             * target: the target texture, must be GL_TEXTURE_2D;
             * level: level of detail, 0 is the base image, n is the nth mipmap level;
             * internalformat: the color components of the texture, one of GL_ALPHA,
             * GL_RGB, GL_RGBA, GL_LUMINANCE, ...;
             * width / height: width and height of the texture image;
             * border: width of the border;
             * format: color format of the pixel data (same options as internalformat);
             * type: data type of the pixel data, one of GL_UNSIGNED_BYTE,
             * GL_UNSIGNED_SHORT_5_6_5, GL_UNSIGNED_SHORT_4_4_4_4,
             * GL_UNSIGNED_SHORT_5_5_5_1;
             * pixels: the image data in memory.
             */
            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
                    frameWidth, frameHeight, 0, GLES20.GL_LUMINANCE,
                    GLES20.GL_UNSIGNED_BYTE, buffer);

            /**
             * 
             */
            // U
            buffer.clear();
            buffer = LeBuffer.byteToBuffer(frameData);
            buffer.position(frameWidth * frameHeight);
            //
            // GLES20.glActiveTexture(GLES20.GL_TEXTURE1);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureU);

            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);// GL_LINEAR_MIPMAP_NEAREST
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
                    frameWidth / 2, frameHeight / 2, 0, GLES20.GL_LUMINANCE,
                    GLES20.GL_UNSIGNED_BYTE, buffer);

            /**
             * 
             */
            // V
            buffer.clear();
            buffer = LeBuffer.byteToBuffer(frameData);
            buffer.position(frameWidth * frameHeight * 5 / 4);
            //
            // GLES20.glActiveTexture(GLES20.GL_TEXTURE2);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureV);

            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);// GL_LINEAR_MIPMAP_NEAREST
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_WRAP_S, GLES20.GL_CLAMP_TO_EDGE);
            GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                    GLES20.GL_TEXTURE_WRAP_T, GLES20.GL_CLAMP_TO_EDGE);

            GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_LUMINANCE,
                    frameWidth / 2, frameHeight / 2, 0, GLES20.GL_LUMINANCE,
                    GLES20.GL_UNSIGNED_BYTE, buffer);

        } else {
            /**
             * Y
             */
            ByteBuffer buffer = LeBuffer.byteToBuffer(frameData);
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureY);
            GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0, frameWidth,
                    frameHeight, GLES20.GL_LUMINANCE, GLES20.GL_UNSIGNED_BYTE,
                    buffer);

            /**
             * U
             */
            //
            buffer.clear();
            buffer = LeBuffer.byteToBuffer(frameData);
            buffer.position(frameWidth * frameHeight);
            //
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureU);
            GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
                    frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE,
                    GLES20.GL_UNSIGNED_BYTE, buffer);
            /**
             * V
             */
            //
            buffer.clear();
            buffer = LeBuffer.byteToBuffer(frameData);
            buffer.position(frameWidth * frameHeight * 5 / 4);
            //
            GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, textureV);
            GLES20.glTexSubImage2D(GLES20.GL_TEXTURE_2D, 0, 0, 0,
                    frameWidth / 2, frameHeight / 2, GLES20.GL_LUMINANCE,
                    GLES20.GL_UNSIGNED_BYTE, buffer);
        }
        return true;
    }

Code description:
The code above converts the incoming frame data, byte[] frameData, into three texture maps.
The buffer.position(...) calls select the Y, U, and V sections of byte[] frameData: Y starts at offset 0, U at frameWidth * frameHeight, and V at frameWidth * frameHeight * 5 / 4.
The glTexParameteri calls with GL_LINEAR set the sampling mode of the U and V textures to linear sampling.
When this code finishes, three texture images have been generated in memory.
These three textures are then passed into the fragment shader for the next step.

YUV444 to RGB

Now that one-to-one corresponding YUV textures are available, here is how YUV444 to RGB is implemented:

Following the YUV-to-RGB formula, sample the one-to-one corresponding Y, U, and V values and convert them to RGB to generate each pixel.

The corresponding fragment shader implementation:

precision mediump float;
//The three YUV textures passed into the fragment shader
uniform sampler2D sTexture_y;
uniform sampler2D sTexture_u;
uniform sampler2D sTexture_v;

varying vec2 vTextureCoord;

//Shader implementation of YUV to RGB
void getRgbByYuv(in float y, in float u, in float v, inout float r, inout float g, inout float b){  
    //
    y = 1.164*(y - 0.0625);
    u = u - 0.5;
    v = v - 0.5;
    //
    r = y + 1.596023559570*v;
    g = y - 0.3917694091796875*u - 0.8129730224609375*v;
    b = y + 2.017227172851563*u;
}

void main() {
    //
    float r,g,b;
    
    //From the three textures of YUV, one-to-one corresponding YUV data is sampled
    float y = texture2D(sTexture_y, vTextureCoord).r;
    float u = texture2D(sTexture_u, vTextureCoord).r;
    float v = texture2D(sTexture_v, vTextureCoord).r;
    //YUV to RGB
    getRgbByYuv(y, u, v, r, g, b);
    
    //Final color assignment
    gl_FragColor = vec4(r,g,b, 1.0); 
}
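For readers who want to verify the shader's arithmetic off the GPU, here is the same video-range conversion as getRgbByYuv, written as plain Java (the class name is mine):

```java
public class ShaderMath {
    // Same math as getRgbByYuv in the fragment shader: inputs are the
    // normalized (0..1) texture samples; Y is video-range, so 0.0625
    // (16/256) is black, and U/V are centered around 0.5.
    public static double[] yuvToRgb(double y, double u, double v) {
        y = 1.164 * (y - 0.0625);
        u = u - 0.5;
        v = v - 0.5;
        double r = y + 1.596023559570 * v;
        double g = y - 0.3917694091796875 * u - 0.8129730224609375 * v;
        double b = y + 2.017227172851563 * u;
        return new double[] { r, g, b };
    }

    public static void main(String[] args) {
        // Video-range black (y = 0.0625, neutral chroma) maps to RGB (0,0,0);
        // neutral chroma always yields a gray pixel (r == g == b).
        double[] rgb = yuvToRgb(0.0625, 0.5, 0.5);
        System.out.printf("%.3f %.3f %.3f%n", rgb[0], rgb[1], rgb[2]);
    }
}
```

Note these coefficients differ slightly from the formula in section 3.2: the shader assumes video-range Y (16..235 scaled to 0..1), which is why it first expands Y with 1.164 * (y - 0.0625).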

5、 That's it

The source code is honestly too messy for me to organize, so: understand the principle and implement it yourself. Don't ask me for the code!!!
The source code is honestly too messy for me to organize, so: understand the principle and implement it yourself. Don't ask me for the code!!!
The source code is honestly too messy for me to organize, so: understand the principle and implement it yourself. Don't ask me for the code!!!

========== THE END ==========

