Explaining texture rendering of OpenGL ES in iOS from scratch

Time: 2020-12-24


This article introduces how to use OpenGL ES to render an image. It covers the basic concepts, how to render a texture with GLKit, and how to render a texture with GLSL.

Preface

OpenGL (Open Graphics Library) is developed and maintained by the Khronos Group, a graphics software and hardware industry consortium focused on open standards for graphics and multimedia. It is hardware independent and mainly defines a set of API functions for operating on graphics and images; OpenGL itself is only a specification, not an implementation of that API.

OpenGL ES (OpenGL for Embedded Systems) is a subset of OpenGL designed for embedded devices such as mobile phones, PDAs, and game consoles. This specification is also developed and maintained by the Khronos Group.

OpenGL ES removes non-essential features such as quadrilaterals (GL_QUADS) and polygons (GL_POLYGONS), keeping only the core functionality. It can be understood as a streamlined specification for mobile platforms that supports the most basic features of OpenGL.

Currently, the iOS platform supports OpenGL ES 1.0, 2.0, and 3.0. OpenGL ES 3.0 adds some new features, but it requires iOS 7.0 or above and at least an iPhone 5S. Considering the devices in use, this article mainly uses OpenGL ES 2.0.

Note: In the rest of this article, OpenGL ES refers to OpenGL ES 2.0.

1、 Concept

1. What is a buffer

OpenGL ES runs partly on the CPU and partly on the GPU, and buffers are the bridge between the two. The CPU and the GPU each control their own memory areas; buffers avoid copying data back and forth between these two areas and thus improve efficiency. A buffer is essentially a continuous block of RAM.

2. Meaning of texture rendering

A texture is a buffer that stores the color values (texels) of an image. Rendering is the process of generating an image from data. Texture rendering is therefore the process of generating an image from the color values and other data stored in memory.

3. Coordinate system

1. OpenGL ES coordinate system

[Figure: the OpenGL ES coordinate system]

The OpenGL ES coordinate system is a three-dimensional coordinate system, usually represented by the X, Y, and Z axes. The positive Z axis points out of the screen. Ignoring the Z axis, the lower left corner is (-1, -1, 0) and the upper right corner is (1, 1, 0).

2. Texture coordinate system

[Figure: the texture coordinate system]

The texture coordinate system is a two-dimensional coordinate system. The horizontal axis is called the S axis and the vertical axis the T axis. The horizontal coordinate of a point is usually denoted U and the vertical coordinate V. The lower left corner is (0, 0) and the upper right corner is (1, 1).

Note: The (0, 0) point of the UIKit coordinate system is in the upper left corner, so its vertical axis points in the opposite direction to that of the texture coordinate system.

4. Texture-related concepts

  • Texel: when an image is initialized into a texture buffer, each pixel becomes a texel. Texture coordinates range from 0 to 1, and within this unit length any number of texels may fit.
  • Rasterizing: the rendering step that converts geometry data into fragments.
  • Fragment: a colored pixel in viewport coordinates. When no texture is used, the fragment's color is computed from the object's vertices; when a texture is used, it is computed from texels.
  • Mapping: how vertices and texels are aligned, i.e. which texture coordinate (U, V) corresponds to each vertex coordinate (X, Y, Z).
  • Sampling: after the vertices are fixed, each fragment looks up the corresponding texel according to its computed (U, V) coordinates.
  • Frame buffer: a buffer that receives the rendering result; it specifies the area where the GPU stores that result. More loosely, it can be understood as the area holding the frame that is finally displayed on the screen.

Note: (U, V) coordinates may fall outside the range 0 to 1; glTexParameteri() must be used to configure how such coordinates are mapped back onto the S and T axes (the wrap mode).
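
As an illustration, the wrap mode might be configured like this (a minimal sketch; GL_CLAMP_TO_EDGE is what the demo below uses, GL_REPEAT and GL_MIRRORED_REPEAT are the other common options, and a texture is assumed to already be bound to GL_TEXTURE_2D):

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // wrap mode for the S axis
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE); // wrap mode for the T axis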

5. How to use buffers

In practice we need several kinds of buffers. For example, before texture rendering we need to generate a texture buffer that holds the image data. The general steps of buffer management are as follows.

The process of using a buffer can be divided into 7 steps:

  1. Generate: create a buffer identifier — glGenBuffers()
  2. Bind: bind a buffer for the following operations — glBindBuffer()
  3. Buffer data: copy data from CPU memory into the buffer — glBufferData() / glBufferSubData()
  4. Enable or disable: set whether the buffered data will be used in the next rendering — glEnableVertexAttribArray() / glDisableVertexAttribArray()
  5. Set pointers: tell OpenGL ES the type of the buffered data and the offset of each attribute — glVertexAttribPointer()
  6. Draw: draw with the buffered data — glDrawArrays() / glDrawElements()
  7. Delete: delete the buffer and free the resource — glDeleteBuffers()

These 7 steps are very important; keep them in mind for now, as we will use them again and again in the examples later.
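
Purely as a preview, a bare-bones vertex buffer lifecycle might look like the sketch below; vertices is assumed to be a plain GLfloat array of three (X, Y, Z) vertices and positionSlot an attribute location obtained elsewhere, and the GLKit example later goes through the same calls in a real context:

GLuint vertexBuffer;
glGenBuffers(1, &vertexBuffer);                                            // 1. generate
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);                               // 2. bind
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW); // 3. buffer data
glEnableVertexAttribArray(positionSlot);                                   // 4. enable
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), NULL); // 5. set pointer
glDrawArrays(GL_TRIANGLES, 0, 3);                                          // 6. draw
glDeleteBuffers(1, &vertexBuffer);                                         // 7. delete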

6. The OpenGL ES context

OpenGL ES is a state machine; its configuration is stored in a context, and those values persist until they are modified. We can also create multiple contexts and switch between them by calling [EAGLContext setCurrentContext:context].
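
For example, creating an OpenGL ES 2.0 context and making it current only takes two calls (the same calls appear in the GLKit example below):

EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
// With several contexts, switching is just another setCurrentContext: call.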

7. Primitives in OpenGL ES

Primitives are the basic shapes that OpenGL ES can render. OpenGL ES supports only three kinds of primitives: points, line segments, and triangles. Complex shapes have to be built by rendering multiple triangles.

8. How to render triangles

[Figure: the OpenGL ES rendering pipeline]

The basic process of rendering a triangle is shown in the figure above. The vertex shader and the fragment shader are the programmable parts. A shader is a small program that runs on the GPU and is compiled dynamically while the main program is running, rather than being hard-coded. Shaders are written in GLSL (OpenGL Shading Language), which we will cover in detail in Section 3.

Here’s what each step of the rendering process does:

1. Vertex data

In order to render a triangle, we need to pass in an array containing three 3D vertex coordinates. Each vertex has its corresponding vertex attribute, which can contain any data we want to use. In the example above, each vertex contains a color value.

Moreover, to let OpenGL ES know that we want to draw triangles rather than points or line segments, we pass the primitive type when issuing the draw call.

2. Vertex shaders

The vertex shader performs an operation on each vertex, and it can use the vertex data to calculate the coordinates, colors, lighting, texture coordinates, and so on.

An important task of the vertex shader is coordinate transformation, for example converting the model's original coordinate system (usually the coordinates from its 3D modeling tool) into the screen coordinate system.
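
As a sketch only (the demo in this article passes coordinates through unchanged), such a transformation in GLSL typically multiplies the vertex position by model, view and projection matrices supplied as uniforms; the matrix names here are placeholders:

attribute vec4 Position;
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;

void main (void) {
    gl_Position = Projection * View * Model * Position;
}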

3. Primitive assembly

After the vertex shader outputs the vertex coordinates, the vertices are assembled into primitives according to the primitive type parameter in the draw call and the vertex index array.

Through this step, the 3D primitives of the model have been converted into 2D primitives on the screen.

4. Geometry shaders

In full (desktop) OpenGL, there is an optional shader stage called the geometry shader.

A geometry shader takes a set of vertices that form a primitive as input, and it can construct new primitives by emitting new vertices, thereby generating other shapes.

At present, OpenGL ES does not support geometry shaders.

5. Rasterization

In the rasterization stage, primitives are converted into fragments for the fragment shader. A fragment represents a pixel that may be rendered to the screen; it carries information such as position, color, and texture coordinates, which is obtained by interpolating the vertex data of the primitive.

Before the fragment shader runs, all pixels outside the view are clipped away to improve efficiency.

6. Fragment shaders

The main function of the fragment shader is to calculate the final color value of each fragment (or discard the fragment). The fragment shader determines the color value of each pixel on the final screen.

7. Testing and blending

In this step, OpenGL ES discards or blends fragments according to whether they are occluded, whether fragments have already been drawn at the same position in the view, and so on. The remaining fragments are written into the frame buffer and finally presented on the device screen.

9. How to render polygons

Since OpenGL ES can only render triangles, a polygon has to be composed of multiple triangles.

[Figure: a pentagon split into three triangles]

As shown in the figure, a pentagon can be divided into three triangles for rendering.

Rendering one triangle needs an array of 3 vertices, which means rendering the pentagon this way needs 9 vertices. Moreover, we can see that V0, V2, and V3 are repeated, which is a bit redundant.

So is there a simpler way to reuse the previous vertices? The answer is yes.

In OpenGL ES there are three drawing modes for triangles. Given the same vertex array, we can specify how the vertices are connected, as shown in the figure below:

[Figure: GL_TRIANGLES, GL_TRIANGLE_STRIP and GL_TRIANGLE_FAN]

1、GL_TRIANGLES

GL_TRIANGLES draws a separate triangle for every three vertices, without reusing any vertex. The first triangle uses V0, V1, V2; the second uses V3, V4, V5; and so on. If the number of vertices is not a multiple of 3, the last one or two vertices are discarded.

2、GL_TRIANGLE_STRIP

GL_TRIANGLE_STRIP reuses the previous two vertices when drawing each new triangle. The first triangle still uses V0, V1, V2; the second uses V1, V2, V3; and so on. The nth uses V(n-1), V(n), V(n+1).

3、GL_TRIANGLE_FAN

GL_TRIANGLE_FAN reuses the first vertex and the previous vertex when drawing each new triangle. The first triangle still uses V0, V1, V2; the second uses V0, V2, V3; and so on. The nth uses V0, V(n), V(n+1). It looks like a fan spread out around V0.
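
For the pentagon above, GL_TRIANGLE_FAN lets us pass just its 5 vertices instead of 9. A minimal sketch (placeholder coordinates; positionSlot and a bound GL_ARRAY_BUFFER are assumed, as in the examples later):

GLfloat pentagon[] = {
     0.0f,  0.8f, 0.0f,   // V0, the vertex the fan spreads around
    -0.8f,  0.2f, 0.0f,   // V1
    -0.5f, -0.8f, 0.0f,   // V2
     0.5f, -0.8f, 0.0f,   // V3
     0.8f,  0.2f, 0.0f,   // V4
};
glBufferData(GL_ARRAY_BUFFER, sizeof(pentagon), pentagon, GL_STATIC_DRAW);
glEnableVertexAttribArray(positionSlot);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), NULL);
glDrawArrays(GL_TRIANGLE_FAN, 0, 5); // draws triangles V0V1V2, V0V2V3, V0V3V4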

2、 Rendering with GLKit

Congratulations on making it through the boring concepts. From here on we move to a concrete example and walk through the rendering process with code.

In GLKit, Apple has wrapped some of the OpenGL ES operations, so rendering with GLKit lets us skip several steps.

You may be curious: what exactly has GLKit done for us in the matter of texture rendering?

Don't worry; we will answer this question after we finish Section 3 on GLSL rendering.

Now, with trepidation and anticipation, let's take a look at how GLKit renders a texture.

1. Get vertex data

Define the vertex data, using a three-dimensional vector to store the (X, Y, Z) coordinates and a two-dimensional vector to store the (U, V) coordinates:

typedef struct {
    GLKVector3 positionCoord; // (X, Y, Z)
    GLKVector2 textureCoord; // (U, V)
} SenceVertex;

Initialize the vertex data:

self.vertices = malloc(sizeof(SenceVertex) * 4); // 4 vertices
    
self.vertices[0] = (SenceVertex){{-1, 1, 0}, {0, 1}}; // upper left corner
self.vertices[1] = (SenceVertex){{-1, -1, 0}, {0, 0}}; // lower left corner
self.vertices[2] = (SenceVertex){{1, 1, 0}, {1, 1}}; // upper right corner
self.vertices[3] = (SenceVertex){{1, -1, 0}, {1, 0}}; // lower right corner

When exiting, remember to release the memory manually:

- (void)dealloc {
    // other code ...
    
    if (_vertices) {
        free(_vertices);
        _vertices = nil;
    }
}

2. Initialize GLKView and set the context

//Create context using version 2.0
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    
//Initialize glkview
CGRect frame = CGRectMake(0, 100, self.view.frame.size.width, self.view.frame.size.width);
self.glkView = [[GLKView alloc] initWithFrame:frame context:context];
self.glkView.backgroundColor = [UIColor clearColor];
self.glkView.delegate = self;
    
[self.view addSubview:self.glkView];
    
//Set the context of glkview to the current context
[EAGLContext setCurrentContext:self.glkView.context];

3. Load texture

Use GLKTextureLoader to load the texture, and use GLKBaseEffect to save the texture ID for later rendering.

NSString *imagePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"sample.jpg"];
UIImage *image = [UIImage imageWithContentsOfFile:imagePath]; 

NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft : @(YES)};
GLKTextureInfo *textureInfo = [GLKTextureLoader textureWithCGImage:[image CGImage]
                                                           options:options
                                                             error:NULL];
self.baseEffect = [[GLKBaseEffect alloc] init];
self.baseEffect.texture2d0.name = textureInfo.name;
self.baseEffect.texture2d0.target = textureInfo.target;

Because the vertical axes of the texture coordinate system and the UIKit coordinate system point in opposite directions, GLKTextureLoaderOriginBottomLeft is set to YES to eliminate the difference between the two coordinate systems.

Note: If you use imageNamed: to read the image here, the texture will appear upside down when the same texture is loaded repeatedly.

4. Implement the GLKView delegate method

In the glkView:drawInRect: delegate method, we implement the rendering logic for the vertex data and texture data. This step is the key point; pay attention to how the "7 steps of buffer management" are used.

The code is as follows:

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    [self.baseEffect prepareToDraw];
    
    //Create vertex buffer
    GLuint vertexBuffer;
    glGenBuffers(1, &vertexBuffer); // step 1: generate
    glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer); // step 2: bind
    GLsizeiptr bufferSizeBytes = sizeof(SenceVertex) * 4;
    glBufferData(GL_ARRAY_BUFFER, bufferSizeBytes, self.vertices, GL_STATIC_DRAW); // step 3: buffer data
    
    //Set vertex data
    glEnableVertexAttribArray(GLKVertexAttribPosition); // step 4: enable
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(SenceVertex), NULL + offsetof(SenceVertex, positionCoord)); // step 5: set pointer
    
    //Set texture data
    glEnableVertexAttribArray(GLKVertexAttribTexCoord0); // step 4: enable
    glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, sizeof(SenceVertex), NULL + offsetof(SenceVertex, textureCoord)); // step 5: set pointer
    
    //Start drawing
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4); // step 6: draw
    
    //Delete vertex buffer
    glDeleteBuffers(1, &vertexBuffer); // step 7: delete
    vertexBuffer = 0;
}

5. Start drawing

Calling GLKView's display method triggers the glkView:drawInRect: callback and starts the rendering logic.

The code is as follows:

[self.glkView display];

At this point, the process of texture rendering with GLKit is complete.

If this still feels too easy, read the next section to learn how to render a texture directly with shaders written in GLSL.

3、 Rendering with GLSL

In this section we explain how to implement texture rendering without GLKit, focusing on the parts that differ from GLKit rendering.

Note: When you look at the demo, you will find that it still imports the <GLKit/GLKit.h> header. This is mainly to use the GLKVector3 and GLKVector2 types; they are not strictly necessary. The purpose is to keep the data format consistent with the GLKit example, so that you can focus on the real differences between the two approaches.

1. Shader writing

First, we need to write our own shaders in GLSL, including a vertex shader and a fragment shader. We will not cover GLSL fully here, only the parts used later; for more detailed syntax, refer to the GLSL documentation.

Create new files: the vertex shader generally has the suffix .vsh and the fragment shader the suffix .fsh (of course you don't have to name them this way, but for the convenience of others it's better to follow this convention), and then write the code.

The code for the vertex shader is as follows:

attribute vec4 Position;
attribute vec2 TextureCoords;
varying vec2 TextureCoordsVarying;

void main (void) {
    gl_Position = Position;
    TextureCoordsVarying = TextureCoords;
}

The code for the fragment shader is as follows:

precision mediump float;

uniform sampler2D Texture;
varying vec2 TextureCoordsVarying;

void main (void) {
    vec4 mask = texture2D(Texture, TextureCoordsVarying);
    gl_FragColor = vec4(mask.rgb, 1.0);
}

GLSL is written much like the C language, so if you have learned C you will pick it up quickly. Here is a brief explanation of the code of these two shaders.

The attribute modifier only exists in vertex shaders and is used for per-vertex input. For example, here we define Position and TextureCoords to receive each vertex's position and texture coordinates.

vec4 and vec2 are data types, referring to a four-dimensional vector and a two-dimensional vector respectively.

The varying modifier marks an output of the vertex shader that is also an input of the fragment shader. It must be declared in both shaders with exactly the same name and type; the fragment shader can then read the value written by the vertex shader.

gl_Position and gl_FragColor are built-in variables. Assigning to them can be understood as outputting the fragment's position and color to the screen.

precision specifies the default precision for a data type; the statement precision mediump float sets the default precision of the float type to mediump.

uniform stores a read-only value passed in from the application; it cannot be modified in either the vertex or the fragment shader. Vertex and fragment shaders share the uniform namespace: uniform variables are declared at global scope, and the same uniform variable is accessible in both shaders.

sampler2D is a texture handle type that holds the texture passed in.

The texture2D() function looks up the corresponding color at the given texture coordinates.

With that, the meaning of the two shaders is clear: the vertex shader passes the input vertex coordinates through unchanged and forwards the texture coordinates to the fragment shader; the fragment shader then samples the color of each fragment from the texture at those coordinates and outputs it to the screen.

2. Loading of texture

Without GLKTextureLoader, we have to generate the texture ourselves. The steps of texture generation are fairly fixed, and below they are wrapped into one method:

- (GLuint)createTextureWithImage:(UIImage *)image {
    //Convert uiimage to cgimageref
    CGImageRef cgImageRef = [image CGImage];
    GLuint width = (GLuint)CGImageGetWidth(cgImageRef);
    GLuint height = (GLuint)CGImageGetHeight(cgImageRef);
    CGRect rect = CGRectMake(0, 0, width, height);
    
    //Drawing pictures
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    void *imageData = malloc(width * height * 4);
    CGContextRef context = CGBitmapContextCreate(imageData, width, height, 8, width * 4, colorSpace, kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGContextTranslateCTM(context, 0, height);
    CGContextScaleCTM(context, 1.0f, -1.0f);
    CGColorSpaceRelease(colorSpace);
    CGContextClearRect(context, rect);
    CGContextDrawImage(context, rect, cgImageRef);

    //Generate texture
    GLuint textureID;
    glGenTextures(1, &textureID);
    glBindTexture(GL_TEXTURE_2D, textureID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData); // write the image data into the texture buffer
    
    //Set how texels are mapped to pixels
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    
    //Unbind the texture
    glBindTexture(GL_TEXTURE_2D, 0);
    
    //Free memory
    CGContextRelease(context);
    free(imageData);
    
    return textureID;
}
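
A quick usage sketch (sample.jpg is the same image as in the GLKit example; the returned textureID is what gets bound and passed to the shader program below):

NSString *imagePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"sample.jpg"];
UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
GLuint textureID = [self createTextureWithImage:image];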

3. Compiling and linking the shaders

Once the shaders are written, we need to compile and link them dynamically at run time. The code for compiling a shader is also fairly fixed; here the shader type determines which file suffix to load. Here is the code:

- (GLuint)compileShaderWithName:(NSString *)name type:(GLenum)shaderType {
    //Find shader file
    NSString *shaderPath = [[NSBundle mainBundle] pathForResource:name ofType:shaderType == GL_VERTEX_SHADER ? @"vsh" : @"fsh"]; // choose the suffix according to the shader type
    NSError *error;
    NSString *shaderString = [NSString stringWithContentsOfFile:shaderPath encoding:NSUTF8StringEncoding error:&error];
    if (!shaderString) {
        NSAssert(NO, @"read shader failed");
        exit(1);
    }
    
    //Create a shader object
    GLuint shader = glCreateShader(shaderType);
    
    //Get the content of the shader
    const char *shaderStringUTF8 = [shaderString UTF8String];
    int shaderStringLength = (int)[shaderString length];
    glShaderSource(shader, 1, &shaderStringUTF8, &shaderStringLength);
    
    //Compile shader
    glCompileShader(shader);
    
    //Query whether the shader is compiled successfully
    GLint compileSuccess;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compileSuccess);
    if (compileSuccess == GL_FALSE) {
        GLchar messages[256];
        glGetShaderInfoLog(shader, sizeof(messages), 0, &messages[0]);
        NSString *messageString = [NSString stringWithUTF8String:messages];
        NSAssert(NO, @"shader compilation failed: %@", messageString);
        exit(1);
    }
    
    return shader;
}

Both the vertex shader and the fragment shader go through this compilation process. After compilation, a shader program needs to be created to link the two shaders together. The code is as follows:

- (GLuint)programWithShaderName:(NSString *)shaderName {
    //Compiling two shaders
    GLuint vertexShader = [self compileShaderWithName:shaderName type:GL_VERTEX_SHADER];
    GLuint fragmentShader = [self compileShaderWithName:shaderName type:GL_FRAGMENT_SHADER];
    
    //Mount the shader to the program
    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    
    //Link program
    glLinkProgram(program);
    
    //Check if the link is successful
    GLint linkSuccess;
    glGetProgramiv(program, GL_LINK_STATUS, &linkSuccess);
    if (linkSuccess == GL_FALSE) {
        GLchar messages[256];
        glGetProgramInfoLog(program, sizeof(messages), 0, &messages[0]);
        NSString *messageString = [NSString stringWithUTF8String:messages];
        NSAssert(NO, @"program link failed: %@", messageString);
        exit(1);
    }
    return program;
}

In this way, we only need to name the two shaders uniformly and add the suffix according to the specification. Then pass the shader name into this method to get a compiled and linked shader program.

After having a shader program, we need to pass data into the program. First, we need to obtain the variables defined in the shader. The specific operations are as follows:

Note: Different types of variables are retrieved in different ways (glGetAttribLocation for attributes, glGetUniformLocation for uniforms).

GLuint positionSlot = glGetAttribLocation(program, "Position");
GLuint textureSlot = glGetUniformLocation(program, "Texture");
GLuint textureCoordsSlot = glGetAttribLocation(program, "TextureCoords");
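
One detail to keep in mind (not shown above, but required before the glUniform1i call below): glUniform* calls affect the currently active program, so the linked program must be made current first:

glUseProgram(program); // make the linked shader program current before setting uniforms and drawing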

Pass in the generated texture ID:

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, textureID);
glUniform1i(textureSlot, 0);

glUniform1i(textureSlot, 0) assigns 0 to textureSlot, and this 0 corresponds to texture unit GL_TEXTURE0. If you pass 1 here, then glActiveTexture needs to be given GL_TEXTURE1 to match.
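
In other words, the active texture unit and the uniform value have to agree. The unit-1 variant would look like this (purely illustrative):

glActiveTexture(GL_TEXTURE1);            // activate texture unit 1
glBindTexture(GL_TEXTURE_2D, textureID); // bind the texture to the active unit
glUniform1i(textureSlot, 1);             // the sampler uniform must refer to unit 1 as well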

Set vertex data:

glEnableVertexAttribArray(positionSlot);
glVertexAttribPointer(positionSlot, 3, GL_FLOAT, GL_FALSE, sizeof(SenceVertex), NULL + offsetof(SenceVertex, positionCoord));

Set texture data:

glEnableVertexAttribArray(textureCoordsSlot);
glVertexAttribPointer(textureCoordsSlot, 2, GL_FLOAT, GL_FALSE, sizeof(SenceVertex), NULL + offsetof(SenceVertex, textureCoord));

4. Setting the viewport

When rendering the texture, we need to specify the size of the viewport, which can be understood as the size of the rendering window. Call glViewport to set it:

glViewport(0, 0, self.drawableWidth, self.drawableHeight);
//Gets the render buffer width
- (GLint)drawableWidth {
    GLint backingWidth;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
    
    return backingWidth;
}

//Gets the render buffer height
- (GLint)drawableHeight {
    GLint backingHeight;
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
    
    return backingHeight;
}

5. Binding of render layers

Through the steps above we have the texture and the vertex position information. Now comes the final step: how do we associate the buffers with the view? In other words, if there are two views on the screen, how does OpenGL ES know which view to render the image into?

So we need to bind the render layer, which is done with renderbufferStorage:fromDrawable: as follows:

- (void)bindRenderLayer:(CALayer <EAGLDrawable> *)layer {
    GLuint renderBuffer; // render buffer
    GLuint frameBuffer;  // frame buffer
    
    //Bind the render buffer to the output layer
    glGenRenderbuffers(1, &renderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
    [self.context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
    
    //Bind the render buffer to the frame buffer
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER,
                              GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER,
                              renderBuffer);
}

The code above generates a frame buffer and a render buffer, attaches the render buffer to the frame buffer, and then sets the render buffer's output layer to layer.

Finally, present the bound render buffer on the screen:

[self.context presentRenderbuffer:GL_RENDERBUFFER];
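
For context, a minimal per-frame draw pass that ties the pieces above together might look roughly like the sketch below; this is an assumed outline based on the calls introduced in this article, not code copied from the demo:

glClearColor(0, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);                        // clear the frame buffer
glViewport(0, 0, self.drawableWidth, self.drawableHeight);

glUseProgram(program);                               // use the linked shader program
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);               // draw the quad as two triangles

[self.context presentRenderbuffer:GL_RENDERBUFFER];  // present the render buffer on screen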

At this point, the key steps of rendering a texture with GLSL are done.

Final effect:
[Figure: the final rendering result]

To sum up, we can now answer the question from Section 2. GLKit mainly does the following for us:

  • Writing the shaders: GLKit has simple built-in shaders, so we don't have to write them ourselves.
  • Loading the texture: GLKTextureLoader wraps the conversion of an image into a texture.
  • Compiling and linking the shaders: GLKBaseEffect compiles and links the shaders internally, so we can ignore the concept of shaders altogether.
  • Setting the viewport: when rendering a texture the viewport size has to be specified; GLKView sets it internally when display is called.
  • Binding the render layer: GLKView internally calls renderbufferStorage:fromDrawable: to set its own layer as the output layer of the render buffer, and its display method internally calls presentRenderbuffer: to present the render buffer on the screen.

Source code

Go to GitHub to see the full code.

References

  • OpenGL ES Application Development Practice Guide
  • Hello Triangle – LearnOpenGL CN
  • OpenGL ES iOS Introduction 2 – Drawing a Polygon
  • iOS OpenGL ES – Image Textures

For a better reading experience, please visit the original post on Lyman's blog: Explaining texture rendering of OpenGL ES in iOS from scratch.
