From iOS image rounded corners to rendering

Time: 2020-07-03

Original address: http://chars.tech/2017/07/03/…

Rounded corners are a very common view effect. Compared with sharp corners they look softer and more graceful, and are easier on the eye. Applying rounded corners, however, can bring a performance cost, and how to reduce that cost is the key topic discussed here.

The common way to round corners is: x.layer.cornerRadius = xx; x.clipsToBounds = YES; These two lines do produce a rounded visual effect. In fact, x.layer.cornerRadius = xx; alone already rounds the corners, but on some controls it does not appear to take effect, because some layers are drawn above the clipped layer. And x.clipsToBounds = YES; has the side effect of causing offscreen rendering. You can use the Core Animation instrument in Instruments and turn on the Color Offscreen-Rendered Yellow option; the areas highlighted in yellow are the parts being rendered offscreen.
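For clarity, here is a minimal sketch of the naive approach described above. The image view and the asset name "avatar" are assumptions for illustration only:

UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 80, 80)];
imageView.image = [UIImage imageNamed:@"avatar"];  // "avatar" is a hypothetical asset name
imageView.layer.cornerRadius = 40.0;               // already rounds the layer itself
imageView.clipsToBounds = YES;                     // clips the image content; this is what can trigger offscreen rendering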

So what does offscreen rendering cost? When it consumes too many resources, the interface may stutter. Because hardware resources differ across iPhone models, the drawback is not obvious as long as only a small amount of offscreen rendering takes place.

The full source code is available on GitHub (ddcornerradius); stars and issues are welcome.

What are pixels

A pixel is the basic unit of a video display. "Pix" is a common abbreviation of the English word "picture"; combined with "element" it gives "pixel", so a pixel is a "picture element", sometimes also written pel. Each such element is not a point or a square but an abstract sample. A pixel is made up of red, green and blue color components, which is why bitmap data is sometimes referred to as RGB data.

Display mechanism

How is a pixel drawn to the screen? There are many ways to get something onto the screen, involving different frameworks and many combinations of functions and methods. Here we take a look at what happens behind the screen.

For an image to appear on screen, it is pixels that make it visible to the naked eye. They are densely packed on the phone's screen and can present any graphic through different color values. The display process of a computer can be roughly described as converting an image into an arrangement of pixels and "printing" them onto the screen. The process of converting an image into pixels is also called rasterization, that is, going from a description of vector points, lines and planes to a description of pixels.

Looking back at history, we can start from the principle of the old CRT display. The CRT electron gun scans line by line from top to bottom; when the scan is finished, the display shows one frame, and the electron gun returns to its initial position for the next scan. To keep the display in sync with the system's video controller, the display (or other hardware) uses a hardware clock to generate a series of timing signals. When the electron gun moves to a new line and is ready to scan, the display emits a horizontal synchronization signal (HSync); when a frame has been drawn and the electron gun has returned to its original position, just before the next frame is drawn, the display emits a vertical synchronization signal (VSync). The display usually refreshes at a fixed rate, which is the frequency of the VSync signal. Although most of today's devices are LCDs, the principle remains the same.

A simple explanation of why stuttering happens

After a VSync signal arrives, the system graphics service notifies the app through mechanisms such as CADisplayLink, and the app's main thread starts computing the display content on the CPU: view creation, layout calculation, image decoding, text drawing and so on. The CPU then submits the computed content to the GPU, which transforms, composites and renders it. The GPU submits the rendering result to the frame buffer and waits for the next VSync signal before it is shown on screen. Because of the vertical synchronization mechanism, if the CPU or GPU has not finished its submission within one VSync interval, that frame is discarded and will be displayed at a later opportunity; in the meantime the display keeps showing the previous content. That is why the interface appears to stutter.

Whether it is the CPU or the GPU that blocks the display process, frames will be dropped. Therefore, during development we need to measure and optimize CPU pressure and GPU pressure separately.
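One way to notice dropped frames during development is to listen to the same VSync-driven callbacks with CADisplayLink and log an approximate frame rate. The following is only a rough sketch; the FPSMonitor class is an assumption, not something from the original article:

@interface FPSMonitor : NSObject
- (void)start;
@end

@implementation FPSMonitor {
    CADisplayLink *_link;
    NSUInteger _frameCount;
    CFTimeInterval _lastTimestamp;
}

- (void)start {
    // CADisplayLink fires once per screen refresh, driven by VSync.
    _link = [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
    [_link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    if (_lastTimestamp == 0) { _lastTimestamp = link.timestamp; return; }
    _frameCount++;
    CFTimeInterval delta = link.timestamp - _lastTimestamp;
    if (delta >= 1.0) {  // report roughly once per second
        NSLog(@"FPS: %.1f", _frameCount / delta);
        _frameCount = 0;
        _lastTimestamp = link.timestamp;
    }
}
@end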

Rendering mechanism

A lot happens in the background before pixels reach the screen. Once they are displayed, each pixel consists of three color components: red, green and blue. Three independent color units light up within one pixel according to the given color. On an iPhone screen of 1136 × 640 there are 727,040 pixels, and therefore 2,181,120 color units. On some Retina screens the number of pixels exceeds a million. The whole graphics stack works together to make sure everything is displayed correctly each time. When you scroll the screen, millions of color units have to be refreshed 60 times per second, which is a lot of work.

In short, the iOS display mechanism looks roughly like this:
(Figure: the iOS display pipeline)

Directly above the display sits the graphics processing unit (GPU). The GPU is a processing unit specialized in highly concurrent graphics computation, which is why it can update all pixels at the same time and present them on the display. Its concurrent nature also lets it composite different textures efficiently. Therefore, during development we should try to keep the CPU responsible for the main thread's UI work and hand the graphics-display work over to the GPU.

The GPU driver is the piece of code that communicates directly with the GPU. Different GPUs are different performance beasts, but the drivers make them look more uniform to the next level up. The typical next level is OpenGL / OpenGL ES.

OpenGL (Open Graphics Library) is an API that provides 2D and 3D graphics rendering. The GPU is a very specialized piece of hardware, and OpenGL works closely with it to exploit its capabilities and achieve hardware-accelerated rendering.

Many things are built on top of OpenGL. On iOS, almost everything is drawn through Core Animation, whereas on OS X it is not uncommon to bypass Core Animation and use Core Graphics directly. For some specialized applications, especially games, the program may talk to OpenGL / OpenGL ES directly.

It should be emphasized that the GPU is a very powerful piece of graphics hardware and plays the key role in displaying pixels. It is connected to the CPU; in hardware terms there is some kind of bus between the two, and frameworks such as OpenGL, Core Animation and Core Graphics carefully orchestrate data transfer between the GPU and the CPU. To get pixels onto the screen, some processing happens on the CPU first; the data is then transferred to the GPU, where the final pixels are put on screen.

(Figure: the GPU compositing the textures of each frame)

As shown in the figure above, the GPU needs to composite the textures (bitmaps) of each frame together, 60 times a second. Each texture occupies VRAM (video RAM), so there is a limit on the number of textures the GPU can hold at the same time. The GPU is very efficient at compositing, but some compositing tasks are more complex than others, and the work the GPU can do within 16.7 ms (1/60 s) is limited.

Another problem is transferring data to the GPU. For the GPU to access data, it has to be moved from RAM into VRAM; this is what "uploading data to the GPU" means. It may seem trivial, but for large textures it can be very time-consuming.

Finally, the CPU runs your program. You might ask the CPU to load a PNG image from the bundle and decompress it; all of this happens on the CPU. When you then need to display the decompressed image, it has to be uploaded to the GPU in some way. Something that seems trivial, such as displaying text, is actually very complex for the CPU: it drives a tight collaboration between Core Text and Core Graphics to produce a bitmap from the text. Once ready, the bitmap is uploaded to the GPU as a texture, ready to be displayed. When you scroll or move text around on the screen, the same texture can be reused: the CPU simply tells the GPU the new position, so the GPU can reuse the existing texture. The CPU does not have to re-render the text, and the bitmap does not have to be re-uploaded to the GPU.
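To illustrate the decode-then-upload idea, here is a hedged sketch that forces the PNG decompression onto a background queue so that the CPU work does not happen while the user is scrolling. The method name loadDecodedImageNamed:completion: is hypothetical and not part of the article's repository:

- (void)loadDecodedImageNamed:(NSString *)name completion:(void (^)(UIImage *decoded))completion {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        UIImage *raw = [UIImage imageNamed:name];
        // Drawing into a bitmap context forces the PNG to be decompressed here,
        // on the background queue, instead of lazily on the main thread.
        UIGraphicsBeginImageContextWithOptions(raw.size, NO, raw.scale);
        [raw drawAtPoint:CGPointZero];
        UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            // Assign to an image view here; the upload to the GPU happens when it is displayed.
            completion(decoded);
        });
    });
}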

In the world of graphics, compositing describes how different bitmaps are combined into the image you eventually see on the screen. Everything on the screen is a texture. A texture is a rectangle of RGBA values; that is, each pixel contains red, green, blue and alpha (transparency) values. In the world of Core Animation, this corresponds to a CALayer.

Each layer is a texture, and all the textures are stacked on top of one another in some order. For every pixel on the screen, the GPU needs to figure out how to blend these textures to obtain that pixel's RGB value. This is compositing.

If we have a single texture that is the same size as the screen and aligned with the screen's pixels, then every pixel on the screen corresponds to exactly one pixel in the texture, and the last pixel of the texture is the last pixel of the screen.

If we place a second texture on top of the first, the GPU has to composite the second texture onto the first. There are many different blending modes, but if we assume that the pixels of the two textures are aligned and we use the normal blending mode, each pixel can be computed with the formula: R = S + D * (1 - Sa)
The resulting color is the source color (top texture) plus the destination color (lower texture) multiplied by (1 minus the source color's alpha). In this formula, all colors are assumed to have been premultiplied by their alpha.

Now let us make a second assumption: both textures are fully opaque, i.e. alpha = 1. If the destination texture (the lower one) is blue (RGB = 0, 0, 1) and the source texture (the top one) is red (RGB = 1, 0, 0), then since Sa is 1 the result is: R = S
The result is the red of the source color, which is exactly what we expect (red over blue). If instead the source layer is 50% transparent, i.e. alpha = 0.5, then since the alpha component has to be premultiplied into the RGB values, the RGB of S becomes (0.5, 0, 0), and the formula works out as follows:

R_red   = S_red   + D_red   * (1 - Sa) = 0.5 + 0 * (1 - 0.5) = 0.5
R_green = S_green + D_green * (1 - Sa) = 0   + 0 * (1 - 0.5) = 0
R_blue  = S_blue  + D_blue  * (1 - Sa) = 0   + 1 * (1 - 0.5) = 0.5

We end up with an RGB value of (0.5, 0, 0.5), which is purple. That is exactly what we expect when compositing semi-transparent red over a blue background.
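The same per-channel arithmetic can be written as a tiny helper. This is only an illustrative sketch (the DDPremultipliedColor and DDBlendOver names are invented here), assuming colors are premultiplied RGBA values in the 0..1 range:

typedef struct { CGFloat r, g, b, a; } DDPremultipliedColor;

static DDPremultipliedColor DDBlendOver(DDPremultipliedColor s, DDPremultipliedColor d) {
    DDPremultipliedColor out;
    out.r = s.r + d.r * (1.0 - s.a);  // R = S + D * (1 - Sa), per channel
    out.g = s.g + d.g * (1.0 - s.a);
    out.b = s.b + d.b * (1.0 - s.a);
    out.a = s.a + d.a * (1.0 - s.a);
    return out;
}

// 50% transparent red over opaque blue, with premultiplied components:
// source = (0.5, 0, 0, 0.5), destination = (0, 0, 1, 1)
// DDBlendOver(source, destination) returns (0.5, 0, 0.5, 1.0), the purple described above.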

Remember that we have only composited one pixel of one texture onto another. When two textures overlap, the GPU has to do this for every pixel. As you know, many apps have many layers, so all of those textures need to be composited together. Even though the GPU is a highly optimized piece of hardware for this work, it still keeps it very busy.

Why does image scaling increase GPU workload

When all the pixels are aligned, the math is relatively simple. Whenever the GPU needs to compute the color of a screen pixel, it only has to look at the single corresponding pixel in each of the layers above it and blend them together. Or, if the topmost texture is opaque, the GPU can simply copy its pixels to the screen.

A layer is pixel-aligned when all of its pixels line up perfectly with the pixels on the screen. There are two main reasons for misalignment. The first is scrolling: when a texture is being scrolled up or down, its pixels may not line up with the screen's pixel grid. The other is when the texture's origin does not fall on a pixel boundary.

In both cases, the GPU has to do extra work: it needs to blend several pixels of the source texture together to produce the value used for compositing. When all pixels are aligned, the GPU has very little work to do.

The Core Animation instrument and the iOS Simulator have a Color Misaligned Images option, which highlights the CALayer instances in your app where this happens.
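A simple way to avoid the second cause of misalignment is to snap frames to pixel boundaries. The helper below is an assumption for illustration and is not taken from the article's repository:

static CGRect DDPixelAlignedRect(CGRect rect) {
    CGFloat scale = [UIScreen mainScreen].scale;  // 2.0 or 3.0 on Retina screens
    CGRect aligned;
    // Round every coordinate to a multiple of one physical pixel (1 / scale points).
    aligned.origin.x    = round(rect.origin.x * scale) / scale;
    aligned.origin.y    = round(rect.origin.y * scale) / scale;
    aligned.size.width  = round(rect.size.width  * scale) / scale;
    aligned.size.height = round(rect.size.height * scale) / scale;
    return aligned;
}

// Usage: view.frame = DDPixelAlignedRect(view.frame);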

For the screen sizes and resolutions of iOS devices, see: iosres

Offscreen rendering

On-screen rendering means rendering on the current screen: the GPU performs its rendering operations in the screen buffer that is currently used for display.
Offscreen rendering means the GPU creates a new buffer outside the current screen buffer and performs its rendering operations there.

Offscreen rendering is triggered when a combination of layer properties is specified that cannot be drawn directly on the screen without pre-compositing. Offscreen rendering does not necessarily mean software rendering, but it does mean that the layer must first be rendered in an offscreen context (whether on the CPU or the GPU) before it can be displayed.

Offscreen rendering can be triggered automatically by Core Animation or forced by the application. It composites/renders part of the layer tree into a new buffer, and that buffer is then rendered onto the screen.

A special case of offscreen rendering: CPU rendering

If we override the drawRect: method and use any Core Graphics technique for drawing, CPU rendering is involved.
The whole rendering process is performed synchronously by the CPU inside the app, and the resulting bitmap is finally handed to the GPU for display.
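A minimal sketch of this CPU-rendering path might look like the following; the class name DDRoundedView and the hard-coded radius are assumptions for illustration:

@interface DDRoundedView : UIView
@end

@implementation DDRoundedView

- (void)drawRect:(CGRect)rect {
    // All of this runs on the CPU; the finished bitmap is then handed to the GPU.
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:self.bounds cornerRadius:8.0];
    CGContextAddPath(context, path.CGPath);
    CGContextSetFillColorWithColor(context, [UIColor redColor].CGColor);
    CGContextFillPath(context);
}

@end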

The cost of offscreen rendering

Compared with on-screen rendering, offscreen rendering is expensive, mainly in two ways:

  • Creating a new buffer
    To render offscreen, a new buffer has to be created first.
  • Context switching
    During offscreen rendering, the rendering context has to be switched several times: first from the current screen (on-screen) to offscreen; then, once offscreen rendering is finished, the result in the offscreen buffer has to be shown on screen, which requires switching the context back from offscreen to the current screen. Context switches are expensive.

What triggers offscreen rendering

1. drawRect:
2. layer.shouldRasterize = YES;
3. Masks or shadows (layer.masksToBounds, layer.shadow*), specifically:
   3.1) shouldRasterize
   3.2) masks
   3.3) shadows
   3.4) edge antialiasing
   3.5) group opacity
4. Text (UILabel, CATextLayer, Core Text, etc.)
Note: layer.cornerRadius, layer.borderWidth and layer.borderColor on their own do not trigger offscreen rendering, because they do not require a mask. A sketch of item 3, together with common mitigations, follows below.
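As referenced in the note above, here is a hedged illustration of item 3 and the mitigations that are commonly combined with it; view is assumed to be an existing UIView, and the concrete values are placeholders:

view.layer.shadowColor = [UIColor blackColor].CGColor;   // a shadow triggers offscreen rendering
view.layer.shadowOpacity = 0.4;
view.layer.shadowOffset = CGSizeMake(0, 2);
// An explicit shadow path saves Core Animation from computing the shadow's shape itself.
view.layer.shadowPath = [UIBezierPath bezierPathWithRect:view.bounds].CGPath;
// Rasterization renders the layer offscreen once and reuses the cached bitmap;
// it only pays off when the layer's content does not change every frame.
view.layer.shouldRasterize = YES;
view.layer.rasterizationScale = [UIScreen mainScreen].scale;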

Rounded-corner optimization

So much for the background; here is a practical plan. At present, rounded-corner optimization is approached from two directions: one starts from the image itself and crops it into the desired rounded-corner shape; the other uses a Bezier path to build a mask layer (CALayer) with the desired rounded corners (a sketch of the mask approach follows after the cropping code below).

UIImage cropping:

// self is the source UIImage; radius, corners, borderWidth and borderColor
// come from the parameters of the surrounding UIImage category method.
UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
CGContextRef context = UIGraphicsGetCurrentContext();
CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
// Flip the coordinate system so CGContextDrawImage draws the image right side up.
CGContextScaleCTM(context, 1, -1);
CGContextTranslateCTM(context, 0, -rect.size.height);

CGFloat minSize = MIN(self.size.width, self.size.height);
if (borderWidth < minSize / 2.0) {
    // Clip to a rounded-rect path inset by the border width, then draw the image inside it.
    UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:CGRectInset(rect, borderWidth, borderWidth) byRoundingCorners:corners cornerRadii:CGSizeMake(radius, borderWidth)];
    CGContextSaveGState(context);
    [path addClip];
    CGContextDrawImage(context, rect, self.CGImage);
    CGContextRestoreGState(context);
}

UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
// Draw the border onto the clipped image (see the next snippet).
image = [image dd_imageByCornerRadius:radius borderedColor:borderColor borderWidth:borderWidth corners:corners];
UIGraphicsEndImageContext();

Drawing the border:

UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
// Draw the (already clipped) image first, then stroke the rounded border on top of it.
[self drawAtPoint:CGPointZero];
CGRect rect = CGRectMake(0, 0, self.size.width, self.size.height);
// Inset by half the border width so the stroke stays inside the image's bounds.
CGFloat strokeInset = borderWidth / 2.0;
CGRect strokeRect = CGRectInset(rect, strokeInset, strokeInset);
UIBezierPath *path = [UIBezierPath bezierPathWithRoundedRect:strokeRect byRoundingCorners:corners cornerRadii:CGSizeMake(radius, borderWidth)];
path.lineWidth = borderWidth;
[borderColor setStroke];
[path stroke];
UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
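For completeness, here is a sketch of the second approach mentioned earlier: masking the layer with a rounded Bezier path. This is an assumption for illustration rather than the actual code from the repository, and keep in mind that a layer mask itself triggers offscreen rendering:

UIBezierPath *maskPath = [UIBezierPath bezierPathWithRoundedRect:view.bounds
                                                byRoundingCorners:UIRectCornerAllCorners
                                                      cornerRadii:CGSizeMake(8.0, 8.0)];
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.frame = view.bounds;
maskLayer.path = maskPath.CGPath;
view.layer.mask = maskLayer;  // note: a mask layer forces offscreen rendering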
