iOS — A Detailed Explanation of Off-Screen Rendering



  • 1. Introduction
  • 2. Image display principle
    • 2.1 The image-to-screen process
    • 2.2 The display process
  • 3. Stuttering and frame dropping
    • 3.1 Vertical synchronization (VSync) + double buffering
    • 3.2 The essence of frame dropping and stuttering
  • 4. Off-screen rendering
    • 4.1 What off-screen rendering is and how it works
    • 4.2 Since off-screen rendering hurts performance, why use it?
  • 5. What triggers off-screen rendering
  • 6. How to optimize
1. Introduction

Why do we need to understand off-screen rendering?
Look at the current environment of app development. Back in 2014, basically every company in Shenzhen had to build an app; without one you could hardly attract investment. More than half of those apps are dead now, users no longer want to install many apps, and what remains on most phones are mostly apps from big companies. iOS interviews have accordingly become harder, and performance optimization is guaranteed to come up. There are plenty of optimization summaries online, but memorizing them is useless unless you also know why each item helps. For that, you need to know how the interface is rendered, when frames get dropped, and when the UI stutters. All of this makes it very necessary for us to understand off-screen rendering.
[Figure: off-screen rendering]

2. Image display principle
2.1 The image-to-screen process

Let's start with a picture and walk through it together.

[Figure: Core Animation pipeline]

The first thing to understand is the Render Server process. The app itself is not responsible for rendering; an independent process, the Render Server, does that work.

When we set or modify the UI in code, we are essentially modifying CALayer through Core Animation. (The relationship between UIView and CALayer, and Core Animation itself, involve enough material that they deserve a separate write-up.) In the end, everything is displayed according to the process in the picture.

  • First, the app handles events. For example, a user tapping a button may trigger an animation on some view.
  • Next, the app uses the CPU to compute what will be displayed: creating views, laying them out, drawing images and text, and so on. Once that work is done, the app packages the layer tree and sends it to the Render Server on the next run-loop iteration.
  • As mentioned above, the Render Server is responsible for rendering. It executes the relevant OpenGL / Core Graphics / Metal code and calls into the GPU.
  • Finally, the GPU performs the actual rendering at the hardware level.

Let's pause here and look at the next picture.

[Figure: GPU rendering pipeline]


The flowchart above details the process from the GPU to the video controller.
The GPU performs vertex shading, primitive assembly, rasterization, fragment shading, and so on, and finally writes the resulting bitmap into the frame buffer.
The video controller then reads the content to be displayed from the frame buffer and shows it on the screen.
Ignore the yellow dotted line in the picture for now; it will make sense once we get to the vertical synchronization signal.
That is the full path from setting up the UI in code to pixels on the screen.

2.2 The display process

Now that the rendered frame is sitting in the frame buffer, how does it actually get onto the display?

Let’s look at a picture first

[Figure: scan display]

The figure gives a rough picture of the process.

The display's scan-out proceeds line by line from the top-left of the screen: after the first line is scanned, the second line is scanned left to right, and so on down to the bottom of the screen. Phones have a fixed refresh rate; many Android phones are now 120 Hz, while iPhones have traditionally refreshed 60 times per second. Once a full scan completes, the screen has been refreshed and the new frame is visible.
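A quick back-of-the-envelope calculation (my own sketch, not from the original figure) shows the per-frame time budget those refresh rates imply for the CPU and GPU together:

```python
# Per-frame time budget at common refresh rates.
# At 60 Hz the CPU + GPU together have about 16.67 ms to produce a frame;
# at 120 Hz the budget halves to about 8.33 ms.
def frame_budget_ms(refresh_hz):
    return 1000.0 / refresh_hz

print(round(frame_budget_ms(60), 2))   # 16.67
print(round(frame_budget_ms(120), 2))  # 8.33
```

This is why even a few milliseconds of extra work per frame matters: overrun the budget once and the user sees a repeated frame.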

3. UI stuttering and frame dropping
3.1 Vertical synchronization (VSync) + double buffering

Having understood the rendering process above, consider a special case. Suppose we write a very complex view: the CPU computes the layout, the GPU renders, and the result goes into the frame buffer. What happens if the bitmap is not ready when the electron beam starts scanning a new frame, and only lands in the frame buffer once the scan has reached the middle of the screen?
Then the already-scanned part shows the previous frame while the rest shows the new frame. The screen tears.

But have we ever actually seen screen tearing in normal development? No. Why not?
Because Apple optimized for it, using vertical synchronization (VSync) plus a double-buffering mechanism.

Vertical synchronization (VSync)
VSync effectively locks the frame buffer. Remember the yellow dotted line mentioned above? After one frame has been scanned out, a vertical synchronization signal is sent before scanning of the next frame begins. VSync is like an usher enforcing a queue: every frame waits its turn, and a frame that jumps the queue is exactly what tears the screen.
Double buffering
Since scan-out is strictly queued, when the next VSync arrives the display needs a finished bitmap immediately; it cannot wait for the CPU and GPU to compute and render one on the spot, which would hurt performance. How do we solve this? Prepare in advance, like rolling up your sleeve while you wait in line for an injection so the doctor can go straight to work. To render ahead of time, we need a second buffer to hold the next frame's bitmap: the GPU draws into this back buffer, and when the next scan is due, the finished bitmap is handed over to the frame buffer, which can then happily start scanning it out.
A picture makes this clearer:

[Figure: double buffering]
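The queue-and-swap idea above can be sketched as a tiny simulation. This is purely illustrative Python, not Apple's implementation; the class and buffer names are made up:

```python
class DoubleBuffer:
    """Toy model of VSync + double buffering (illustrative only)."""
    def __init__(self):
        self.front = "frame 0"   # buffer the video controller scans out
        self.back = None         # buffer the GPU renders into

    def render(self, frame):
        # The GPU writes only to the back buffer; the front buffer stays
        # intact, so scan-out never sees a half-finished frame (no tearing).
        self.back = frame

    def vsync(self):
        # On the vertical sync signal, swap buffers if a new frame is ready;
        # otherwise keep showing the old front buffer.
        if self.back is not None:
            self.front, self.back = self.back, None

buf = DoubleBuffer()
buf.render("frame 1")  # GPU finishes frame 1 in the back buffer
buf.vsync()            # swap happens on VSync
print(buf.front)       # frame 1
```

Note the `else` branch of `vsync`: if the back buffer is empty at swap time, the old frame stays on screen, which is exactly the frame-dropping behavior discussed next.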
3.2 Frame dropping and stuttering

Vertical synchronization plus double buffering perfectly solves screen tearing, but it introduces a new problem: frame dropping.
What is a dropped frame? Here is a diagram borrowed from the Internet.

[Figure: frame drop]

It is actually easy to understand. As we said, the iOS screen refreshes 60 times per second. If, within one refresh interval, the CPU + GPU have not put a newly rendered bitmap into the frame buffer, the display simply shows the old frame again; the new bitmap only appears on the following refresh. In other words, one frame has been dropped.

Root cause of stuttering:
the CPU and GPU rendering pipeline takes too long, so frames are dropped.
In day-to-day work we can detect UI stutters with open-source libraries, or with our own run-loop-based monitor. If the effective refresh rate stays above roughly 50 fps, most people cannot perceive the ten missing frames; if it falls to around 30 fps, the stutter is obvious.
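Frame dropping falls straight out of the VSync model. Here is a toy model of my own (not a real profiler): if rendering one frame takes longer than a 16.67 ms interval, the previous frame is shown again at every VSync it misses:

```python
import math

VSYNC_MS = 1000.0 / 60  # ~16.67 ms per refresh interval at 60 Hz

def count_repeated_frames(render_times_ms):
    """For each frame's render time, count how many VSync intervals end up
    re-displaying the *old* frame because CPU + GPU were not done in time."""
    dropped = 0
    for t in render_times_ms:
        # A frame taking t ms spans ceil(t / VSYNC_MS) intervals;
        # every interval beyond the first repeats the previous frame.
        intervals = math.ceil(t / VSYNC_MS)
        dropped += max(0, intervals - 1)
    return dropped

# Three frames finish within budget; one takes 40 ms and misses two VSyncs.
print(count_repeated_frames([10, 12, 40, 15]))  # 2
```

The single 40 ms frame costs two repeated frames on its own, which is why one overly complex layout pass is enough to produce a visible hitch.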

4. Off-screen rendering
4.1 What off-screen rendering is and how it works

Off-screen rendering means that the GPU opens a new buffer outside the current screen buffer and renders there.
Process: a new buffer is created in addition to the on-screen buffer. Because on-screen rendering has its own context, off-screen rendering must switch the rendering context from on-screen to off-screen, do its work, and then switch the context back when finished. These context switches are expensive, and the extra time they take can cause dropped frames.
In addition, the off-screen buffer itself requires extra memory, so heavy off-screen rendering can create significant memory pressure. The off-screen buffer is also not unlimited: it cannot exceed 2.5 times the total number of pixels on the screen.
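As a rough illustration of that 2.5x cap (the screen resolution below is an assumption I picked for the example, not something from the article):

```python
# Off-screen buffers cannot exceed 2.5x the total screen pixels.
# Example resolution is an assumption (an iPhone 12/13-class screen).
width, height = 1170, 2532
screen_pixels = width * height          # 2,962,440 pixels
limit_pixels = 2.5 * screen_pixels      # hard cap for off-screen buffers
print(int(limit_pixels))                # 7406100
```

Past that point additional off-screen buffers simply cannot be allocated, so stacking many masked or shadowed layers eventually stops paying off entirely.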

4.2 Since off-screen rendering hurts performance, why use it?

1. Some special effects need an extra off-screen buffer to hold the intermediate state of rendering, so off-screen rendering is unavoidable.
2. For efficiency, content can be rendered once into an off-screen buffer and saved there for reuse.
When corner radius, shadow, and mask are combined, the blended layer properties must be pre-composited; they cannot be drawn directly to the screen in a single pass before the next VSync signal, so off-screen rendering is required.

5. What triggers off-screen rendering
  1. Applying a mask to a layer (layer.mask)
  2. Setting layer.masksToBounds / view.clipsToBounds to YES
  3. Setting layer.allowsGroupOpacity to YES while layer.opacity < 1.0
  4. Setting shadows on a layer (the layer.shadow* properties)
  5. Setting layer.shouldRasterize (rasterization)
  6. Rounded corners on complex layer hierarchies
  7. Gradients
  8. Text of any kind (UILabel, CATextLayer, Core Text, and so on)
  9. Drawing with CGContext in drawRect: leads to off-screen rendering in most cases, even with an empty implementation.
6. Optimization of off-screen rendering
Rounded-corner optimization

Method 1

iv.layer.cornerRadius = 30;
iv.layer.masksToBounds = YES; // note: combined with layer contents, this can itself trigger off-screen rendering

Method 2
Set the rounded corner with a mask, using a UIBezierPath and a CAShapeLayer:

CAShapeLayer *mask1 = [[CAShapeLayer alloc] init];
mask1.path = [UIBezierPath bezierPathWithOvalInRect:iv.bounds].CGPath; // the filled oval becomes the visible region
iv.layer.mask = mask1; // note: mask itself also triggers off-screen rendering

Method 3
Use Core Graphics to draw the image into a circular clipping context off the main thread, then assign the result:

- (void)setCircleImage {
    UIImage *image = self.imageView.image; // source image (assumed property)
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *circleImage = [image imageWithCircle];
        dispatch_async(dispatch_get_main_queue(), ^{
            self.imageView.image = circleImage;
        });
    });
}

#import "UIImage+Addtions.h"

@implementation UIImage (Addtions)
// Returns a circular (oval-clipped) copy of the image
- (instancetype)imageWithCircle {
    UIGraphicsBeginImageContextWithOptions(self.size, NO, 0);
    UIBezierPath *path = [UIBezierPath bezierPathWithOvalInRect:CGRectMake(0, 0, self.size.width, self.size.height)];
    [path addClip];
    [self drawAtPoint:CGPointZero];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
@end

Shadow optimization
After setting a shadow, also set the shadowPath of the CALayer so Core Animation does not have to compute the shadow shape off-screen:

view.layer.shadowPath = [UIBezierPath bezierPathWithRect:view.bounds].CGPath;
Mask
Avoid layer.mask where possible.
Instead, use a blend layer: overlay a sublayer above the content whose image already has the desired translucent mask shape baked in:

CALayer *sublayer = [CALayer layer];
sublayer.frame = view.bounds;
sublayer.contents = (id)[UIImage imageNamed:@"xxx"].CGImage; // asset name from the original example
[view.layer addSublayer:sublayer];

Turn off the allowsGroupOpacity property and control layer transparency yourself according to product requirements.

Edge antialiasing

Do not set the allowsEdgeAntialiasing property to YES (it is NO by default).

When the view content is static, setting shouldRasterize to YES is the most practical and convenient option:

view.layer.shouldRasterize = YES;
view.layer.rasterizationScale = view.layer.contentsScale;

If the view content changes dynamically, such as images in a reused cell, rasterization forces the cached bitmap to be regenerated constantly and increases system load instead.

Principle part After the ES6 version, JavaScript provides some more convenient methods for developers to use. The implementation principle isProvide methods in the corresponding constructor prototype. Then for developers to use. Next, let’s customize these simple functions provided by ES6. Implementation principle of method Some methods provided by ES6, the bottom layer mainly usesFor loopIt […]