iOS Knowledge Review – UI (I): Rendering & Touch

Date: 2020-06-18

Let's take a look at the iOS UI.

There are several important concepts in iOS UI: UIWindow, UIViewController, UIView, and CALayer.

First of all, among these concepts the layer is the core of UI presentation: all the information that determines what is finally drawn lives on the layer.

A view is a direct wrapper around a layer: it provides a more concise interface and handles external input (touch events, etc.). Each view is backed by a layer, and UI-related changes to the view are essentially synchronized to that layer; when a view is added with addSubview:, the subview's layer is also added as a sublayer of the parent view's layer.
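A minimal sketch of this view/layer relationship (standard UIKit API; the concrete frames are just illustrative):

import UIKit

let parent = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
let child = UIView(frame: CGRect(x: 10, y: 10, width: 50, height: 50))

// A UI-related change made on the view is stored on its backing layer
parent.backgroundColor = .red
print(parent.layer.backgroundColor as Any)                    // the same red, now living on the layer

// addSubview also inserts the subview's layer as a sublayer
parent.addSubview(child)
print(parent.layer.sublayers?.contains(child.layer) ?? false) // true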

UIViewController is the controller in MVC. It is responsible for organizing and managing views and for navigation between view controllers, and can be roughly understood as a page-level manager. Similar to the view/layer relationship, when the UI is changed through a view controller's methods it is ultimately realized through views; when a view controller is pushed, what actually moves is the view associated with that view controller.

UIWindow is actually a subclass of UIView. As a special view, it sits at the top of the iOS UI hierarchy; in the iOS application framework a UI can only be displayed once it is attached to a window, so the window can be understood as the entry point of the UI. The window is also central to user interaction.
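A rough sketch of this "entry point" role, using the standard scene-based setup (a plain UIViewController stands in for your real root view controller):

import UIKit

class SceneDelegate: UIResponder, UIWindowSceneDelegate {
    var window: UIWindow?

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession,
               options connectionOptions: UIScene.ConnectionOptions) {
        guard let windowScene = scene as? UIWindowScene else { return }
        // Nothing is visible until a view hierarchy is attached to a window
        let window = UIWindow(windowScene: windowScene)
        window.rootViewController = UIViewController()   // your root view controller goes here
        window.makeKeyAndVisible()                        // make this the key window and show it
        self.window = window
    }
}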

Here are a few noteworthy points.

Rendering process

The iOS rendering process is closely tied to the Core Animation framework. Despite its name, Core Animation is not only about animation; it manages everything related to layers. Our UI is stored as layers in a layer tree, and Core Animation's job is to composite and render these layers to the screen as quickly as possible.

The rendering process is not open source, and there is not much public material about it.

Inside the application process, the stages are:

  • Layout: the stage where you prepare your view/layer hierarchy and set layer properties (position, background color, border, and so on).
  • Display: the stage where the layer's backing image is drawn. Drawing may involve your -drawRect: and -drawLayer:inContext: methods (see the sketch after this list).
  • Prepare: the stage where Core Animation gets ready to send the animation data to the render server; it is also when Core Animation handles other work, such as decoding images to be displayed during the animation.
  • Commit: the final stage. Core Animation packages up the layers and animation properties and sends them to the render server for display via IPC (inter-process communication).
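As a rough sketch of where the layout and display stages touch application code (BadgeView is a hypothetical class; layoutSubviews and draw(_:) are the standard UIView hooks):

import UIKit

class BadgeView: UIView {
    // Called during the layout stage, when the view/layer hierarchy is being prepared
    override func layoutSubviews() {
        super.layoutSubviews()
        // position subviews / sublayers here
    }

    // Called during the display stage, when the layer's backing image is drawn
    // (this is the Swift counterpart of -drawRect:)
    override func draw(_ rect: CGRect) {
        UIColor.systemBlue.setFill()
        UIBezierPath(ovalIn: rect).fill()
    }
}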

Outside the application, in the system render server:

  • Intermediate values are calculated for all layer properties, and OpenGL geometry (textured triangles) is set up to perform the rendering
  • The visible triangles are rendered on screen

The layer tree we interact with directly is called the model tree; it records all the properties we set. If an animation is applied to a layer, the model tree already holds the values as they will be at the end of the animation, while the values the layer should currently display correspond to the presentation layer, which together form the presentation tree. At commit time the layer data is packaged and sent to the render server, which deserializes it into a render tree and performs the final rendering.
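A quick sketch of the difference between the two trees, using CALayer's presentation() to read the in-flight values (this assumes the view is on screen and the animation is actually running):

import UIKit

let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

UIView.animate(withDuration: 2.0) {
    view.layer.position = CGPoint(x: 300, y: 300)
}

// The model tree already holds the final (post-animation) value...
print(view.layer.position)                      // (300.0, 300.0)

// ...while the presentation tree holds the value currently displayed
if let presented = view.layer.presentation() {
    print(presented.position)                   // somewhere between the start and end values
}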

In a general rendering pipeline, once the layers are available, rasterization and compositing are usually needed. Rasterization: an original layer usually stores only raw data such as drawing instructions or related properties; the process of producing a color for each pixel from this raw data, i.e. producing bitmap data in memory, is called rasterization. Compositing: an interface is composed of many layers, and if each layer independently generates its own bitmap, those bitmaps then need to be combined.

Rasterization can also happen directly into the target buffer for the screen rather than into a layer's own separate buffer; this is called direct rasterization. If every layer is rasterized directly, no compositing step is needed. Most of the time, however, some layers need their own buffers, i.e. indirect rasterization. On the one hand this is a performance optimization: a layer whose content does not change is expensive to redraw every frame, so it can be given an independent buffer and redrawn only when needed; each screen refresh then just copies from that buffer into the target buffer, which greatly reduces the cost. On the other hand, depending on the content, some layers are drawn by the CPU and some by the GPU, the two are usually not on the same serial pipeline, and CPU rendering is generally slower, so CPU-rendered content is often given its own buffer.
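Forcing a layer into its own buffer can also be requested explicitly through shouldRasterize; a minimal sketch of the trade-off described above:

import UIKit

let cardView = UIView(frame: CGRect(x: 0, y: 0, width: 200, height: 120))

// Ask Core Animation to rasterize this layer (and its sublayers) into an
// independent buffer and reuse it across frames instead of redrawing.
// Only worth it when the content is mostly static; otherwise the cache
// is discarded and rebuilt every frame, which is slower than not caching.
cardView.layer.shouldRasterize = true
cardView.layer.rasterizationScale = UIScreen.main.scale  // match the screen scale, or the cache looks blurry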

Core Graphics is a framework that relies mainly on CPU rendering. If we use Core Graphics inside drawRect: or drawLayer:inContext:, or assign a CGImage directly to a layer's contents, the layer allocates a buffer in memory to hold the bitmap before rendering; in effect the content is software-rasterized before being sent to the render server. An ordinary layer, by contrast, is essentially a collection of drawing instructions and properties; these are turned into OpenGL / Metal drawing commands, sent to the system render server, and rasterized on the GPU, and most of the time this is direct rasterization.
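A minimal sketch of this CPU path (ChartView is a hypothetical class): drawing with Core Graphics inside draw(_:) gives the layer a bitmap backing store that is rasterized in the app before commit:

import UIKit

class ChartView: UIView {
    // Using Core Graphics here means the bitmap is produced on the CPU,
    // stored in a backing buffer, and only then sent to the render server
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        ctx.setStrokeColor(UIColor.systemRed.cgColor)
        ctx.setLineWidth(2)
        ctx.move(to: CGPoint(x: rect.minX, y: rect.maxY))
        ctx.addLine(to: CGPoint(x: rect.maxX, y: rect.minY))
        ctx.strokePath()
    }
}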

Sometimes, however, indirect rasterization is unavoidable because of such constraints. For example, if a parent layer has cornerRadius + clipsToBounds set, the only option is to first composite that layer and all of its sublayers into a separate buffer, clip the result, and then copy it into the target buffer. In the current iOS rendering pipeline this is called offscreen rendering, and it should be avoided during development. Common properties that trigger offscreen rendering include cornerRadius + clipsToBounds, shadow, group opacity, mask, UIBlurEffect, and so on.
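One common workaround for the cornerRadius + clipsToBounds case is to pre-render the rounded content once on the CPU with UIGraphicsImageRenderer instead of letting the render server clip the layer every frame; a sketch under that assumption:

import UIKit

// Pre-render a rounded copy of an image once, so the layer can display it
// directly without per-frame clipping (i.e. without offscreen rendering)
func roundedImage(from image: UIImage, cornerRadius: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: image.size)
        UIBezierPath(roundedRect: rect, cornerRadius: cornerRadius).addClip()
        image.draw(in: rect)
    }
}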

Responder chain

Here we focus on touch events.

Touch events originate from the screen (hardware). All the hardware can report is screen coordinates; the system then delivers them to the application, where they are processed according to UIKit's logic.

UIApplication is naturally the first object in the application to receive the touch event. From there, the actual target view is found through the views' positions and hierarchy.

The complete process of event response is as follows:

Starting from the key window, the target view is found by traversing the hierarchy level by level with hitTest:withEvent:, based mainly on position;

After the target view (the first responder) is found, the event is passed back up through the view hierarchy.

The top-down search logic depends mainly on the views' frames. A simple pseudo-code version looks like this:

- (UIView *)hitTest:(CGPoint)point withEvent:(UIEvent *)event
{
    // Not inside self: return nil directly
    if (![self pointInside:point withEvent:event]) {
        return nil;
    }
    // Traverse backwards: the first matching subview from front (topmost) to back
    for (NSInteger i = (NSInteger)self.subviews.count - 1; i >= 0; i--) {
        UIView *subview = self.subviews[i];
        // The point must be converted into the subview's coordinate system
        CGPoint convertedPoint = [self convertPoint:point toView:subview];
        UIView *targetView = [subview hitTest:convertedPoint withEvent:event];
        if (targetView) {
            return targetView;
        }
    }
    // No subview matched: return self
    return self;
}

The bottom-up responder chain depends mainly on the views' hierarchical relationships:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    // next is optional (UIResponder?), so forward the event with optional chaining
    self.next?.touchesBegan(touches, with: event)
}

next is a property of UIResponder:

[Figure: the next-responder chain]

As shown in the figure above, if a view is the root view of a UIViewController, its next responder is that view controller; otherwise it is its superview. The chain continues up to the window and then to UIApplication.
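A tiny sketch that walks the chain at runtime and prints each responder (dumpResponderChain is a hypothetical helper):

import UIKit

extension UIResponder {
    // Walk the next-responder chain and print each responder,
    // e.g. view -> ... -> view controller -> window -> UIApplication
    func dumpResponderChain() {
        var responder: UIResponder? = self
        while let current = responder {
            print(type(of: current))
            responder = current.next
        }
    }
}

// someView.dumpResponderChain()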

What we can do in these two phases is as follows:

  1. In the top-down search for the first responder, we can override hitTest: / pointInside: to change a view's response area (see the sketch after this list).
  2. In the bottom-up event delivery, we can block the event or forward it to other responders when necessary.
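For point 1, a minimal sketch that enlarges a button's response area by overriding pointInside: (BigHitAreaButton is a hypothetical class):

import UIKit

class BigHitAreaButton: UIButton {
    // Enlarge the response area by 20 points on every side,
    // without changing the button's visible frame
    override func point(inside point: CGPoint, with event: UIEvent?) -> Bool {
        let expanded = bounds.insetBy(dx: -20, dy: -20)
        return expanded.contains(point)
    }
}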

UIControl

UIControl inherits from UIView. Subclasses of UIControl, such as UIButton, can register for events such as taps:

button.addTarget(self, action: #selector(onClickButton), for: .touchUpInside)

UIControl has two main characteristics:

  1. UIControl intercepts the touch events it receives and does not pass them further up the responder chain.
  2. UIControl only handles UIControl.Event when it is itself the first responder (the hit-tested view).
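For completeness, a sketch of the full target-action pair behind the addTarget call above (FormViewController is hypothetical; onClickButton matches the selector used earlier):

import UIKit

class FormViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        let button = UIButton(type: .system)
        button.setTitle("Submit", for: .normal)
        button.frame = CGRect(x: 20, y: 100, width: 120, height: 44)
        // .touchUpInside fires when the finger lifts inside the control
        button.addTarget(self, action: #selector(onClickButton), for: .touchUpInside)
        view.addSubview(button)
    }

    @objc func onClickButton() {
        print("button tapped")
    }
}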

Gestures

A gesture is a higher-level encapsulation of touch events; it can correspond to a single touch event or a series of them.

Gesture recognition is therefore driven by a state machine:

[Figure: gesture recognizer state machine]

As shown in the figure, for discrete gestures, the state is relatively simple, with only three states: possible, failed and recognized.

For continuous gestures, when the touch is first recognized the state becomes began, then changed, and it keeps transitioning changed -> changed until the user's fingers leave the view, at which point the state becomes ended (recognized).
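A sketch of handling a continuous gesture by switching on these states (PanHandler is a hypothetical class; UIPanGestureRecognizer is the standard continuous recognizer):

import UIKit

class PanHandler: NSObject {
    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        switch recognizer.state {
        case .began:
            print("pan began")                    // the touch has just been recognized
        case .changed:
            print("pan changed:", recognizer.translation(in: recognizer.view))
        case .ended:
            print("pan ended")                    // fingers have left the view
        case .cancelled, .failed:
            print("pan cancelled / failed")
        default:
            break
        }
    }
}

// Usage (someView is assumed):
// let handler = PanHandler()
// someView.addGestureRecognizer(UIPanGestureRecognizer(target: handler, action: #selector(PanHandler.handlePan(_:))))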

By default, multiple gesture recognizers do not recognize simultaneously, and there is a default order. The delegate method gestureRecognizer:shouldRecognizeSimultaneouslyWithGestureRecognizer: controls whether multiple gestures may be recognized at the same time, and requireGestureRecognizerToFail: controls the order of recognition.
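A sketch of both controls in their Swift spellings, gestureRecognizer(_:shouldRecognizeSimultaneouslyWith:) and require(toFail:) (GestureCoordinator is a hypothetical class):

import UIKit

class GestureCoordinator: NSObject, UIGestureRecognizerDelegate {
    func attach(to view: UIView) {
        let pan = UIPanGestureRecognizer(target: self, action: #selector(handle(_:)))
        let pinch = UIPinchGestureRecognizer(target: self, action: #selector(handle(_:)))
        let singleTap = UITapGestureRecognizer(target: self, action: #selector(handle(_:)))
        let doubleTap = UITapGestureRecognizer(target: self, action: #selector(handle(_:)))
        doubleTap.numberOfTapsRequired = 2

        // Allow pan and pinch to be recognized at the same time via the delegate below
        pan.delegate = self
        pinch.delegate = self

        // The single tap only fires after the double tap has failed
        singleTap.require(toFail: doubleTap)

        [pan, pinch, singleTap, doubleTap].forEach(view.addGestureRecognizer)
    }

    @objc func handle(_ recognizer: UIGestureRecognizer) { /* ... */ }

    // UIGestureRecognizerDelegate
    func gestureRecognizer(_ gestureRecognizer: UIGestureRecognizer,
                           shouldRecognizeSimultaneouslyWith otherGestureRecognizer: UIGestureRecognizer) -> Bool {
        return true
    }
}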

Gestures and touch

[Figure: how touch events are delivered to gesture recognizers and the view]

By default, touchesBegan and touchesMoved events are delivered to the gesture recognizer and the view at the same time, while touchesEnded is delivered to the gesture recognizer first. If the gesture recognizer succeeds in recognizing, the view receives touchesCancelled; if recognition fails, the view receives touchesEnded.

The gesture recognizer has several properties that affect this process (see the sketch after this list):

  • delaysTouchesBegan (default false): touchesBegan/Moved are delivered to the gesture recognizer first, so that while a recognizer is still recognizing a gesture, no touch events reach the view.
  • delaysTouchesEnded (default true): likewise, the view does not receive the end of the touch until the recognition result is known, at which point cancelled/ended is sent to the view.
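A minimal sketch of setting these properties (delaysTouchesBegan, delaysTouchesEnded, and the related cancelsTouchesInView are standard UIGestureRecognizer properties):

import UIKit

let tap = UITapGestureRecognizer()

// Hold back touchesBegan/Moved from the view until recognition fails
tap.delaysTouchesBegan = true     // default is false

// Hold back the end of the touch until the recognition result is known
tap.delaysTouchesEnded = true     // default is true

// If recognition succeeds, the view receives touchesCancelled instead of touchesEnded
tap.cancelsTouchesInView = true   // default is true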
