High performance web animation and rendering principles series (4): learning summary of the “composition pipeline” talk PPT

Time: 2020-04-30

Contents

  • Abstract
    • 1. Composition pipeline
    • 2. Predefined UI layers
    • 3. What is the meaning of paint
    • 4. Advantages and disadvantages of layering
    • 5. View properties and processing methods
    • 6. Quads
    • 7. Compositor Frame
    • 8. About rasterization and rendering methods
    • 9. [Important] The difference between software rendering and hardware rendering

The sample code is hosted at: http://www.github.com/dashkeywords/blogs

Blog Park (cnblogs) address: the original blog directory of “Big history lives in the front end”

Huawei cloud community address: [the upgrade guide for front-end fighting]


The attached PPT comes from the official Chromium development documentation; for details, refer to Chromium Compositor.

I have always wanted to understand how the browser's compositing layer works, but most of the relevant Chinese-language materials focus on frameworks and development techniques, and there is really very little on this topic. Later, in the chromium official documentation, I found the PPT of a talk on the browser composition pipeline given by malaykeshav, a member of the project team, and it is very clearly written. Since I could not find a video of the talk, some parts could only be understood by my own interpretation. This post only records the key information; readers interested in the topic can get the PPT from the github repository linked at the beginning of the article (or from the attachment) and study it themselves.

Abstract

1. Composition pipeline

The composition pipeline refers to the workflow the browser uses to process compositing layers. Its basic steps are as follows:

(Slide: the basic steps of the composition pipeline)

The general flow is: the Paint step generates a list that records the drawing instructions for the page elements; that list then goes through the Raster step to be rasterized; the Compositor Frame step combines the resulting textures; and finally the Draw step displays these textures in the browser's content area.
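To make the four steps easier to hold in mind, here is a minimal, purely illustrative TypeScript sketch of the pipeline. All of the type and function names (DrawOp, DisplayItemList, compositorFrame, ...) are made up for this post and are not real Chromium APIs.

```typescript
// Hypothetical model of the four pipeline steps; none of these names
// correspond to real Chromium classes.
type DrawOp = { cmd: "rect" | "text"; args: number[] };
type DisplayItemList = DrawOp[];                                  // output of Paint
type Tile = { x: number; y: number; pixels: Uint8ClampedArray };  // output of Raster

// Paint: only record the drawing instructions of a layer, do not execute them.
function paint(layerOps: DrawOp[]): DisplayItemList {
  return [...layerOps];
}

// Raster: execute the recorded instructions, keeping the result as tiles.
function raster(list: DisplayItemList, tileSize = 256): Tile[] {
  const pixels = new Uint8ClampedArray(tileSize * tileSize * 4);
  // ...replay each DrawOp into `pixels` here (omitted)...
  void list;
  return [{ x: 0, y: 0, pixels }];
}

// Compositor Frame: gather the tiles (textures) of all layers for one frame.
function compositorFrame(layers: Tile[][]): Tile[] {
  return layers.flat();
}

// Draw: hand the frame to the screen (stubbed as a log here).
function draw(frame: Tile[]): void {
  console.log(`drawing ${frame.length} tiles`);
}

const layer = paint([{ cmd: "rect", args: [0, 0, 100, 50] }]);
draw(compositorFrame([raster(layer)]));
```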

2. Predefined UI layers

Chromium predefines several specific types of UI layers, roughly divided into the following (a small sketch summarizing them follows the list):

  • Not Drawn – a non-painted layer used for transparency or filter effects, transform transformations, or clip clipping
  • Solid Color Layer – a layer with a single solid color
  • Painted Texture Layer – a texture layer; the paint step and the subsequent rasterization task are executed for this layer
  • Transferable Resource Layer – a shared-resource layer, which may be a texture already in the GPU or a bitmap that will be sent to the GPU later
  • Surface Layer – a placeholder layer; when the layer tree is traversed from top to bottom the subtree has not been processed yet, so a placeholder is inserted first and filled in later
  • Nine Patch Layer – a layer used to implement shadows
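Purely as a memory aid, the list above can be written down as a discriminated union; the names mirror the wording of the PPT and are not real Chromium types.

```typescript
// Hypothetical summary of the predefined layer kinds listed above.
type UiLayer =
  | { kind: "not-drawn"; usedFor: "opacity" | "filter" | "transform" | "clip" }
  | { kind: "solid-color"; color: string }
  | { kind: "painted-texture"; displayItemList: unknown[] } // painted, then rasterized
  | { kind: "transferable-resource"; resource: "gpu-texture" | "future-bitmap" }
  | { kind: "surface"; note: "placeholder until the subtree is processed" }
  | { kind: "nine-patch"; usedFor: "shadow" };
```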

3. What is the meaning of paint


Each layer is composed of several views. So-called paint means adding the drawing instructions for each view's graphics to the layer's Display Item List (its list of displayable items). This list is then handed to a deferred rasterization task, which finally generates the texture of the current layer (which can be understood as the render result of that layer). For the sake of transfer performance and future incremental updates, the rasterization result is kept in the form of tiles. You can also see how the page is split into tiles in Chrome (a small sketch of per-tile rasterization follows the screenshot):

(Screenshot: page tiling shown in Chrome)
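Below is a minimal sketch of the per-tile idea, assuming a hypothetical display item list recorded during paint: the same recorded list is replayed once per tile rectangle, so each tile holds only its part of the layer. The names are invented for illustration.

```typescript
// Hypothetical per-tile rasterization: one recorded display item list is
// replayed once for every tile rectangle of the layer.
type DrawOp = { cmd: string; args: number[] };
type Tile = { x: number; y: number; pixels: Uint8ClampedArray };

function rasterizeToTiles(
  displayItemList: DrawOp[],
  layerWidth: number,
  layerHeight: number,
  tileSize = 256,
): Tile[] {
  const tiles: Tile[] = [];
  for (let y = 0; y < layerHeight; y += tileSize) {
    for (let x = 0; x < layerWidth; x += tileSize) {
      const pixels = new Uint8ClampedArray(tileSize * tileSize * 4);
      // Replay the recorded ops translated by (-x, -y) so that only the part
      // of the layer covered by this tile is drawn (execution omitted).
      void displayItemList;
      tiles.push({ x, y, pixels });
    }
  }
  return tiles;
}

// A 600x400 layer becomes a 3x2 grid of 256px tiles.
console.log(rasterizeToTiles([{ cmd: "rect", args: [0, 0, 600, 400] }], 600, 400).length); // 6
```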

4. Advantages and disadvantages of layering

(Slide: advantages and disadvantages of layering)

The advantages and disadvantages of layering are also explained here; they are basically the same as the conclusions we reached through our own reasoning earlier.

5. View properties and processing methods

The properties supported by views include clip (clipping), transform (transformation), effect (effects such as translucency or filters), and mask. They are usually processed bottom-up, in a post-order traversal of the layer tree.

clip: clipping is handled by inserting a clip layer between the parent node and the child node; it clips the render result of the subtree to a limited area, which is then merged upward into the parent node;

transform: the transformation acts on the parent node directly; by the time the traversal reaches this node its entire subtree has already been processed, so the transform can be applied to the whole result at once;

effect: effects generally act directly on the node currently being processed, and sometimes cross-dependent scenarios arise;

Page 40 of the PPT describes two different transparency-handling requirements in effect processing, which introduces the concept of a Render Surface. A Render Surface is a temporary layer: its subtree is first drawn onto this layer, and the result is then merged with the parent node. The screen itself is the root-level Render Surface.
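The sketch below is my own illustration of this bottom-up processing, under the simplifying assumption that "merging upward" can be modeled as concatenating draw commands; the group-opacity branch shows why a temporary Render Surface is needed. None of the names are real Chromium code.

```typescript
// Hypothetical layer-tree node; property names follow the PPT's wording.
interface LayerNode {
  name: string;
  children: LayerNode[];
  clip?: [number, number, number, number]; // clips the subtree's output
  transform?: string;                      // applied to the whole subtree at once
  opacity?: number;                        // an "effect"
}

// Bottom-up (post-order style) processing: a node's properties are applied
// only after its entire subtree has been gathered.
function processNode(node: LayerNode): string[] {
  let ops: string[] = [`draw ${node.name}`];
  for (const child of node.children) {
    ops = ops.concat(processNode(child)); // subtree results merge upward
  }
  if (node.clip) ops = [`clip to [${node.clip.join(", ")}]`, ...ops];
  if (node.transform) ops = [`push transform ${node.transform}`, ...ops, "pop transform"];

  // Group opacity cannot simply be applied child by child (the children would
  // then blend with each other differently), so the subtree is drawn into a
  // temporary Render Surface first and that surface is composited afterwards.
  if (node.opacity !== undefined && node.opacity < 1) {
    ops = ["begin render surface", ...ops, `composite surface at opacity ${node.opacity}`];
  }
  return ops;
}

console.log(
  processNode({
    name: "root",
    opacity: 0.5,
    transform: "translate(10, 0)",
    children: [
      { name: "a", children: [] },
      { name: "b", children: [] },
    ],
  }).join("\n"),
);
```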

6. Quads

The output of traversing and processing the layers is called quads (as the name suggests, the output is a pile of rectangular blocks). Each quad carries the resources it needs to be drawn into the target buffer, and quads can be divided into the following kinds (see the sketch after the list):

  • Solid Color – solid color
  • Texture – texture
  • Tile – tile
  • Surface – temporary drawing surface
  • Video – video frame
  • Render Pass – a placeholder for a Render Surface; the subtree of the Render Surface is drawn into the associated Render Pass
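Again just as a summary, here is a hypothetical discriminated union of the quad kinds; the render-pass case is the placeholder that the subtree of a Render Surface is drawn into. These are not real Chromium type names.

```typescript
// Hypothetical summary of the quad kinds listed above.
type Quad =
  | { kind: "solid-color"; color: string }
  | { kind: "texture"; textureId: number }
  | { kind: "tile"; tileId: number }
  | { kind: "surface"; surfaceId: number }          // temporary drawing surface
  | { kind: "video"; frameTimestamp: number }
  | { kind: "render-pass"; renderPassId: number };  // placeholder for a Render Surface

// A render pass collects the quads produced by a Render Surface's subtree.
interface RenderPass {
  id: number;
  quads: Quad[];
}
```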

7. Compositor Frame

Now the real work of compositing begins and the protagonist, the Compositor Frame (composited frame), takes the stage: it is responsible for combining the quads. Pages 59-62 of the slides show the composition process very clearly; the final output is the texture of the root node.


chromium uses a multi-process architecture. The Browser Process generates a compositor frame for the browser UI (the menu bar and other container parts), while the Render Process generates compositor frames for the page content. Both submit their output to the GPU Process, which aggregates them and produces the final, complete compositing surface; the Display Compositor then displays the resulting bitmap on the screen.
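A rough sketch of that aggregation step, with made-up names: each process submits its own compositor frame, and the display compositor in the GPU process merges them into the final surface.

```typescript
// Hypothetical compositor-frame aggregation across processes.
interface CompositorFrame {
  source: "browser-process" | "render-process";
  quads: string[]; // simplified: just labels standing in for the quads
}

// The display compositor (in the GPU process) merges the frames submitted
// by the browser UI and by each page into one final surface.
function aggregate(frames: CompositorFrame[]): string[] {
  return frames.flatMap((frame) => frame.quads.map((q) => `${frame.source}:${q}`));
}

const finalSurface = aggregate([
  { source: "browser-process", quads: ["toolbar", "tab-strip"] },
  { source: "render-process", quads: ["page-tile-0", "page-tile-1"] },
]);
console.log(finalSurface); // the complete surface that gets drawn to the screen
```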

8. About rasterization and rendering methods

The slides do not describe the rasterization process in detail, but the quads output by the layers appear to already be rasterization results. Presumably, processing the drawing instructions in the Display Item List works much like the Vertex Shader and Fragment Shader stages in WebGL, where pixel interpolation is completed automatically.
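To illustrate the automatic interpolation mentioned above, here is a minimal pair of WebGL 1.0 shaders embedded as TypeScript strings: the varying written once per vertex is interpolated by the GPU for every covered pixel, which is exactly the rasterization step. (The shaders would be compiled with the usual gl.createShader / gl.shaderSource / gl.compileShader calls.)

```typescript
// Per-vertex output: each vertex carries a color in the varying `vColor`.
const vertexShaderSource = `
  attribute vec2 aPosition;
  attribute vec3 aColor;
  varying vec3 vColor;
  void main() {
    vColor = aColor;                       // written once per vertex
    gl_Position = vec4(aPosition, 0.0, 1.0);
  }
`;

// Per-pixel input: the GPU interpolates vColor across the triangle during
// rasterization, so every fragment receives a blended value automatically.
const fragmentShaderSource = `
  precision mediump float;
  varying vec3 vColor;
  void main() {
    gl_FragColor = vec4(vColor, 1.0);      // already interpolated per pixel
  }
`;

export { vertexShaderSource, fragmentShaderSource };
```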

9. [Important] The difference between software rendering and hardware rendering

Disclaimer: this section is my personal understanding, shared only for technical discussion, with no guarantee of correctness!

The difference between software rendering and hardware rendering had always been rather abstract to me; I only knew the basic concepts. Later, in the Chrome developer documentation (which may not be accessible in China), I found some relevant descriptions in the article “Compositor Thread Architecture”, which resolved my doubts. The relevant parts are excerpted below:

Texture Upload

One challenge with all these textures is that we rasterize them on the main thread of the renderer process, but need to actually get them into the GPU memory. This requires handing information about these textures (and their contents) to the impl thread, then to the GPU process, and once there, into the GL/D3D driver. Done naively, this causes us to copy a single texture over and over again, something we definitely don’t want to do.

We have two tricks that we use right now to make this a bit faster. To understand them, an aside on “painting” versus “rasterization.”

  • Painting is the word we use for telling webkit to dump a part of its RenderObject tree to a GraphicsContext. We can pass the painting routine a GraphicsContext implementation that executes the commands as it receives them, or we can pass it a recording context that simply writes down the commands as it receives them.
  • Rasterization is the word we use for actually executing graphics context commands. We typically execute the rasterization commands with the CPU (software rendering) but could also execute them directly with the GPU using Ganesh.
  • Upload: this is us actually taking the contents of a rasterized bitmap in main memory and sending it to the GPU as a texture.

With these definitions in mind, we deal with texture upload with the following tricks:
  • Per-tile painting: we pass WebKit paint a recording context that simply records the GraphicsContext operations into an SkPicture data structure. We can then rasterize several texture tiles from that one picture.
  • SHM upload: instead of rasterizing into a void* from the renderer heap, we allocate a shared memory buffer and upload into that instead. The GPU process then issues its glTex* operations using that shared memory, avoiding one texture copy.

The holy grail of texture upload is “zero copy” upload. With such a scheme, we manage to get a raw pointer inside the renderer process’ sandbox to GPU memory, which we software-rasterize directly into. We can’t yet do this anywhere, but it is something we fantasize about.

Readers comfortable with English should find the excerpt easy to follow: the GPU processes images in the form of textures. Those unfamiliar with this can check the Three.js related posts I published earlier.

Texture upload:
One challenge with all these textures is that they are rasterized on the main thread of the renderer process (which can roughly be understood as the process of a single tab page), but they ultimately need to end up in GPU memory. This means the texture data must be handed to the compositor (impl) thread, then to the GPU process (the chromium architecture has a dedicated GPU process for working with the GPU), and finally to the underlying Direct3D or OpenGL driver (that is, the underlying graphics technology). Done naively, this would mean copying the generated texture data over and over again, which is obviously not what we want.
We currently use two small tricks to make this process a bit faster. They relate to the painting and rasterization stages.

  • Knowledge point 1!!! Painting is the term used for telling WebKit to dump part of its RenderObject tree into a GraphicsContext. The painting routine can be passed a GraphicsContext implementation that executes the drawing commands as it receives them, or a recording context that simply records all of the drawing commands (a small analogy follows after this list).
  • Knowledge point 2!!! Rasterization means actually executing the drawing commands associated with the graphics context. The rasterization task is usually performed with the CPU (that is, software rendering), but it can also be executed directly with the GPU (that is, hardware rendering).
  • Upload: the process of taking the rasterized bitmap content from main memory and sending it to the GPU as a texture. With these definitions in mind, texture upload is handled with the following tricks:
    • Per-tile painting: WebKit is given a recording context that simply records the GraphicsContext operations into an SkPicture data structure (direct software rasterization would instead produce an SkBitmap); several texture tiles can then be rasterized from that one picture.
    • SHM upload: under software rendering, instead of storing the rasterization result in the heap memory of the renderer process, a shared memory buffer is allocated and rasterized into; the GPU process then issues its texture operations directly from that shared memory, avoiding one copy of the data.
      The ideal (“holy grail”) of texture upload is zero-copy upload: with such a scheme, a raw pointer to GPU memory can be obtained even inside the sandbox of the renderer process (that is, the process rendering the web page), and software rasterization writes the bitmap result directly there.
  • Painting: this is the process of asking Layers for their content. This is where we ask webkit to tell us what is on a layer. We might then rasterize that content into a bitmap using software, or we might do something fancier. Painting is a main thread operation.
  • Drawing: this is the process of taking the layer tree and smashing it together with OpenGL onto the screen. Drawing is an impl-thread operation.
  • Painting: the process of asking the layers for their content, that is, asking WebKit to tell us what is on each layer. That content can then be rasterized into a bitmap with software, or something fancier can be done. Painting is a main-thread operation.
  • Drawing: the process of taking the layer tree and putting it onto the screen with OpenGL. It is an operation on the impl (compositor) thread.
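The difference between an "immediate" context and a "recording" context is easy to reproduce with the Canvas 2D API as an analogy. This is only an analogy, not how Skia is actually wired up: the recorded command list plays the role of the SkPicture, and replaying it with a per-tile offset corresponds to per-tile painting.

```typescript
// Analogy only: a recorded list of canvas commands stands in for an SkPicture.
type RecordedOp = (ctx: CanvasRenderingContext2D) => void;

const recording: RecordedOp[] = [];

// "Paint" with a recording context: just write the commands down, execute nothing.
function recordScene(): void {
  recording.push((ctx) => { ctx.fillStyle = "tomato"; });
  recording.push((ctx) => { ctx.fillRect(10, 10, 300, 300); });
}

// "Rasterize" one tile: replay the same recording, offset to the tile origin.
function rasterizeTile(tileX: number, tileY: number, tileSize = 256): HTMLCanvasElement {
  const tile = document.createElement("canvas");
  tile.width = tile.height = tileSize;
  const ctx = tile.getContext("2d")!;
  ctx.translate(-tileX, -tileY);           // position the recording inside this tile
  for (const op of recording) op(ctx);     // actually execute the commands now
  return tile;
}

recordScene();
const tiles = [rasterizeTile(0, 0), rasterizeTile(256, 0)]; // two tiles from one recording
console.log(tiles.length);
```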

There are a lot of concepts here, and they are hard to follow without the background, so I will try to retell them in my own words:

In [software rendering] mode, paint uses the Graphics Context directly and saves the drawing result as bitmap information in an SkBitmap instance. In [hardware rendering] mode, paint is given an SkPicture instance and the drawing commands are recorded into it rather than executed immediately; they are then passed to the GPU process via shared memory, and finally the GPU executes the drawing commands to produce the bitmap textures for multiple tiles (when OpenGL passes data from the vertex shader to the fragment shader, it interpolates the data automatically, which completes the rasterization work). Strictly speaking, pure software rendering has no real concept of compositing layers, because the final output is just a single bitmap drawn bottom-up in order, which is equivalent to drawing onto a new layer and then pasting that layer onto the existing result.

Whichever approach is used, what the paint step ultimately yields is bitmap data, and the final draw step uses OpenGL together with that bitmap data to display the graphics on the screen.

So [hardware rendering] essentially means that the renderer process writes down everything that needs to be done, together with the data required, then packages it all up and hands it to the GPU to do the work.
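To close, a small sketch of the point that pure software rendering has no real notion of compositing layers: painting every layer bottom-up into one bitmap produces the same final pixels as rasterizing each layer into its own texture and compositing the textures afterwards; only the intermediate artifacts differ. Everything here is made up for illustration (zero is treated as "transparent" to keep it tiny).

```typescript
// Each "layer" is just a list of pixel writes for this illustration.
type Layer = Array<{ index: number; value: number }>;

// Software style: everything is painted bottom-up into one bitmap.
function paintIntoSingleBitmap(layers: Layer[], size: number): Uint8ClampedArray {
  const bitmap = new Uint8ClampedArray(size);
  for (const layer of layers)                       // bottom-most layer first
    for (const { index, value } of layer) bitmap[index] = value;
  return bitmap;
}

// Compositing style: each layer gets its own texture, merged afterwards.
function compositeLayerTextures(layers: Layer[], size: number): Uint8ClampedArray {
  const textures = layers.map((layer) => {
    const tex = new Uint8ClampedArray(size);
    for (const { index, value } of layer) tex[index] = value;
    return tex;
  });
  const result = new Uint8ClampedArray(size);
  for (const tex of textures)
    tex.forEach((v, i) => { if (v !== 0) result[i] = v; });
  return result;
}

const layers: Layer[] = [[{ index: 0, value: 10 }], [{ index: 0, value: 20 }]];
console.log(paintIntoSingleBitmap(layers, 4));   // top layer wins: [20, 0, 0, 0]
console.log(compositeLayerTextures(layers, 4));  // same final pixels: [20, 0, 0, 0]
```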