[Mesh Compression Evaluation] meshquan, meshopt, Draco



  1. Introducing KHR_mesh_quantization and EXT_meshopt_compression
  2. Usage
  3. Performance (compression, loading, parsing)
  4. Availability (scenarios and scope)
  5. Shortcomings
  6. Guesses at future improvements
  7. Summary


When it comes to glTF compression, everyone thinks of Draco. While adapting GLTFLoader to mini programs, I found there are other compression extensions as well, namely KHR_mesh_quantization and EXT_meshopt_compression.



Quantization encodes data stored as floating-point numbers into integers, which is easier to compress and store, at the cost of precision. We saw a similar approach earlier in tfjs models. glTF mesh data is stored as floats: a single-precision float takes 32 bits (4 bytes), so a vertex position takes 12 bytes, texture coordinates 8 bytes, the normal 12 bytes and the tangent 16 bytes, i.e. 48 bytes of attributes per vertex. With this extension we can store positions as SHORT (8 bytes, padded) and texture coordinates as SHORT (4 bytes), and store normals (4 bytes) and tangents (4 bytes) as BYTE (everything padded to multiples of 4 bytes so the standard's alignment rules are respected), 20 bytes in total, so the quantized mesh is roughly 58% smaller.
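To make the arithmetic concrete, here is a minimal sketch of SHORT quantization round-tripping, assuming values are pre-normalized to [-1, 1] (a hypothetical helper, not the actual encoder gltfpack uses):

```javascript
// Quantize a normalized float into a signed 16-bit SHORT and back.
function quantize(value, bits) {
  const max = (1 << (bits - 1)) - 1; // 32767 for SHORT, 127 for BYTE
  return Math.round(value * max);
}

function dequantize(q, bits) {
  const max = (1 << (bits - 1)) - 1;
  return q / max;
}

const x = 0.123456789;
const q = quantize(x, 16);                      // stored in 2 bytes instead of 4
const error = Math.abs(dequantize(q, 16) - x);  // bounded by about 1/65534

// Per-vertex budget from the text: 48 bytes of FLOAT attributes become 20
const saved = 1 - 20 / 48; // roughly 0.583
```

Applying the same idea attribute by attribute is what yields the ~58% figure above.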


This extension landed in three.js in r122. The compression pipeline is shown below; the fifth step is the quantization described above.

[image: compression pipeline]


Draco lives at https://github.com/google/draco; for more detail you can read the in-depth analyses others have written. Its default compression parameters are more aggressive than the ones above.

Because mini programs' wasm support is limited, the JS fallback is large, and the number of workers is capped, Draco is a poor fit for mini programs; its JS decoder is even bigger than three.js itself.



Both KHR_mesh_quantization and EXT_meshopt_compression can be produced with the same tool, gltfpack.

> npm i -g gltfpack

# The gltfpack CLI is a C++ project compiled to wasm; expect more wasm-based tooling like this
# (its optimized Basis transcoder is compiled to wasm with AssemblyScript)

# KHR_mesh_quantization
> gltfpack -i model.glb -o out.glb

# EXT_meshopt_compression only needs an extra -cc flag
> gltfpack -i model.glb -o out.glb -cc

The advantage of gltfpack is that the quantization parameters are adjustable; for example, if normals need higher precision, raise the bit count with -vn. See gltfpack -h for the specifics.


KHR_draco_mesh_compression can be produced with gltf-pipeline.

> npm i -g gltf-pipeline
> gltf-pipeline -i model.glb -o out.glb -d

Draco's quantization parameters can also be adjusted.


Performance (compression, loading, parsing)

Next, let's compare how the compressed meshes perform.

The test models come from glTF-Sample-Models.

[image: compressed size comparison]

You can see that for ReciprocatingSaw.glb, which contains only vertex data, Draco is the best; the reason is simply that Draco's default compression parameters are far more aggressive than meshquan's. For BrainStem.glb, which contains animation, meshopt wins. For WaterBottle.glb the difference is small, because the vertex count is limited and textures account for most of the volume.

Since the default parameters differ, the comparison above favors Draco; a fair comparison needs identical quantization parameters, and only BrainStem and ReciprocatingSaw need re-testing. Align gltfpack's parameters to Draco's:

> gltfpack -i model.glb -o out.glb -vp 11 -vt 10
> gltfpack -i model.glb -o out.glb -vp 11 -vt 10 -cc

[image: compressed size comparison with aligned parameters]

Even with the parameters changed, the meshquan size stays the same: since it is now part of the standard, its quantization parameters are fixed and cannot be tuned, while meshopt improves slightly.

Decoder loading

To adopt a compression scheme we must compare not just compressed size but also how hard the decoder is to load (on the web, in mini programs, and so on); the figures below do not even count the 10+ KB of DRACOLoader itself.

[image: decoder size comparison]

Note that meshopt ships two wasm decoder builds: a baseline version and a SIMD version.

[image: meshopt decoder builds]

However, there is no official asm.js build, so you need to convert one yourself with binaryen/wasm2js, or use my converted meshopt_decoder.asm.module.js.

At the decoder level, meshopt feels like the winner.
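For reference, wiring all three into three.js (r122+) looks roughly like this; the import paths are assumptions and depend on your bundler setup:

```javascript
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';
import { MeshoptDecoder } from 'three/examples/jsm/libs/meshopt_decoder.module.js';

const loader = new GLTFLoader();

// Draco: the decoder (wasm plus JS glue) is fetched from this path on first use
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/draco/');
loader.setDRACOLoader(dracoLoader);

// meshopt: the decoder is embedded in the module itself, no extra request
loader.setMeshoptDecoder(MeshoptDecoder);

// KHR_mesh_quantization needs no decoder at all
loader.load('out.glb', (gltf) => scene.add(gltf.scene)); // `scene` is your THREE.Scene
```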

Parsing time comparison

The models here are compressed with default parameters; the power mode is set to high performance so the CPU stays at its highest frequency; each number is the mean of five runs; all decoders are the wasm builds, with meshopt using the SIMD build; Chrome 88.

[image: parsing time comparison]

Loading meshquan and meshopt models takes much less time than uncompressed, while Draco takes much more. If you factor in decoder loading, look at the first load: of the three extensions only Draco is affected, because only Draco needs to fetch its wasm from the network the first time a glTF is loaded. The meshopt decoder is serialized into a string inside the JS itself, so it only needs unpacking at use time, with no network request.

The parsing-time comparison for the asm.js version is in the availability section below.

Effect comparison before and after compression

In principle, besides the performance numbers above, we should also check whether the model looks wrong after compression. That is hard to quantify; in the end the designer decides what is acceptable. For an intuitive comparison I wrote the compressed-model-diff tool.

It offers three comparison modes plus a wireframe comparison, and can be used online.

[images: compressed-model-diff comparison modes]

Availability (scenarios and scope)

Which scenarios are viable mainly depends on the decoder's size and how hard it is to load.

[image: availability matrix]

Draco is clearly awkward in mini programs, while meshquan needs no decoder at all, so its availability is the best; meshopt only needs its asm.js build to be compatible with mini programs on iOS.

So if the same model must serve every platform, Draco is a poor fit on the platforms where it cannot load. You could of course ship a platform-specific model for each target, but then the visual result has to be tuned per platform, which works against having one unified model across platforms.

So here is a supplementary look at the decode performance of the meshopt asm.js version.

[image: meshopt asm.js decode times]

But there is a strange finding: the first parse takes far longer, the third is already close to uncompressed performance, and the fifth is close to wasm performance. Why? Could this be a potential optimization?

First, let's check Firefox, the first browser to support asm.js: does the same pattern appear there?

[image: Firefox decode times]

In Firefox, JS execution fares much better than in Chrome while wasm brings no extra benefit, yet a similar pattern appears. Does that mean the decoder needs a warm-up, to tell the browser this code deserves special optimization?

So I tried warming up: load a 1.21 KB triangle-meshopt.glb five times, then load the test model and record the numbers.

[image: decode times after warm-up]

It seems to have no effect. Presumably the warm-up count is not enough: triangle-meshopt.glb has only three vertices, so five loads execute the hot path only 15 times, nowhere near the right order of magnitude. I found the answer in "Does web 3D need WebAssembly?"

Because Chrome's V8 engine does JIT (just-in-time) compilation, JS execution performance approaches wasm (for a web 3D program, the performance-hotspot logic runs many times in the main loop). After enough executions V8 identifies the hot code and optimizes it, and subsequent executions run the optimized code directly.

Loading the asm.js decoder for the first time takes 1.5 to 2.3 times as long as uncompressed\
In Chrome, the first asm.js load takes 3.08 to 4.4 times as long as uncompressed

Since this is the WeChat mini program on iOS and the iPhone's CPU is strong, that is acceptable. If a reliable warm-up method could be found, the first load would cost the same as the fifth.


Shortcomings

After all, quantization is lossy compression: precision is sacrificed. But as the KHR_mesh_quantization introduction itself says, that is a deliberate trade-off between precision and size:

Vertex attributes are usually stored using FLOAT component type. However, this can result in excess precision and increased memory consumption and transmission size, as well as reduced rendering performance.

The disadvantage of KHR_mesh_quantization is that its quantization parameters are fixed and cannot be changed; the advantage is that no extra decoder is needed. EXT_meshopt_compression, its upgraded form, can customize the quantization parameters, for example spending more bits when normals need higher precision.

KHR_draco_mesh_compression is the famous one. In this evaluation it achieved the highest compression ratio on the pure-vertex model, but its decoder size and decoding performance are unremarkable.

KHR_mesh_quantization and EXT_meshopt_compression both use fixed transformations. If the model itself is tiny, the range of floating-point values used is also tiny, and the relative precision loss becomes large. Besides spending more bits, another fix is to scale the model up so vertices sit farther apart, then scale it back down at render time. A textual description may not be intuitive, so look at the animations:

[gif: comparison at original scale]

With the model scaled up 10x before quantization, the color difference for the same model is barely noticeable

[gif: comparison with the model scaled up 10x]

The remaining difference is negligible, though a slight texture offset persists below.
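The effect of the scaling trick can be sketched with a toy calculation (the numbers are illustrative, not from the benchmark):

```javascript
// Round-trip one coordinate through SHORT quantization, optionally
// scaling the model up first and back down after decoding.
function roundTrip(value, scale) {
  const q = Math.round(value * scale * 32767); // quantize the scaled value
  return q / 32767 / scale;                    // dequantize, undo the scale
}

const v = 0.0042; // a coordinate in a tiny model, well under one unit wide
const errPlain = Math.abs(roundTrip(v, 1) - v);
const errScaled = Math.abs(roundTrip(v, 100) - v); // pre-scaled 100x
console.log(errPlain > errScaled); // the representable step shrinks with the scale
```

The caveat is that scaled coordinates must still fit the SHORT range, which is why tooling would have to pick the scale from the model's bounding box.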

Guesses at future improvements

The next improvement I would guess at: dynamic quantization plus remapping. By eye, this should further improve both precision and compression ratio, at the cost of longer decode times.

For example, map the model's boundingBox.xyz extents to 0-1 and then to integers of a custom byte width; sub-meshes inside the model could keep their own mappings, x, y and z could each be mapped separately, and so on.
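A hypothetical sketch of that idea: derive the mapping from the mesh's own bounding box so the full integer range is always used (the function name and byte width are made up for illustration):

```javascript
// Map positions from their own [min, max] range onto the full unsigned
// 16-bit range; min and extent must ship with the data for decoding.
function quantizePositions(positions, bits = 16) {
  const max = (1 << bits) - 1;
  let lo = Infinity, hi = -Infinity;
  for (const p of positions) { lo = Math.min(lo, p); hi = Math.max(hi, p); }
  const extent = hi - lo || 1; // avoid dividing by zero for flat axes
  const q = positions.map(p => Math.round(((p - lo) / extent) * max));
  return { q, lo, extent, decode: i => (q[i] / max) * extent + lo };
}

const { q, decode } = quantizePositions([0.001, 0.005, 0.009]);
// The bounds map to the ends of the integer range, so no precision
// is wasted no matter how small the model is.
```

Per-axis or per-submesh variants would just run this mapping independently for each range, at the cost of storing more offsets.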

Summary

Of course, this is just an idea. The real purpose of this article was to vet a very promising extension, meshopt, and verify whether it is usable in our project; the testing itself was a worthwhile exploration.


