# Lua performance optimization tips (III): tables

Time: 2022-5-1

In general, you don’t need to know how Lua implements tables in order to use them. In fact, Lua goes to some length to hide its internal implementation details. But those details determine the performance cost of table operations, so understanding a little about how tables are implemented is very helpful when optimizing a program that uses them (which is to say, virtually any Lua program).

Lua’s tables are implemented with some clever algorithms. Every Lua table keeps its contents in two parts: an array part and a hash part. The array part stores the entries whose keys are the integers from 1 to a certain n (we will discuss how this n is computed shortly). All other entries (including integer keys outside that range) go into the hash part.
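As a rough illustration (the split is an internal detail, and where a key actually lands can shift as the table is rehashed; the comments below describe the typical placement, not a guarantee):

```lua
local t = {}
t[1] = "a"       -- integer key in 1..n: array part
t[2] = "b"       -- array part
t[1000] = "c"    -- integer key far outside 1..n: hash part
t.name = "d"     -- string key: hash part
t[2.5] = "e"     -- non-integer numeric key: hash part
```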

As its name suggests, the hash part stores and looks up keys with a hashing algorithm. It uses what is called open addressing, which means that all elements live directly in the hash array itself. A hash function maps each key to an index; when there is a collision (that is, two keys hash to the same index), the colliding keys are chained in a linked list whose nodes are slots of that same array. When Lua needs to insert a new key into a table but the hash array is full, Lua performs a rehash. The first step of a rehash is to decide the new sizes of the array part and the hash part. To do so, Lua traverses all entries, counting and classifying them, and then picks for the array part the largest power of 2 such that more than half of its slots would be occupied; the hash part then gets the smallest power of 2 large enough to hold all the remaining entries.
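The array-size selection can be sketched roughly as follows. This is a simplified model of the logic in Lua’s `ltable.c`; the function name and table layout here are illustrative, not Lua’s actual internals:

```lua
-- nums[i] counts the integer keys k with 2^(i-1) < k <= 2^i
-- (so nums[0] counts key 1); ntotal is the total number of integer keys.
-- Returns the array-part size: the largest power of 2 such that more
-- than half of the slots 1..size would be occupied.
local function compute_array_size(nums, ntotal)
  local size = 0      -- chosen array size
  local a = 0         -- number of integer keys <= twotoi seen so far
  local twotoi = 1    -- 2^i
  local i = 0
  while twotoi / 2 < ntotal do
    a = a + (nums[i] or 0)
    if a > twotoi / 2 then
      size = twotoi   -- more than half full: this size is acceptable
    end
    i = i + 1
    twotoi = twotoi * 2
  end
  return size
end
```

For integer keys 1, 2, and 3 (that is, `nums = {[0] = 1, [1] = 1, [2] = 1}` and `ntotal = 3`), this returns 4, matching the growth sequence described below.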

When Lua creates an empty table, both parts have size 0, so no arrays are allocated for it. Let’s look at what happens when the following code runs:


```lua
local a = {}
for i = 1, 3 do
  a[i] = true
end
```

This code begins by creating an empty table. In the first iteration of the loop, the assignment


```lua
a[1] = true
```

triggers a rehash; Lua sets the size of the array part to 1, while the hash part remains empty. In the second iteration, the assignment


```lua
a[2] = true
```

triggers another rehash, growing the array part to size 2. Finally, the third iteration triggers one more rehash, growing the array part to size 4.

The following code


```lua
a = {}
a.x = 1; a.y = 2; a.z = 3
```

does something similar, but grows the hash part instead.

For large tables, the cost of these initial rehashes is amortized over the creation of the whole table: while a table with three elements needs three rehashes, a table with one million elements needs only about twenty. But when you create thousands of small tables, the rehashes have a very visible performance impact.

Older versions of Lua preallocated some slots (four, if I remember correctly) when creating an empty table, to avoid this overhead when initializing small tables. But such an implementation wastes memory. If you create millions of points, for example (each represented as a table with two entries), and each one uses twice the memory it actually needs, you pay a high price. That is why Lua no longer preallocates arrays for new tables.

If you are programming in C, you can avoid these rehashes with Lua’s API function `lua_createtable`. Besides the ubiquitous `lua_State`, it takes two arguments: the initial size of the array part and the initial size of the hash part [1]. By supplying appropriate sizes, you can avoid the rehashes during initialization. Beware, however, that Lua only shrinks a table when rehashing it, so if your initial sizes are too large, Lua may never correct the wasted space.
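From C, the preallocation looks like the sketch below. It assumes Lua’s headers are available and that a `lua_State *L` has been set up elsewhere; `make_point` is an illustrative name, not part of Lua’s API:

```c
#include <lua.h>
#include <lauxlib.h>

/* Build a point table {x = ..., y = ...} without triggering a rehash:
   0 preallocated array slots, 2 preallocated hash slots. */
static void make_point(lua_State *L, lua_Number x, lua_Number y) {
    lua_createtable(L, 0, 2);   /* narr = 0, nrec = 2 */
    lua_pushnumber(L, x);
    lua_setfield(L, -2, "x");
    lua_pushnumber(L, y);
    lua_setfield(L, -2, "y");
    /* the new table is left on top of the stack */
}
```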

When programming in Lua, you can sometimes use constructors to avoid those initial rehashes. When you write


```lua
{true, true, true}
```

Lua knows that the array part of the table will have three entries, so it creates an array of that size. Similarly, if you write


```lua
{x = 1, y = 2, z = 3}
```

Lua creates the hash part with size 4. For example, the following code takes 2.0 seconds to run:


```lua
for i = 1, 1000000 do
  local a = {}
  a[1] = 1; a[2] = 2; a[3] = 3
end
```

If you give the table its correct size when creating it, the time drops to 0.7 seconds:


```lua
for i = 1, 1000000 do
  local a = {true, true, true}
  a[1] = 1; a[2] = 2; a[3] = 3
end
```

But if you write something like


```lua
{[1] = true, [2] = true, [3] = true}
```

Lua is not smart enough to detect that the given expressions (numeric literals, in this case) describe array indices, so it creates the hash part with size 4 to store the three entries, wasting memory and CPU time.

The sizes of the two parts are recomputed only when the table rehashes, which happens only when the table is completely full and Lua needs to insert a new element. As a consequence, if you traverse a table and erase all of its entries (that is, set them all to nil), the table does not shrink. If you then insert some new elements, however, the sizes will be adjusted. Usually this is not a problem, but do not expect to reclaim memory by erasing a table’s entries: it is better to release the table itself.
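For instance, a sketch of the difference (`collectgarbage("count")` reports the heap size in kilobytes, and exact numbers vary by Lua version, so none are given here):

```lua
local t = {}
for i = 1, 1000000 do t[i] = i end

for i = 1, 1000000 do t[i] = nil end   -- erase every entry...
-- ...but t still owns its large internal arrays at this point

t = nil                    -- drop the table itself instead
collectgarbage("collect")  -- now the whole table can be reclaimed
print(collectgarbage("count"))
```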

A dirty trick to force a rehash is to insert enough nils into the table. For example:


```lua
a = {}
lim = 10000000
for i = 1, lim do a[i] = i end              -- create a huge table
print(collectgarbage("count"))              --> 196626
for i = 1, lim do a[i] = nil end            -- erase all elements
print(collectgarbage("count"))              --> 196626
for i = lim + 1, 2 * lim do a[i] = nil end  -- create lots of nil entries
print(collectgarbage("count"))              --> 17
```

I don’t recommend this trick except in very special cases: it is slow, and there is no easy way to know how many nils are enough.

You may wonder why Lua does not shrink a table when we erase its entries. First, to avoid testing every value being stored: checking for nil on each assignment would slow down all assignments. Second, and most importantly, it allows entries to be set to nil while the table is being traversed. Consider the following loop:


```lua
for k, v in pairs(t) do
  if some_property(v) then
    t[k] = nil   -- erase that element
  end
end
```

If Lua rehashed the table after each nil assignment, it would wreck the traversal.

If you want to clear all the elements in a table, simply traverse it:


```lua
for k in pairs(t) do
  t[k] = nil
end
```

A “smart” alternative solution:


```lua
while true do
  local k = next(t)
  if not k then break end
  t[k] = nil
end
```

However, for large tables this loop is terribly slow. When called without a previous key, the function next returns the table’s first element (in some internal order). To do so, it traverses the table’s internal arrays from the beginning, looking for a non-nil element. Because the loop keeps erasing the first element it finds, next takes longer and longer to locate the first non-nil element. As a result, the “smart” loop takes 20 seconds to erase a table with 100,000 elements, while the traversal using pairs takes only 0.04 seconds.
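The gap can be measured with a rough benchmark like the sketch below (absolute timings depend heavily on the machine and Lua version, so no expected output is shown):

```lua
local N = 100000
local t = {}
for i = 1, N do t[i] = i end

-- Erase with pairs: each step continues from the previous key
local start = os.clock()
for k in pairs(t) do t[k] = nil end
print("pairs loop:", os.clock() - start)

-- Refill, then erase by repeatedly asking next for the first element
for i = 1, N do t[i] = i end
start = os.clock()
while true do
  local k = next(t)
  if not k then break end
  t[k] = nil
end
print("next-from-start loop:", os.clock() - start)
```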

[1] Although the rehash algorithm always sets the array part’s size to a power of 2, the array part may actually have any natural number as its size. The hash part, however, must have a power-of-2 size, so the second argument is always rounded up to the smallest power of 2 not smaller than the given value.
