How to use multithreading efficiently in iOS


1、 Multithreading

A thread is the smallest unit of program execution. A thread includes a unique ID, a program counter, a register set, and a stack. A process can contain multiple threads, which share the process's global variables and heap data.

The PC (program counter) holds the address of the current instruction; a program runs by repeatedly updating the PC, so a thread can execute only one instruction at a time. Threads and processes are virtual concepts, but the PC is a real register inside a CPU core. It follows that one CPU core can execute only one thread at any given moment.

Whether on a multiprocessor or a multi-core device, developers usually only need to care about the number of CPU cores, not their physical composition. Since the number of cores is limited, the number of threads a device can truly execute in parallel is limited too. When the number of threads exceeds the number of cores, a single core has to service multiple threads in turn; this behavior is called thread scheduling.

At its simplest, thread scheduling means a CPU core lets each thread run for a time slice in turn. There is also more complex logic involved, which is analyzed later.
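Since the core count bounds true parallelism, it is often the first thing to query when sizing a thread pool. A minimal portable sketch using POSIX `sysconf` (on iOS the same number is also exposed through `NSProcessInfo.activeProcessorCount`):

```c
#include <unistd.h>

// Number of CPU cores currently online, i.e. an upper bound on how
// many threads can truly run at the same moment.
long cpu_core_count(void) {
    return sysconf(_SC_NPROCESSORS_ONLN);
}
```

A thread pool larger than this number guarantees scheduling and context switches; a pool near this size keeps the cores busy without them.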

2、 Optimization of multithreading

In mobile development, the system is complex enough that developers cannot expect all threads to execute concurrently. Moreover, developers do not know when XNU switches kernel threads or when it schedules user threads. Therefore, developers should always take thread scheduling into account.

1. Reduce thread switching

When the number of threads exceeds the number of CPU cores, the scheduler switches between user-mode threads, which means a context switch (saving and restoring registers, the stack, and so on), and too many context switches bring real resource overhead. Even though switching kernel threads is not a performance burden in theory, we should still try to reduce thread switching in practice.

2. Thread priority trade-offs

Generally speaking, besides round-robin rotation, thread schedulers also use priority schemes: higher-priority threads get to run earlier. Two concepts need to be clear:

  • IO-intensive threads: threads that wait frequently, giving up their time slice while they wait.
  • CPU-intensive threads: threads that rarely wait, occupying the CPU for long stretches.

In one particular scenario, when multiple CPU-intensive threads occupy all CPU resources and also hold higher priority, lower-priority IO-intensive threads keep waiting indefinitely, a phenomenon called thread starvation. To avoid starvation, the system gradually raises the priority of "neglected" threads; IO-intensive threads usually earn this priority boost more easily than CPU-intensive ones.

Although the system does this automatically, it still costs waiting time and may hurt the user experience. The author therefore believes developers should balance priorities in two ways:

  • Give IO-intensive threads higher priority than CPU-intensive threads.
  • Give more urgent tasks higher priority.

For example, consider this scene: a large number of asynchronous image-decompression tasks whose results do not need to be shown to the user immediately, alongside a large number of asynchronous disk-cache queries whose results must be fed back to the user as soon as they complete.

Image decompression is CPU-intensive, while querying the disk cache is IO-intensive, and the latter is the more urgent of the two. Therefore the image-decompression threads should get lower priority, and the disk-cache threads higher priority.

It is worth noting that this matters because a large number of asynchronous tasks means the CPU is likely at full load; if CPU resources are plentiful, there is no need to tune priorities at all.
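On iOS this trade-off is normally expressed with GCD QoS classes (e.g. `QOS_CLASS_UTILITY` for decompression vs `QOS_CLASS_USER_INITIATED` for the cache queries). As a portable illustration of the same idea, here is a minimal two-level task queue sketch, where urgent tasks always run before background ones; the names `sched_t`, `sched_submit`, and `sched_run_one` are made up for this example:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct task { void (*fn)(void *); void *arg; struct task *next; } task_t;

typedef struct {
    task_t *urgent, *background;   // two priority levels
    pthread_mutex_t mu;
} sched_t;

static void push(task_t **list, void (*fn)(void *), void *arg) {
    task_t *t = malloc(sizeof *t);
    t->fn = fn; t->arg = arg; t->next = NULL;
    while (*list) list = &(*list)->next;   /* append at tail */
    *list = t;
}

void sched_submit(sched_t *s, int urgent, void (*fn)(void *), void *arg) {
    pthread_mutex_lock(&s->mu);
    push(urgent ? &s->urgent : &s->background, fn, arg);
    pthread_mutex_unlock(&s->mu);
}

// Run one pending task, always preferring the urgent queue; returns 0 if idle.
int sched_run_one(sched_t *s) {
    pthread_mutex_lock(&s->mu);
    task_t **src = s->urgent ? &s->urgent : &s->background;
    task_t *t = *src;
    if (t) *src = t->next;
    pthread_mutex_unlock(&s->mu);
    if (!t) return 0;
    t->fn(t->arg);
    free(t);
    return 1;
}

static char order[8];
static int pos;
static void mark(void *c) { order[pos++] = *(char *)c; }

// Demo: a background task is submitted first, an urgent one second;
// the urgent one still runs first.
const char *demo_order(void) {
    sched_t s = { NULL, NULL, PTHREAD_MUTEX_INITIALIZER };
    static char b = 'b', u = 'u';
    pos = 0;
    sched_submit(&s, 0, mark, &b);
    sched_submit(&s, 1, mark, &u);
    while (sched_run_one(&s)) {}
    order[pos] = '\0';
    return order;
}
```

Note this is a software priority queue, not kernel thread priority; it sketches the ordering idea rather than how GCD is implemented.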

3. Optimization of main-thread tasks

Some business logic can only run on the main thread, such as initializing and laying out UI components. There is actually plenty of room for optimization here; most performance work in the industry aims at reducing pressure on the main thread, which may seem to fall outside the scope of multithreaded optimization. Here are some points about managing main-thread tasks:

  • Memory reuse

Reusing memory reduces the time cost of allocation, and is widely used in system UI components, such as UITableViewCell reuse. Less allocation also means less deallocation, saving CPU resources.
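The mechanism behind such reuse can be sketched as a free list: a finished object is parked instead of freed, and handed back out on the next request. A minimal illustration (the `pool_*` names are made up; UIKit's real cell-reuse machinery is more elaborate):

```c
#include <stdlib.h>

typedef struct node { struct node *next; char payload[256]; } node_t;

static node_t *free_list = NULL;   // parked, reusable objects
static int fresh_allocs = 0;       // counts real allocations

node_t *pool_acquire(void) {
    if (free_list) {               /* reuse: no allocation at all */
        node_t *n = free_list;
        free_list = n->next;
        return n;
    }
    fresh_allocs++;
    return calloc(1, sizeof(node_t));
}

void pool_release(node_t *n) {     /* park instead of free() */
    n->next = free_list;
    free_list = n;
}

// Demo: release then re-acquire yields the same object, with only
// one real allocation having happened.
int pool_demo(void) {
    node_t *a = pool_acquire();
    pool_release(a);
    node_t *b = pool_acquire();
    return (a == b) && (fresh_allocs == 1);
}
```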

  • Lazy loading task

Since UI components must be created on the main thread, create them lazily, only when they are actually needed, rather than up front. Swift's copy-on-write follows a similar idea: defer the expensive work until it is unavoidable.
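Lazy initialization can be made thread-safe with `pthread_once`, which guarantees the builder runs exactly once no matter how many callers race. A minimal sketch, where the "expensive component" is stood in for by a heap integer:

```c
#include <pthread.h>
#include <stdlib.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;
static int *shared;          /* stands in for an expensive component */
static int init_count = 0;   /* proves the builder ran only once */

static void build(void) {
    init_count++;
    shared = malloc(sizeof(int));
    *shared = 42;
}

// First call pays the construction cost; later calls are cheap.
int *lazy_get(void) {
    pthread_once(&once, build);
    return shared;
}

int lazy_demo(void) {
    int *p = lazy_get();
    int *q = lazy_get();
    return (p == q) && (init_count == 1) && (*p == 42);
}
```

On Apple platforms, `dispatch_once` plays the same role.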

  • Splitting tasks into batches across runloop cycles

Split a large batch of work by listening for the notification that the runloop is about to end, executing only a small number of tasks in each runloop cycle. Before applying this rather extreme optimization, though, consider whether the tasks could simply be moved to an asynchronous thread instead.
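The batching logic itself is simple; the runloop-observer part is Apple-specific (a `CFRunLoopObserver` for `kCFRunLoopBeforeWaiting`), but the splitting can be sketched portably. Here each "tick" stands in for one runloop pass and handles at most a fixed number of items; `batch_t` and `batch_tick` are made-up names:

```c
#define TASKS_PER_TICK 3

typedef struct {
    int next, total;   // progress through the batch
    int processed;     // work actually done
} batch_t;

// One runloop pass: do at most TASKS_PER_TICK units of work.
// Returns 1 while work remains (i.e. schedule another tick).
int batch_tick(batch_t *b) {
    for (int i = 0; i < TASKS_PER_TICK && b->next < b->total; i++, b->next++)
        b->processed++;            /* one small unit of work */
    return b->next < b->total;
}

// Demo: 10 tasks at 3 per tick take 4 ticks.
int batch_demo(void) {
    batch_t b = { 0, 10, 0 };
    int ticks = 0;
    do { ticks++; } while (batch_tick(&b));
    return (ticks == 4) && (b.processed == 10);
}
```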

  • Executing tasks when the main thread is idle
//Here is the main thread context
dispatch_async(dispatch_get_main_queue(), ^{
    //This block is queued behind the current work, so it executes
    //once the main thread has finished what it is doing
});

3、 About “lock”

Multithreading brings thread-safety problems. When atomic operations cannot meet the business requirements, we often need various "locks" to keep memory reads and writes safe.

Commonly used locks include mutexes, read-write locks, and spin locks. In general iOS development, the mutexes pthread_mutex_t and dispatch_semaphore_t, plus the read-write lock pthread_rwlock_t, can meet most requirements, and their performance is good.

When a thread fails to acquire a lock, it can be in one of two states:

  • Spinning (busy-wait): the thread loops doing nothing, polling repeatedly, and takes the lock the instant it becomes available.
  • Suspended: the thread is put to sleep, and another thread must wake it when the lock becomes available.

Waking a suspended thread takes a relatively long time, while a spinning thread consumes CPU the whole time it spins: the longer the wait, the more it wastes. Therefore spinning suits short critical sections, and suspension suits long ones.

In fact, mutexes and read-write locks already behave like spin locks to a degree: when they fail to acquire the lock, they spin for a while before suspending. A spin lock also does not spin forever; after a certain spin time it suspends too. Therefore a dedicated spin lock is rarely necessary. Casa Taloyum has a detailed explanation of this on his blog.
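To make the "spinning" half of the behavior concrete, here is a minimal spin lock built on a C11 `atomic_flag`; the failing thread busy-waits instead of suspending. This is illustration only: on Apple platforms today you would reach for `os_unfair_lock` rather than roll your own.

```c
#include <pthread.h>
#include <stdatomic.h>

static atomic_flag lock_flag = ATOMIC_FLAG_INIT;
static long spin_counter = 0;

static void spin_lock(void) {
    /* loop until the flag was previously clear; CPU burns here */
    while (atomic_flag_test_and_set_explicit(&lock_flag, memory_order_acquire))
        ;
}

static void spin_unlock(void) {
    atomic_flag_clear_explicit(&lock_flag, memory_order_release);
}

static void *spin_adder(void *arg) {
    (void)arg;
    for (int i = 0; i < 50000; i++) {
        spin_lock();
        spin_counter++;            /* short critical section: spinning is fine */
        spin_unlock();
    }
    return NULL;
}

// Two threads contend; the spin lock still yields an exact count.
long spin_demo(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, spin_adder, NULL);
    pthread_create(&b, NULL, spin_adder, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return spin_counter;
}
```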

1. Priority inversion with OSSpinLock

The concept of priority inversion: take two threads A and B with priority A < B. If A holds the lock while accessing the shared resource and B then tries to acquire it, B busy-waits; the longer it busy-waits, the more CPU it consumes. Because A has lower priority than B, A cannot win CPU time against the higher-priority thread, so its task, and with it the release of the lock, is delayed. Both "priority ceiling" and "priority inheritance" address priority inversion; the core operation of each is to raise the priority of the thread currently accessing the shared resource.

2. Avoid deadlock

A very common scenario is a deadlock caused by the same thread acquiring the same lock twice, which a recursive lock handles: initialize a pthread_mutex_t with a mutex attribute whose type is set to PTHREAD_MUTEX_RECURSIVE (via pthread_mutexattr_settype()) and it gains the recursive behavior.

Using pthread_mutex_trylock() and similar non-blocking methods can also effectively help avoid deadlock.

3. Minimize locking tasks

Developers should understand the business well enough to keep the code region inside the lock as small as possible, and should never use the lock to protect code that has no thread-safety issue, so that the lock performs well under concurrency.
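The pattern in a minimal sketch: do the expensive, thread-local work outside the lock, and hold the lock only for the shared-state write (`expensive` stands in for real work):

```c
#include <pthread.h>

static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static int shared_total = 0;

static int expensive(int x) { return x * x; }   /* no shared state touched */

void accumulate(int x) {
    int r = expensive(x);        /* computed OUTSIDE the lock */
    pthread_mutex_lock(&mu);     /* lock held only for the shared write */
    shared_total += r;
    pthread_mutex_unlock(&mu);
}

int accumulate_demo(void) {
    accumulate(3);
    accumulate(4);
    return shared_total;         /* 9 + 16 */
}
```

Had `expensive` been called inside the lock, every other thread would have waited through it for no safety benefit.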

4. Always pay attention to the safety of non-reentrant methods

A reentrant method can be used freely. If a method is not reentrant, developers should pay extra attention: think about whether multiple threads can reach it, and if so, protect it with a lock.

5. Compiler over optimization

To improve efficiency, the compiler may keep a variable in a register and defer writing it back to memory for later reuse. Since one line of code compiles to multiple instructions, other threads may read or write the variable during the window in which the register copy has not yet been written back. Also for efficiency, the compiler may reorder instructions that it believes are independent of each other.

Both of these can make code thread-unsafe even where locks are used reasonably. The volatile keyword addresses this class of problem: it prevents the compiler from caching the variable in a register without writing it back promptly, and it prevents the compiler from reordering instructions that access the volatile-qualified variable.

The atomic increment function applies volatile similarly in its signature: `int32_t OSAtomicIncrement32(volatile int32_t *__theValue)`.
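The classic use of volatile is a cross-thread stop flag: without volatile, the compiler may cache `stop` in a register and the worker could spin forever; volatile forces a fresh memory load on every check. A minimal sketch (in modern code, C11/ObjC atomics are the preferred tool for this):

```c
#include <pthread.h>

static volatile int stop = 0;   /* volatile: re-read from memory each loop */
static long spins = 0;

static void *worker(void *arg) {
    (void)arg;
    while (!stop)               /* exits once another thread flips the flag */
        spins++;
    return NULL;
}

int volatile_demo(void) {
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    stop = 1;                   /* main thread flips the flag */
    pthread_join(t, NULL);      /* worker observes it and returns */
    return 1;
}
```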

6. CPU out-of-order execution

The CPU may also reorder instructions to improve efficiency, which can make even locked code unsafe. A memory barrier solves this kind of problem: memory operations are not reordered across the barrier, and when the CPU crosses it, pending register values are flushed back to their variables.
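A memory-barrier sketch using the portable C11 fence (older iOS code used `OSMemoryBarrier`; today C11/ObjC atomics are preferred). The release fence keeps the data write from being reordered past the flag; the acquire fence keeps the data read from being hoisted before it:

```c
#include <stdatomic.h>

static int data = 0;
static atomic_int ready = 0;

void producer(void) {
    data = 42;                                  /* plain write */
    atomic_thread_fence(memory_order_release);  /* data is visible before flag */
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

int consumer(void) {
    while (!atomic_load_explicit(&ready, memory_order_relaxed))
        ;                                       /* wait for the flag */
    atomic_thread_fence(memory_order_acquire);  /* flag is seen before data */
    return data;
}

int fence_demo(void) {
    producer();
    return consumer();
}
```

Without the fences, a consumer on another core could observe `ready == 1` while still reading a stale `data`.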

The above covers the details of how to use iOS multithreading efficiently. For more information about iOS multithreading, please follow other related articles!
