Thread safety of multiple network requests in iOS



There is a common scenario in iOS network programming: we need to run two requests in parallel, and only proceed to the next step after both have succeeded. Below are some common approaches, each of which is easy to misuse:

  • DispatchGroup: group multiple requests with GCD's DispatchGroup, then use DispatchGroup.wait() or DispatchGroup.notify() to continue once they have all completed.
  • OperationQueue: wrap each request in an Operation, add the operations to an OperationQueue, and control the execution order through the dependencies between them.
  • Serial DispatchQueue: avoid data races by funneling all access through a serial queue, or by guarding shared state with an NSLock, so that asynchronous, multithreaded code can still access it safely.
  • Third-party libraries: futures/promises and reactive programming provide a higher-level abstraction over concurrency.
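To make the first option concrete, here is a minimal DispatchGroup sketch. The simulated work closures are placeholders for real network calls, and the queue label is arbitrary:

```swift
import Dispatch

// Two simulated "requests" run in parallel; the group's notify
// callback fires only after both of them have called leave().
let group = DispatchGroup()
let stateQueue = DispatchQueue(label: "state")  // serial queue guards shared state
var results: [String] = []

for name in ["request1", "request2"] {
    group.enter()
    DispatchQueue.global().async {
        // ...perform the real network request here...
        stateQueue.async {
            results.append(name)  // only ever mutated on stateQueue
            group.leave()
        }
    }
}

group.notify(queue: stateQueue) {
    // Both requests have completed; it is safe to read `results` here.
    print("both requests finished: \(results)")
}
```

The serial `stateQueue` is what prevents the two completion paths from racing on `results`; forgetting a balanced `enter()`/`leave()` pair is the classic mistake with this pattern.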

After years of practice, I have come to realize that all of the above approaches have drawbacks, and the third-party libraries in particular are difficult to use correctly.

Challenges in concurrent programming

Concurrent code is hard to reason about: most of the time we read code the way we read a story, from the first line to the last. When the logic of the code is not linear, it becomes harder to understand. Even in a single-threaded environment, debugging and tracing execution across multiple classes and frameworks is a headache; in a multithreaded environment it can feel impossible.

Data races: in a multithreaded environment, concurrent reads of the same memory are safe, but a write that overlaps with any other read or write is not. If multiple threads write to the same memory at the same time, a data race occurs, leading to corrupted or inconsistent data.

It is not easy to understand the dynamic behavior of a multithreaded program, and it is even harder to pinpoint which threads are racing. Although a mutex can eliminate a data race, maintaining the locking discipline is very difficult in code that keeps being modified.

Hard to test: many concurrency problems never show up during development. Although Xcode and LLVM provide tools such as the Thread Sanitizer to detect them, such problems remain very difficult to debug and trace, because in a concurrent environment the application's behavior depends not only on the code itself but also on the system.

A simple way to deal with concurrency

Given the complexity of concurrent programming, how should we handle multiple requests that need to run in parallel?

The simplest way is to avoid writing parallel code altogether and instead chain the requests one after another:

let session = URLSession.shared

session.dataTask(with: request1) { data, response, error in
    // check for errors
    // parse the response data into result1

    session.dataTask(with: request2) { data, response, error in
        // check for errors
        // parse the response data into result2

        // if everything succeeded...
        callbackQueue.async {
            completionHandler(result1, result2)
        }
    }.resume()
}.resume()
To keep the code concise, many details such as error handling and request cancellation are omitted. But serializing unrelated requests like this has hidden costs: if the server supports HTTP/2, we lose the opportunity to multiplex both requests over the same connection, and running them one after another also fails to make good use of the processor.

A common misconception about URLSession

To avoid potential data races and thread-safety issues, I wrote the code above as nested requests. The reasoning was: if the requests ran concurrently instead of being nested, their two completion handlers might write to the same memory at the same time, and such data races are notoriously difficult to reproduce and debug.

A feasible way to solve that problem is a lock: only one thread at a time is allowed to write to the shared memory. Using a lock is conceptually simple: acquire the lock, execute the code, release the lock. Of course, using locks correctly still takes some care.
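As an illustration of that acquire/execute/release pattern, here is a minimal NSLock sketch. The `Counter` type is a stand-in for any shared mutable state and is not part of the original article:

```swift
import Foundation
import Dispatch

// Shared mutable state guarded by an NSLock. Every access --
// reads included -- goes through the same lock, so overlapping
// callbacks can never corrupt `value`.
final class Counter {
    private let lock = NSLock()
    private var value = 0

    func increment() {
        lock.lock()
        defer { lock.unlock() }  // release is guaranteed on every exit path
        value += 1
    }

    var current: Int {
        lock.lock()
        defer { lock.unlock() }
        return value
    }
}
```

The `defer { lock.unlock() }` idiom is the "skill" the text alludes to: it makes it impossible to forget the release, which is the most common locking bug.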

But according to the URLSession documentation, there is a simpler way to handle concurrent requests:

init(configuration: URLSessionConfiguration,
   delegate: URLSessionDelegate?,
   delegateQueue queue: OperationQueue?)


queue: An operation queue for scheduling the delegate calls and completion handlers. The queue should be a serial queue, in order to ensure the correct ordering of callbacks. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.

This means that the callbacks of every URLSession instance, including the URLSession.shared singleton, are never executed concurrently, unless you explicitly pass a concurrent queue as the queue parameter.
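To make that guarantee explicit rather than relying on the default, you can hand URLSession a serial OperationQueue yourself. This is a sketch; the queue name is an arbitrary example:

```swift
import Foundation
#if canImport(FoundationNetworking)
import FoundationNetworking  // URLSession lives here on Linux
#endif

// A serial delegate queue: callbacks delivered on it can never
// overlap, so shared state mutated inside them needs no extra locking.
let callbackQueue = OperationQueue()
callbackQueue.name = "com.example.session-callbacks"  // hypothetical label
callbackQueue.maxConcurrentOperationCount = 1

let session = URLSession(configuration: .default,
                         delegate: nil,
                         delegateQueue: callbackQueue)
```

Passing `nil` for the queue gets you an equivalent serial queue created by the session itself, per the documentation quoted above.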

Extending URLSession with concurrency support

With this new understanding of URLSession, let's extend it to support thread-safe concurrent requests (the complete code is linked from the original article).

enum URLResult {
    case response(Data, URLResponse)
    case error(Error, Data?, URLResponse?)
}

extension URLSession {
    func get(_ url: URL, completionHandler: @escaping (URLResult) -> Void) -> URLSessionDataTask
}

// Example

let zen = URL(string: "")!
session.get(zen) { result in
    // process the result
}

First, we use a simple URLResult enum to model the different outcomes a URLSessionDataTask callback can produce. This enum simplifies handling the results of multiple concurrent requests. For brevity, I have not included the body of the URLSession.get(_:completionHandler:) method here; it issues a GET request for the given URL, calls resume() automatically, and wraps the outcome in a URLResult.
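For reference, a plausible sketch of that omitted helper might look like the following. This is my reconstruction, not the author's code (the real implementation is linked from the original article); the URLResult enum is repeated from above so the snippet stands alone:

```swift
import Foundation
import Dispatch
#if canImport(FoundationNetworking)
import FoundationNetworking
#endif

// URLResult as declared earlier in the article.
enum URLResult {
    case response(Data, URLResponse)
    case error(Error, Data?, URLResponse?)
}

extension URLSession {
    // Wraps dataTask(with:completionHandler:), starts the task
    // automatically, and maps the three callback values into URLResult.
    @discardableResult
    func get(_ url: URL, completionHandler: @escaping (URLResult) -> Void) -> URLSessionDataTask {
        let task = dataTask(with: url) { data, response, error in
            if let error = error {
                completionHandler(.error(error, data, response))
            } else if let data = data, let response = response {
                completionHandler(.response(data, response))
            }
        }
        task.resume()  // the caller never has to remember resume()
        return task
    }
}
```

Returning the task lets callers cancel an in-flight request, which the combined two-request API below takes advantage of.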

func get(_ left: URL, _ right: URL, completionHandler: @escaping (URLResult, URLResult) -> Void) -> (URLSessionDataTask, URLSessionDataTask) {

This API accepts two URL parameters and returns two URLSessionDataTask instances. The following code is the first part of the implementation:

 precondition(delegateQueue.maxConcurrentOperationCount == 1,
  "URLSession's delegateQueue must be configured with a maxConcurrentOperationCount of 1.")

Because a concurrent OperationQueue can still be passed in when instantiating a URLSession, we need the precondition above to rule that case out.

var results: (left: URLResult?, right: URLResult?) = (nil, nil)

func continuation() {
    guard case let (left?, right?) = results else { return }
    completionHandler(left, right)
}

This code continues the implementation: the tuple variable results holds the outcome of each request, and the nested helper function continuation() invokes the completion handler only once both results are available.

let left = get(left) { result in
    results.left = result
    continuation()
}

let right = get(right) { result in
    results.right = result
    continuation()
}

return (left, right)

Finally, this code completes the implementation: we issue the two requests and return both tasks. Note that continuation() runs twice, once in each completion handler, to check whether both requests have finished:

  • On the first call to continuation(), one of the requests has not finished yet and its result is still nil, so the guard fails and the completion handler is not invoked.
  • On the second call, both requests have finished, so the completion handler is invoked.
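The continuation pattern above can be distilled into a small, network-free helper for illustration. `zip` is my name for it, not part of the article, and it assumes both callbacks are funneled onto the same serial queue, just as URLSession funnels its callbacks onto a serial delegate queue:

```swift
import Dispatch

// Combine two independent async computations, calling `completion`
// exactly once, after both results have arrived. The serial `queue`
// plays the role of URLSession's delegate queue: it is the only
// place `results` is touched, so there is no data race.
func zip<A, B>(_ left: @escaping (@escaping (A) -> Void) -> Void,
               _ right: @escaping (@escaping (B) -> Void) -> Void,
               queue: DispatchQueue,
               completion: @escaping (A, B) -> Void) {
    var results: (a: A?, b: B?) = (nil, nil)

    func continuation() {
        // Runs twice; the guard fails the first time, when one slot is nil.
        guard case let (a?, b?) = results else { return }
        completion(a, b)
    }

    left  { a in queue.async { results.a = a; continuation() } }
    right { b in queue.async { results.b = b; continuation() } }
}
```

Because the serial queue imposes a total order on the two writes, exactly one of the two continuation() calls sees both results and fires the completion, with no lock in sight.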

Next, we can test this code with a simple request:

extension URLResult {
    var string: String? {
        guard case let .response(data, _) = self,
            let string = String(data: data, encoding: .utf8)
            else { return nil }
        return string
    }
}
URLSession.shared.get(zen, zen) { left, right in
    guard case let (quote1?, quote2?) = (left.string, right.string)
        else { return }

    print(quote1, quote2, separator: "\n")
    // Approachable is better than simple.
    // Practicality beats purity.
}

Parallel paradox

I have found that the simplest and most elegant way to handle parallelism is to write as little concurrent code as possible; processors are very good at executing linear code. Herein lies the paradox: splitting a large block of code or task into smaller pieces that run in parallel can nonetheless make the code easier to read and maintain.



By Adam Sharp on September 21, 2017

Translated by bignerdcoding. If you find any mistakes, please point them out. Original link