[Linux] thread synchronization and mutual exclusion

Time:2022-4-30


Goals

Producer-consumer model
Understanding of mutual exclusion and synchronization
Mutexes, semaphores, condition variables


Multithreading

We all know the necessity of mutual exclusion. If there is no constraint between threads and we simulate ticket-grabbing logic against a global variable, there may be no problem in the short term, but sometimes the ticket count drops to -1.

Test:

We put sleep(1) before ticket-- to simulate some preparation before grabbing a ticket. This makes the problem easy to reproduce: after a thread sleeps, it will likely be put into the waiting queue; by the time the other threads are switched in, the ticket value may already be very small, yet the ticket-- has not happened, so multiple threads enter the critical section and modify it at the same time.

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

int ticket = 100;

void* Routine(void* args)
{
  while (ticket > 0)
  {
    if (ticket > 0)
    {
      sleep(1);
      ticket--;
      printf("thread is :%p ,ticket is :%d\n", (void*)pthread_self(), ticket);
    }
    else
    {
      break;
    }
  }
  return nullptr;
}

int main()
{
#define NUM 5
  pthread_t tids[NUM];

  for(int i = 0; i < NUM;++i)
  {
    pthread_create(&tids[i],nullptr,Routine,nullptr);
  }

  for(int i = 0; i < NUM;++i)
  {
    pthread_join(tids[i],nullptr);
  }
  return 0;
}

result:

The count drops to a negative number, which does not match the real-life scenario.

Is the subtraction atomic?
It is not. Looking at the assembly, ticket-- is implemented in three steps: load the value from memory into a register, decrement the register, and write it back to memory. If a thread switch happens between any of these steps, the resulting value may be wrong. Register contents are saved and restored on every context switch, so one thread writing ticket-- back to memory does not affect the register context of other threads that loaded the old value earlier. In other words, concurrent read-modify-write of a critical resource by two execution flows causes inconsistency.
When can it drop below zero?
A thread has already passed the if check while ticket was 1. There are two cases:

  • The thread switches inside the if, letting other threads pass the check as well. If those threads then complete ticket-- without being interrupted, the count is decremented several times and goes negative.
  • If the switch happens during ticket-- itself, the old value written back means the count only drops to 0, not below.


Summary:
With multi-threaded switching, data races are very likely. Thread switches typically happen on the transition from kernel mode back to user mode; that is when the operating system decides whether to switch threads (it is also the point where signals are delivered). A switch is not mandatory there, but system calls that block the thread usually trigger one, because the OS considers that an efficient moment to switch.

To avoid this, we introduce the concept of a lock.

pthread_mutex_init/pthread_mutex_destroy


A lock can be initialized in two ways. A lock defined as a global or static object can be initialized with PTHREAD_MUTEX_INITIALIZER; a lock initialized this way does not need to be destroyed explicitly.

A lock initialized with pthread_mutex_init must be paired with pthread_mutex_destroy to release it; this form also works for locks allocated dynamically, e.g. on the heap.

pthread_mutex_lock

pthread_mutex_lock blocks until the lock is acquired.
pthread_mutex_trylock returns an error immediately if the lock cannot be acquired.

Observe what is wrong with the following code:

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

pthread_mutex_t mutex;
int ticket = 100;

void* Routine(void* args)
{
  while (1)
  {
    pthread_mutex_lock(&mutex);
    if (ticket > 0)
    {
      usleep(10000);
      ticket--;
      printf("ticket is :%d,pthread is:%p\n", ticket, (void*)pthread_self());
    }
    else
    {
      printf("break ...\n");
      break;   // note: leaves the loop while still holding the lock
    }
    pthread_mutex_unlock(&mutex);
  }
  return nullptr;
}

int main()
{
#define NUM 5
  pthread_mutex_init(&mutex, NULL);
  pthread_t tids[NUM];
  for (int i = 0; i < NUM; ++i)
    pthread_create(&tids[i], nullptr, Routine, nullptr);

  for (int i = 0; i < NUM; ++i)
    pthread_join(tids[i], nullptr);
  pthread_mutex_destroy(&mutex);
  return 0;
}

result:

The process does not exit normally; it blocks. Because the first thread to break out never released the lock, all subsequent threads get stuck!!!

Therefore, when coding, make sure every path releases the lock. The idea of RAII is to use an object's lifetime to release the lock automatically.


Summary:

With the lock there are no more errors, but access becomes slower.

Question: why does the lock prevent the errors?

First, even with the lock, a thread can still be switched out at any time; but the threads switched in do not hold the lock and will block in pthread_mutex_lock. Only one thread holds the lock at a time. When it is switched back in, its context is restored, it finishes the critical section, and only then releases the lock. In other words, while the thread holding the lock is executing the critical section, no other thread can enter it, which indirectly makes the whole section atomic!!

The lock Linux provides is called a mutex (mutual exclusion lock).


The essence of atomicity


Analysis of how atomicity is achieved:
As shown above, simply incrementing or decrementing a shared value repeatedly from multiple threads causes data inconsistency.
To implement a mutex, most architectures provide a swap or exchange instruction, which exchanges the contents of a register and a memory unit. Doing this in a single instruction guarantees atomicity.
Memory access also goes through bus cycles: while the exchange instruction executes on one processor, other processors must wait for the bus cycle.

A piece of pseudo code to realize atomicity:

A mutex is a variable defined in memory; suppose it is an int named mutex.

Acquisition: both movb and xchgb are atomic. xchgb cannot be executed by multiple processors at once, because the bus cycle allows only one CPU to access memory at a time. Here %al is a register, private to each thread, while the mutex is essentially a location in memory visible to all threads!!


Release process:

Note: mov copies data without changing the source; xchgb exchanges data and is what really obtains the mutex! Normally only the thread that locked performs the unlock, but unlock is not restricted to it: a thread that never locked can also unlock!!
It also follows that if a thread tries to acquire a lock it already holds, it will suspend itself.

How does the exchange instruction complete in one step?
xchgb works at the level of bus timing: within an instruction cycle, the instruction is placed on the bus at a specific point, and the bus can be locked. Even if xchgb were internally implemented in several micro-steps, it locks the bus while executing, so it cannot be interfered with by other threads.
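The lock/unlock figures did not survive here; the pseudocode they usually show (a sketch of the idea, not the actual glibc implementation) looks roughly like this:

```
lock:                          ; acquire the mutex
    movb  $0, %al              ; put 0 into the thread-private register al
    xchgb %al, mutex           ; atomically swap al with the mutex in memory
    if (al > 0)                ; we swapped a 1 out of memory: the lock is ours
        return 0;
    else                       ; memory already held 0: someone else has the lock
        suspend and wait;
        goto lock;

unlock:                        ; release the mutex
    movb  $1, mutex            ; write 1 back into memory
    wake up threads waiting on mutex;
    return 0;
```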

Thread safe vs reentrant functions

  • A reentrant function must be thread-safe; a thread-safe function is not necessarily reentrant.
  • Thread safety is about whether threads interfere with each other: whether accessing certain functions, data, or regions concurrently causes problems.
  • Reentrancy emphasizes the state of the function itself: whether it can be entered by multiple execution flows at the same time (or re-entered by the same flow) without problems.

Consider a scenario where a function locks a critical resource internally. While a thread is inside the critical section, it receives a signal whose handler enters the same function again, which is equivalent to re-entering the function in the same thread's context. As noted above, a thread cannot acquire a lock it already holds, so this too is a deadlock.

deadlock

Deadlock is a state of permanent waiting in which each process in a group holds resources it will not release, while waiting for resources held, and never released, by the other processes.

Four necessary conditions for deadlock

  • Mutual exclusion: a resource can only be used by one execution flow at a time
  • Hold and wait: an execution flow blocked while requesting resources keeps the resources it has already obtained
  • No preemption: resources obtained by an execution flow cannot be forcibly taken away before it is done with them
  • Circular wait: several execution flows form a head-to-tail circular chain of waiting for each other's resources

Ways to resolve deadlock:
Breaking any one of the four necessary conditions resolves the deadlock.
Mutual exclusion usually cannot be avoided, since it is what gives rise to deadlock in the first place.
Hold and wait: instead of requesting more resources while holding your own, release what you hold and hand it over; then no deadlock arises.
No preemption: if the other side will not give up its lock, do not try to take it while holding yours.
Circular wait: avoid the closed loop; as a coding guideline, make sure all threads acquire locks in the same order.
Once a deadlock occurs, all four conditions hold.
Deadlock-detection algorithms and the banker's algorithm are ways to avoid deadlock.

synchronization


What is synchronization?
Synchronization: on the premise of keeping data safe, let threads access critical resources in a specific order, so as to effectively avoid starvation.

Why do we need synchronization?
In the ticket-grabbing example, if one thread is particularly strong at competing for the lock, it can starve the other threads. The program is not incorrect, but it is unreasonable.

Since the other threads are blocked, waking a thread from the blocking queue takes time, so the thread that has just released the lock has an advantage over the others in re-acquiring it.

Role of synchronization:
Synchronization solves the problem of unreasonable resource allocation, not correctness errors. It lets threads apply for the lock in an orderly manner.

Condition variables


What is a condition variable?
When a thread accesses a variable under mutual exclusion, it may find there is nothing it can do until some other thread changes the state. Having the thread poll for that condition without blocking is unreasonable, so condition variables are introduced: the thread sleeps and is woken when the condition is met.

Condition variables are a tool for synchronization.
The native thread library provides an object describing the state of a critical resource. Previously we kept acquiring the lock and checking precisely because we did not know the state of the critical resource; that is polling and wastes CPU. So we need a way to learn the state of the critical resource: the condition variable.

In other words, the condition variable is a mechanism that signals whether resources are available; when they are, the threads waiting under the condition variable are simply woken up. It works like a bell.

Why is a mutex needed together with a condition variable?

  • Condition waiting is a means of synchronization between threads. If there were only one thread, an unmet condition would stay unmet forever; some thread must change the shared variable so that the previously unmet condition holds, and then notify the threads waiting on the condition variable.
  • The condition never suddenly becomes satisfied for no reason; it necessarily involves a change to shared data. So the shared data must be protected with a mutex: without mutual exclusion it can be neither read nor modified safely.
  • pthread_cond_wait, once called, releases the mutex and waits under the condition variable; when woken, it re-acquires the mutex before returning. To put it plainly: if the condition is not met, the thread actively gives up the lock and waits under the condition variable until the condition is met and the lock is obtained again!!

pthread_cond_init/pthread_cond_destroy

Initialize and destroy a condition variable. Condition variables initialized with the PTHREAD_COND_INITIALIZER macro do not need to be destroyed manually.
The second parameter of the initialization function is the attribute object, generally set to null.

pthread_cond_signal/pthread_cond_broadcast

pthread_cond_signal wakes up one thread waiting under the condition variable.
pthread_cond_broadcast wakes up all threads waiting under the condition variable (a whole batch); "broadcast" means exactly that.

Inside pthread_cond_t there is a waiting queue in which the waiting threads are linked; pthread_cond_signal takes the thread at the head of the queue and wakes it.

pthread_cond_wait/pthread_cond_timedwait

pthread_cond_timedwait wakes up and reapplies when its timeout expires.
pthread_cond_wait waits on the specified condition variable: the thread enters the cond's waiting queue until someone wakes it via pthread_cond_signal/broadcast.
First-level understanding of the second (mutex) parameter:
Suppose pthread_cond_broadcast wakes a batch of threads at once. If they all head for the critical section at the same time, each must first compete again for the mutex passed as the second parameter; that is what keeps the critical resource protected.

That explains broadcast; but what is the use with pthread_cond_signal?
Second-level understanding of the second parameter:
A thread waiting under the condition variable may have been inside the critical section when the condition was unmet, so it must give up the lock to the producer/consumer on the other side. It will not wake from the condition variable until the other side signals that the condition holds, and then it applies for the mutex again.

experiment:

One thread controls the others. Here t2 is the controlling thread; pthread_cond_broadcast could also be used to wake the waiters in batches.

#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

pthread_cond_t cond;
pthread_mutex_t mutex;

void* t1(void* args)
{
  while (1)
  {
    // the mutex must be held when calling pthread_cond_wait
    pthread_mutex_lock(&mutex);
    pthread_cond_wait(&cond, &mutex);
    printf("%s is running!\n", (const char*)args);
    pthread_mutex_unlock(&mutex);
  }
  return nullptr;
}

void* t2(void* args)
{
  // control the other threads
  while (1)
  {
    sleep(1);
    pthread_cond_signal(&cond);
  }
  return nullptr;
}

int main()
{
  pthread_mutex_init(&mutex, nullptr);
  pthread_cond_init(&cond, nullptr);
  pthread_t tid1;
  pthread_t tid2;
  pthread_t tid3;
  pthread_t tid4;
  pthread_create(&tid1, nullptr, t1, (void*)"thread1");
  pthread_create(&tid2, nullptr, t1, (void*)"thread2");
  pthread_create(&tid3, nullptr, t1, (void*)"thread3");
  pthread_create(&tid4, nullptr, t2, (void*)"thread4");

  pthread_join(tid1, nullptr);
  pthread_join(tid2, nullptr);
  pthread_join(tid3, nullptr);
  pthread_join(tid4, nullptr);
  return 0;
}

result:

The threads are synchronized and run in order.


Producer consumer model


An example from real life
In real life there are consumers, supermarkets, and suppliers.
Consumers do not deal with suppliers directly. They usually deal only with the supermarket, and suppliers likewise deal with the supermarket directly.

Such a model is the producer-consumer model. The supermarket acts as a buffer: it decouples producers from consumers and greatly improves efficiency. It decouples the two sides, supports concurrency, tolerates uneven busy/idle periods, and matches the pace of producers and consumers.

The supermarket is in fact a critical resource shared by producers and consumers.

Core of the producer-consumer model:
To learn this model, understand three relationships, two roles, and one trading place.
Three relationships: producer and producer (mutual exclusion), consumer and consumer (mutual exclusion), producer and consumer (synchronization and mutual exclusion).
Two roles: producers and consumers.
One trading place: the buffer.

A pipe is also a producer-consumer model.

Producer-consumer model based on a blocking queue


The queue has an upper bound. If the queue does not satisfy the condition for production or consumption, the corresponding side blocks.

Consumers and producers share the same mutex. Suppose a consumer has to wait under the condition variable while inside the mutex-protected region: if the wait did not release the lock, the consumer would take the lock with it, the producer would block because it could never acquire the lock, and the process would stall as a whole.

The simplest code implementation of the producer-consumer model:
We use one producer and one consumer here. In practice there can be low and high watermarks; only the condition for signalling would need to change.
Producers can decide when consumers should come, and consumers can likewise decide when producers should come. Without this, a consumer only learns whether it can consume at the moment it tries, which is polling, and synchronization cannot be achieved.

A pipe is essentially also built on a blocking queue, though its underlying principle differs.

block_queue.hpp

Here p_cond and c_cond implement synchronization between producers and consumers, mutex implements mutual exclusion between producers and consumers, p_mutex implements mutual exclusion among producers, and c_mutex implements mutual exclusion among consumers.

#pragma once
#include <iostream>
#include <queue>
#include <pthread.h>
#define NUM 10

template<class T>
class BlockQueue
{
  private:
    std::queue<T> q;          // critical resource
    int cap;                  // upper bound of the queue; never modified, not a critical resource
    pthread_cond_t p_cond;    // producer cond
    pthread_cond_t c_cond;    // consumer cond
    pthread_mutex_t mutex;    // producer <-> consumer
    pthread_mutex_t p_mutex;  // producer <-> producer
    pthread_mutex_t c_mutex;  // consumer <-> consumer
  public:
    BlockQueue()
      :cap(NUM)
    {
      pthread_cond_init(&p_cond, nullptr);
      pthread_cond_init(&c_cond, nullptr);
      pthread_mutex_init(&mutex, nullptr);
      pthread_mutex_init(&p_mutex, nullptr);
      pthread_mutex_init(&c_mutex, nullptr);
    }
    ~BlockQueue()
    {
      pthread_cond_destroy(&p_cond);
      pthread_cond_destroy(&c_cond);
      pthread_mutex_destroy(&mutex);
      pthread_mutex_destroy(&p_mutex);
      pthread_mutex_destroy(&c_mutex);
    }

    // get data from the blocking queue
    void Get(T* out)
    {
      pthread_mutex_lock(&c_mutex);
      pthread_mutex_lock(&mutex);
      // loop guards against spurious wakeups
      while (q.size() == 0)
      {
        // the consumer must not consume now
        pthread_cond_wait(&c_cond, &mutex);
      }
      *out = q.front();
      q.pop();
      pthread_mutex_unlock(&mutex);
      // now there is room for a producer
      pthread_cond_signal(&p_cond);
      pthread_mutex_unlock(&c_mutex);
    }

    // put data into the blocking queue
    void Put(const T& in)
    {
      pthread_mutex_lock(&p_mutex);
      pthread_mutex_lock(&mutex);
      while (q.size() == (size_t)cap)
      {
        // the producer must not produce now
        pthread_cond_wait(&p_cond, &mutex);
      }
      q.push(in);
      pthread_mutex_unlock(&mutex);
      // now there is data to consume; the signal can go before or after unlock.
      // Placing it after lets the woken thread compete for the mutex immediately,
      // since we have already released it.
      pthread_cond_signal(&c_cond);
      pthread_mutex_unlock(&p_mutex);
    }
};

Note: with a single producer and a single consumer, the outer p_mutex/c_mutex locks can be omitted.

test.cc

#include <iostream>
using namespace std;
#include "block_queue.hpp"
#include <pthread.h>
#include <unistd.h>

void* t1(void* args)
{
  // producer
  int count = 0;
  BlockQueue<int>* bq = (BlockQueue<int>*)args;
  while (1)
  {
    bq->Put(count);
    count++;
    count %= 100;
    printf("producer :%d\n", count);
  }
  return nullptr;
}

void* t2(void* args)
{
  // consumer
  BlockQueue<int>* bq = (BlockQueue<int>*)args;
  while (1)
  {
    sleep(3);
    int x = 0;
    bq->Get(&x);
    printf("thread is %p,count:%d\n", (void*)pthread_self(), x);
  }
  return nullptr;
}


int main()
{
  BlockQueue<int>* bq = new BlockQueue<int>();
  pthread_t tid;
  pthread_t tid2;
  pthread_create(&tid, nullptr, t1, (void*)bq);
  pthread_create(&tid2, nullptr, t2, (void*)bq);


  pthread_join(tid, nullptr);
  pthread_join(tid2, nullptr);
  return 0;
}

result:
block_queue.hpp best shows why pthread_cond_wait must take a mutex: the thread waits under the condition variable while inside the critical section, and at that moment it must release the mutex to avoid deadlock.


That is, if a thread blocks on the condition variable inside the critical section, it also wakes up inside the critical section: the condition has been met, and the thread returns from the wait only after winning the mutex competition.

Consumers know best whether there is space, while producers know best whether there is data.

Spurious wakeups exist with condition variables; this is why the wait is wrapped in a while rather than an if.
For example, a broadcast may wake many threads while only a few resources exist; some threads consume everything, yet the others still obtain the lock and, with a plain if check, would walk into an empty queue.
pthread_cond_wait is a function, and the call itself can fail.
On a single CPU core spurious wakeups may be rare; on multi-core or multi-CPU machines each CPU has its own cache, the condition state is cached per CPU, each CPU's copy gets updated, and each waiting thread sees its condition as met. That is, multiple threads are woken while the resources are not necessarily sufficient.

In the code above, based on the blocking queue, there is no problem among producers or among consumers, because access to the critical resource is already locked.

POSIX semaphore


What is a semaphore?
A semaphore is essentially a counter that describes the number of units of a critical resource.

When to use semaphores
When the critical resource can be viewed as multiple independent pieces, multiple threads can access it at the same time, as long as they access different pieces.

From the earlier examples we know that a mutex guarantees the safety of a critical resource, a condition variable lets us learn the state of a critical resource, and a semaphore is a counter describing how many units of the resource exist. Earlier we assumed a critical resource had to be exclusive: only one execution flow could access it at a specific time!
But in fact a critical resource need not always be limited to one thread at a time.

In the ring queue below, producers must still be mutually exclusive with producers, and consumers with consumers.

Any thread that wants to access one unit of the critical resource must first apply for the semaphore and release it after use.

Acquiring the semaphore amounts to reserving a unit of the resource; the essence of a semaphore is a reservation mechanism for resources.

To apply for the semaphore, every thread must be able to see it, which makes the semaphore itself a shared resource; therefore the P (acquire) and V (release) operations must themselves be atomic.

P/V operation pseudocode, illustrating the principle:

int sem = NUM;
int arr[NUM];   // the NUM resource units
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

// apply for a resource
void P()
{
	pthread_mutex_lock(&lock);
	if (sem > 0)
		sem--;
	else
		/* release the lock and suspend until woken */;
	pthread_mutex_unlock(&lock);
}

// release a resource
void V()
{
	pthread_mutex_lock(&lock);
	sem++;
	pthread_mutex_unlock(&lock);
}

int main()
{
	P();
	// access one unit of the critical resource
	V();
}

The semaphore type is sem_t; the interfaces below operate on variables of this type.

sem_init/sem_destroy


Initialize / destroy an anonymous semaphore.

The second parameter says whether it is shared between processes; 0 means shared between the threads of one process.
The third parameter is the initial value of the semaphore counter.
Returns 0 on success, -1 on failure.

sem_wait/sem_trywait/sem_timedwait


sem_wait applies for the semaphore (semaphore--); if that is not possible, it blocks.
sem_trywait returns an error on failure instead of blocking.
sem_timedwait waits with a timeout.

sem_post


The essence is semaphore++, and this operation never blocks.

Producer-consumer model on a circular queue


Rules

  • Producers must not lap consumers, and consumers must not overtake producers. Otherwise data becomes unsafe: garbage data is read or useful data is overwritten.

  • When the queue is neither empty nor full, producers and consumers can proceed side by side, and most of the time they should run in parallel; that is where the ring queue's efficiency comes from.
    When the queue is empty, the producer must be allowed to run first and the consumer must block; and conversely when it is full. That is synchronization.

  • Producers care about slot resources; consumers care about data resources. A producer knows best whether data exists (it just produced some), and a consumer knows best whether a slot exists (it just freed one).

  • Only when the queue is empty or full will a semaphore suspend the corresponding thread; at all other times producers and consumers run in parallel.
    A consumer can never run first: when it applies for its semaphore, the value is 0, so it hangs on the semaphore.

Assuming NUM slot resources and 0 data resources initially, we can define sem_t sem_space = NUM and sem_t sem_data = 0 (conceptually; in code they are set via sem_init).

Producer: P(sem_space), then V(sem_data).
Consumer: P(sem_data), then V(sem_space). At the start sem_data is 0, so consumers are suspended, waiting for producers to produce.

High and low watermarks can be implemented by deciding, via a condition check, when to post the semaphore.

Does "the producer is in P" imply "the consumer must be in V", and vice versa?? FALSE!!!! That only holds when p_index and c_index coincide; most of the time the two sides need not interact at all.

Task.hpp

#pragma once
#include <iostream>
using namespace std;
#include <pthread.h>
#include <cstdio>

// Task: sum the numbers from 1 to top
class Task
{
  public:
  Task()
  {}

  Task(int t) : top(t)
  {}

  int RunTask()
  {
    int res = 0;
    for (int i = 1; i <= top; ++i)
    {
      res += i;
    }
    return res;
  }

  void Print()
  {
    printf("pthread:%p running task 1~%d", (void*)pthread_self(), top);
    fflush(stdout);
  }
  private:
    int top;
};

ringqueue.hpp

#include <iostream>
#include <vector>
#include <pthread.h>
using namespace std;
#include <semaphore.h>
#define NUM 5

template<class T>
class RingQueue
{
  private:
    vector<T> _rq;
    int _num;                 // counter-based implementation of the circular queue
    sem_t c_sem;              // counts data items (consumer resources)
    sem_t p_sem;              // counts empty slots (producer resources)
    size_t c_index;
    size_t p_index;
    pthread_mutex_t p_lock;   // mutual exclusion among producers
    pthread_mutex_t c_lock;   // mutual exclusion among consumers
  public:
    RingQueue()
      :_rq(NUM)
       ,_num(0)
       ,c_index(0)
       ,p_index(0)
  {
    // all slots are empty at the start
    sem_init(&c_sem, 0, 0);
    sem_init(&p_sem, 0, NUM);

    pthread_mutex_init(&p_lock, nullptr);
    pthread_mutex_init(&c_lock, nullptr);
  }
    ~RingQueue()
    {
      sem_destroy(&c_sem);
      sem_destroy(&p_sem);

      pthread_mutex_destroy(&p_lock);
      pthread_mutex_destroy(&c_lock);
    }

    void Get(T* out)
    {
      // the consumer applies for a data semaphore
      sem_wait(&c_sem);

      // mutual exclusion among consumers:
      // c_index is a critical resource shared by all consumers
      pthread_mutex_lock(&c_lock);
      // at this point there is guaranteed to be data for the consumer
      *out = _rq[c_index];
      // now there is guaranteed to be an empty slot for a producer
      sem_post(&p_sem);
      // updating c_index does not itself need a semaphore
      c_index++;
      c_index %= NUM;
      pthread_mutex_unlock(&c_lock);
    }

    void Put(const T& in)
    {
      // the producer applies for a slot semaphore
      sem_wait(&p_sem);
      // mutual exclusion among producers: p_index is shared by all producers
      pthread_mutex_lock(&p_lock);
      // at this point there is guaranteed to be a slot for the producer
      _rq[p_index] = in;
      // now there is guaranteed to be data for a consumer
      sem_post(&c_sem);
      p_index++;
      p_index %= NUM;
      pthread_mutex_unlock(&p_lock);
    }
};

test.cc

#include "ringqueue.hpp"
#include <pthread.h>
#include <cstdlib>
#include <unistd.h>
#include "Task.hpp"

void* Productor(void* args)
{
  RingQueue<Task>* rq = (RingQueue<Task>*)args;
  int count = 500;
  while (1)
  {
    Task t(count);
    rq->Put(t);
    printf("pthread :%p,count is :%d\n", (void*)pthread_self(), count);
    count++;
    count %= 1000;
  }
  return nullptr;
}

void* Consumer(void* args)
{
  RingQueue<Task>* rq = (RingQueue<Task>*)args;
  while (1)
  {
    sleep(1);
    // get a task from the ring queue
    Task t;
    rq->Get(&t);

    // run the task
    int res = t.RunTask();
    t.Print();
    printf(" result is :%d\n", res);
  }
  return nullptr;
}

int main()
{
  pthread_t c;
  pthread_t p;
  // note: the element type must match what the threads cast to
  RingQueue<Task>* rq = new RingQueue<Task>();
  pthread_create(&c, nullptr, Consumer, (void*)rq);
  pthread_create(&p, nullptr, Productor, (void*)rq);

  pthread_join(c, nullptr);
  pthread_join(p, nullptr);

  return 0;
}

result:


Process analysis:

When should we lock?
With a single producer and a single consumer, no pthread_mutex_t is needed.
With multiple producers or multiple consumers, locking is required, because p_index and c_index are each shared by all threads of the same role.

Where is it better to acquire/release the mutex?
Locking is usually better placed after sem_wait: sem_wait can let a batch of threads through, which then compete for the lock, and every thread that obtained a semaphore is entitled to access the critical resource. Locking outside sem_wait would be slightly less efficient and would make the semaphore nearly useless. Placing it inside means threads that have not yet won the lock can still acquire the semaphore first, so the waiting times overlap and efficiency improves.
Unlocking usually goes at the end, keeping the critical resource protected throughout.

Recommended Today

JS generate guid method

JS generate guid method https://blog.csdn.net/Alive_tree/article/details/87942348 Globally unique identification(GUID) is an algorithm generatedBinaryCount Reg128 bitsNumber ofidentifier , GUID is mainly used in networks or systems with multiple nodes and computers. Ideally, any computational geometry computer cluster will not generate two identical guids, and the total number of guids is2^128In theory, it is difficult to make two […]