Go coroutines: a simple introduction

Date: 2021-8-15

What is a coroutine?

  1. A coroutine is similar to a thread, but lighter-weight. A running program occupies one process, a process can contain multiple threads, and a thread can host multiple coroutines.
  2. A process contains at least one main thread, and the main thread can spawn more child threads. Threads have two scheduling strategies: time-sharing scheduling and preemptive scheduling.
  3. To the operating system, the thread is the smallest unit of execution and the process is the smallest unit of resource management. A thread generally has five states: initialized, runnable, running, blocked, and destroyed.
  4. Coroutines execute in user mode and are not managed by the operating system kernel; they are scheduled and controlled entirely by the program itself.
  5. Creating, switching, suspending, and destroying a coroutine are all purely in-memory operations.
  6. A coroutine belongs to a thread and executes within that thread. Coroutines use a cooperative scheduling strategy.

#### Goroutines in Go

Go is a very concurrency-friendly language. It provides simple syntax for two mechanisms: goroutines and channels.

A goroutine is a lightweight thread; Go supports coroutines natively at the language level.
A goroutine costs less than a thread: its initial stack is only about 2 KB and grows as the program requires, whereas a thread must be created with a fixed stack size.
Goroutines are implemented through the GPM scheduling model (G: goroutine, P: logical processor, M: machine thread).

A simple use of the native coroutine support:

package main

import (
    "fmt"
    "time"
)

func main() {
    fmt.Println("test")
    // Start an asynchronous goroutine here
    go func() {
        time.Sleep(time.Microsecond * 10)
        fmt.Println("test 2")
    }()
    fmt.Println("test 3")
    // Delay so main does not exit before the goroutine runs
    time.Sleep(time.Microsecond * 100)
}

Go's coroutines are multi-threaded and can use multiple CPU cores, so multiple goroutines may be scheduled at the same time, which leads to concurrency problems.

What is the result of executing the following code? Intuitively, it should print the numbers 1 through 21 in order.

package main

import (
    "fmt"
    "time"
)

var count = 0

func main() {
    for i := 0; i <= 20; i++ {
        go func() {
            count++ // unsynchronized write: this is a data race
            fmt.Println(count)
        }()
    }
    time.Sleep(time.Microsecond * 100)
}
$ go run main.go // first execution
1
3
5
2
14
18
19
10
11
12
13
6
15
16
17
4
8
20
9
7
21
$ go run main.go // second execution
1
3
18
2
5
7
8
19
10
11
12
13
14
15
16
17
4
9
20
21
6

The result differs on every execution: multiple goroutines write to count at the same time, so the printed data comes out jumbled.
Concurrent reads are the only inherently safe way to handle a shared variable: you can have as many readers as you want, but writes must be synchronized. There are many ways to do this, including true atomic operations that rely on special CPU instructions, but the most common approach is to use a mutex.
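
Before looking at the mutex version, here is a minimal sketch of the atomic-operation approach mentioned above, using the standard sync/atomic package instead of a lock (the counter is widened to int64 because atomic.AddInt64 requires it):

package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

var count int64

func main() {
    for i := 0; i <= 20; i++ {
        go func() {
            // AddInt64 increments the counter atomically and returns the new value,
            // so every goroutine gets a unique number (though print order may still vary)
            v := atomic.AddInt64(&count, 1)
            fmt.Println(v)
        }()
    }
    time.Sleep(time.Microsecond * 100)
}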

Locking around the write:

package main

import (
    "fmt"
    "sync"
    "time"
)

var (
    lock  sync.Mutex
    count = 0
)

func main() {
    for i := 0; i <= 20; i++ {
        go func() {
            lock.Lock()         // only one goroutine may enter this section at a time
            defer lock.Unlock() // release the lock when the goroutine finishes
            count++
            fmt.Println(count)
        }()
    }

    time.Sleep(time.Microsecond * 100)
}

We lock around the increment of the count variable to ensure that only one goroutine writes to it at a time, which produces the expected result.

$ go run main.go
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21

It looks like we have solved the concurrency problem, but locking like this also works against the point of concurrent programming, and it makes deadlock easy. With a single lock there is no problem, but as soon as code uses two or more locks a dangerous situation can arise: goroutine A holds lockA and wants lockB, while goroutine B holds lockB and needs lockA.
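
A minimal sketch of that situation, with hypothetical locks lockA and lockB (the names and sleep durations are made up for illustration):

package main

import (
    "sync"
    "time"
)

var (
    lockA sync.Mutex
    lockB sync.Mutex
)

func main() {
    // Goroutine A: takes lockA, then wants lockB
    go func() {
        lockA.Lock()
        time.Sleep(time.Millisecond) // give B time to take lockB
        lockB.Lock()                 // blocks forever: B holds lockB
        lockB.Unlock()
        lockA.Unlock()
    }()
    // Goroutine B: takes lockB, then wants lockA
    go func() {
        lockB.Lock()
        time.Sleep(time.Millisecond)
        lockA.Lock() // blocks forever: A holds lockA
        lockA.Unlock()
        lockB.Unlock()
    }()
    time.Sleep(time.Second)
    // By this point both goroutines are stuck waiting on each other
}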

Channels

A channel is a powerful mechanism for sharing resources among goroutines: it is a shared pipe through which data is passed between them. One goroutine can hand data to another through the channel, so only one goroutine has access to the data at any point in time.

Creating a channel:

c := make(chan int)

The type of this channel is chan int. So, to pass the channel to a function, the signature looks like this:

func worker(c chan int) { ... }

A channel supports two operations, send and receive:

// Send data
CHANNEL <- DATA
// Receive data
VAR := <-CHANNEL

Use a for loop to send or receive channel data repeatedly (an idiomatic range-based variant is shown after the example).

For example:

package main

import (
  "fmt"
  "math/rand"
  "time"
)

func main() {
  c := make(chan int)
  // Start 5 workers that all receive from the shared channel
  for i := 0; i < 5; i++ {
    worker := &Worker{id: i}
    go worker.process(c)
  }

  // Send random numbers forever; each value is received by exactly one worker
  for {
    c <- rand.Int()
    time.Sleep(time.Millisecond * 50)
  }
}

type Worker struct {
  id int
}

func (w *Worker) process(c chan int) {
  for {
    data := <-c
    fmt.Printf("worker %d got %d\n", w.id, data)
  }
}
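
As a side note, the receive loop in process can also be written with range, which is the idiomatic form and exits automatically once the sender calls close(c). A sketch of that variant:

func (w *Worker) process(c chan int) {
  // range keeps receiving from c until the channel is closed
  for data := range c {
    fmt.Printf("worker %d got %d\n", w.id, data)
  }
}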

Buffered channels

Sends and receives on an unbuffered channel block until the other side is ready. You can also create a buffered channel.
A send to a buffered channel blocks only when the buffer is full; likewise, a receive from a buffered channel blocks only when the buffer is empty.
A buffered channel is created by passing an extra argument to make that specifies the capacity (the size of the buffer).
For a channel to be buffered, the capacity in the syntax below must be greater than 0; an unbuffered channel has a capacity of 0.

ch := make(chan type, capacity)
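
A minimal sketch of the blocking behavior (the values are chosen just for illustration): with a capacity of 2, two sends succeed immediately, and a third send would block until a receive frees a slot.

package main

import "fmt"

func main() {
  ch := make(chan int, 2) // buffered channel with capacity 2

  ch <- 1 // does not block: the buffer has room
  ch <- 2 // does not block: the buffer is now full
  // A third send, ch <- 3, would block here until someone receives

  fmt.Println(<-ch) // prints 1
  fmt.Println(<-ch) // prints 2
}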

select

Even with buffering, at some point we have to start dropping messages: we cannot consume unbounded memory just to make life easy for a slow worker. To achieve this, we use Go's select.

What select does

Go provides the select keyword. With select you can monitor the flow of data on channels: select starts a selection, and each alternative is described by a case statement.
select has a usage restriction: every case must be a channel operation. Therefore select is generally used together with goroutines and channels.
For example:

for {
  select {
  case c <- rand.Int():
    // optionally do more work here after a successful send
  default:
    // this branch can be left empty to drop the data silently
    fmt.Println("dropped")
  }
  time.Sleep(time.Millisecond * 50)
}

Timeouts

for {
  select {
  case c <- rand.Int():
  // The timeout case fires after the given duration. It is mainly used for synchronization,
  // e.g. to report an error when a request takes too long to respond, so the goroutine
  // is not blocked forever
  case <-time.After(time.Millisecond * 100):
    fmt.Println("timed out")
  // Note: without a default case, select blocks until one of its cases is ready;
  // adding a default here would make select non-blocking, and the timeout would never fire
  }
  time.Sleep(time.Millisecond * 50)
}
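
For reference, here is a self-contained sketch of the timeout pattern; the deliberately slow worker and the durations are made up for illustration:

package main

import (
  "fmt"
  "time"
)

func main() {
  c := make(chan int) // unbuffered: a send blocks until someone receives

  // A deliberately slow worker, so the send below times out
  go func() {
    time.Sleep(time.Second)
    fmt.Println("got", <-c)
  }()

  select {
  case c <- 42:
    fmt.Println("sent")
  case <-time.After(time.Millisecond * 100):
    // The worker did not receive within 100ms; give up instead of blocking forever
    fmt.Println("timed out")
  }
}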