Detailed explanation of Linux shared memory implementation mechanism
Memory sharing: two different processes A and B share memory by mapping the same physical memory into their respective process address spaces. Process A immediately sees any update process B makes to the data in shared memory, and vice versa. Because multiple processes share the same block of memory, some synchronization mechanism is necessary; both mutexes and semaphores can serve this purpose.
Efficiency: an obvious benefit of shared memory communication is its efficiency, because processes read and write the memory directly without any intermediate data copy. Communication mechanisms such as pipes and message queues require four data copies between kernel and user space, while shared memory needs only two: one from the input file into the shared memory region, and one from the shared memory region to the output file. In practice, processes do not unmap the region after each small read or write and re-establish it for the next exchange; instead, the shared region is kept mapped until communication is complete, so the data stays in shared memory and is not written back to a file in between. Content in shared memory is typically written back to the backing file only when the region is unmapped. The efficiency of shared memory communication is therefore very high.
Shared memory implementation mechanism
Shared memory implements inter-process communication by mapping the same block of memory into different process address spaces. Shared memory itself provides no mutual exclusion or synchronization mechanism, so when multiple processes read and write the same memory at the same time, its contents can be corrupted. In practice, synchronization and mutual exclusion must therefore be implemented by the user.
Here are a few system call functions:
(1) Creating shared memory: shmget
key: the key that identifies the segment (typically generated with ftok)
size: the size of the segment in bytes; the kernel rounds it up to a multiple of the page size (4 KB)
shmflg: permission flags
(2) Mapping shared memory into the process's own address space: shmat
shmat maps the segment into the caller's address space. After the shared memory has been created, it must be mapped into the user process space before the process can access it. shmaddr specifies the address at which the shared memory is mapped into the current process; for this setting to take effect, shmflg must include the SHM_RND flag. In most cases, shmaddr should be a null pointer, (void *)0, to let the system choose the address automatically and reduce the program's dependence on hardware. In addition, shmflg can be set to SHM_RDONLY to make the mapping read-only.
Return value: on success, the address of the first byte of the mapping is returned; otherwise -1 is returned.
(3) Unmapping: shmdt
The parameter is the mapping address to be released.
(4) Controlling shared memory: shmctl
Let’s first look at the third parameter: it is a pointer to a struct shmid_ds, the management structure that describes the segment’s state.
The second parameter, cmd, selects the operation:
IPC_STAT: get the state of the shared memory, copying the segment’s shmid_ds structure into buf
IPC_SET: change the state of the shared memory, copying the uid, gid, and mode fields of buf into the segment’s shmid_ds structure
IPC_RMID: delete this shared memory segment
buf: the shared memory management structure
Features of shared memory:
(1) Shared memory allows two unrelated processes to access the same memory
(2) Shared memory is the most efficient way to share and transfer data between two running processes
(3) The memory shared between different processes usually maps to the same physical memory
(4) Shared memory does not provide any mutual exclusion and synchronization mechanism, and semaphores are generally used to protect critical resources.
(5) Simple interface
Features of the main inter-process communication mechanisms:
(1) Pipes
Pipes are divided into named pipes and anonymous pipes. An anonymous pipe can only communicate in one direction and can only be used between related processes, so it is typically used between parent and child. A process creates a pipe and calls fork to create a child; the parent then closes the read end and the child closes the write end to achieve one-way communication. A pipe is byte-stream oriented, has built-in mutual exclusion and synchronization, and its lifetime follows the process.
Named pipes versus anonymous pipes: a named pipe (FIFO) additionally allows two unrelated processes to communicate.
(2) Semaphores
A semaphore is a counter that can be used to control access by multiple processes or threads to a shared resource. It is not used to exchange large amounts of data but for synchronization; it often serves as a locking mechanism that prevents other processes from accessing a resource while one process is using it. Semaphores are therefore mainly a means of synchronization between processes and between threads of the same process.
(3) Message queues
A message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the limitations that signals carry little information, that pipes carry only unformatted byte streams, and that pipe buffers are limited in size. Message queuing is a mechanism for sharing resources between different processes in UNIX: it allows a process to send formatted data, in the form of messages, to any other process. A process with the appropriate permissions can use msgget to obtain a queue and control operations on it. By using message types, a process can read messages in order or assign priorities to them.
(4) Shared memory
Shared memory maps a section of memory created by one process so that it can also be accessed by other processes. Shared memory is the fastest IPC mechanism and was designed specifically to address the lower efficiency of the other IPC mechanisms. It is often combined with another mechanism, such as semaphores, to achieve synchronization between processes.
The above is a detailed introduction to the Linux shared memory implementation mechanism. If you have any questions, you can leave a message on this site for discussion. Thank you for reading, and thank you for your support!