In brief, when the CPU reads memory, it does not go all the way to main memory directly, because that would be very slow. The CPU reads the data it needs from the L1 cache, and the L1 cache fetches data from memory one cache line at a time. When a line in the L1 cache has to be replaced, its data is evicted to the L2 cache, and from there it is eventually written back to memory.
Suppose our cache line is 64 bytes and the cache has 512 lines (32 KB in total). How is this 32 KB cache mapped onto hundreds of megabytes or several gigabytes of memory?
The locations in physical memory that a cache line can hold are not arbitrary but fixed, and the mapping repeats with the size of the cache: memory is divided into 32 KB windows, and each window maps onto the same 512 lines.
Let's assume the physical address of the memory we read data from is 0x1000000.
Then cache line 0 will hold the 64 bytes (one cache line) at 0x1000000 ~ 0x1000040. Line 0 cannot hold the data at 0x1000040 ~ 0x1000080; that range can only go into line 1. The next range that line 0 can hold is 0x1008000 ~ 0x1008040, exactly 32 KB later, and so on.
Now we can explain why the slab allocator colors its slabs.
For example, suppose the CPU is reading and writing address 0x10000008 when a pointer dereference requires the data at 0x10008008. The CPU detects a conflict: line 0 currently holds the 64 bytes at 0x10000000 ~ 0x10000040, and the memory under the other address also maps to line 0. The CPU therefore performs a write-back, transferring the 64-byte block on line 0 back to physical memory at 0x10000000 ~ 0x10000040, and then loads the 64 bytes at 0x10008000 ~ 0x10008040 into line 0, completing one switch after the conflict.
If we need to alternate between these two blocks 1000 times, the cache line has to be evicted and refilled constantly. Moreover, reading memory is much slower than reading the cache, so this thrashing costs a great deal of time.
The solution is to add an offset to the second block of data so that it maps to the next cache line. The two blocks then sit on cache lines 0 and 1 respectively, and reading them back and forth no longer causes unnecessary evictions.
Coloring is exactly this: adding such an offset.