Redis: 13. Scaling Redis

Time:2021-2-14

Brief introduction

When the amount of data grows or the number of read-write requests increases, a single Redis server may no longer be able to store all the data or handle all the read-write requests. We therefore need to scale Redis so that it can both store all the data and keep processing read-write requests normally. (P227)

Scaling read performance (P227)

Several ways to improve read performance (P228)
  • Use short structures: make sure the maximum size configured for the compressed-list (ziplist) encodings is not too large
  • Choose the structure to match the query type:

    • Don't use lists where sets are needed
    • Use sorted sets instead of fetching an entire hash and sorting it on the client
  • Compress large objects before storing them: this reduces the network bandwidth needed for reads and writes. Compare compression algorithms such as lz4, gzip and bzip2, and choose the one with the best compression ratio and performance for your data
  • Use pipelining and connection pooling: pipelining is described in the chapter on replication, failure handling, transactions and performance optimization

The simplest way to scale read performance is to add read-only slave servers (the chapter on replication, failure handling, transactions and performance optimization describes how a server becomes a slave, how slaves work and how to manage them), and to write only to the master server (by default, attempting to write to a slave server raises an error, even if that slave is itself the master of other slaves). (P228)

Adding a slave server (P228)
  • In the configuration file: slaveof <master-host> <master-port>
  • Send to a running Redis server: SLAVEOF <master-host> <master-port>

Sending the SLAVEOF NO ONE command to a slave server disconnects it from its master. (P228)
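
Putting the two commands above together, a replica setup and a later promotion might look like this (a sketch; the host addresses and ports are illustrative):

```
# in the slave's redis.conf:
slaveof 10.0.0.1 6379

# later, to promote the slave to an independent master:
# redis-cli -h slave-host SLAVEOF NO ONE
```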

When a master server has a large number of slave servers, they can use up most of the available bandwidth when they all resynchronize at the same time, raising the master's latency and even causing the master and its slaves to disconnect. (P229)

Solutions to the slave resync problem (P229)

  • Build a tree-like group of slave servers: add an intermediate layer of slaves so the master only has to send its data to a few first-level slaves, which in turn serve the rest
  • Compress the network connection: connecting through an SSH tunnel with compression enabled can significantly reduce bandwidth (note the SSH options that make the tunnel reconnect automatically after a disconnect)
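
The compressed-tunnel approach above might be set up like this (a sketch; hosts and ports are illustrative). -C enables compression, -N skips running a remote command, and -L forwards a local port to the master's Redis port:

```
ssh -C -N -L 6380:localhost:6379 user@master-host
# then point the slave at the local end of the tunnel:
# slaveof 127.0.0.1 6380
```
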

Failover (P230)

Redis Sentinel works together with Redis replication and can fail over an offline master server. Redis Sentinel is a Redis server running in a special mode. It monitors a set of master servers and their slaves, sending PUBLISH and SUBSCRIBE commands to the masters and PING commands to both masters and slaves; from this, each Sentinel process can independently identify the available slaves and the other Sentinels. When a master fails, all the Sentinels monitoring it elect one Sentinel based on their shared information, and that Sentinel selects a new master from the existing slaves. The elected Sentinel then makes the remaining slaves replicate the new master (by default Sentinel migrates the slaves one at a time, but this number can be changed with a configuration option). (P230)
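
A minimal Sentinel configuration might look like the following (a sketch; the master name, address, timeouts and quorum of 2 are illustrative). parallel-syncs is the option mentioned above that controls how many slaves are reconfigured to the new master at the same time:

```
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 60000
```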

Redis Sentinel also offers an optional failover notification feature, which can run user-supplied scripts to perform configuration updates and other operations. (P230)

Scaling write performance and memory capacity (P230)

Reducing memory usage and write volume (P231)
  • Reduce the amount of data the program needs to read
  • Migrate unrelated functionality to other servers
  • Aggregate data in memory before writing it to Redis (applicable to analytics and statistical calculations)
  • Use locks or Lua scripts instead of WATCH/MULTI/EXEC transactions
  • AOF persistence stores every write; consider configuring AOF rewrites, or using RDB instead
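
The in-memory aggregation idea above can be sketched as follows. Here send_batch is a stand-in for executing a redis-py pipeline, not a real API; the point is that many events become one batched write:

```python
from collections import Counter

# Aggregate counters locally, then flush them to Redis in one batch
# instead of issuing one INCR per event.
local_counts = Counter()

def record_hit(page):
    """Count an event in local memory only (no network round trip)."""
    local_counts[page] += 1

def flush(send_batch):
    """Turn the aggregated counts into one batch of INCRBY commands.
    `send_batch` stands in for executing a Redis pipeline."""
    batch = [("INCRBY", f"hits:{page}", n) for page, n in local_counts.items()]
    local_counts.clear()
    send_batch(batch)
    return batch
```

With this pattern, a page tracked a thousand times between flushes costs one INCRBY instead of a thousand.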

When the methods above can no longer reduce memory usage or improve performance, we have hit the bottleneck of using a single machine and need to shard the data across multiple machines. The approach introduced here uses a fixed number of shards, chosen so that the sharding scheme meets expectations for the next few years; assume the number of shards is 256. In the early stages, when the amount of data is small, each Redis server does not need its own machine: several Redis servers can share one machine, or each Redis server can use multiple Redis databases. (Note: when running multiple Redis servers on one machine, make sure they listen on different ports and write different snapshot files / AOF files.) (P231)
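
The note above about several Redis servers sharing one machine might translate into per-instance configuration files like this (a sketch; ports and paths are illustrative):

```
# redis-6380.conf -- one of several instances on the same machine
port 6380
dbfilename dump-6380.rdb
appendfilename "appendonly-6380.aof"
dir /var/lib/redis/6380
```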

This sharding method can be used directly to reduce memory usage. First use a hash function to turn the key into a numeric hash value, then use the number of shards to work out which connection to use. In other words, instead of sharding the key itself, we shard the connection. (P234)
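
The hash-then-modulo step above can be sketched as follows (CRC32 is an assumed hash choice; any stable hash works, and the host list is illustrative):

```python
import binascii

# Fixed shard count, chosen up front to cover expected growth.
SHARD_COUNT = 256

def shard_id(key, shard_count=SHARD_COUNT):
    """Hash the key to a number, then map it onto one of the shards."""
    return binascii.crc32(key.encode("utf-8")) % shard_count

def connection_info(key, hosts):
    """Map a key to one of the configured shard servers.
    `hosts` is a list of (host, port) pairs. Note the key itself is
    unchanged: we shard which connection handles it."""
    return hosts[shard_id(key) % len(hosts)]
```

Because the shard count is fixed, the same key always maps to the same shard, so no data moves when traffic grows within the planned capacity.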

If the performance of complex queries is limited by Redis's single-threaded design, and the machine has spare CPU cores, network capacity, and disk I/O for snapshot files and AOF files, consider running multiple Redis servers on that one machine. (Again: make sure the servers listen on different ports and write different snapshot files / AOF files.) (P234)

Personal thoughts

If network I/O becomes a bottleneck, the multithreading feature of Redis 6.0 is also worth considering. It mainly improves the performance of reading and writing the socket buffers, because that is where a large share of the time goes, while command execution is still handled by a single thread. This improves overall performance while keeping the design simple and introducing no new concurrency problems.
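
In configuration terms, the Redis 6.0 feature above is enabled with the io-threads options (a sketch; the thread count is illustrative and should be tuned to the machine's cores):

```
# redis.conf (Redis 6.0+): move socket read/write onto extra I/O
# threads; command execution itself remains single-threaded
io-threads 4
io-threads-do-reads yes
```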

For globally unique data, such as a unique-visit counter, a separate dedicated connection can be used to store that kind of data.

Scaling complex queries (P234)

Scaling search queries (P235)

The search implementations described in the chapter on content search, targeted advertising and job search all use commands such as SUNIONSTORE, SINTERSTORE, SDIFFSTORE, ZINTERSTORE and ZUNIONSTORE. These commands need to write to Redis, so the read-only slave servers described above cannot handle these searches. (P235)

To run these searches, writing to the slave servers must be enabled. In the Redis configuration file, the slave-read-only option controls whether a slave can be written to; its default value is yes. Setting slave-read-only to no and restarting the slave lets these searches run normally. (P235)

As long as the machine has enough memory and the slave performs only read operations (or operations that do not modify the underlying data used by other queries), adding slave servers helps us scale out.

Scaling the search index size (P235)

Before search queries can be sharded across connections, the search index itself must be sharded, ensuring that for each indexed document, all of that document's data is stored on the same connection shard. (P236)

The actual process of a sharded search roughly breaks down into three steps:

  • Write a query program that can run against a single shard and return the search results to be sorted
  • Run this query program on every shard
  • Merge the per-shard results and pick out the desired portion

Note: since it is impossible to tell which shard each entry of a paginated result comes from, to guarantee that the returned data lies within [start, start + num], the program must fetch [0, start + num] from every shard and then select the final results in memory. (P236)
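
The merge-and-page step above can be sketched as follows, assuming each shard already returns its [0, start + num] results as (score, member) pairs sorted by descending score:

```python
import heapq

def merged_page(shard_results, start, num):
    """Merge per-shard result lists (each sorted by descending score)
    and return the global page [start, start + num).
    `shard_results` is a list of lists of (score, member) pairs."""
    # heapq.merge lazily interleaves sorted inputs; negating the score
    # turns its ascending order into descending-by-score.
    merged = heapq.merge(*shard_results, key=lambda sm: -sm[0])
    return list(merged)[start:start + num]
```

Note that every shard must be asked for start + num items even if only num are wanted, which is exactly the over-fetch described above.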

Personal thoughts

There are really two forms of sharding:

  • Sharding keys: suitable when there are a large number of similar keys, each holding a small amount of data
  • Sharding data: suitable when individual keys hold large amounts of data

Key sharding is essentially tied to connection sharding, because a large number of keys only pays off when spread over multiple connections; data sharding can be turned into multiple keys on a single connection, or combined with connection sharding.

This article was first published on the official account "full Fu machine", and is open-sourced on GitHub: reading-notes/redis-in-action
