- Can't we just throw database and table sharding at the heavy concurrent writes of a seckill?
- Is the traffic peak of a seckill really that hard to handle?
- Don't fool me with Taobao-scale seckills
Characteristics of seckill activities
I dare say every developer who has worked on e-commerce has run into "high concurrency" events such as seckills (flash sales) and time-limited purchases. There are plenty of seckill solutions on the market, such as database and table sharding, caching, and message queues, and almost every "clever" technique anyone can think of gets written up somewhere. I think we should instead carefully analyze the characteristics of the business and decide, based on the system's actual volume, which techniques are really needed. For example, a system with 100,000 daily active users that adopts sharding, caching, message queues, rate limiting, degradation, and every other means may achieve its functional goal, but it wastes resources: technically, rate limiting alone would have been enough. My point is simply that "optimizations" such as database and table sharding must be chosen according to the actual business.
To get to the point, the business scenario of a seckill has obvious characteristics:
- Traffic spikes sharply for a short time, i.e. a huge number of requests arrive within a brief window
- The requested data is hot, i.e. a large number of requests target the same data
- The success rate of requests is low, i.e. only a small fraction of requests actually complete the business
- The traffic peak occurs before the order is placed, i.e. there is very little peak traffic in the payment phase
Static resources
Static resources are resources that rarely change, such as images, videos, audio, and HTML pages. They should be handled much like a cache and placed as close to the user as possible. For example, when the browser cache expires, fetch them from a CDN; a CDN is the simplest and most effective way to absorb peak access to static resources. What if there is no CDN? At the very least, the servers serving static resources should be physically separated from the backend business servers, so that static traffic cannot disturb normal business. For example, a website I liked long ago stored each project's images, CSS, and JS separately; servers like that are stateless and can be scaled horizontally trivially.
As for how to update cached static resources, a quick search online will turn up plenty of answers.
A good product manager in charge of a seckill would not design a system in which clicking the seckill button immediately tells the user whether the order succeeded. Anyone familiar with distributed systems knows that guaranteeing that level of data consistency sacrifices some availability. For a business like a seckill, I think availability has higher priority than consistency, so almost all seckill systems are designed around the BASE theory and settle for eventual consistency. From the user's point of view, clicking the seckill button pops up a waiting prompt; technically, we call this asynchronous processing. The most visible effect of asynchronous processing for users is that they do not get the result immediately but have to wait a while. Such a design is a trade-off between technology and business: a concession made by the business side.
Requiring a verification code or a quiz answer before the seckill can also be seen as a business concession. Why a concession? The ideal seckill for users is that one click on the button yields an immediate result, but that is simply too hard technically. So both sides meet halfway and everyone is better off, right?
The first move: rate limiting
For peak seckill traffic, rate limiting is the most direct means of peak shaving. Rejected requests can be returned immediately, with a prompt shown on the client. Imagine reducing 10,000 requests/s to 100 requests/s: with a little optimization, the system can probably cope. As for the rate-limiting strategy, there are many options depending on the business, such as:
- Limit requests from the same user, e.g. each user may request only once every 10 seconds
- Limit requests from the same IP, e.g. each IP may request only once every 10 seconds
As for the rate-limiting algorithms themselves, I wrote an article introducing them earlier; they perform well.
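As a concrete illustration of the per-user strategy above, here is a minimal token-bucket sketch (one common rate-limiting algorithm; the class and parameter names are my own, not from the article's earlier post). With a capacity of 1 and a refill rate of 0.1 tokens/s, it implements "each user may request only once every 10 seconds" if one bucket is kept per user:

```python
import time

class TokenBucket:
    """Minimal token bucket: refills `rate` tokens per second up to
    `capacity`; a request is allowed only if a whole token is available."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill based on the time elapsed since the last check.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per user: 1 token every 10 seconds.
bucket = TokenBucket(rate=0.1, capacity=1)
print(bucket.allow())  # first request passes
print(bucket.allow())  # an immediate retry is rejected
```

In production the bucket state would live in something shared like Redis rather than process memory, so that all application servers enforce the same limit.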
The second move: message queues
When it comes to peak traffic, every programmer thinks of the message queue, which is essentially a buffer: in terms of usage scenarios, it acts as a balancer between slow and fast components. Peak shaving with a message queue is an obviously asynchronous process.
Applied to a seckill, the large volume of requests first enters the message queue, which both flattens the traffic peak and makes order placement asynchronous. As long as requests are temporarily held in the queue, the consumer can process them at its own pace. Note, however, that if consumption is much slower than message delivery, the growing backlog can hurt the whole system's performance.
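The shape of this peak shaving can be sketched with an in-process queue standing in for a real broker (Kafka, RocketMQ, etc.; `place_order` and the sentinel-based shutdown are illustrative, not a real broker API). The fast path only enqueues; the consumer drains at its own pace, so the order logic never sees the raw burst:

```python
import queue
import threading

order_queue = queue.Queue(maxsize=1000)  # stand-in for a real message broker
processed = []

def place_order(user_id):
    """Fast path: just enqueue; a full queue means the request is rejected."""
    try:
        order_queue.put_nowait(user_id)
        return True
    except queue.Full:
        return False

def consumer():
    """Slow path: process requests at whatever pace the order service can handle."""
    while True:
        user_id = order_queue.get()
        if user_id is None:           # sentinel: stop consuming
            break
        processed.append(user_id)     # stand-in for the real order-placing logic

t = threading.Thread(target=consumer)
t.start()
for uid in range(10):                 # a short burst of "requests"
    place_order(uid)
order_queue.put(None)                 # tell the consumer to shut down
t.join()
print(len(processed))                 # all 10 requests eventually consumed
```

The `maxsize` bound matters: together with rate limiting, it turns an overload into fast rejections instead of an unbounded backlog.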
Beyond peak shaving, I have always held that the most important role of a message queue is decoupling: it separates the ordering and payment businesses, so each can scale independently according to its own system's capacity.
The third move: caching
Why add a cache? Don't forget that besides the large number of users placing orders (a write operation), an even larger number of requests query the order result (a read operation). After the user clicks the seckill button and the system pops up the waiting prompt, many clients keep polling for the order result. I have also written about caching before, where I noted that the biggest value of a cache is providing fast responses to reads. The whole seckill system can work like this:
- The user clicks the order button; the request passes through the rate-limiting component, and if it gets through, it enters the ordering phase (here it can go into the message queue to place the order asynchronously)
- The server, whether it uses Redis or another cache component, stores the user (or order) information of successful orders
- When the client queries the order result, a cache miss means the order has not succeeded yet
- When the server places an order successfully, it writes the data to the cache; the next time the user queries, it reports success
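The read path above can be sketched with a plain dict standing in for Redis (the function names and status strings are hypothetical, chosen for the example). The client's polling is served entirely from the cache and never touches the database:

```python
# In-memory dict as a stand-in for Redis.
order_cache = {}

QUEUED, SUCCESS, SOLD_OUT = "queued", "success", "sold_out"

def on_order_accepted(user_id):
    """Called when the request passes rate limiting and enters the queue."""
    order_cache[user_id] = QUEUED

def on_order_processed(user_id, ok):
    """Called by the queue consumer once the order is actually attempted."""
    order_cache[user_id] = SUCCESS if ok else SOLD_OUT

def query_order(user_id):
    """The read path the client polls: answered purely from cache."""
    return order_cache.get(user_id, "unknown")

on_order_accepted(42)
print(query_order(42))        # "queued": keep showing the waiting prompt
on_order_processed(42, ok=True)
print(query_order(42))        # "success": the client can stop polling
```

A miss (`"unknown"`) covers requests that were rejected by rate limiting and never entered the queue at all.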
Although the process is simple, many details hide in it, for example: how should the cache expiration time be set? Should an explicit order status be introduced? How do we keep the cached data consistent with the database?
Besides the order data above, product information can also be cached. Since read requests vastly outnumber writes, cache replicas can be used to raise overall throughput.
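One of the questions above, the expiration time, is often answered with a simple read-through cache with a TTL; here is a minimal sketch (the `db_load` helper and the 5-second TTL are illustrative assumptions). Within the TTL, repeated reads of product info never hit the database:

```python
import time

_cache = {}  # product_id -> (value, timestamp); stand-in for Redis with TTL

def db_load(product_id):
    """Pretend database read: the slow path."""
    return {"id": product_id, "stock": 100}

def get_product(product_id, ttl=5.0):
    """Read-through cache: serve from cache while fresh, else reload from DB."""
    entry = _cache.get(product_id)
    now = time.monotonic()
    if entry is not None and now - entry[1] < ttl:
        return entry[0]               # cache hit: fast response for reads
    value = db_load(product_id)       # miss or expired: go to the database
    _cache[product_id] = (value, now)
    return value
```

The TTL is the consistency knob: a short TTL bounds how stale the product data can get, at the cost of more database reads when it expires.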
Written at the end
In fact, for many systems, message queues plus rate limiting are already enough for a seckill service; other schemes such as database and table sharding should be decided by your own business volume. The simpler a system is, the better. Not every system needs Taobao's architecture.