Let's dig into an interesting topic today: how do you optimize the concurrency of a distributed lock when the system must handle thousands of orders per second?

First, let's look at where this question comes from.
Some time ago, a friend of mine was out interviewing. One day he came to me and said that a well-known domestic e-commerce company had given him a scenario question:

Suppose you use a distributed lock to prevent inventory oversold when placing orders, but it is a high-concurrency scenario with thousands of orders per second. How would you optimize the distributed lock to handle that load?

He said he couldn't answer it at the time, because he had never worked on anything like it and had no ideas. When I heard the question, I actually found it rather interesting, because if I were interviewing candidates I would have asked something broader.
For example, I would ask candidates to walk through the solutions to inventory oversold in high-concurrency, flash-sale (seckill) e-commerce scenarios, compare the pros and cons of each solution in practice, and only then get to the topic of distributed locks.

After all, there are many technical solutions to the inventory oversold problem: pessimistic locking, distributed locks, optimistic locking, queue-based serialization, Redis atomic operations, and so on.

But since the interviewer restricted the question to distributed locks, I assume he wanted to ask exactly one thing: how do you optimize the concurrency of a distributed lock in a high-concurrency scenario?

I think that angle is fair. In real production systems, a distributed lock guarantees the correctness of the data, but its concurrency is naturally rather weak.

It so happens that I implemented exactly this kind of high-concurrency distributed lock optimization in another part of my own project, so let's use my friend's interview question as an excuse to walk through the idea.
How does inventory oversold happen?
First, if we don't use a distributed lock at all, what does the so-called inventory oversold of e-commerce actually look like? Take a look at the picture below.

The figure is quite clear. Suppose the order system is deployed on two machines, and two different users both want to buy 10 iPhones at the same time, each sending a request to the order system. Each order system instance then queries the database and sees a current iPhone inventory of 12.

Both users are delighted: 12 units in stock is more than the 10 they each want to buy! So each order system instance sends SQL to the database to place an order and deduct 10 units of inventory. One instance reduces the inventory from 12 to 2, and the other then reduces it from 2 to -8.

Now it's a disaster: the inventory is negative! There were never 20 iPhones for the two users. What can we do?
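The race can be reproduced with a minimal single-JVM sketch (the class, field, and method names are illustrative, and two sequential calls stand in for the two order system instances):

```java
// Illustrative sketch: both "instances" read the stock before either writes,
// reproducing the check-then-act race that drives inventory negative.
public class OversoldDemo {
    static int dbStock = 12; // stands in for the stock row in the database

    // check a previously read snapshot, then deduct: no lock between read and write
    static void placeOrder(int snapshot, int quantity) {
        if (snapshot >= quantity) { // both instances saw 12 >= 10
            dbStock -= quantity;    // so both deduct, and the second oversells
        }
    }

    public static void main(String[] args) {
        int snapshotA = dbStock;   // instance A reads 12
        int snapshotB = dbStock;   // instance B also reads 12
        placeOrder(snapshotA, 10); // stock: 12 -> 2
        placeOrder(snapshotB, 10); // stock: 2 -> -8, oversold!
        System.out.println(dbStock); // prints -8
    }
}
```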
How does a distributed lock solve inventory oversold?
How can a distributed lock solve the oversold problem? It's actually very simple. Recall the implementation principle of the distributed lock we discussed last time:

For the same lock key, only one client can hold the lock at a time; every other client waits, repeatedly trying to acquire it. Only the client that holds the lock gets to execute the business logic that follows.

The code looks roughly like the above. Now let's analyze why this avoids inventory oversold.

Follow the numbered steps in the figure above and it becomes clear immediately: only one order system instance can successfully acquire the distributed lock at a time, and only that instance gets to check the inventory, judge whether it is sufficient, place the order, deduct the inventory, and finally release the lock.

Once the lock is released, the other order system instance can acquire it. It then checks the inventory, finds only 2 units left, concludes the stock is insufficient, and fails the order. The inventory never drops to -8.
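Here is a minimal sketch of that lock-check-deduct-release flow. A per-product ReentrantLock stands in for a real distributed lock (which would live in Redis or ZooKeeper), and the key name "iphone_stock" and all other identifiers are illustrative:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Sketch of the lock -> check -> deduct -> release flow, serialized per product.
public class LockedOrderService {
    static final Map<String, ReentrantLock> LOCKS = new ConcurrentHashMap<>();
    static final Map<String, Integer> STOCK = new ConcurrentHashMap<>();

    static boolean placeOrder(String product, int quantity) {
        ReentrantLock lock = LOCKS.computeIfAbsent(product, k -> new ReentrantLock());
        lock.lock(); // other clients block here until the lock is released
        try {
            int stock = STOCK.getOrDefault(product, 0); // 1. check inventory
            if (stock < quantity) {
                return false;                           // 2. insufficient: fail the order
            }
            STOCK.put(product, stock - quantity);       // 3. deduct inventory
            return true;
        } finally {
            lock.unlock();                              // 4. release the lock
        }
    }

    public static void main(String[] args) {
        STOCK.put("iphone_stock", 12);
        System.out.println(placeOrder("iphone_stock", 10)); // true: 12 -> 2
        System.out.println(placeOrder("iphone_stock", 10)); // false: only 2 left
        System.out.println(STOCK.get("iphone_stock"));      // 2, never negative
    }
}
```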
Are there other solutions to the oversold problem?

Of course! Pessimistic locking, optimistic locking, queue-based serialization, asynchronous queue dispersal, Redis atomic operations, and quite a few more; we have a whole set of optimization mechanisms for inventory oversold.

But as I said before, this article is about the concurrency optimization of distributed locks, not about solving inventory oversold in general. Oversold is just the business scenario we're borrowing.
The distributed lock scheme under high concurrency

OK, now let's see what's wrong with the distributed lock scheme in a high-concurrency scenario.
It's a big problem! Have you spotted it? Once the distributed lock is in place, every order request for the same product forces every client to contend for that product's single inventory lock key.

For example, every order for an iPhone must lock on the "iphone_stock" key. As a result, order requests for the same product are serialized and processed one at a time. Go back to the figure above and this should be easy to see.

Suppose the work done between acquiring and releasing the lock, that is, checking the inventory, creating the order, and deducting the inventory, is highly optimized, and the whole flow takes 20 milliseconds. That would already be quite good.

One second is 1,000 milliseconds, so it can only accommodate 50 requests for this product in sequence. If 50 iPhone order requests arrive within one second, each takes 20 ms, one after another, and the full 1,000 ms is spent finishing exactly those 50 requests.
Take a look at the picture below to make this concrete.

So by this point, we at least understand the defect of naively using a distributed lock to deal with inventory oversold.

The defect is that when many users place orders for the same product at the same time, the distributed lock serializes them, making it impossible to process a large number of orders for the same product concurrently.

This scheme may be acceptable for an ordinary small e-commerce system with low concurrency and no flash sales. If concurrency is very low, say fewer than 10 requests per second, and there is no instant burst of traffic hammering a single product, you will rarely see 1,000 orders for the same product within one second; small e-commerce systems simply don't have that scenario.
How do we optimize the distributed lock for high concurrency?

Well then, what should we actually do?

The interviewer's question, restated: I'm using a distributed lock to prevent inventory oversold, and thousands of orders per second are placed for a single iPhone. How do I optimize?

As calculated above, as things stand you can only process 50 iPhone orders per second.
The answer is actually simple to state. Anyone who has read the source code and underlying principles of ConcurrentHashMap in Java will recognize the core idea: segmented locking!

The data is split into many segments, each guarded by its own lock, so multiple threads modifying different segments can proceed concurrently. It is emphatically not the case that only one thread at a time can modify data in a ConcurrentHashMap.

In addition, Java 8 introduced the LongAdder class, an optimization over the older AtomicLong. AtomicLong relies on optimistic, CAS-based updates, which under high concurrency cause a large number of threads to spin in retry loops for a long time.

LongAdder adopts a similar segmented-CAS idea: when a CAS on one cell fails, the thread automatically migrates to another cell and retries there.
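A quick demonstration of LongAdder under contention (thread and iteration counts below are arbitrary):

```java
import java.util.concurrent.atomic.LongAdder;

// LongAdder applies the segmented-CAS idea: a thread that loses a CAS race on
// one internal cell moves to another cell instead of spinning on a single
// shared counter, and sum() totals every cell at read time.
public class LongAdderDemo {
    static long countWith(int threadCount, int incrementsPerThread) {
        LongAdder counter = new LongAdder();
        Thread[] threads = new Thread[threadCount];
        for (int i = 0; i < threadCount; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < incrementsPerThread; j++) {
                    counter.increment(); // contended updates spread over cells
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            try {
                t.join(); // wait so sum() sees every increment
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return counter.sum();
    }

    public static void main(String[] args) {
        System.out.println(countWith(8, 10_000)); // prints 80000: no lost updates
    }
}
```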
The optimization idea for distributed locks is much the same. We actually implemented this scheme in a different business scenario, not for inventory oversold.

But the inventory oversold scenario is simple and easy to understand, so let's stick with it. Take a look at the picture below.

This is locking by segments. Suppose you have 1,000 iPhones in stock: split them into 20 stock segments. If you like, create 20 stock fields in the database table, such as stock_01, stock_02, and so on; alternatively, keep 20 inventory keys in a store like Redis.

In short, split your 1,000 units so that each stock segment holds 50: stock_01 holds 50 units, stock_02 holds 50 units, and so on.
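A minimal sketch of that split, assuming the stock divides evenly across segments; in production the map below would instead be 20 columns in the stock table or 20 Redis keys:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Split a total stock count into equally sized named segments,
// mirroring the stock_01 .. stock_20 fields/keys described above.
public class StockSegments {
    static Map<String, Integer> split(int totalStock, int segmentCount) {
        Map<String, Integer> segments = new LinkedHashMap<>();
        int perSegment = totalStock / segmentCount; // assumes even division
        for (int i = 1; i <= segmentCount; i++) {
            segments.put(String.format("stock_%02d", i), perSegment);
        }
        return segments;
    }

    public static void main(String[] args) {
        Map<String, Integer> segments = split(1000, 20);
        System.out.println(segments.get("stock_01")); // 50
        System.out.println(segments.size());          // 20
    }
}
```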
Then, 1,000 requests arrive per second. No problem! Write a simple random algorithm so that each request picks one of the 20 stock segments at random and locks only that segment.

Bingo! Now up to 20 order requests can execute at the same time, each holding the lock on one inventory segment. Inside the business logic, each request operates on its own segment's inventory in the database or Redis: checking the stock, judging whether it is sufficient, and deducting it.

What does that buy us? In each 20-millisecond window you can now process 20 order requests in parallel, so in one second you can process 20 * 50 = 1,000 iPhone order requests.

Once you segment data like this, there is a pitfall you must watch out for: what if an order request locks a segment and then discovers that segment's inventory is insufficient?

In that case, you must release that segment's lock immediately, move on to the next segment, acquire its lock, and try the deduction again. This segment-hopping process has to be implemented explicitly.
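The whole scheme, random segment selection plus the fall-through to the next segment, can be sketched as follows. Per-segment ReentrantLocks again stand in for 20 distributed locks, and all names are illustrative:

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.locks.ReentrantLock;

// Segmented deduction with fallback: pick a random segment, lock only that
// segment, and if it cannot cover the order, release its lock and move on.
public class SegmentedStock {
    final int[] stock;            // remaining stock per segment, e.g. 20 x 50
    final ReentrantLock[] locks;  // one lock per segment

    SegmentedStock(int segmentCount, int perSegment) {
        stock = new int[segmentCount];
        locks = new ReentrantLock[segmentCount];
        for (int i = 0; i < segmentCount; i++) {
            stock[i] = perSegment;
            locks[i] = new ReentrantLock();
        }
    }

    boolean deduct(int quantity) {
        int start = ThreadLocalRandom.current().nextInt(stock.length); // random entry point
        for (int i = 0; i < stock.length; i++) {
            int seg = (start + i) % stock.length; // wrap around all segments
            locks[seg].lock();
            try {
                if (stock[seg] >= quantity) {
                    stock[seg] -= quantity; // this segment covers the order
                    return true;
                }
                // insufficient: the finally block releases this segment's lock
                // and the loop retries the next segment
            } finally {
                locks[seg].unlock();
            }
        }
        return false; // every single segment was short: the order genuinely fails
    }

    public static void main(String[] args) {
        SegmentedStock iphoneStock = new SegmentedStock(20, 50);
        System.out.println(iphoneStock.deduct(30)); // true: some segment covers it
        System.out.println(iphoneStock.deduct(60)); // false: no one segment holds 60
    }
}
```

Note that this sketch only checks one segment at a time, so an order larger than any single segment fails even if the total stock could cover it; that is part of the complexity cost discussed next.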
Does this concurrency optimization scheme have any shortcomings?

It certainly does. The biggest one, in case you haven't spotted it, is that it is quite inconvenient: the implementation is rather complex.

- First, you have to store one piece of data in segments: where a single inventory field used to suffice, you now maintain 20 of them.
- Second, every time you process inventory, you have to run your own random algorithm to select a segment.
- Finally, if one segment's data is insufficient, you have to automatically fall through to the next segment.

All of this is code you write by hand, which is a fair amount of work and quite fiddly.
That said, we do use distributed locks in some of our business scenarios, we did have to optimize their concurrency, and we went on to apply this segmented locking technique. The effect is very good: concurrency throughput improves by dozens of times in one stroke.
Follow-up improvements to this scheme

Taking the inventory oversold scenario as an example, though: if you actually play it this way in that scenario, you will make life miserable for yourself!

Once again, inventory oversold here is only a demonstration scenario. When we get the chance, we'll talk about the other oversold solutions used in high-concurrency seckill system architectures.