Author: qinyutong, chengyueqiang
A consensus mechanism is a distributed consensus algorithm for blockchain transactions. With the continuing spread of blockchain technology, the consensus mechanism, as the core of blockchain, has attracted more and more attention, and it plays an important role in protecting data consistency. In this paper, eight common consensus mechanisms are selected. For each mechanism, its principle, its role in the operation of the system, its algorithm flow, and its advantages and disadvantages are introduced in detail, covering proof of work, proof of stake, proof of capacity and others. The paper also makes a comparative analysis of similar mechanisms, so as to deepen readers' understanding of consensus mechanisms and accelerate the development of blockchain technology.
Blockchain is the underlying technology of Bitcoin and resembles a database ledger, while the consensus mechanism is the core rule set of the decentralized distributed ledger; it determines many important characteristics of a blockchain, such as security, scalability and decentralization.
A consensus mechanism refers to the process of reaching unified agreement on the state of the network in a decentralized way. Also known as a consensus algorithm, it helps verify and validate the information added to the ledger, ensuring that only real transactions are recorded on the blockchain. The consensus mechanism is therefore responsible for safely updating the data state in the distributed network. Rules hard-coded into the protocol ensure that a single data source can always be found and agreed on across the global network of computers. These rules protect the entire network, enabling a network that requires no trust, no central data store and no intermediary.
The consensus mechanism is an important means of deciding which participating node should do the bookkeeping and of ensuring that transactions complete safely. It must balance efficiency against security: the more complex the security measures, the slower the corresponding processing. To improve processing speed, simplifying the security measures is an important step.
A consensus mechanism must satisfy both consistency and validity. Consistency means that all honest nodes keep the same blockchain prefix, while validity means that information published by an honest node will eventually be recorded by all honest nodes in their own blocks. The consensus mechanism makes the blockchain fault-tolerant, and therefore reliable and consistent. Unlike in centralized systems, users do not have to trust anyone in the system. The protocol rules embedded in the consensus mechanism ensure that only valid and real transactions are recorded in the public, transparent ledger, and that the state of the public ledger is always updated as the public consensus changes.
An important advantage of a decentralized blockchain is the distribution of authority: anyone can participate on an equal basis. The consensus mechanism ensures that there is no discrimination within the blockchain, achieving fair distribution. Because a public blockchain is open source, anyone can inspect and verify whether the underlying source code is fair to all participants in the network.
Consensus mechanisms achieve this by rewarding good behavior and, in some cases, punishing bad behavior. For example, in the proof-of-work mechanism, bitcoin is awarded to the miners who secure and verify every transaction. Computation and security maintenance require a great deal of computing power and money, and the consensus mechanism ensures that these resources are used to work for the system rather than against it.
Common consensus mechanisms include: POW (proof of work), POS (proof of stake), DPOS (delegated proof of stake), PBFT (practical Byzantine fault tolerance), Pool (verification pool), etc.
2. POW: proof of work
In 2008, POW first received attention through the Bitcoin white paper. POW relies on machines performing mathematical operations (hash computations that search for a random number satisfying the difficulty rule) to obtain the right to keep accounts; the result is broadcast to the other nodes in the network, verified by them, and stored once consensus is reached. POW was originally an economics term referring to a measurement method set by a system to achieve a certain goal; put simply, it is proof that you have done a certain amount of work. Monitoring the whole working process is usually extremely inefficient, whereas certifying the result of the work is a very efficient way to prove that the corresponding amount of work has been completed.
For example: 10 + ? = 12. Whoever computes the answer first gets the reward.
In a word: the more work you do, the more you get.
Figure 1: diagram of how POW works
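The nonce search described above can be sketched in a few lines (a toy model: it uses SHA-256 and counts leading hex zeros as the difficulty rule, a simplification of Bitcoin's actual 256-bit target encoding):

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce such that SHA-256(data + nonce)
    begins with `difficulty` hexadecimal zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int) -> bool:
    """Checking a claimed nonce costs a single hash."""
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

The asymmetry is the point: `mine` may need many attempts, while `verify` costs one hash, which is why other nodes can cheaply check the proof.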
2.1 term interpretation
Hash function: a one-way encryption function. A hash algorithm can take data of any size and return a fixed-length string that is effectively unique to the specific input.
Nonce: a random number that can only be used once.
Miner: an independent transaction processor in a cryptocurrency network whose goal is to verify transactions; also known as a full node or node.
Worker: must complete a certain amount of work, which is issued by the verifier;
workers cannot find a way to finish the work quickly;
workers cannot "create work" on their own — work must be issued by the verifier.
Verifier: can quickly test whether the workload meets the standard.
2.3 advantages
1. The algorithm is simple and easy to implement.
2. There is no need to exchange additional information between nodes to reach a consensus (free access between nodes).
3. It takes a lot of cost to destroy the system.
4. All nodes of the whole network are required to participate and completely decentralize.
2.4 disadvantages
1. At present, Bitcoin has attracted most of the world's computing power, so a new blockchain must adopt a different hash algorithm; it is difficult to obtain the same security guarantee with computing power comparable to the past.
2. A lot of resources are wasted.
3. The period for reaching consensus is long, which is unsuitable for commercial applications (forks are easy to produce, multiple confirmations must be awaited, and the block confirmation time is hard to shorten).
4. There is never finality; a checkpoint mechanism is needed to compensate.
2.5 application examples
In Bitcoin, POW is used to confirm the validity of blocks: as long as the CPU workload consumed satisfies the proof-of-work mechanism, the information in a block cannot be changed unless the corresponding workload is redone.
2.6 problem interpretation
Why does POW consume energy seriously?
——Because every guess a miner makes requires the computer to expend energy. At present, the hash rate of the whole Bitcoin network is about 17,000,000 TH/s, i.e., on the order of 1.7 × 10^19 guesses per second. This energy demand is roughly comparable to Hungary's consumption.
3. Pos: proof of stake
In 2012, POS was first proposed with the introduction of Peercoin. POS is an upgraded consensus mechanism relative to POW that does not need to consume electricity on brute-force calculation. The difficulty for each node to obtain the accounting right is made inversely proportional to the stake the node holds, reducing the mining difficulty in proportion and thus speeding up the search for the random number. For simplicity, POS has no miners but validators instead. Accounting rights are still obtained through competitive hash computation, and the fault tolerance is the same as POW. POS is based on the amount of digital currency a participant currently owns: it is a system that distributes interest according to the amount and holding time of the currency. In POS mode, "mining" income is directly proportional to coin age and has nothing to do with the computing performance of the machine.
For example, POS is analogous to the money in our hands: the more money we have, the more influence we get in life. In a word: the more you hold, the more you get.
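Stake-weighted selection can be sketched as follows (an illustrative model, not any specific chain's rule; real systems mix in coin age, randomness beacons or other factors):

```python
import random

def select_validator(stakes: dict, seed: int) -> str:
    """Pick a validator with probability proportional to its stake."""
    rng = random.Random(seed)
    validators = sorted(stakes)          # deterministic iteration order
    total = sum(stakes.values())
    point = rng.uniform(0, total)        # a random point on the stake line
    cumulative = 0.0
    for v in validators:
        cumulative += stakes[v]
        if point <= cumulative:
            return v
    return validators[-1]                # guard against float rounding
```

A node holding 90% of the total stake is selected roughly nine times out of ten, which is exactly the "the more you hold, the more you get" property.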
3.1 term interpretation
Validator: to verify transactions, a validator must stake a certain number of tokens. The size of the stake determines the probability that the validator is chosen to validate the next block.
3.2 advantages
1. To a certain extent it shortens the time needed to reach consensus and improves transfer efficiency.
2. It is no longer necessary to consume a lot of energy and calculate the mining capacity.
3.3 disadvantages
1. Mining is still needed; in essence it does not address the pain points of commercial applications.
2. All the confirmations are just a probability expression, not a deterministic thing. Theoretically, there may be other attacks.
3. The reduced decentralization lets the strong stay strong: large holders earn interest simply by holding coins, which leads to monopoly problems.
3.4 problem interpretation
After POS has improved the energy consumption of pow, what other issues need to be considered?
——The most controversial of the many issues is that if too much weight is given to large fortunes or old nodes, the network quickly becomes unfair.
Table 1: comparison of POW and POS
4. DPOS: delegated proof of stake
The main difference from POS is that the nodes elect a number of agents: after votes are delegated to the agents, the agents verify transactions and keep accounts, while the wallet acts as a status monitor. Its compliance supervision, performance, resource consumption and fault tolerance are similar to POS. Much like a board of directors vote, coin holders cast votes for a certain number of nodes, which act as agents to verify and keep accounts on their behalf.
For example, if holder A backs an agent with 50 coins and holder B backs an agent with 10 coins, then A's voting weight is five times B's.
In a word: nodes elect several agents, and the agents verify transactions and keep accounts.
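The stake-weighted election above can be sketched as (an illustrative tally; real chains such as EOSIO add vote decay and per-account vote limits, which are omitted here):

```python
def tally_votes(votes):
    """votes: iterable of (delegate, staked_coins) pairs.
    A holder's voting weight equals the coins staked behind the vote."""
    totals = {}
    for delegate, stake in votes:
        totals[delegate] = totals.get(delegate, 0) + stake
    # delegates ranked by total stake-weighted votes
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

def elect_producers(votes, n):
    """The top-n delegates become the block producers."""
    return [delegate for delegate, _ in tally_votes(votes)[:n]]
```

With the example above, a 50-coin vote simply counts five times as much as a 10-coin vote in the tally.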
1. Select a group of block producers: the election is decided by the token holders. The performance of the elected producers affects the operation of the whole network, and thus the interests of the token holders.
2. Scheduling production
4.2 algorithm flow (taking normal operation and a few bifurcations as examples)
1. Normal operation: under normal conditions, the block producers take turns producing blocks at 3 s intervals, and no one misses its slot, yielding the longest chain (each arrow points to the previous block).
Figure 2: normal operation
2. Minority forks: when no more than 1/3 of the nodes are malicious or fail, a fork can occur. As shown in branch B of the figure below, the minority branch generates one block every 9 seconds, while the normally working nodes generate two blocks every 9 seconds. This is because, following the order A, B, C, A, …, each node must wait for its slot before producing a block. By the longest-chain principle, the system keeps working.
Figure 3: a few bifurcations
4.3 advantages
1. It greatly reduces the number of nodes participating in verification and bookkeeping, achieving consensus verification at the second level.
2. Through the approval voting system, it is guaranteed that even someone holding 50% of the effective voting rights cannot choose a block producer alone, which secures the algorithm.
3. DPOS can keep working even when most block producers fail.
4.4 disadvantages
1. The whole consensus mechanism still depends on tokens, while many commercial applications do not need them.
2. It is somewhat centralized, and the degree of decentralization is low.
4.5 application examples
EOSIO: EOSIO is maintained by a delegated proof of stake (DPOS) system originally created by Dan Larimer and still used by Steemit. Larimer first used DPOS in BitShares, which immediately became one of the fastest blockchains. Later he introduced DPOS to Steem, considered one of the fastest and most stable blockchain projects: it processes more than 1 million transactions a day while using only one percent of its capacity.
5. Pbft: practical Byzantine fault tolerance
PBFT is a state machine replication algorithm that generally comprises three protocols: agreement, checkpoint and view change. It tolerates (n-1)/3 faulty nodes while guaranteeing liveness and safety. In distributed computing, different computers try to reach consensus through message exchange; sometimes, however, a coordinator/commander or member/lieutenant may exchange wrong messages because of system errors, which can affect the final consistency of the system. The Byzantine generals problem asks, given some number of faulty computers, whether a solution is possible; an absolute answer cannot be found, but a mechanism can be checked for effectiveness. A solution to the Byzantine problem exists when n ≥ 3f + 1, where n is the total number of computers and f is the number of faulty ones. After information is exchanged among the computers, each computer lists all the information obtained and takes the majority result as the solution.
In a word: each "general" runs a computation based on its internal state and newly received messages, reaching an individual decision; the individual decisions are shared, and the consensus decision is determined from all of them.
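The n ≥ 3f + 1 bound can be expressed directly (a trivial helper, just to make the arithmetic concrete):

```python
def max_faulty(n: int) -> int:
    """Largest f with n >= 3f + 1: the number of Byzantine
    nodes a network of n nodes can tolerate."""
    return (n - 1) // 3

def min_nodes(f: int) -> int:
    """Smallest network that survives f Byzantine nodes."""
    return 3 * f + 1
```

Four nodes tolerate one traitor, while three tolerate none, which is why n = 4 is the smallest meaningful PBFT deployment.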
5.1 term interpretation
Agreement protocol: includes at least the request, pre-prepare and reply phases; depending on the protocol design, it may also include prepare and commit phases.
Checkpoint protocol: the algorithm runs a periodic checkpoint protocol to synchronize the servers in the system to the same state. At each checkpoint, logs can be processed, resources reclaimed and server state corrected in time.
View change protocol: when a replica node i detects that the primary node is malicious or offline, a timeout triggers a view change; the view number v becomes v + 1, and the node no longer accepts message requests other than checkpoint, view-change and new-view.
5.2 algorithm flow
1. Client request: client c sends the operation request <REQUEST, o, t, c> to the primary node.
o: Request to perform state machine operation
t: Timestamp appended by client
c: Client flag
request: contains the message content m and the message digest d(m)
2. Pre-prepare phase: the primary node verifies the signature of the received client request, assigns it a sequence number n based on the current view v, and then sends the pre-prepare message <<PRE-PREPARE, v, n, d>, m> to all backup nodes.
v: view number
m: the request message sent by the client
d: digest of the request message m
n: sequence number of the pre-prepare message
The purpose of the pre-prepare message is to confirm that the request has been assigned sequence number n in view v, so that the message can still be traced during a view change.
3. Preparation phase: when replica node i receives the pre-prepare message from the master node, it needs to perform the following verification:
Whether the pre-prepare message signature of the master node is correct.
Whether the current replica node has already received a pre-prepare message with the same v and sequence number n but a different digest.
Whether d is consistent with the digest of m.
Whether n lies within the watermark interval [h, H].
After the checks pass, replica node i sends the prepare message <PREPARE, v, n, d, i> to all replica nodes and saves both the pre-prepare and prepare messages locally, so that unfinished requests can be recovered during a view change. When node i holds prepare messages from 2f + 1 nodes (counting its own) whose view number v, sequence number n and digest d all match, it can enter the commit phase.
4. Commit phase: a node entering the commit phase sends the confirmation message <COMMIT, v, n, d, i> to the other nodes, including the primary. When node i receives 2f + 1 commit messages for m that satisfy the conditions, m becomes committed-local, and the node executes m's request. After completing the request, each replica node sends a reply to the client and enters the reply phase.
5. Reply to the client: node i returns <REPLY, v, t, c, i, r> to the client, where r is the result of the requested operation. If the client receives f + 1 identical reply messages, the request it initiated has reached network-wide consensus; otherwise, the client must decide whether to resend the request to the primary node.
Figure 4: Note: Replica 0 is the primary node, replica 3 is the failed node, and C is the client.
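The quorum checks used in the prepare and commit phases above can be sketched as (a simplified vote count; real PBFT also verifies signatures, views and sequence numbers on every message):

```python
def pbft_quorum(n: int) -> int:
    """Matching messages needed to advance a phase: 2f + 1."""
    f = (n - 1) // 3
    return 2 * f + 1

def phase_passes(n: int, digests: list) -> bool:
    """digests: the request digest each replica reported for this phase,
    with None for silent or faulty replicas. The phase passes when some
    digest is backed by at least 2f + 1 replicas."""
    counts = {}
    for d in digests:
        if d is not None:
            counts[d] = counts.get(d, 0) + 1
    return any(c >= pbft_quorum(n) for c in counts.values())
```

With n = 4 (so f = 1), three matching messages are required: one silent replica is survivable, but two are not.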
5.3 advantages
1. The system can operate independently of the existence of any currency. PBFT assumes that each node is run by a business participant or a regulator, with security and stability guaranteed by the business stakeholders.
2. The consensus delay is about 2–5 seconds, which basically meets the requirements of commercial real-time processing.
3. High consensus efficiency and high throughput can meet the demand of high-frequency transaction volume.
4. It is more energy-saving and environment-friendly without using the power consumption mode with workload certification.
5.4 disadvantages
1. Limited by the number of nodes and the need for election or permissioning, scalability and decentralization are weak.
2. Low fault tolerance: when 1/3 or more of the bookkeepers stop working, the system cannot provide service; when 1/3 or more of the bookkeepers collude and the remaining bookkeepers are split into two network islands, the malicious bookkeepers can fork the system, although this leaves cryptographic evidence behind.
6. Pool: verification pool
The verification pool mechanism combines traditional distributed consistency technology with a data verification mechanism. Built on mature distributed consistency algorithms (Paxos, Raft), it enables second-level consensus verification without a token.
6.1 advantages
1. It can work without a token, achieving second-level consensus verification on the basis of mature distributed consistency algorithms (Paxos, Raft).
2. Improve the running speed of the application, improve the efficiency and reduce the cost.
6.2 disadvantages
1. The degree of decentralization is not as good as Bitcoin's.
2. It is better suited to multi-center business models.
7. POC: proof of capacity
In 2015, the concept was formally defined by Dziembowski et al. POC demonstrates that someone has a legitimate interest in a service (such as sending mail) by allocating a nontrivial amount of memory or disk space to solve a challenge issued by the service provider. Although Ateniese's paper is titled "Proof of Space", it actually describes a POW protocol using an MHF (memory-hard function, a hash whose computation cost depends on memory). POC is a way of trading storage for computing time by caching a large amount of precomputed data.
For example: fill a hard drive with lottery tickets, generate a random number, and then check who holds the best match. Whoever has the closest matching number wins the prize.
In a word: the larger the storage space, the more income you receive (only actual work yields results).
7.1 term interpretation
Shabal: a deliberately slow hash algorithm. It accepts an input bit sequence of any length, even an empty one, and is also suitable for byte streams. Bits are indexed from the left: the first bit of a sequence has index 0 and is called the leftmost bit, while the last bit is called the rightmost bit.
Plotting: the process of precomputing a large number of hash values and storing them in plot files on disk; once the disk is filled with these hashes, it can participate in the block creation process.
7.2 algorithm flow
1. Plotting the hard drive: depending on the size of the drive, creating a unique plot file may take days or even weeks. Plotting uses a very slow hash called Shabal. Unlike the SHA-256 hashes that Bitcoin miners compute quickly, Shabal hashes are hard to compute, so they are precomputed and stored on the hard disk. This process is called plotting the drive.
2. Actual mining of blocks: compute a scoop number between 0 and 4095. Suppose the result is 42: you then take scoop 42 of nonce 1 and use that scoop data to compute an amount of time called the deadline. Repeat this for all the nonces on the hard disk, and after computing all the deadlines, choose the smallest one. The deadline is "the number of seconds that must elapse since the last block was forged before you are allowed to forge a block". If no one else forges a block within that time, you can forge one and receive the block reward.
Figure 5: working process diagram of POC
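The deadline selection above can be sketched as follows (a toy model: SHA-256 stands in for Shabal, and `base_target` is an invented difficulty knob, not Burst's actual formula):

```python
import hashlib

def deadline(plot_seed: bytes, scoop: int, nonce: int, base_target: int) -> int:
    """Derive a deadline (in 'seconds') from one scoop of one nonce.
    A larger base_target yields shorter deadlines (easier network)."""
    data = plot_seed + scoop.to_bytes(2, "big") + nonce.to_bytes(8, "big")
    value = int.from_bytes(hashlib.sha256(data).digest()[:8], "big")
    return value // base_target

def best_deadline(plot_seed: bytes, scoop: int, nonces, base_target: int) -> int:
    """A miner scans every nonce on disk and keeps the smallest deadline."""
    return min(deadline(plot_seed, scoop, n, base_target) for n in nonces)
```

The miner whose plot happens to contain the nonce with the smallest deadline gets to forge the next block, so more stored nonces mean more lottery tickets.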
7.3 advantages
1. Any ordinary hard disk can be used, so miners gain no advantage from buying specialized equipment, as ASIC buyers do in Bitcoin mining.
2. The energy efficiency of hard disk drive is 30 times higher than that of ASIC based mining.
3. Capacity proof is more decentralized, because everyone has a hard drive. You can even mine from the hard disk of your Android phone.
4. Miners don’t need to upgrade their equipment. Old hard disks can store data like new ones.
5. After mining, the hard disk drive can be removed and used for the original purpose.
7.4 disadvantages
1. Widespread proof-of-capacity mining may lead to another arms race: today people use terabyte drives, but we would eventually see petabytes, exabytes and zettabytes.
2. Capacity proving is a relatively new technology, which has not been strictly tested and challenged in the real world.
3. At present, the data drawn by hard disk drive is useless except for mining purpose. However, there are plans to use hard disks as redundant storage for important open source information. The hard disk can store maps, Wikipedia articles, or other valuable information.
4. Malware that mines cryptocurrency on people's computers already exists; if proof of capacity becomes popular, we may see malware plotting on people's hard drives.
7.5 problem interpretation
Why is POC a consensus mechanism for low power consumption?
——In a proof-of-capacity environment, because mechanical hard disks consume little power, electricity costs are no longer, in the short term, the barrier to miners joining the network, and the wide availability of mechanical hard disks further lowers the barrier to entry: the 1–2 TB disks common in home computers can serve as mining equipment. Furthermore, since proof of capacity does not store network data, even a damaged disk does not lose network content, avoiding the impact that data loss has on availability in Filecoin's proof of spacetime.
Figure 6: comparison of POC and pow
8. Paxos
Paxos was first published by Lamport in 1998 in "The Part-Time Parliament". The Paxos algorithm solves the distributed consistency problem: how the processes in a distributed system can reach agreement on a certain value (resolution).
Paxos runs in an asynchronous system that allows crashes. It does not require reliable message delivery and tolerates message loss, delay, reordering and duplication. It uses a majority (quorum) mechanism to guarantee 2f + 1 fault tolerance: a system of 2f + 1 nodes tolerates at most f nodes failing at the same time. One or more proposers may initiate proposals; the Paxos algorithm makes all processes agree on one of the proposed values. Consensus is reached when a majority of the system approves the proposal, and agreement is reached on at most one specific value.
8.1 term interpretation
Proposer: makes proposals. A proposal contains a proposal ID and a value.
Acceptor: participates in decision making and responds to proposers' proposals. A received proposal may be accepted; if a proposal is accepted by a majority of acceptors, the proposal is said to be approved (chosen).
Learner: does not participate in decision making; learns the latest agreed value from the proposers/acceptors.
8.2.1 safety principle
Make sure you don’t do anything wrong.
1. Only one value can be approved for a vote on an instance, and an approved value cannot be overridden by another value; (if a value is approved by most acceptors, then this value can only be learned).
2. Each node can only learn the values that have been approved, but cannot learn the values that have not been approved.
8.2.2 survival principle
As long as most servers survive and can communicate with each other, the following things should be done in the end:
1. A proposed value will eventually be approved.
2. If a value is approved, other servers will eventually learn the value.
8.3 algorithm flow
1. Phase one (prepare): the proposer sends prepare requests to the acceptors, and the acceptors respond with promises to the prepare requests they receive.
2. Phase two (accept): after receiving promises from a majority of acceptors, the proposer sends accept requests to the acceptors, and the acceptors accept the requests they receive.
3. Learn phase: once the proposer has received acceptances from a majority of acceptors, the value is chosen and the resolution is sent to all learners.
8.3.1 algorithm description
1. Prepare: the proposer generates a globally unique, monotonically increasing proposal ID (a timestamp plus server ID can be used) and sends prepare requests to all acceptors. No proposal content needs to be carried here, only the proposal ID.
2. Propose: after receiving promise responses from a majority of acceptors, the proposer takes, from among the responses, the value of the proposal with the largest proposal ID as the value of this proposal; if all responses carry a null value, the proposer may choose any value. It then sends an accept request carrying the current proposal ID and value to all acceptors.
3. Promise: on receiving a prepare request, an acceptor makes "two promises and one response".
4. Accept: on receiving an accept request, an acceptor accepts and persists the proposal ID and value, provided this does not violate its earlier promises.
5. Learn: after the proposer receives the acceptance from most acceptors, the resolution is formed and sent to all learners.
Figure 7: Paxos algorithm process
8.3.2 two commitments and one response 
1. Two promises: no longer accept prepare requests whose proposal ID is less than or equal to the current one; no longer accept accept requests whose proposal ID is less than the current one.
2. One response: without violating previous promises, reply with the value and ID of the accepted proposal that has the largest proposal ID; if there is none, reply with a null value.
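The acceptor's "two promises and one response" can be sketched as a single-decree acceptor (an illustrative sketch; -1 and None stand for "nothing promised/accepted yet"):

```python
class Acceptor:
    """Single-decree Paxos acceptor: 'two promises and one response'."""
    def __init__(self):
        self.promised_id = -1       # highest proposal ID promised so far
        self.accepted_id = -1       # ID of the accepted proposal, if any
        self.accepted_value = None

    def prepare(self, proposal_id):
        if proposal_id <= self.promised_id:
            return None             # promise 1: reject stale prepares
        self.promised_id = proposal_id
        # the one response: previously accepted (id, value), if any
        return (self.accepted_id, self.accepted_value)

    def accept(self, proposal_id, value):
        if proposal_id < self.promised_id:
            return False            # promise 2: reject stale accepts
        self.promised_id = proposal_id
        self.accepted_id = proposal_id
        self.accepted_value = value
        return True
```

The response to a later prepare carries the already-accepted value, which is what forces a new proposer to re-propose it instead of a fresh value.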
8.4 advantages
1. Efficient: communication between nodes does not require identity signatures to be verified.
2. Paxos algorithm has strict mathematical proof, and the system design is exquisite.
3. Fault tolerance: the system keeps running as long as fewer than half of the acceptors fail, and it tolerates any number of proposer failures. Once a value is chosen, it can still be learned, and it will never be modified, even if fewer than half of the acceptors later fail.
8.5 disadvantages
1. Engineering practice is very difficult: achieving industrial-grade performance requires substantial engineering optimization, and design deviations can sometimes bring down the whole system.
2. It is only applicable to permissioned systems (private chains), and it can tolerate only crashed nodes, not corrupt ones.
3. It provides CFT (crash fault tolerance) and does not support Byzantine fault tolerance.
8.6 problem interpretation
The relationship between multi Paxos and Paxos?
——The basic Paxos algorithm determines a single value through multiple rounds of the prepare/accept process, which we call an instance. Multi-Paxos runs the Paxos algorithm to determine many values whose order is identical on every node; in short, it determines a global ordering.
9. Raft
Raft was originally a consensus algorithm for managing a replicated log. It is a protocol built for real-world applications, focusing on practical deployment and understandability. Raft is a strong consistency protocol that reaches consensus under non-Byzantine failures. Its primary design goal is to be easy to understand, so it chooses very simple and clear solutions for conflict handling. The basic guarantee of Paxos is that any two quorums must share at least one common member. Besides performance, reliability is an important property of a distributed system: providing reliability means that the failure of one or more machines does not make the system unavailable (or lose data). The key to reliability is multiple replicas (i.e., data must be backed up), and once there are multiple replicas, the problem of consistency among them arises. Consistency algorithms solve the problem of data consistency among multiple replicas in a distributed environment.
The most famous consistency algorithm in the industry is Paxos (chubby’s author once said: there is only one consistency algorithm in the world, which is Paxos).
But Paxos is notoriously hard to understand, and raft was created to explore a more easily understood consistency algorithm.
Raft requires that the system have at most one leader at any time; during normal operation there are only a leader and followers.
9.1 term interpretation
Leader: accepts client requests and replicates the request log to the followers; once a log entry has been replicated to a majority of nodes, it tells the followers to commit it. All changes to the system go through the leader first, and each modification is written as a log entry. The process the leader performs after receiving a modification request is called log replication.
Figure 8: log replication
Follower: every node starts in the follower state. If it receives no message from a leader, it becomes a candidate. Followers accept and persist the log entries replicated by the leader and commit them once the leader says they may be committed.
Candidate: a temporary role during leader election. A candidate canvasses votes from the other nodes; if it obtains a majority of the votes, it becomes the leader.
Figure 9: relationship between three roles
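The role transitions in Figure 9 can be modelled as a small state machine; the event names below are hypothetical labels chosen for illustration:

```python
from enum import Enum

class Role(Enum):
    FOLLOWER = "follower"
    CANDIDATE = "candidate"
    LEADER = "leader"

# Raft's role transitions: a follower whose election timer fires becomes a
# candidate; a candidate that wins a majority becomes leader; a candidate
# whose timer fires again (split vote) retries; any node that observes a
# higher term reverts to follower.
TRANSITIONS = {
    (Role.FOLLOWER, "election_timeout"): Role.CANDIDATE,
    (Role.CANDIDATE, "won_majority"): Role.LEADER,
    (Role.CANDIDATE, "election_timeout"): Role.CANDIDATE,
    (Role.CANDIDATE, "saw_higher_term"): Role.FOLLOWER,
    (Role.LEADER, "saw_higher_term"): Role.FOLLOWER,
}

def step(role: Role, event: str) -> Role:
    """Return the next role; irrelevant events leave the role unchanged."""
    return TRANSITIONS.get((role, event), role)
```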
9.2 algorithm flow
1. Leader election: when a follower receives no heartbeat from the leader within the election timeout, it converts to candidate status. To avoid election conflicts, the timeout is a random value between 150 and 300 ms. In general, a Raft system proceeds as follows:
Any server can become a candidate; it sends a vote request to the other servers (the followers) asking them to elect it.
The other servers agree and reply OK. Note: if some followers fail to receive the vote request during this process, the candidate votes for itself, and as long as the votes it collects reach a majority of n/2 + 1, it can still become the leader.
The candidate becomes the leader and can then issue instructions, such as bookkeeping, to its voters, the followers.
Bookkeeping instructions are delivered through heartbeat messages.
Once the leader crashes, one of the followers becomes a candidate and sends out vote requests.
After the other followers agree, it becomes the new leader and continues to direct bookkeeping and other work.
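The election steps above can be sketched with the randomized 150–300 ms timeout and the n/2 + 1 majority rule from the text; the function names are assumptions for illustration:

```python
import random

# Randomised per-node election timeout, in milliseconds. Randomisation
# makes it unlikely that two followers time out simultaneously and split
# the vote.
ELECTION_TIMEOUT_RANGE_MS = (150, 300)

def election_timeout_ms() -> int:
    """Pick this node's election timeout uniformly at random."""
    return random.randint(*ELECTION_TIMEOUT_RANGE_MS)

def wins_election(votes_received: int, cluster_size: int) -> bool:
    """A candidate needs a strict majority: at least n // 2 + 1 votes,
    its own vote included."""
    return votes_received >= cluster_size // 2 + 1
```

For example, in a 5-node cluster 3 votes suffice, while 2 do not, even if the remaining followers never received the vote request.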
2. Log replication: Raft's bookkeeping process is completed in the following steps:
Figure 10: Bookkeeping process
Repeat this process for each new transaction. If a network communication failure occurs during this process and the leader can no longer reach a majority of the followers, the leader can only update those followers it can still access. Since the majority of followers now lack a leader, they re-elect a candidate as the new leader, and that leader then acts as the representative for dealing with the outside world. If the outside world asks it to add new transaction records, the new leader informs the majority of followers according to the steps above. When network communication is restored, the original leader steps down to a follower. None of the old leader's updates during the disconnected stage can be considered confirmed; they must all be rolled back, after which it accepts the new updates from the new leader.
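The rollback behaviour described above, where a deposed leader discards its unconfirmed entries and adopts the new leader's log, can be sketched as a log-reconciliation helper; representing entries as (term, command) tuples is an assumption made for illustration:

```python
def reconcile_log(follower_log: list, leader_log: list) -> list:
    """When a deposed leader rejoins as a follower, entries it wrote while
    partitioned were never committed. It keeps the common prefix shared
    with the new leader, rolls back the conflicting tail, and adopts the
    new leader's remaining entries."""
    i = 0
    while (i < len(follower_log) and i < len(leader_log)
           and follower_log[i] == leader_log[i]):
        i += 1
    # truncate at the first disagreement, then copy the leader's suffix
    return follower_log[:i] + leader_log[i:]
```

For example, an old leader holding an unconfirmed term-2 entry discards it and takes the new leader's term-3 entries in its place, while the committed common prefix is preserved.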
1. It is easier to understand and implement than the Paxos algorithm.
2. Raft is as efficient as Paxos; in efficiency it is equivalent to (Multi-)Paxos.
3. It is suitable for permissioned systems (private chains). It tolerates only failed nodes (CFT), not malicious nodes; the maximum number of faulty nodes tolerated is (n-1)/2, where n is the total number of nodes in the cluster. By strengthening the position of the leader, the whole protocol splits into two parts: while a leader is present, the leader synchronizes the log with the followers; when the leader fails, a new leader is elected.
4. It emphasizes the uniqueness of a legitimate leader. The protocol flow is described directly from the leader's perspective, and its correctness is argued from the leader's point of view.
1. It is only applicable to permissioned systems (private chains) and can only accommodate failed nodes (CFT), not malicious nodes.
Table 2: comparison of Raft and Multi-Paxos