Even Zheng Shuang can understand the principle of ZooKeeper distributed locks

Time:2021-4-7

Introduction

In many scenarios, data consistency is an important topic. In a stand-alone environment we can solve it with the concurrency APIs provided by Java, but a distributed environment (where network failures, duplicated messages, and lost messages occur) is far more complex. Scenarios such as e-commerce inventory deduction, flash sales (seckill), and cluster-wide scheduled tasks all require mutual exclusion between processes. This article mainly discusses how to implement a distributed lock with ZooKeeper, and compares the advantages and disadvantages of some other distributed lock schemes. Which one to use depends on your own business scenario; there is no absolutely best solution.

What is a distributed lock?

A distributed lock is a mechanism for controlling synchronized access to shared resources across the nodes of a distributed system.

The key points of implementing a distributed lock are as follows:
  • Lock reentrancy (recursive calls should not block, to avoid deadlock)
  • Lock timeout (to avoid deadlock, infinite loops, and other unexpected situations)
  • Lock blocking
  • Lock feature support (blocking lock, reentrant lock, fair lock, multi lock, semaphore, read-write lock)
Notes on using a distributed lock:
  • The overhead of a distributed lock (avoid one when you can; an optimistic lock suffices in some scenarios)
  • The granularity of locking (controlling the granularity can improve system performance)
  • The way of locking

Common distributed lock implementation schemes

Database

Based on a unique index of a database table

The simplest way is to create a lock table directly. When we want to lock a method or resource, we insert a record into the table, and we delete the record when releasing the lock. A uniqueness constraint is added to a key column, so if multiple requests are submitted to the database at the same time, the database guarantees that only one insert succeeds. We can then regard the thread whose insert succeeded as having obtained the lock on the method, allowed to execute the method body.

However, this leads to problems such as a single point of failure, no expiration time, no blocking, no reentrancy, and so on.
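The unique-index approach can be sketched as follows. This is a minimal, illustrative simulation: the ConcurrentHashMap stands in for the database table with its unique constraint (a real implementation would issue the INSERT and DELETE statements over JDBC), and all class and method names here are made up for the example.

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: the map plays the role of the lock table, where
// putIfAbsent succeeds for exactly one caller, just as only one INSERT
// can satisfy the unique constraint on the method_name column.
public class TableLock {
    private static final ConcurrentHashMap<String, String> lockTable = new ConcurrentHashMap<>();

    // Corresponds to "INSERT INTO lock_table (method_name, owner) VALUES (?, ?)":
    // returns true only for the caller whose row was inserted
    public static boolean tryLock(String methodName, String owner) {
        return lockTable.putIfAbsent(methodName, owner) == null;
    }

    // Corresponds to "DELETE FROM lock_table WHERE method_name = ? AND owner = ?":
    // releasing the lock; only the current owner's delete succeeds
    public static boolean unlock(String methodName, String owner) {
        return lockTable.remove(methodName, owner);
    }

    public static void main(String[] args) {
        System.out.println(tryLock("deductStock", "client-1")); // true: lock acquired
        System.out.println(tryLock("deductStock", "client-2")); // false: already held
        System.out.println(unlock("deductStock", "client-1"));  // true: released
    }
}
```

Note how the sketch makes the listed shortcomings visible: if client-1 crashes without calling unlock, the row stays forever (no expiration time), and a failed tryLock simply returns false (no blocking, no reentrancy).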

Exclusive lock based on the database

If you use MySQL's InnoDB engine and append for update to the query statement, the database adds an exclusive lock to the matched rows during the query (querying by a unique index so that a row lock, not a table lock, is taken). We can consider the thread that obtains this exclusive lock to hold the distributed lock, and release it with a connection.commit() operation.

This still leads to problems: a single-point database, no reentrancy, and no guarantee that a row-level exclusive lock is actually used. In addition, the transaction may go uncommitted for a long time, keeping the database connection occupied.

Advantages and disadvantages

  • Advantages:

With the help of the database, the scheme is easy to understand.

  • Disadvantages:

It introduces more problems and makes the whole scheme increasingly complex

Database operations carry a certain overhead, so there are performance concerns

Relying on the database's row-level lock is not necessarily reliable, especially when our lock table is small (the optimizer may skip the index and fall back to a table lock)

Cache

Compared with the database-based schemes, cache-based distributed locks perform better. There are many mature cache products today, including Redis, Memcached, Tair, and so on.

Distributed locking based on setnx () and expire () methods of redis

setnx means "set if not exists", and it takes two parameters: setnx(key, value). The operation is atomic: if the key does not exist, it is set and 1 is returned; if the key already exists, nothing is set and 0 is returned.

To avoid deadlock an expiration time must be set. Note that the setnx command itself cannot set a timeout on the key; the key's expiration can only be set with a separate expire() call.

Redis also has a single command that atomically achieves the combined effect of setnx and expire, which many explanations of this topic omit:
set k1 v1 ex 10 nx. Alternatively, to make the two separate commands execute atomically, a Lua script can be used.
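The semantics of that single command can be sketched with an in-memory simulation; no real Redis server is involved, and the class name and the explicit nowMillis clock parameter are inventions of this example, used only so the behavior can be demonstrated deterministically.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of "SET key value EX <ttl> NX": set the key only if it
// is absent (or expired), attaching a TTL in the same atomic step.
public class SetNxExSketch {
    private static final class Entry {
        final String value;
        final long expiresAt; // millisecond timestamp
        Entry(String value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> store = new HashMap<>();

    // Returns true on success (Redis would reply OK) and false when the NX
    // condition fails (Redis would reply nil). The caller passes the clock
    // in explicitly so the expiry behavior is easy to test.
    public synchronized boolean setNxEx(String key, String value, long ttlSeconds, long nowMillis) {
        Entry e = store.get(key);
        if (e != null && e.expiresAt > nowMillis) {
            return false; // key is still alive, NX rejects the set
        }
        store.put(key, new Entry(value, nowMillis + ttlSeconds * 1000));
        return true;
    }

    public static void main(String[] args) {
        SetNxExSketch redis = new SetNxExSketch();
        System.out.println(redis.setNxEx("lock", "a", 10, 0));      // true: lock taken
        System.out.println(redis.setNxEx("lock", "b", 10, 5_000));  // false: still held
        System.out.println(redis.setNxEx("lock", "b", 10, 11_000)); // true: TTL expired
    }
}
```

Because set and expire happen in one step here, there is no window in which a crashed client leaves behind a key with no TTL, which is exactly the problem the separate setnx + expire pair suffers from.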

Distributed lock based on redlock

Redlock is a distributed lock scheme for Redis in cluster mode. It is based on N completely independent Redis nodes (N is generally set to 5).

Distributed lock based on redisson

Redisson is a Redis client for Java, recommended by the official Redis documentation, that ships with ready-made distributed lock implementations.

Advantages and disadvantages

  • Advantages

Good performance

  • Disadvantages

There are many factors to consider in the implementation, and controlling the lock's expiration time through a timeout is not entirely reliable.

Zookeeper

General idea

When a client locks a method, it creates a unique ephemeral sequential node under the znode directory designated for that method (ZooKeeper can generate sequential nodes automatically). Judging whether the lock was obtained is simple: check whether one's own node is the smallest among the sequential nodes. To release the lock, just delete the ephemeral node. Because the node is ephemeral, it also disappears when the client goes down, avoiding the deadlock that a service crash would otherwise cause.

The core principle of zookeeper to realize distributed lock

1. Exclusive lock

An exclusive lock is also known as a write lock or X lock. If transaction T1 adds an exclusive lock to data object O1, then during the whole locking period only T1 is allowed to read or update O1, and no other transaction may operate on the object until T1 releases the lock.

The core of an exclusive lock is to guarantee that only one transaction holds the lock at a time, and that once the lock is released, all transactions waiting for it can be notified.

The strong consistency of ZooKeeper guarantees the global uniqueness of node creation even under high distributed concurrency, so we can use ZooKeeper to implement an exclusive lock.

Read-write, write-write, and read-read operations are all mutually exclusive

There are three core steps: defining the lock, acquiring the lock, and releasing the lock

  • Define the lock

A lock is represented by a data node on zookeeper

  • Get lock

The client creates the ephemeral node representing the lock by calling the create method. If the creation succeeds, the client is considered to have obtained the lock; if it fails, the lock is considered occupied. Meanwhile, the clients that did not obtain the lock register a watcher on the lock node, so they learn of changes to it in real time and can try to acquire the lock again.

  • Release the lock

If the client holding the lock crashes or its session ends abnormally, the ephemeral node on ZooKeeper is deleted automatically and the lock is released.

After the business logic finishes normally, the client actively deletes the ephemeral node it created.
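The acquire/release flow above can be sketched with a map simulating the lock znode. The names here are purely illustrative (a real client would call ZooKeeper's create and delete operations on an ephemeral node).

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the exclusive lock: the map stands in for ZooKeeper,
// where creating the lock node succeeds for exactly one session, and deleting
// it (explicitly, or when the session dies) frees the lock for the waiters.
public class ExclusiveLockSketch {
    private final Map<String, String> znodes = new ConcurrentHashMap<>();

    // create("/exclusive_lock/lock", EPHEMERAL): success means lock acquired
    public boolean tryAcquire(String lockPath, String sessionId) {
        return znodes.putIfAbsent(lockPath, sessionId) == null;
    }

    // delete of the ephemeral node: also what ZooKeeper performs on session
    // expiry, which is why a crashed client cannot deadlock the others
    public boolean release(String lockPath, String sessionId) {
        return znodes.remove(lockPath, sessionId);
    }

    public static void main(String[] args) {
        ExclusiveLockSketch zk = new ExclusiveLockSketch();
        System.out.println(zk.tryAcquire("/exclusive_lock/lock", "s1")); // true
        System.out.println(zk.tryAcquire("/exclusive_lock/lock", "s2")); // false: occupied
        zk.release("/exclusive_lock/lock", "s1"); // owner finishes or crashes
        System.out.println(zk.tryAcquire("/exclusive_lock/lock", "s2")); // true: lock freed
    }
}
```

In the real protocol, the failed tryAcquire is where the client registers its watcher on the lock node, so the release immediately wakes the waiters up.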

2. Shared lock

A shared lock is also known as a read lock. If transaction T1 adds a shared lock to data object O1, then T1 may only read O1, and other transactions may only add further shared locks to O1 (not exclusive ones) until every shared lock on the object is released.

The difference between shared lock and exclusive lock is that after the exclusive lock is added, the data object is only visible to the current transaction, while after the shared lock is added, the data object is visible to all transactions.

In summary: read-read is shared, while read-write and write-write are mutually exclusive. Read requests share the resource without excluding each other; as soon as a write is involved, the resource must be locked exclusively.

The implementation again follows the same three core steps:

  • Define the lock

A lock is represented by a data node on ZooKeeper: an ephemeral sequential node named like /lockpath/[hostname]-requestType-sequenceNumber

  • Get lock

The client creates the ephemeral sequential node representing the lock by calling the create method. A read request creates a /lockpath/[hostname]-R-sequenceNumber node; a write request creates a /lockpath/[hostname]-W-sequenceNumber node.

The logic of obtaining shared lock is as follows:

  1. After creating its node, the client gets all child nodes under the /lockpath node and registers a watcher for child-node changes
  2. Determine its own sequence number among the children
  3. For a read request: if a write request with a smaller sequence number exists, wait and keep trying to acquire the lock; otherwise the shared lock is obtained and the resource can be operated on. For a write request: if any read or write request with a smaller sequence number exists, wait; otherwise the lock is obtained
  4. After receiving a watcher notification, repeat from step 1
  • Release the lock

Consistent with the exclusive-lock logic.
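Step 3 of the shared-lock logic can be sketched as a pure function over the sorted child list. The node names follow the -R-/-W- naming convention described above; the class and method are illustrations for this article, not Curator code.

```java
import java.util.List;

// Hypothetical sketch: decide whether a node may take the shared lock, given
// the sorted children of /lockpath named like "host-R-0001" / "host-W-0002".
public class SharedLockSketch {
    public static boolean canAcquire(List<String> sortedChildren, String self) {
        boolean selfIsWrite = self.contains("-W-");
        for (String child : sortedChildren) {
            if (child.equals(self)) {
                return true; // reached our own node without hitting a blocker
            }
            // Any earlier node blocks a write; only an earlier write blocks a read
            if (selfIsWrite || child.contains("-W-")) {
                return false;
            }
        }
        return false; // self not present in the list
    }

    public static void main(String[] args) {
        List<String> children = List.of("h-R-0001", "h-R-0002", "h-W-0003", "h-R-0004");
        System.out.println(canAcquire(children, "h-R-0002")); // true: only reads before it
        System.out.println(canAcquire(children, "h-W-0003")); // false: reads before the write
        System.out.println(canAcquire(children, "h-R-0004")); // false: a write before it
    }
}
```

This makes the compatibility matrix concrete: reads stack up freely until a write appears in front of them, while a write must wait for everything ahead of it.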

3. The herd effect

In the shared-lock implementation above, step 1 of "judging the read/write order" is: after creating its node, the client gets all child nodes under /lockpath and registers a watcher for child-node changes. As a result, whenever any client releases the shared lock, ZooKeeper sends a child-change watcher notification to all machines, and the system repeats a large number of "watcher notification" and "get child list" operations. Every node then checks whether it is the one with the smallest sequence number (for a write request) or whether every smaller-numbered child is a read request (for a read request), and otherwise continues to wait for the next notification.

However, many of these repeated operations are useless. In fact, each lock competitor only needs to care about whether the nodes with sequence numbers smaller than its own still exist.

When the cluster is relatively large, these useless operations not only put heavy pressure on ZooKeeper and the network; worse, if multiple clients release their shared locks at the same time, the ZooKeeper server sends a flood of event notifications to the remaining clients in a short time. This is the so-called herd effect (thundering herd).

Improved distributed lock implementation:

  1. The client calls the create method to build a /lockpath/[hostname]-requestType-sequenceNumber ephemeral sequential node
  2. The client calls the getChildren method to get the list of all created child nodes (without registering any watcher here)
  3. If the lock cannot be obtained yet: a read request registers a watcher on the last write-request node with a smaller sequence number; a write request registers a watcher on the node immediately preceding its own (the logic for deciding whether the lock is held is the same as above)
  4. When the watcher fires, repeat from step 2
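The improved step 3 reduces to computing a single predecessor node to watch. A pure-function sketch, again illustrative rather than Curator's actual internals:

```java
import java.util.List;

// Hypothetical sketch: pick the one predecessor node to watch, so that only
// a relevant deletion wakes this client up, avoiding the herd effect.
public class WatchTargetSketch {
    // Write request: watch the node immediately before it (any type).
    // Read request: watch the last WRITE node before it, because earlier
    // reads cannot block it. Returns null when nothing blocks the caller,
    // i.e. the lock can be taken right away.
    public static String nodeToWatch(List<String> sortedChildren, String self) {
        boolean selfIsWrite = self.contains("-W-");
        String target = null;
        for (String child : sortedChildren) {
            if (child.equals(self)) {
                return target;
            }
            if (selfIsWrite || child.contains("-W-")) {
                target = child;
            }
        }
        return null; // self not found in the list
    }

    public static void main(String[] args) {
        List<String> children = List.of("h-R-0001", "h-W-0002", "h-R-0003", "h-W-0004");
        System.out.println(nodeToWatch(children, "h-R-0003")); // h-W-0002
        System.out.println(nodeToWatch(children, "h-W-0004")); // h-R-0003
        System.out.println(nodeToWatch(children, "h-R-0001")); // null: acquirable now
    }
}
```

Each waiter now receives at most one notification per release instead of every waiter receiving every notification, which is precisely what eliminates the herd.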

Implementation of distributed lock with curator client

Apache Curator is an open-source ZooKeeper client that provides high-level abstractions for common ZooKeeper recipes (shared locks, leader election, distributed counters, and so on). Next we use the classes Curator provides to implement distributed locks. Curator offers five lock-related recipes, as follows:

  1. Shared Reentrant Lock
  2. Shared Lock (non-reentrant)
  3. Shared Reentrant Read Write Lock
  4. Shared Semaphore
  5. Multi Shared Lock

About error handling: it is strongly recommended to use a ConnectionStateListener to deal with connection state changes. When the connection enters the LOST state, you no longer hold the lock.

1. Reentrant lock

Shared Reentrant Lock: a globally reentrant lock that any client can request. The client currently holding the lock can acquire it again without blocking. It is implemented by the class InterProcessMutex.

//Construction method
public InterProcessMutex(CuratorFramework client, String path)
public InterProcessMutex(CuratorFramework client, String path, LockInternalsDriver driver)
//Acquire the lock through acquire and provide a timeout mechanism:
public void acquire() throws Exception
public boolean acquire(long time, TimeUnit unit) throws Exception
//Make the lock revocable
public void makeRevocable(RevocationListener<InterProcessMutex> listener)
public void makeRevocable(final RevocationListener<InterProcessMutex> listener, Executor executor)

Define a FakeLimitedResource class to simulate the shared resource: it can be used by only one thread at a time, and the next thread can use it only after the previous one finishes, otherwise an exception is thrown.

public class FakeLimitedResource {
    private final AtomicBoolean inUse = new AtomicBoolean(false);

    //Simulate resources that can only be operated on a single thread
    public void use() throws InterruptedException {
        if (!inUse.compareAndSet(false, true)) {
            //This exception cannot be thrown if the lock is used correctly
            throw new IllegalStateException("Needs to be used by one client at a time");
        }
        try {
            Thread.sleep((long) (100 * Math.random()));
        } finally {
            inUse.set(false);
        }
    }
}

The following code creates N threads to simulate nodes in a distributed system; the system controls synchronized use of the resource through InterProcessMutex.

Each node runs 10 iterations of the cycle: acquire lock - use resource - acquire lock again - release lock - release lock again.

The client requests the lock with acquire and releases it with release; a lock acquired several times must be released the same number of times.

The shared resource can only be used by one thread at a time; if the synchronization control fails, an exception is thrown.

public class SharedReentrantLockTest {
    private static final String lockPath = "/testZK/sharedreentrantlock";
    private static final Integer clientNums = 5;
    private static final FakeLimitedResource resource = new FakeLimitedResource(); // shared resource
    private static CountDownLatch countDownLatch = new CountDownLatch(clientNums);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < clientNums; i++) {
            final String clientName = "client#" + i;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    CuratorFramework client = ZKUtils.getClient();
                    client.start();
                    Random random = new Random();
                    try {
                        final InterProcessMutex lock = new InterProcessMutex(client, lockPath);
                        //Each client requests 10 shared resources
                        for (int j = 0; j < 10; j++) {
                            if (!lock.acquire(10, TimeUnit.SECONDS)) {
                                throw new IllegalStateException(j + ". " + clientName + " cannot get the mutex");
                            }
                            try {
                                System.out.println(j + ". " + clientName + " has acquired the mutex");
                                resource.use(); // use the resource
                                if (!lock.acquire(10, TimeUnit.SECONDS)) {
                                    throw new IllegalStateException(j + ". " + clientName + " cannot get the mutex again");
                                }
                                System.out.println(j + ". " + clientName + " has acquired the mutex again");
                                lock.release(); // a lock acquired several times must be released the same number of times
                            } finally {
                                System.out.println(j + ". " + clientName + " releases the mutex");
                                lock.release(); // always release in finally
                            }
                            Thread.sleep(random.nextInt(100));
                        }
                    } catch (Throwable e) {
                        System.out.println(e.getMessage());
                    } finally {
                        CloseableUtils.closeQuietly(client);
                        System.out.println(clientName + " client is closed!");
                        countDownLatch.countDown();
                    }
                }
            }).start();
        }
        countDownLatch.await();
        System.out.println("end!");
    }
}

From the console log we can see that synchronized access to the resource is controlled successfully, and that the lock is reentrant:

0. Client # 3 has acquired the mutex
0. Client # 3 has acquired the mutex again
0. Client # 3 release mutex
0. Client # 1 has acquired a mutex
0. Client#1 has acquired the mutex again
0. Client # 1 release mutex
0. Client # 2 has acquired the mutex
0. Client # 2 has acquired the mutex again
0. Client # 2 release mutex
0. Client # 0 has acquired the mutex
0. Client # 0 has acquired the mutex again
0. Client # 0 release mutex
0. Client # 4 has acquired mutex
0. Client # 4 has acquired the mutex again
0. Client # 4 release mutex
1. Client # 1 has obtained the mutex
1. Client#1 has acquired the mutex again
1. Client#1 releases the mutex
2. Client # 1 has acquired the mutex
2. Client#1 has acquired the mutex again
2. Client#1 releases the mutex
1. The client # 4 has acquired the mutex
1. Client # 4 has acquired the mutex again
1. Client # 4 release mutex
1. The client # 3 has acquired the mutex
1. Client # 3 has acquired the mutex again
1. Client # 3 releases the mutex
1. Client # 2 has acquired the mutex
1. Client # 2 has acquired the mutex again
1. Client # 2 release mutex
2. The client # 4 has acquired the mutex
2. Client # 4 has acquired the mutex again
2. Client # 4 releases the mutex
....
....
Client # 2 client is closed!
9. Client # 0 has obtained the mutex
9. Client # 0 has acquired the mutex again
9. Client # 0 releases the mutex
9. The mutex has been acquired by client # 3
9. Client # 3 has acquired the mutex again
9. Release mutex by client # 3
Client # 0 client is closed!
8. The client # 4 has acquired the mutex
8. Client # 4 has acquired the mutex again
8. Client # 4 release mutex
9. The client # 4 has acquired the mutex
9. Client # 4 has acquired the mutex again
9. Client # 4 release mutex
Client # 3 client is closed!
Client # 4 client is closed!
end!

At the same time, when we check the zookeeper node tree during the running of the program, we can find that each lock request actually corresponds to a temporary sequence node

[zk: localhost:2181(CONNECTED) 42] ls /testZK/sharedreentrantlock
[leases, _c_208d461b-716d-43ea-ac94-1d2be1206db3-lock-0000001659, locks, _c_64b19dba-3efa-46a6-9344-19a52e9e424f-lock-0000001658, _c_cee02916-d7d5-4186-8867-f921210b8815-lock-0000001657]
2. Non-reentrant lock

Shared Lock is similar to the Shared Reentrant Lock, but it is non-reentrant. This non-reentrant lock is implemented by the class InterProcessSemaphoreMutex, and its usage is similar to the above.

Replace the InterProcessMutex in the previous program with the non-reentrant InterProcessSemaphoreMutex and run the code again: the thread blocks on the second acquire until the timeout, i.e. the lock is not reentrant. The console log is as follows:

0. Client # 2 has acquired the mutex
0. Client # 1 cannot get mutex
0. Client # 4 cannot get mutex
0. Client # 0 cannot get mutex
0. Client # 3 cannot get mutex
Client # 1 client is closed!
Client # 4 client is closed!
Client # 3 client is closed!
Client # 0 client is closed!
0. Client # 2 release mutex
0. Client#2 cannot get the mutex again
Client # 2 client is closed!
end!

Comment out the second acquire and the program runs normally:

0. Client # 1 has acquired a mutex
0. Client # 1 release mutex
0. Client # 2 has acquired the mutex
0. Client # 2 release mutex
0. Client # 0 has acquired the mutex
0. Client # 0 release mutex
0. Client # 4 has acquired mutex
0. Client # 4 release mutex
0. Client # 3 has acquired the mutex
0. Client # 3 release mutex
1. Client # 1 has obtained the mutex
1. Client#1 releases the mutex
1. Client # 2 has acquired the mutex
1. Client # 2 release mutex
....
....
9. The client # 4 has acquired the mutex
9. Client # 4 release mutex
9. Client # 0 has obtained the mutex
Client # 2 client is closed!
9. Client # 0 releases the mutex
9. The mutex has been acquired by client#1
Client # 0 client is closed!
Client # 4 client is closed!
9. Client#1 releases the mutex
9. The mutex has been acquired by client # 3
Client # 1 client is closed!
9. Release mutex by client # 3
Client # 3 client is closed!
end!
3. Reentrant read/write lock

Shared Reentrant Read Write Lock: a reentrant read-write lock. A read-write lock manages a pair of related locks, one for read operations and one for write operations. The read lock can be held by multiple processes as long as the write lock is free, but while the write lock is held all readers are blocked. The lock is reentrant: a thread holding the write lock can re-enter the read lock, but the read lock cannot enter the write lock. This also means a write lock can be downgraded to a read lock, for example acquire write lock -> acquire read lock -> release write lock; upgrading from a read lock to a write lock does not work.

The reentrant read-write lock is mainly implemented by two classes: InterProcessReadWriteLock and InterProcessMutex. To use it, first create an InterProcessReadWriteLock instance, then obtain the read lock or the write lock as needed; both are of type InterProcessMutex.

It can be understood as the shared lock we analyzed above.

public class SharedReentrantReadWriteLockTest {
    private static final String lockPath = "/testZK/sharedreentrantreadwritelock";
    private static final Integer clientNums = 5;
    private static final FakeLimitedResource resource = new FakeLimitedResource();
    private static final CountDownLatch countDownLatch = new CountDownLatch(clientNums);

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < clientNums; i++) {
            final String clientName = "client#" + i;
            new Thread(new Runnable() {
                @Override
                public void run() {
                    CuratorFramework client = ZKUtils.getClient();
                    client.start();
                    final InterProcessReadWriteLock lock = new InterProcessReadWriteLock(client, lockPath);
                    final InterProcessMutex readLock = lock.readLock();
                    final InterProcessMutex writeLock = lock.writeLock();

                    try {
                        //Note: you can acquire the write lock first and then the read lock, but not the other way around!!!
                        if (!writeLock.acquire(10, TimeUnit.SECONDS)) {
                            throw new IllegalStateException(clientName + " cannot get the write lock");
                        }
                        System.out.println(clientName + " has acquired the write lock");
                        if (!readLock.acquire(10, TimeUnit.SECONDS)) {
                            throw new IllegalStateException(clientName + " cannot get the read lock");
                        }
                        System.out.println(clientName + " has acquired the read lock");
                        try {
                            resource.use(); // use the resource
                        } finally {
                            System.out.println(clientName + " releases the read/write lock");
                            readLock.release();
                            writeLock.release();
                        }
                    } catch (Exception e) {
                        System.out.println(e.getMessage());
                    } finally {
                        CloseableUtils.closeQuietly(client);
                        countDownLatch.countDown();
                    }
                }
            }).start();
        }
        countDownLatch.await();
        System.out.println("end!");
    }
}

Console print log

Client#1 has got a write lock
Client#1 has got read lock
Client#1 releases the read / write lock
Client#2 has got a write lock
Client#2 has got read lock
Client#2 releases the read / write lock
Client#0 has got a write lock
Client#0 has got read lock
Client#0 releases the read / write lock
Client#4 has got a write lock
Client # 4 has got read lock
Client # 4 releases the read / write lock
Client#3 has got a write lock
Client#3 has got read lock
Client#3 releases the read / write lock
end!
4. Semaphore

Shared Semaphore is similar to the JDK's Semaphore. The JDK Semaphore maintains a set of permits; Curator calls them leases.

There are two ways to determine the maximum number of leases for the semaphore, both set in the constructor: the first uses a maxLeases value given by the user, and the second uses a SharedCountReader.

  public InterProcessSemaphoreV2(CuratorFramework client, String path, int maxLeases)
  public InterProcessSemaphoreV2(CuratorFramework client, String path, SharedCountReader count)

The main classes involved in the semaphore are:

InterProcessSemaphoreV2 - the semaphore implementation class
Lease - a lease (a single permit)
SharedCountReader - a counter used to compute the maximum number of leases

You can request one or more leases by calling the acquire method. If the semaphore does not currently have enough leases, the requesting thread blocks; overloads with a timeout are also provided.

public Lease acquire() throws Exception
public Collection<Lease> acquire(int qty) throws Exception
public Lease acquire(long time, TimeUnit unit) throws Exception
public Collection<Lease> acquire(int qty, long time, TimeUnit unit) throws Exception

Calling acquire returns a Lease object. The client must close these Lease objects in a finally block, otherwise the leases are lost. However, if the client session is lost for some reason, such as a crash, the leases held by that client are automatically closed, so other clients can continue to use them. Leases can also be returned, one or several at a time, through the following methods:

public void returnLease(Lease lease)
public void returnAll(Collection<Lease> leases) 

A demo program is as follows

public class SharedSemaphoreTest {
    private static final int MAX_LEASE = 10;
    private static final String PATH = "/testZK/semaphore";
    private static final FakeLimitedResource resource = new FakeLimitedResource();

    public static void main(String[] args) throws Exception {
        CuratorFramework client = ZKUtils.getClient();
        client.start();
        InterProcessSemaphoreV2 semaphore = new InterProcessSemaphoreV2(client, PATH, MAX_LEASE);
        Collection<Lease> leases = semaphore.acquire(5);
        System.out.println("Number of leases obtained: " + leases.size());
        Lease lease = semaphore.acquire();
        System.out.println("Got a single lease");
        resource.use(); // use the resource
        //Apply for 5 more leases. Only 4 remain, so this acquire will time out
        Collection<Lease> leases2 = semaphore.acquire(5, 10, TimeUnit.SECONDS);
        System.out.println("Get leases (null if timed out): " + leases2);
        System.out.println("Release one lease");
        semaphore.returnLease(lease);
        //Apply for 5 again; this time there are just enough
        leases2 = semaphore.acquire(5, 10, TimeUnit.SECONDS);
        System.out.println("Get leases (null if timed out): " + leases2);
        System.out.println("Release all leases in the collections");
        semaphore.returnAll(leases);
        semaphore.returnAll(leases2);
        client.close();
        System.out.println ("end!");
    }
}

Console print log

Number of leases obtained: 5
Get a single lease
Get the lease. If it times out, it will be null: null
Release lease
Get the lease. If it times out, it will be null: [org.apache.curator.framework.recipes.locks.Lease@..., org.apache.curator.framework.recipes.locks.Lease@..., org.apache.curator.framework.recipes.locks.Lease@..., org.apache.curator.framework.recipes.locks.Lease@..., org.apache.curator.framework.recipes.locks.Lease@...]
Release all leases in the collection
end!

The four locks mentioned above are all fair locks: from ZooKeeper's point of view, each client obtains the lock in the order it requested, and that is fair.

5. Multi shared lock

Multi Shared Lock is a container for locks. When acquire is called, all the contained locks are acquired; if any acquisition fails, every acquired lock is released. Likewise, calling release releases all the contained locks (failures are ignored). Essentially it is a group lock: acquire and release operations on it are forwarded to all the locks it contains.

It mainly involves two categories

InterProcessMultiLock - the implementation class operating on the group of locks
InterProcessLock - the distributed lock interface

Its constructors take either a list of locks or a list of ZooKeeper paths; usage is the same as for a shared lock:

public InterProcessMultiLock(CuratorFramework client, List<String> paths)
public InterProcessMultiLock(List<InterProcessLock> locks)

A demo program is as follows

public class MultiSharedLockTest {
    private static final String lockPath1 = "/testZK/MSLock1";
    private static final String lockPath2 = "/testZK/MSLock2";
    private static final FakeLimitedResource resource = new FakeLimitedResource();

    public static void main(String[] args) throws Exception {
        CuratorFramework client = ZKUtils.getClient();
        client.start();

        InterProcessLock lock1 = new InterProcessMutex(client, lockPath1); // reentrant lock
        InterProcessLock lock2 = new InterProcessSemaphoreMutex(client, lockPath2); // non-reentrant lock
        //Group lock
        InterProcessMultiLock lock = new InterProcessMultiLock(Arrays.asList(lock1, lock2));
        if (!lock.acquire(10, TimeUnit.SECONDS)) {
            throw new IllegalStateException("cannot acquire the multi lock");
        }
        System.out.println("Acquired the multi lock");
        System.out.println("Holding the first lock? " + lock1.isAcquiredInThisProcess());
        System.out.println("Holding the second lock? " + lock2.isAcquiredInThisProcess());
        try {
            resource.use(); // use the resource
        } finally {
            System.out.println("Release the multi lock");
            lock.release(); // release the multi lock
        }
        System.out.println("Holding the first lock? " + lock1.isAcquiredInThisProcess());
        System.out.println("Holding the second lock? " + lock2.isAcquiredInThisProcess());
        client.close();
        System.out.println("end!");
    }
}

Do you have any questions after reading? Feel free to leave a comment so we can discuss! This article was collated from the Internet (original source unknown) with slight changes. If there is any infringement, please get in touch promptly.
