Thoroughly understand MyBatis cache (Part one)

Time: 2021-3-29

In a web application, caching is an indispensable component. We usually use Redis, Memcached, or other caching middleware to intercept a large portion of requests before they reach the database, reducing the pressure on it. As an important persistence-layer framework, MyBatis naturally provides caching support internally. Adding a cache at the framework level both reduces database pressure and improves query speed, killing two birds with one stone. The MyBatis cache structure consists of a first-level cache and a second-level cache, both built from implementations of the Cache interface. In the following sections I will therefore first walk through the source code of several Cache implementation classes, and then analyze how the first-level and second-level caches are implemented.

The main contents of this article are as follows:

[Figure: outline of this article]

MyBatis cache architecture

MyBatis's cache-related classes all live in the cache package. We touched on them in the previous article; today we will look at them in detail. There is one top-level interface, Cache, and only one default implementation class, PerpetualCache.
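For reference, the Cache interface itself is quite small. Below is an abridged sketch of it (method set based on MyBatis 3.x; getReadWriteLock is left out here, since newer versions give it a default implementation returning null and the framework does not rely on it):

`// Abridged sketch of org.apache.ibatis.cache.Cache (MyBatis 3.x)
public interface Cache {

    // unique identifier of this cache (for the second-level cache, usually the mapper namespace)
    String getId();

    void putObject(Object key, Object value);

    Object getObject(Object key);

    Object removeObject(Object key);

    void clear();

    int getSize();
}`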

Here is the class diagram of Cache:

[Figure: class diagram of the Cache interface and its implementations]

Since PerpetualCache is the default implementation class, let’s start with it.

PerpetualCache

A PerpetualCache object provides only the most basic storage, which is why it is called the basic cache. A cache, however, can carry many additional features, such as eviction policies, logging, and scheduled flushing. When you need these features you add them on top of the basic cache, and when you don’t, you simply leave them off. Does this remind you of a design pattern? It is the decorator pattern, and PerpetualCache plays the role of the concrete component.

The decorator pattern adds responsibilities to an object without changing the object itself; it offers a more flexible alternative to inheritance for extending the functionality of the original object.

Besides the basic cache, MyBatis defines many decorators that also implement the Cache interface; these decorators provide the various additional features.

How are these caches classified?

All of these caches can be roughly divided into three categories: the basic cache, eviction-algorithm caches, and feature decorators.

The following is a detailed description and comparison of each cache:

[Table: description and comparison of the cache implementation classes]

Source code of the cache implementation classes

Source code of PerpetualCache

PerpetualCache is a cache class with only basic functionality; it uses a HashMap to implement the cache. Its source code is as follows:

`public class PerpetualCache implements Cache {

      private final String id;
      // use a HashMap as the cache
      private Map<Object, Object> cache = new HashMap<>();

      public PerpetualCache(String id) {
        this.id = id;
      }

      @Override
      public String getId() {
        return id;
      }

      @Override
      public int getSize() {
        return cache.size();
      }

      // store the key-value pair in the HashMap
      @Override
      public void putObject(Object key, Object value) {
        cache.put(key, value);
      }

      // look up a cache entry
      @Override
      public Object getObject(Object key) {
        return cache.get(key);
      }

      // remove a cache entry
      @Override
      public Object removeObject(Object key) {
        return cache.remove(key);
      }

      // clear the cache
      @Override
      public void clear() {
        cache.clear();
      }

      // part of the code is omitted
    }`

The above is the core code of PerpetualCache, the so-called basic cache. It is very simple. Next, we decorate this class with decorator classes to enrich its functionality.
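Before moving on, here is a minimal usage sketch of that decoration, assuming the MyBatis 3.x package layout; it wraps the basic cache with the LRU decorator introduced next:

`import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.decorators.LruCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class DecoratorDemo {
    public static void main(String[] args) {
        // the basic cache, wrapped by the LRU eviction decorator
        Cache cache = new LruCache(new PerpetualCache("demo-cache"));
        cache.putObject("key", "value");
        System.out.println(cache.getObject("key")); // value
    }
}`

Because every decorator also implements Cache, decorators can be stacked freely, for example a LoggingCache around an LruCache around a PerpetualCache.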

LruCache

LruCache, as the name suggests, is a cache implementation class that applies the LRU (Least Recently Used) eviction algorithm.

In addition, MyBatis provides FifoCache with a FIFO policy. It does not, however, provide an LFU (Least Frequently Used) cache, which is also a common eviction algorithm; if you are interested, you can implement one yourself, as sketched below.
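For instance, reusing the delegate structure you will see in LruCache below, a home-grown LFU decorator might look like the following. This is purely my own sketch, not MyBatis code: the class name, the capacity handling, and the linear-scan eviction are all illustrative choices (on MyBatis versions where Cache also declares getReadWriteLock without a default, that method would have to be implemented as well):

`import java.util.HashMap;
import java.util.Map;

import org.apache.ibatis.cache.Cache;

// Hypothetical LFU decorator (not part of MyBatis): evicts the least
// frequently used key once the number of tracked keys exceeds the capacity.
public class LfuCache implements Cache {

    private final Cache delegate;
    private final Map<Object, Integer> frequencies = new HashMap<>();
    private final int capacity;

    public LfuCache(Cache delegate, int capacity) {
        this.delegate = delegate;
        this.capacity = capacity;
    }

    @Override
    public String getId() {
        return delegate.getId();
    }

    @Override
    public int getSize() {
        return delegate.getSize();
    }

    @Override
    public void putObject(Object key, Object value) {
        // make room before tracking a brand-new key
        if (!frequencies.containsKey(key) && frequencies.size() >= capacity) {
            evictLeastFrequent();
        }
        frequencies.putIfAbsent(key, 0);
        delegate.putObject(key, value);
    }

    @Override
    public Object getObject(Object key) {
        // count the access; misses are left uncounted
        frequencies.computeIfPresent(key, (k, n) -> n + 1);
        return delegate.getObject(key);
    }

    @Override
    public Object removeObject(Object key) {
        frequencies.remove(key);
        return delegate.removeObject(key);
    }

    @Override
    public void clear() {
        frequencies.clear();
        delegate.clear();
    }

    // linear scan for the smallest access count; O(n), but fine for a sketch
    private void evictLeastFrequent() {
        Object leastUsed = null;
        int min = Integer.MAX_VALUE;
        for (Map.Entry<Object, Integer> entry : frequencies.entrySet()) {
            if (entry.getValue() < min) {
                min = entry.getValue();
                leastUsed = entry.getKey();
            }
        }
        if (leastUsed != null) {
            frequencies.remove(leastUsed);
            delegate.removeObject(leastUsed);
        }
    }
}`

Like LruCache, this sketch is not thread-safe by itself; in MyBatis, thread safety is typically layered on by yet another decorator, SynchronizedCache.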

Next, let’s look at the implementation of LruCache.

`public class LruCache implements Cache {

        private final Cache delegate;
        private Map<Object, Object> keyMap;
        private Object eldestKey;

        public LruCache(Cache delegate) {
            this.delegate = delegate;
            setSize(1024);
        }

        public int getSize() {
            return delegate.getSize();
        }

        public void setSize(final int size) {
            /*
             * Initialize keyMap. Note that keyMap's type inherits from
             * LinkedHashMap and overrides its removeEldestEntry method.
             */
            keyMap = new LinkedHashMap<Object, Object>(size, .75F, true) {
                private static final long serialVersionUID = 4267176411845948333L;

                // override LinkedHashMap's removeEldestEntry method
                @Override
                protected boolean removeEldestEntry(Map.Entry<Object, Object> eldest) {
                    boolean tooBig = size() > size;
                    if (tooBig) {
                        // record the key of the cache entry to be removed
                        eldestKey = eldest.getKey();
                    }
                    return tooBig;
                }
            };
        }

        @Override
        public void putObject(Object key, Object value) {
            // store the cache entry
            delegate.putObject(key, value);
            cycleKeyList(key);
        }

        @Override
        public Object getObject(Object key) {
            // refresh the position of the key in keyMap
            keyMap.get(key);
            // get the corresponding cache entry from the decorated cache
            return delegate.getObject(key);
        }

        @Override
        public Object removeObject(Object key) {
            // remove the corresponding cache entry from the decorated cache
            return delegate.removeObject(key);
        }

        // clear the cache
        @Override
        public void clear() {
            delegate.clear();
            keyMap.clear();
        }

        private void cycleKeyList(Object key) {
            // store the key in keyMap
            keyMap.put(key, key);
            if (eldestKey != null) {
                // remove the corresponding cache entry from the decorated cache
                delegate.removeObject(eldestKey);
                eldestKey = null;
            }
        }

        // part of the code is omitted
    }`

As the code above shows, the keyMap field of LruCache is the key to implementing the LRU policy. Its type inherits from LinkedHashMap and overrides the removeEldestEntry method. LinkedHashMap can maintain the insertion order of key-value pairs: when a new pair is inserted, LinkedHashMap’s internal tail node points to the newly inserted node, while the head node points to the earliest inserted pair, that is, the one that has not been accessed for the longest time. By default, LinkedHashMap maintains only insertion order. To build an LRU cache on top of it, the accessOrder property must be set to true through the constructor; LinkedHashMap then maintains the access order of key-value pairs instead.

For example, in the code above, the getObject method executes keyMap.get(key) to refresh the position of the corresponding entry in the LinkedHashMap: the entry is moved to the tail of the linked list, and the tail represents the most recently accessed or inserted node. Besides setting accessOrder to true, you also need to override the removeEldestEntry method. LinkedHashMap calls this method when inserting a new key-value pair, to decide whether the eldest entry should be removed after the insertion.

In the code above, when keyMap’s size exceeds the capacity specified via setSize (called from the constructor), keyMap evicts the key that has not been accessed for the longest time and saves it in eldestKey. The cycleKeyList method then passes eldestKey to the decorated cache’s removeObject method to remove the corresponding cache entry.
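The access-order behavior described above is easy to verify with LinkedHashMap alone, without any MyBatis classes. A minimal sketch:

`import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration goes from least to most recently accessed
        Map<String, String> map = new LinkedHashMap<String, String>(16, .75F, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > 3; // keep at most 3 entries
            }
        };

        map.put("a", "1");
        map.put("b", "2");
        map.put("c", "3");
        map.get("a");      // refreshes "a": it moves to the tail
        map.put("d", "4"); // exceeds the capacity, so the head ("b") is evicted

        System.out.println(map.keySet()); // [c, a, d]
    }
}`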

BlockingCache

BlockingCache implements a blocking feature based on Java’s ReentrantLock: only one thread at a time is allowed to access the cache entry for a given key, while other threads requesting the same key are blocked.

Now let’s take a look at the source code of BlockingCache.

`public class BlockingCache implements Cache {

        private long timeout;
        private final Cache delegate;
        private final ConcurrentHashMap<Object, ReentrantLock> locks;

        public BlockingCache(Cache delegate) {
            this.delegate = delegate;
            this.locks = new ConcurrentHashMap<Object, ReentrantLock>();
        }

        @Override
        public void putObject(Object key, Object value) {
            try {
                // store the cache entry
                delegate.putObject(key, value);
            } finally {
                // release the lock
                releaseLock(key);
            }
        }

        @Override
        public Object getObject(Object key) {
            // acquire the lock
            acquireLock(key);
            Object value = delegate.getObject(key);
            // if the cache hits, release the lock; note that a miss does NOT release it
            if (value != null) {
                // release the lock
                releaseLock(key);
            }
            return value;
        }

        @Override
        public Object removeObject(Object key) {
            // only release the lock
            releaseLock(key);
            return null;
        }

        private ReentrantLock getLockForKey(Object key) {
            ReentrantLock lock = new ReentrantLock();
            // store the <key, lock> pair in locks
            ReentrantLock previous = locks.putIfAbsent(key, lock);
            return previous == null ? lock : previous;
        }

        private void acquireLock(Object key) {
            Lock lock = getLockForKey(key);
            if (timeout > 0) {
                try {
                    // try to acquire the lock within the timeout
                    boolean acquired = lock.tryLock(timeout, TimeUnit.MILLISECONDS);
                    if (!acquired) {
                        throw new CacheException("...");
                    }
                } catch (InterruptedException e) {
                    throw new CacheException("...");
                }
            } else {
                // acquire the lock, blocking if necessary
                lock.lock();
            }
        }

        private void releaseLock(Object key) {
            // get the lock corresponding to the current key
            ReentrantLock lock = locks.get(key);
            if (lock.isHeldByCurrentThread()) {
                // release the lock
                lock.unlock();
            }
        }

        // part of the code is omitted
    }`

As shown above, when querying the cache, the getObject method first acquires and holds the lock corresponding to the key. If the cache hits, getObject releases the lock; otherwise the lock remains held. A null return from getObject indicates a cache miss, in which case MyBatis queries the database and calls putObject to store the result. putObject, in turn, releases the lock for that key in its finally block, so blocked threads can resume running.
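The following standalone sketch makes this block-on-miss behavior visible, assuming the MyBatis 3.x classes (the thread timings are arbitrary and only for demonstration): the loader thread misses and keeps the lock while it 'queries the database', and the reader thread blocks inside getObject until putObject fills the entry and releases the lock.

`import org.apache.ibatis.cache.Cache;
import org.apache.ibatis.cache.decorators.BlockingCache;
import org.apache.ibatis.cache.impl.PerpetualCache;

public class BlockingDemo {
    public static void main(String[] args) throws InterruptedException {
        Cache cache = new BlockingCache(new PerpetualCache("demo"));

        Thread loader = new Thread(() -> {
            // miss: returns null and keeps holding the lock for this key
            System.out.println("loader got: " + cache.getObject("k"));
            sleep(1000); // simulate the database query
            cache.putObject("k", "value-from-db"); // store the result, release the lock
        });

        Thread reader = new Thread(() -> {
            // blocks until the loader calls putObject
            System.out.println("reader got: " + cache.getObject("k"));
        });

        loader.start();
        Thread.sleep(100); // make sure the loader misses first
        reader.start();
        loader.join();
        reader.join();
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}`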

This description is a bit wordy, but the class comment on BlockingCache is short and clear, so I quote it here:

`It sets a lock over a cache key when the element is not found in cache.
This way, other threads will wait until this element is filled instead of hitting the database.
`

That is, when the element for a given key is not found in the cache, BlockingCache acquires the lock for that key. Other threads requesting the same key then wait until the element is filled into the cache, rather than all of them hitting the database.

In the code above, the logic of the removeObject method looks odd: it only calls releaseLock to release the lock, without calling the decorated cache’s removeObject to actually remove the cache entry. Why? Think about it for a moment; the answer will come up when we analyze the logic of the second-level cache.

CacheKey

In MyBatis, the purpose of the cache is to improve query efficiency and reduce database pressure. Given that, have you ever thought about what the key and the value in the cache actually are? The value is easy to answer: it is the result of the SQL query.

But what is the key? A string, or something else? If it were a string, the first candidate that comes to mind is the SQL statement itself. But that would be wrong.

For example:

`SELECT * FROM author where id > ?
`

The results for id > 1 and id > 10 may differ, so the SQL statement alone cannot serve as the key: the runtime parameters that affect the query result must be covered as well. In addition, pagination changes the result set, so the key should also cover the pagination parameters. In short, a plain SQL string will not do; we need a composite object that covers every factor that can affect the query result. In MyBatis, that composite object is CacheKey.

Let’s take a look at its definition.

`public class CacheKey implements Cloneable, Serializable {

    private static final int DEFAULT_MULTIPLYER = 37;
    private static final int DEFAULT_HASHCODE = 17;
    // multiplier, 37 by default
    private final int multiplier;
    // CacheKey's hashCode, which integrates the various influence factors
    private int hashcode;
    // checksum
    private long checksum;
    // number of influence factors
    private int count;
    // the influence factors themselves
    private List<Object> updateList;

    public CacheKey() {
        this.hashcode = DEFAULT_HASHCODE;
        this.multiplier = DEFAULT_MULTIPLYER;
        this.count = 0;
        this.updateList = new ArrayList<Object>();
    }
    // other methods omitted
}
`

As shown above, apart from multiplier, which stays constant, all of the other variables are modified during the update operation.

Let’s take a look at the code for the update operation.

`/* Each call to update means a new influence factor joins the calculation */
    public void update(Object object) {
        int baseHashCode = object == null ? 1 : ArrayUtil.hashCode(object);
        // increment count
        count++;
        // accumulate the checksum
        checksum += baseHashCode;
        // fold count into baseHashCode
        baseHashCode *= count;

        // compute the hash code
        hashcode = multiplier * hashcode + baseHashCode;

        // save the influence factor
        updateList.add(object);
    }`

As new influence factors keep joining the calculation, hashcode and checksum become increasingly complex and random, which lowers the collision rate and spreads CacheKeys more evenly across the cache. A CacheKey is eventually stored in a HashMap as a key, so it must override the equals and hashCode methods.

Let’s take a look at the implementation of these two methods.

`public boolean equals(Object object) {
    // check whether it is the same object
    if (this == object) {
        return true;
    }
    // check whether object is a CacheKey
    if (!(object instanceof CacheKey)) {
        return false;
    }
    final CacheKey cacheKey = (CacheKey) object;

    // check whether the hash codes are equal
    if (hashcode != cacheKey.hashcode) {
        return false;
    }
    // check whether the checksums are equal
    if (checksum != cacheKey.checksum) {
        return false;
    }
    // check whether the counts are equal
    if (count != cacheKey.count) {
        return false;
    }
    // if all the quick checks pass, compare the influence factors one by one
    for (int i = 0; i < updateList.size(); i++) {
        Object thisObject = updateList.get(i);
        Object thatObject = cacheKey.updateList.get(i);
        if (!ArrayUtil.equals(thisObject, thatObject)) {
            return false;
        }
    }
    return true;
}

public int hashCode() {
    // return the hashcode variable
    return hashcode;
}
`

The checking logic of the equals method is strict: it compares several member variables of the CacheKey and returns true only when all of them are equal. The hashCode method is much simpler; it just returns the hashcode variable.
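A quick sketch shows the effect: two keys built from the same influence factors are equal, while changing a single runtime parameter yields a different key. The update calls below roughly mirror the factors MyBatis feeds in (statement id, RowBounds offset and limit, the SQL text, then the parameter values; see BaseExecutor#createCacheKey), but the statement id and SQL here are made up for illustration:

`import org.apache.ibatis.cache.CacheKey;

public class CacheKeyDemo {
    public static void main(String[] args) {
        CacheKey key1 = buildKey(1);
        CacheKey key2 = buildKey(10);
        CacheKey key3 = buildKey(1);

        System.out.println(key1.equals(key2)); // false: one influence factor differs
        System.out.println(key1.equals(key3)); // true: identical influence factors
    }

    private static CacheKey buildKey(int idParam) {
        CacheKey key = new CacheKey();
        key.update("com.example.AuthorMapper.selectByIdGreaterThan"); // hypothetical statement id
        key.update(0);                 // RowBounds offset
        key.update(Integer.MAX_VALUE); // RowBounds limit (no row limit)
        key.update("SELECT * FROM author WHERE id > ?");
        key.update(idParam);           // runtime parameter value
        return key;
    }
}`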

That concludes this first look at CacheKey. It is used by both the first-level and the second-level cache, so we will see it again shortly.

OK, that wraps up the source code analysis of the cache implementation classes.
