collection.Cache: the in-process memory caching tool in go-zero

Time: 2020-11-26

Reposted from go-zero.
go-zero microservice library: https://github.com/tal-tech/go-zero

Caching with collection.Cache

The go-zero microservice framework provides many out-of-the-box tools. Good tools not only improve service performance, but also make code more robust, help avoid errors, unify code style, and make the code easier for others to read. This series of articles introduces the usage and implementation principles of the tools in the go-zero framework.

The in-process caching tool collection.Cache

When doing server development, you will sooner or later need a cache. go-zero provides a simple cache wrapper, collection.Cache. Its basic usage is as follows (user stands for any value you want to cache):

// Initialize the cache; WithLimit specifies the maximum number of cached entries
c, err := collection.NewCache(time.Minute, collection.WithLimit(10000))
if err != nil {
  panic(err)
}

// Set a cache entry
c.Set("key", user)

// Get a cache entry; ok reports whether the key exists
v, ok := c.Get("key")

// Delete a cache entry
c.Del("key")

// Get a cache entry; if the key does not exist, fetch is called to generate it
v, err := c.Take("key", func() (interface{}, error) {
  return user, nil
})

The cache provides the following features:

  • Automatic expiration of entries, with a configurable expiration time
  • Cache size limit, with a configurable maximum number of entries
  • Adding, updating, and deleting cache entries
  • Hit-rate statistics
  • Concurrency safety
  • Cache breakdown protection

Implementation principles:

Cache expiration is managed automatically with a timingWheel (https://github.com/tal-tech/g…):

timingWheel, err := NewTimingWheel(time.Second, slots, func(k, v interface{}) {
    key, ok := k.(string)
    if !ok {
        return
    }

    cache.Del(key)
})
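
For context, here is a minimal sketch of how Set could register an entry's expiration with the timing wheel. The SetTimer/MoveTimer calls and the surrounding fields (c.data, c.lruCache, c.expire) are assumptions about the timingWheel API and the Cache struct, not code quoted from go-zero:

// Sketch: Set stores the value, updates the LRU list, and (re)schedules
// the key's expiration on the timing wheel. SetTimer/MoveTimer are assumed
// timingWheel methods; c.data, c.lruCache and c.expire are assumed fields.
func (c *Cache) Set(key string, value interface{}) {
    c.lock.Lock()
    _, exists := c.data[key]
    c.data[key] = value
    c.lruCache.add(key)
    c.lock.Unlock()

    if exists {
        // The key already had a timer; push its expiration back
        c.timingWheel.MoveTimer(key, c.expire)
    } else {
        // New key: schedule the expire callback shown above
        c.timingWheel.SetTimer(key, value, c.expire)
    }
}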

The cache size limit uses an LRU eviction strategy. Whenever a new entry is added, the cache checks whether the limit has been exceeded. The concrete code is implemented in keyLru:

func (klru *keyLru) add(key string) {
    if elem, ok := klru.elements[key]; ok {
        klru.evicts.MoveToFront(elem)
        return
    }

    // Add new item
    elem := klru.evicts.PushFront(key)
    klru.elements[key] = elem

    // Evict the oldest entry if the size limit is exceeded
    if klru.evicts.Len() > klru.limit {
        klru.removeOldest()
    }
}
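
removeOldest is not shown above; here is a minimal sketch of what it could look like, assuming the container/list elements store the key and that an onEvict callback (an assumed name) removes the entry from the cache's underlying map:

// Sketch: evict the least recently used key. evicts.Back() returns the
// coldest element; onEvict is an assumed callback that deletes the entry
// from the cache's data map.
func (klru *keyLru) removeOldest() {
    elem := klru.evicts.Back()
    if elem == nil {
        return
    }

    klru.evicts.Remove(elem)
    key := elem.Value.(string)
    delete(klru.elements, key)
    klru.onEvict(key)
}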

Hit-rate statistics are implemented by cacheStat in the code: it automatically counts cache hits and misses, and periodically prints the hit ratio and query volume.

The printed output looks like this:

cache(proc) - qpm: 2, hit_ratio: 50.0%, elements: 0, hit: 1, miss: 1
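
As a rough illustration, this kind of accounting can be done with atomic counters plus a background goroutine that logs and resets them every minute. Only IncrementHit/IncrementMiss appear in the Take snippet below; the rest of this sketch (the cacheStat fields, statLoop, the sizeCallback parameter) is assumed. Assumed imports: "log", "sync/atomic", "time".

// Sketch: lock-free hit/miss accounting with atomic counters.
type cacheStat struct {
    name string
    hit  uint64
    miss uint64
}

func (cs *cacheStat) IncrementHit()  { atomic.AddUint64(&cs.hit, 1) }
func (cs *cacheStat) IncrementMiss() { atomic.AddUint64(&cs.miss, 1) }

// statLoop prints and resets the counters once a minute; sizeCallback
// reports the current number of cached elements.
func (cs *cacheStat) statLoop(sizeCallback func() int) {
    ticker := time.NewTicker(time.Minute)
    defer ticker.Stop()

    for range ticker.C {
        hit := atomic.SwapUint64(&cs.hit, 0)
        miss := atomic.SwapUint64(&cs.miss, 0)
        total := hit + miss
        if total == 0 {
            continue
        }

        percent := 100 * float32(hit) / float32(total)
        log.Printf("cache(%s) - qpm: %d, hit_ratio: %.1f%%, elements: %d, hit: %d, miss: %d",
            cs.name, total, percent, sizeCallback(), hit, miss)
    }
}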

Cache breakdown protection relies on syncx.SharedCalls (https://github.com/tal-tech/g…): concurrent requests for the same key are collapsed into a single fetch, and the result is shared among all the callers. SharedCalls will be covered in more detail in a later article. The implementation is as follows:

func (c *Cache) Take(key string, fetch func() (interface{}, error)) (interface{}, error) {
    val, fresh, err := c.barrier.DoEx(key, func() (interface{}, error) {
        v, e := fetch()
        if e != nil {
            return nil, e
        }

        c.Set(key, v)
        return v, nil
    })
    if err != nil {
        return nil, err
    }

    if fresh {
        c.stats.IncrementMiss()
        return val, nil
    }

    // got the result from a previously shared in-flight query
    c.stats.IncrementHit()
    return val, nil
}
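
To see the breakdown protection in action, here is a small, hedged demo (the import path core/collection and the expectation of exactly one fetch are assumptions based on the repository above): ten goroutines Take the same missing key, and the shared call should let only one of them run the slow fetch.

package main

import (
    "fmt"
    "sync"
    "sync/atomic"
    "time"

    "github.com/tal-tech/go-zero/core/collection"
)

func main() {
    c, err := collection.NewCache(time.Minute)
    if err != nil {
        panic(err)
    }

    var fetches int64
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // All goroutines ask for the same missing key at once
            _, _ = c.Take("key", func() (interface{}, error) {
                atomic.AddInt64(&fetches, 1)
                time.Sleep(100 * time.Millisecond) // simulate a slow DB query
                return "value", nil
            })
        }()
    }
    wg.Wait()

    // With SharedCalls collapsing concurrent requests, this typically prints 1
    fmt.Println("fetches:", atomic.LoadInt64(&fetches))
}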

This article introduced the cache tool in the go-zero framework, which is very practical in real projects. Making good use of such tools helps a lot with both service performance and development efficiency. I hope this article brings you something useful.