web technology sharing | LRU cache elimination algorithm


Before understanding LRU, we should first understand caching. Computers have cache memory that temporarily stores the most frequently used data. When the cached data exceeds a certain size, the system reclaims it to free up space for new data, but retrieving that data from the system again is relatively expensive.

Cache requirements:

  • Fixed size: the cache needs some limit to bound its memory usage.
  • Fast access: cache insert and lookup operations should be fast, preferably O(1) time.
  • Eviction when full: the cache should have an efficient algorithm for evicting entries once the memory limit is reached.

If a cache replacement algorithm assists management, deleting the least used data according to the configured memory size and actively freeing space before the system reclaims it, the whole retrieval process becomes much faster. This is why the LRU cache eviction algorithm emerged.

LRU principle and implementation

The LRU (Least Recently Used) cache eviction algorithm proposes that data accessed frequently and recently should be retained with higher priority, while data accessed infrequently should be evicted. In other words, the most recently used data is likely to be used again soon, and the data that has gone unaccessed the longest is discarded, so that later lookups are faster. For example, Vue's keep-alive component is an implementation of LRU.


The central idea of the implementation can be divided into the following steps:

  • New data is inserted at the head of the linked list.
  • Whenever the cache hits (that is, the cache data is accessed), the data is moved to the head of the linked list.
  • When the cache memory is full (when the linked list is full), the data at the end of the linked list is eliminated.


An example is used here to illustrate the LRU process.

[Figure: eviction of the sequence A, B, C, D, B, E in a three-slot LRU cache]

  1. At the beginning, the memory space is empty, so A, B, and C can be inserted in sequence without any problem.
  2. When D is added there is a problem: the memory space is full. According to the LRU algorithm, A has stayed in memory the longest, so A is chosen and evicted.
  3. When B is referenced again, B becomes active again, and C becomes the entry that has gone unused the longest.
  4. When E is added, memory is again insufficient, so C, which has now been unused the longest, is evicted. The memory space then holds E -> B -> D.
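The trace above can be reproduced with a minimal sketch that leans on the insertion order of a JavaScript Map (the `MiniLRU` name and its shape are illustrative assumptions, not the implementation this article builds below):

```typescript
// A minimal LRU sketch: a Map's iteration order is its insertion
// order, so the first key is always the least recently used.
class MiniLRU<T> {
  private map = new Map<string, T>();
  constructor(private maxSize: number) {}

  get(key: string): T | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    // Re-insert so this key becomes the most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  put(key: string, value: T): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.maxSize) {
      // Evict the least recently used (first) key.
      const oldest = this.map.keys().next().value;
      if (oldest !== undefined) this.map.delete(oldest);
    }
    this.map.set(key, value);
  }

  // Keys from least to most recently used.
  keys(): string[] {
    return [...this.map.keys()];
  }
}

const cache = new MiniLRU<number>(3);
['A', 'B', 'C'].forEach((k, i) => cache.put(k, i)); // fills the cache
cache.put('D', 3); // full: A has been unused the longest, evict A
cache.get('B');    // cache hit: B becomes the most recently used
cache.put('E', 4); // full again: C is evicted
console.log(cache.keys()); // ['D', 'B', 'E'] (most recent last)
```

The result matches the E -> B -> D state in step 4. The classic implementation described next replaces the Map trick with an explicit doubly linked list plus a HashMap.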

Implementing LRU with a doubly linked list and a HashMap

The common LRU algorithm is implemented with a doubly linked list and a HashMap.

Doubly linked list: manages the order of the cached data nodes. New data and cache-hit (recently accessed) data are placed at the Header node, and tail nodes are evicted according to the memory size.

HashMap: stores all the nodes by key, so that on a cache hit (a data access) the node can be located directly and the replacement or deletion operation performed without traversing the list.

Doubly linked list

A doubly linked list is one of many kinds of linked list. A linked list uses a chained storage structure, and each element in it is called a data node.

Each data node contains a data field and pointer fields. The pointer fields determine the order between nodes, and the order of the list is updated by updating the pointer fields of its data nodes.

Each data node of a doubly linked list contains one data field and two pointer fields:

  • prev points to the previous data node;
  • data holds the data of the current node;
  • next points to the next data node.

[Figure: a data node with a prev pointer field, a data field, and a next pointer field]

The pointer fields determine the order of the list. A doubly linked list has bidirectional pointer fields: its data nodes do not point in a single direction but in both directions. That is, the prev pointer field points to the previous data node, and the next pointer field points to the next data node.

[Figure: a doubly linked list with bidirectional pointers between adjacent nodes]

By comparison:

  • A singly linked list has only one pointer field per node.
  • A circular doubly linked list also points in both directions; in addition, the head node's pointer field points to the tail node, and the tail node's pointer field points to the head node.
Special nodes: the Header and Tailer nodes

There are two special nodes in the linked list, the Header node and the Tailer node, that is, the head node and the tail node. The head node holds the newest data or a cache hit (recently accessed data), while the tail node holds the data that has gone unused the longest and is about to be evicted.

As with any algorithm, we care about its time and space complexity. LRU must keep new data and cache-hit data at the front of the list (the Header) and delete the last node (the Tailer) on eviction, while avoiding an end-to-end traversal when locating data. The bidirectional pointer fields of the doubly linked list make this cheap: updating the pointer fields of a single data node is enough to update the list. With the Header node as an anchor, head insertion adds new nodes quickly; with the Tailer node, the list order can also be updated quickly on eviction. Both avoid traversing the list from beginning to end and reduce the time complexity of the algorithm.

Reordering example

Suppose the LRU linked list holds six data nodes, [6, 5, 4, 3, 2, 1], where the node holding 6 is the Header (head) node and the node holding 1 is the Tailer (tail) node. If 3 is now accessed (a cache hit), 3 should move to the head of the list. With array thinking we would delete 3 and shift the rest, but by taking advantage of the bidirectional pointers of the doubly linked list we can update the list quickly:

  • When 3 is unlinked, 4 and 2 become directly connected: the next pointer field of 4 points to the node holding 2, and likewise the prev pointer field of 2 points to the node holding 4. Then 3 is re-inserted at the head of the list.
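The pointer rewiring above can be sketched with a minimal node type (the `Node`, `buildList`, and `moveToHead` names here are hypothetical helpers for illustration, not the Entry/LinkedList classes implemented later):

```typescript
// Minimal doubly linked node for the [6, 5, 4, 3, 2, 1] example.
interface Node {
  data: number;
  prev: Node | null;
  next: Node | null;
}

// Build a doubly linked list from an array of values; returns the head.
function buildList(values: number[]): Node {
  const head: Node = { data: values[0], prev: null, next: null };
  let cur = head;
  for (const v of values.slice(1)) {
    const node: Node = { data: v, prev: cur, next: null };
    cur.next = node;
    cur = node;
  }
  return head;
}

// Unlink a node and re-insert it in front of the head. Only four
// pointer fields change, so this is O(1) with no list traversal.
function moveToHead(head: Node, node: Node): Node {
  if (node === head) return head;
  node.prev!.next = node.next;               // e.g. 4's next now points to 2
  if (node.next) node.next.prev = node.prev; // e.g. 2's prev now points to 4
  node.prev = null;
  node.next = head;
  head.prev = node;
  return node; // the moved node is the new head
}

let head = buildList([6, 5, 4, 3, 2, 1]);
// Find the node holding 3 (a real cache would use the HashMap for this).
let three = head;
while (three.data !== 3) three = three.next!;
head = moveToHead(head, three);

const order: number[] = [];
for (let n: Node | null = head; n; n = n.next) order.push(n.data);
console.log(order); // [3, 6, 5, 4, 2, 1]
```

Note the linear scan to find the node holding 3; this is exactly the traversal the HashMap in the next section removes.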

[Figure: node 3 unlinked from between 4 and 2 and moved to the head of the list]


As for why a HashMap is used: in one sentence, mainly because getting a value from a HashMap by key is much faster, which reduces the time complexity of the algorithm.


  • When we read from the cache, getting a node from the HashMap is basically O(1), while traversing the linked list to find it would be O(n).
  • When we access an existing node, we need to move it to the front of the list (just after the Header), which means deleting it from its current position and re-inserting it at the head. We first fetch the node from the HashMap and unlink it directly, avoiding a list traversal and reducing the time complexity from O(n) to O(1).

Since JavaScript does not provide a dedicated HashMap API, we can use an Object or a Map instead.
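A quick sketch of both substitutes; the key gives us the node directly with no list traversal (the `CachedNode` shape is a hypothetical stand-in for the Entry class below):

```typescript
// A cached node keyed by string, as a stand-in for the Entry class.
interface CachedNode { key: string; value: number }

// Option 1: a plain Object used as a dictionary.
const byKeyObject: { [key: string]: CachedNode } = {};
byKeyObject['a'] = { key: 'a', value: 1 };

// Option 2: a Map, which also allows non-string keys.
const byKeyMap = new Map<string, CachedNode>();
byKeyMap.set('a', { key: 'a', value: 1 });

// Both give O(1) lookup and deletion by key:
console.log(byKeyObject['a'].value);   // 1
console.log(byKeyMap.get('a')?.value); // 1
delete byKeyObject['a'];
byKeyMap.delete('a');
```

The implementation below uses the plain-Object form.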


Now let us use the data structures we have mastered to design and implement one; you can also refer to LeetCode problem 146.

Linked list node Entry

export class Entry<T> {

    value: T

    key: string | number

    next: Entry<T>

    prev: Entry<T>

    constructor(val: T) {
        this.value = val;
    }
}

Doubly linked list LinkedList

Main responsibilities:

  • A simple doubly linked list. Compared with an array, it has an O(1) remove operation.

export class LinkedList<T> {

    head: Entry<T>
    tail: Entry<T>

    private _len = 0

    // Insert a new value at the tail
    insert(val: T): Entry<T> {
        const entry = new Entry(val);
        this.insertEntry(entry);
        return entry;
    }

    // Insert an entry at the tail
    insertEntry(entry: Entry<T>) {
        if (!this.head) {
            this.head = this.tail = entry;
        }
        else {
            this.tail.next = entry;
            entry.prev = this.tail;
            entry.next = null;
            this.tail = entry;
        }
        this._len++;
    }

    // Remove an entry and repair the neighbouring pointers
    remove(entry: Entry<T>) {
        const prev = entry.prev;
        const next = entry.next;

        if (prev) {
            prev.next = next;
        }
        else {
            // entry is the head
            this.head = next;
        }
        if (next) {
            next.prev = prev;
        }
        else {
            // entry is the tail
            this.tail = prev;
        }
        entry.next = entry.prev = null;
        this._len--;
    }

    // Get length
    len(): number {
        return this._len;
    }

    // Clear the list
    clear() {
        this.head = this.tail = null;
        this._len = 0;
    }
}
LRU core algorithm

Main responsibilities:

  • Add data to the linked list and update the list order.
  • Update the list order on a cache hit.
  • Evict the stalest entry from the list when memory overflows.

Note that in this implementation the most recently used entry sits at the tail of the list and eviction removes the head: the mirror image of the diagrams above, with exactly the same effect.

// LRU Cache

type Dictionary<T> = { [key: string]: T };

export default class LRU<T> {

    private _list = new LinkedList<T>()

    private _maxSize = 10

    // Kept so an evicted Entry object can be reused on the next put
    private _lastRemovedEntry: Entry<T>

    private _map: Dictionary<Entry<T>> = {}

    constructor(maxSize: number) {
        this._maxSize = maxSize;
    }

    // @return Removed value
    put(key: string | number, value: T): T {
        const list = this._list;
        const map = this._map;
        let removed = null;

        if (map[key] == null) {
            const len = list.len();
            // Reuse last removed entry
            let entry = this._lastRemovedEntry;

            if (len >= this._maxSize && len > 0) {
                // Remove the least recently used (the head of the list)
                const leastUsedEntry = list.head;
                list.remove(leastUsedEntry);
                delete map[leastUsedEntry.key];
                removed = leastUsedEntry.value;
                this._lastRemovedEntry = leastUsedEntry;
            }

            if (entry) {
                entry.value = value;
            }
            else {
                entry = new Entry(value);
            }
            entry.key = key;
            list.insertEntry(entry);
            map[key] = entry;
        }
        return removed;
    }

    get(key: string | number): T {
        const entry = this._map[key];
        const list = this._list;

        if (entry != null) {
            // Move the latest used entry to the tail
            if (entry !== list.tail) {
                list.remove(entry);
                list.insertEntry(entry);
            }
            return entry.value;
        }
    }

    // Clear the cache
    clear() {
        this._list.clear();
        this._map = {};
    }

    len() {
        return this._list.len();
    }
}

Other LRU algorithms

In addition to the common LRU algorithm above, many optimized variants have been derived from the idea of LRU as requirements have grown more complex and varied.

