Cache will do the trick! Haven't you learned the architecture yet? A knowledge bottle of distributed multi-level cache architecture

Time: 2020-11-28

Whenever caching comes up, my mood brightens. Lulled by the simple key-value form, I feel a gentle breeze on my face and willows swaying, as if everything were under my control.

Do you really understand the design principles and impact of caching from the perspective of website architecture?

Rambling

In the world of business, it is often said that cash is king. In the world of the Internet, the mobile Internet, and software technology in general, the counterpart is: cache is king.

Why say that?

Think about it: if some part of a complete network request (HTTP, SOAP, RPC, etc.) hits a cache somewhere along its execution path, can't you respond to the client that much earlier?

That is also why, in interviews at many large and medium-sized companies, a whole series of questions about cache usage, principles, and high availability get thrown at you until you can no longer hold up.

What is caching?

Cache: a copy of the original data, stored on a computer for easy access.

-- Wikipedia

Caching is a key technique for making a system respond quickly: data saved now so it can be reused later. It straddles application development and system development, is something product managers rarely account for, and shows up as a non-functional constraint in technical architecture design.

I know about application development, but what is this "system cache", brother Zha?
Don't worry, read on.

What is a multi-level cache architecture?

As the name implies, a caching scheme is built along multiple dimensions, because caching means different things in different scenarios and the technical means used differ as well.

Classified by the form in which the cache exists:

  1. Hardware cache (e.g. CPU cache, disk cache)
  2. Operating system cache
  3. Software cache

What is the system cache?

An operating system is the program that manages a computer's hardware and software resources. The speed of hardware and software is largely determined by caches: broadly, the larger the cache, the faster the corresponding hardware behaves. The system cache is therefore what speeds things up when the operating system accesses hardware resources (memory, files, etc.) and runs applications.

Conclusion: the cached portions involved when the operating system accesses memory and other resources can be regarded as the system cache.


Software runs on top of the operating system, and a program must be loaded into memory. However, the memory a program operates on goes through the virtual memory mapping mechanism; it does not manipulate physical memory directly. Virtual memory tracks the related resources through a block table (a table of memory blocks).

Note: physical memory is made up of many tiny cells, the smallest units of memory management. Each cell holds 8 small capacitors storing 8 bits, i.e. 1 byte.
Sounds a bit like a disk block, right? ^_^ Zha Zha Hui knows what you're thinking.

To improve access speed, the address mapping mechanism adds a small-capacity associative register: the block table.

It holds the page numbers of the few most frequently accessed active pages. When data needs to be accessed, the logical page number is looked up in the block table to find the corresponding memory block number, which is then combined with the in-page offset to form the physical address.

Summary: when reading data, first take the logical page number, look up its memory block number in the block table, then combine that with the in-page offset to get the physical memory address.

If the logical page number is not present in the block table, address mapping can still go through the page table in memory; the block number obtained there is then written into a free slot of the block table. If the block table has no free slot, one row is evicted according to the replacement algorithm and the new page number and block number are filled in.
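To make that lookup flow concrete, here is a minimal TypeScript simulation of it: consult the block table first, fall back to the page table on a miss, and evict a row when the block table is full. The 4 KB page size and FIFO eviction are assumptions chosen for illustration, not necessarily what real hardware does.

```typescript
// Simulated block-table (fast-table) lookup: hit in the block table, or fall
// back to the page table and refill, evicting an old row if necessary.
const PAGE_SIZE = 4096; // assumed page size for illustration

class BlockTable {
  private rows = new Map<number, number>(); // logical page number -> memory block number

  constructor(private capacity: number, private pageTable: Map<number, number>) {}

  translate(logicalPage: number, offset: number): number {
    let block = this.rows.get(logicalPage);
    if (block === undefined) {
      // Miss: go through the full page table kept in memory.
      block = this.pageTable.get(logicalPage);
      if (block === undefined) throw new Error("page fault: page not in memory");
      if (this.rows.size >= this.capacity) {
        // No free slot: evict one row (FIFO here, purely for illustration).
        const victim = this.rows.keys().next().value as number;
        this.rows.delete(victim);
      }
      this.rows.set(logicalPage, block); // fill in the new page/block pair
    }
    // Physical address = block number combined with the in-page offset.
    return block * PAGE_SIZE + offset;
  }
}

// Usage: page 3 lives in block 7, page 5 in block 1; the block table holds 2 rows.
const tlb = new BlockTable(2, new Map([[3, 7], [5, 1]]));
console.log(tlb.translate(3, 128)); // 7 * 4096 + 128 = 28800
```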

I remember computers access their caches by proximity. So what is the priority order?

Caching picks the most suitable storage by speed: the closer a memory is to the CPU, the faster it is, the higher its cost per byte, and therefore the smaller its capacity.

The hierarchy, from fastest to slowest: registers (closest to the CPU and fastest), CPU caches (themselves layered into L1, L2, and so on), main memory (ordinary RAM), and local disk.

Day-to-day development mostly deals with software caches, which can be grouped by where they sit in the system:

  • Client cache
  • Network cache
  • Server side cache

A multi-level cache is like a pyramid that narrows from top to bottom, a funnel that filters the traffic. If the vast majority of requests are absorbed where the client and the network interact, the pressure on the back-end services drops dramatically.

Why use a multi-level cache architecture?

Fundamentally, to serve the website with high performance so that users get a better experience, and to buy more performance headroom at a lower cost.

Talk about user experience

The term "user experience" was popularized in the mid-1990s by the designer Donald Norman.

With advances in mobile and graphics technology, human-computer interaction (HCI) has reached almost every field of human activity, which has broadened how systems are evaluated from plain usability to user experience.

User experience has received considerable attention as HCI has developed; it is treated on a par with the three traditional usability metrics (effectiveness, efficiency, and basic subjective satisfaction), and in some respects as even more important.

What is user experience?

ISO 9241-210 defines user experience as "a person's perceptions and responses resulting from the use or anticipated use of a product, system or service". User experience is therefore subjective and focused on actual use.

User experience refers to everything a user feels before, during and after using a product or system, including emotions, beliefs, preferences, perceptions, physiological and psychological reactions, behaviors and accomplishments.

The ISO standard also implies that usability can be treated as one aspect of user experience: usability criteria can be used to assess some aspects of it. However, the standard does not spell out the exact relationship between user experience and usability; clearly the two concepts overlap.

Maybe that is why the product side keeps putting us engineers through the wringer, and why we have to understand it at least a little. How about your product manager? Ever feel the urge to hit someone?


Factors affecting user experience

Three kinds of factors influence user experience:

  1. The user's state
  2. System performance
  3. The environment

System performance is the most critical factor in the user experience of a software product. Because the subject experiencing performance is a human being, different people may have different subjective impressions of the same software and look at its performance from different angles.

System performance is a non-functional characteristic: it is concerned not with a specific function, but with how timely that function is performed.

On system performance

System performance indicators generally include response time, latency, throughput, number of concurrent users, resource utilization, and so on.

Response time

Response time is the time the system takes to respond to a user request. It matches people's subjective perception of software performance and captures the processing time of the whole system end to end.

In practice each project sets a concrete target based on its business scenario, for example guaranteeing that a request completes within 100 ms or 200 ms.

How long does it take for your home page to respond?

A system usually provides many functions whose processing logic differs greatly, so their response times differ too; even the same function can take different times for different input data.

So when we say "the response time" of a software system, we usually mean either the average response time across all of its functions or the maximum response time across all of its functions.

Sometimes the average and maximum response times need to be discussed per function or per group of functions.

When discussing software performance, developers care more about the response time of the software itself.

For example, the PHP response time is the time from receiving the request from nginx, through the business processing, to returning the response to nginx. What the user perceives, however, is the time from sending the request to seeing the page.

The former is the response time of the software itself; the latter is the response time of the user's request. They are different viewpoints.

So the user-perceived response time can be split into presentation time and system response time:

  • Presentation time: the time the client needs to render the page after receiving the data, i.e. the page rendering/loading time
  • System response time: the time from when the client sends the request until the server's response reaches the client

The system response time can be further decomposed into network transmission time and application latency:

  • Network transmission time: the time data spends traveling between client and server
  • Application latency: the time the system needs to process the requested service

When we later talk about optimization, we should look at the whole request chain, focusing on presentation time, network transmission time, and application processing time.
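For a web page, the browser's standard Navigation Timing API already exposes rough versions of these pieces. Below is a small sketch you can paste into the console after a page load; the split between "network" and "application" time is only an approximation, since time to first byte mixes the two.

```typescript
// Rough breakdown of user-perceived response time via the Navigation Timing API.
// All values are in milliseconds; run in the browser console after a page load.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];

if (nav) {
  const ttfb = nav.responseStart - nav.requestStart;       // server work + one network trip
  const download = nav.responseEnd - nav.responseStart;    // network transmission of the body
  const presentation = nav.loadEventEnd - nav.responseEnd; // parsing, rendering and loading
  console.log({ ttfb, download, presentation, total: nav.loadEventEnd - nav.startTime });
}
```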

Throughput

Throughput is the number of requests the system processes per unit of time.

The unit of time depends on the project's own planning, but one second is most commonly used, i.e. the number of successful requests per second.

How do I work out the throughput of my website? Help me out here, brother Zha.
To work out throughput, first look at how you convert time and at your traffic profile.

  • By a time window

Suppose the advertising landing page of a publishing system must handle a total of 5,000,000 (500W) visits within 30 minutes.
The average QPS is 5,000,000 / (30 * 60) ≈ 2,778, so plan for roughly 3,000 QPS to leave some headroom.

  • By day

Suppose the front page of a classified-information site averages about 80,000,000 (8000W) PV per day.
Counting a day as roughly 40,000 (4W) active seconds (the night barely counts), the average QPS is 80,000,000 / 40,000 = 2,000, i.e. about 2,000 QPS.

Note:
Users do not use the software around the clock; at night usage is low or zero. It also depends on the business (live streaming, food delivery, and so on), but about 12 hours a day covers the maximum usage time of a typical user.

The average QPS should be calculated from the users' active time and the total traffic; the peak traffic is what you use to work out the maximum QPS.
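As a sanity check, here is the same arithmetic in a small TypeScript helper, using the traffic figures from the two examples above:

```typescript
// Average QPS = total requests / active seconds, rounded up.
function averageQps(totalRequests: number, activeSeconds: number): number {
  return Math.ceil(totalRequests / activeSeconds);
}

// 5,000,000 visits in 30 minutes -> about 2,778 QPS (plan for ~3,000 to keep headroom).
console.log(averageQps(5_000_000, 30 * 60));

// 80,000,000 PV over ~40,000 active seconds -> about 2,000 QPS.
console.log(averageQps(80_000_000, 40_000));
```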

For a non-concurrent application, throughput and response time are inversely proportional; in fact, the throughput is the reciprocal of the response time.

Non-concurrent applications are essentially single-user, stand-alone programs, unlike Internet or mobile Internet products.

Number of concurrent users

The number of concurrent users is the number of users who can use the system's functions normally at the same time; the higher it is, the stronger the processing capacity.

Resource utilization reflects the average occupancy of a resource over a period of time.

By applying caching at every level, from the browser through the web and application servers to the database, the performance of the whole system can be improved dramatically.

For example, when the cache sits closer to the client, fetching content from the cache is quicker than fetching it from the origin server, pages render sooner, and the system feels more responsive. Reusing cached data also greatly reduces the bandwidth users consume, which is a disguised cost saving (where traffic is paid for), and it keeps bandwidth demand low and easier to manage.

So caching reduces the system's response time by cutting both the network transmission time and the application latency, which in turn raises throughput and the number of concurrent users the system can support.
Caching also minimizes the system's workload: the same data does not have to be fetched from the data source over and over, and a piece of data created once and served from the cache makes better use of system resources.

Caching is therefore a common and effective means of system tuning; whether in an operating system or an application, caching strategies are everywhere. "Cache is king" essentially means that system performance is king, and for users, experience is king.

Website architecture cache evolution

Start

The initial website may be a single physical host in an IDC, or a rented cloud server, running only the application server and the database. This is how the LAMP stack (Linux, Apache, MySQL, PHP) became popular.

Development

Once the site has some distinctive features it attracts users, and gradually the pressure on the system grows and responses get slower and slower. The obvious culprits are the database and the application competing for the same machine, so the application server and the database server are separated onto two physical machines so that they no longer affect each other and can support more traffic.

Medium term

As more and more people visit the site, the response speed slows down again: there are too many database operations and fierce contention for database connections, so caching enters the picture.

It is easy to see why the database is usually the first thing to optimize: without a cache every request goes straight to the database, the database is where all the data is concentrated, and queries also involve disk I/O. You can imagine the pressure.

To use caching to reduce contention for database connections and the read pressure on the database, you can choose from the following:

  • Static page caching: this reduces the pressure on the web server and the contention for database connections without modifying the application.
  • Dynamic caching: cache the relatively static parts of dynamic pages, for example with a page-fragment caching strategy (dynamic caching configured in nginx or Apache).

Static caching mostly concerns static resources and the browser. Dynamic caching generates a cache file after a page has been visited and serves it to subsequent requests, a bit like a template engine; a rough sketch of the idea follows below.
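In practice this is usually configured in nginx or Apache, as noted above, but the idea itself is simple. Here is a minimal Node/TypeScript sketch of "render once, then serve the generated file", where renderPage() is a hypothetical stand-in for the real, expensive page rendering:

```typescript
// "Dynamic cache" in miniature: render the page once, write it to a cache file,
// and serve the file to every later request for the same URL.
import { createServer } from "http";
import { existsSync, readFileSync, writeFileSync } from "fs";

// Hypothetical stand-in for an expensive dynamic render (templates, DB queries, ...).
function renderPage(url: string): string {
  return `<html><body>rendered ${url} at ${new Date().toISOString()}</body></html>`;
}

createServer((req, res) => {
  const url = req.url ?? "/";
  const cacheFile = "./cache_" + encodeURIComponent(url) + ".html";

  let html: string;
  if (existsSync(cacheFile)) {
    html = readFileSync(cacheFile, "utf8"); // later visits: serve the generated file
  } else {
    html = renderPage(url);                 // first visit: render dynamically
    writeFileSync(cacheFile, html);         // and persist the result for reuse
  }
  res.setHeader("Content-Type", "text/html");
  res.end(html);
}).listen(8080);
```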

What about my database?

At this point traffic keeps rising, mostly reads, though write requests increase too. The performance bottleneck is still the database's reads; writes are not yet a real threat. When write capacity does run short, the only way out is to scale out to more instances.

High growth period

With the continuous increase of traffic, the system starts to slow down again. What to do?

Use a data cache inside the system: data that is fetched repeatedly is loaded from the database into local memory, which also reduces the database's load. As traffic grows further, a single application server can no longer bear the load, so more web servers are added.

How do we keep the cached data synchronized across application servers?

Take previously cached user data as an example: at this stage one usually introduces a cache synchronization mechanism, a shared file system, or shared storage. After enjoying a period of rapid traffic growth, the system slows down again.

Next comes database tuning: optimize the database's own caches, then adopt clustering and a sharding strategy (splitting databases and tables).

Sharding rules are complex, so consider introducing a general framework, a data access layer (DAL), to handle access to the split databases and tables.

  • Cache synchronization mechanism: each web server keeps its own cache, and a synchronization mechanism keeps the data consistent between them.
  • Shared storage: two or more nodes share the same storage for cached data, for example Redis used as a shared cache.
  • Shared file system: the file systems of two machines are tied together so that one host can use a remote machine's file system as if it were local, e.g. Samba or NFS.

Later stage

At this stage, problems surface in the earlier cache synchronization scheme: with a large volume of data it is no longer feasible to store the cache locally and then synchronize it, because the synchronization lag increases response time, causes data inconsistency, and couples the cache to the database.
So the distributed cache finally arrives, and the bulk of the cached data is moved into it.

What is wrong with shared file systems or shared storage?

  • Shared storage: when many services hit a single storage instance, that instance easily runs out of capacity, and concurrent reads and writes can leave the cache and the data inconsistent.
  • Shared file system: with many services the file I/O overhead becomes too high and performance degrades.

Final

By now the system has entered the large-website stage of scaling out almost without limit: whenever traffic grows, the answer is to keep adding web servers, database servers, and cache servers. The architecture of a large website at this point is shown in the figure.
[Figure: system architecture of a large website with distributed caching]

Throughout the evolution of website architecture, caching is repeatedly the cure that relieves the pain, which proves once again that cache is king.

Client-side cache implementations

Client-side caching is simpler than the other kinds and is usually used together with server-side and network-side applications or caches.

In Internet applications it falls into two categories, depending on the kind of application:

  • B/S (browser/server) applications: page caching and browser caching
  • Mobile applications: the cache used by the app itself

Page caching

What is page caching?
Page caching has two meanings:

  • The client caches some or all of the elements of the page itself; this is the offline application cache.
  • The server caches the static or dynamic pages it generates so that clients can reuse them; this is the page's own caching.

Page caching saves previously rendered pages as static files; when a user visits again, the network round trip can be avoided, which reduces load and improves performance and the user experience.

With the rise of the single-page application (SPA), and since HTML5 supports offline caching and local storage, explicit page caching can be ignored for most B/S applications.

The method of using local cache in HTML5 is as follows:

```javascript
localStorage.setItem("myKey", "zhazhahui"); // store a value under a key
localStorage.getItem("myKey");              // read it back -> "zhazhahui"
localStorage.removeItem("myKey");           // delete a single key
localStorage.clear();                       // clear everything for this origin
```
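localStorage itself has no notion of expiry, so if the stored copy should behave like a cache, a thin wrapper has to add one. A small sketch follows; the key name and the one-minute TTL are just examples:

```typescript
// localStorage with a time-to-live: store the value together with its expiry,
// and treat anything past its expiry as a cache miss.
function setWithTTL(key: string, value: string, ttlMs: number): void {
  localStorage.setItem(key, JSON.stringify({ value, expires: Date.now() + ttlMs }));
}

function getWithTTL(key: string): string | null {
  const raw = localStorage.getItem(key);
  if (!raw) return null;
  const { value, expires } = JSON.parse(raw);
  if (Date.now() > expires) {       // stale copy: drop it and report a miss
    localStorage.removeItem(key);
    return null;
  }
  return value;
}

setWithTTL("myKey", "zhazhahui", 60_000); // keep for one minute
console.log(getWithTTL("myKey"));         // "zhazhahui" within the minute, null after
```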

What is a single-page application?

An SPA is a web design technique that uses a single page and manipulates the DOM with JavaScript to build the whole application. In this mode the system loads its resources only once; subsequent interaction and data exchange go through routing and Ajax, and the page is never fully refreshed.

A common routing form looks like: http://xxx/shell.html#page1
This is most obvious with frameworks such as Vue; mall campaign pages and login pages are typical SPA use cases.


HTML5 provides an offline application caching mechanism that makes web applications usable offline (accessible without a network). The mechanism has been widely supported by browsers, so it could be used to speed up page access. The steps to enable offline caching are:

  1. Prepare a manifest file listing the resources the page needs to cache.

Note: the manifest file must be served with the correct MIME type, text/cache-manifest, which has to be configured on the web server.

For example, in nginx add a mapping for the manifest extension to the mime.types file:

    text/cache-manifest            manifest;

  2. Add the manifest attribute to the <html> tag of every page that should work offline, pointing at the cache manifest file. The workflow of offline caching is shown in the figure.

Note: the manifest attribute of the <html> tag has since been deprecated; Service Worker-based tooling can be used instead.

[Figure: offline application cache workflow]

It can be seen from the figure that:

  1. When the browser visits a document that declares a manifest and no application cache exists yet, it loads the document, fetches all the files listed in the manifest, and creates the initial cache.
  2. On later visits to the document, the browser loads the page and the resources listed in the manifest directly from the application cache. At the same time it fires a 'checking' event on the window.applicationCache object and fetches the manifest again.
  3. If the cached copy of the manifest is still up to date, the browser fires a 'noupdate' event on window.applicationCache and ends the update process. Whenever a cached resource changes on the server, the manifest file must be changed as well, so that the browser knows to fetch the resources again.
  4. If the manifest has changed, all the files it lists are fetched into a temporary cache, and for each file added the browser fires a 'progress' event on window.applicationCache.
  5. Once every file has been fetched successfully, they are moved into the real offline cache and the browser fires an event saying the cache is ready ('cached' on the first load, 'updateready' on updates). Since the document itself was already loaded from the old cache, the updated version will not be rendered until the page is reloaded.

Note: the resource URLs listed in the manifest must use the same scheme as the manifest itself; see the relevant W3C standard documents for details.
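The events mentioned in the steps above can be observed directly on window.applicationCache. Since the Application Cache API is deprecated (Service Workers have replaced it), the snippet below is only an illustration of the flow and uses a loose cast to keep the TypeScript compiler quiet:

```typescript
// Observing the offline-cache workflow (deprecated API, illustration only).
const appCache = (window as any).applicationCache;

appCache.addEventListener("checking", () => console.log("re-fetching the manifest"));
appCache.addEventListener("noupdate", () => console.log("manifest unchanged, keep the current cache"));
appCache.addEventListener("downloading", () => console.log("manifest changed, filling a temporary cache"));
appCache.addEventListener("progress", () => console.log("one more file added to the temporary cache"));
appCache.addEventListener("cached", () => console.log("initial offline cache is complete"));
appCache.addEventListener("updateready", () => {
  appCache.swapCache();     // promote the freshly downloaded cache
  window.location.reload(); // reload so the updated document actually renders
});
```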


Browser cache

Browser caching works by a set of rules agreed with the server. The rules are simple: check whether the local copy is still up to date, usually at most once per session.

The browser sets aside space on disk to store copies of resources as its cache.

The browser cache kicks in when the user presses Back or clicks a link they have seen before. If the same image is requested again, it can be served from the browser cache and appears almost instantly.

HTTP/1.0

For browsers, HTTP/1.0 provides some basic caching features:

  • Expires: set on the server side to tell the client how long its copy stays valid before the file must be requested again.
  • If-Modified-Since: a request header used for conditional validation of the cached copy.
  • Last-Modified: a response header with which the web container tells the client when the resource was last modified.

On each request the client first checks whether its cached copy is still valid. If it has expired, it sends the Last-Modified time from the previous response back to the server as the If-Modified-Since value. If the file has not changed, the server replies with 304 Not Modified and an empty body; on receiving the 304, the client keeps using its cached copy.

Why does the client send an HTTP request at all to find out whether the file changed?

If the cached resource is still valid, the client does read its cache directly without sending any HTTP request. But there are cases to watch for: when the user presses F5 or clicks the refresh button, a request is sent even for a URI whose Expires has not yet passed, so Last-Modified is needed for validation.

Besides, client and server clocks may differ, so the copy can look expired on the client while the server still considers it fresh; with a validation step the response can at least skip the body. Or the copy has simply expired while the resource did not actually change; then, too, the server needs a way to check whether the file changed.
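Here is a minimal Node/TypeScript sketch of this HTTP/1.0-style exchange: the server sends Expires and Last-Modified, and answers a matching If-Modified-Since with 304 and an empty body. The file path, port, and one-minute Expires are arbitrary choices, and the exact string comparison is a simplification of real date handling:

```typescript
// HTTP/1.0-style caching: Expires + Last-Modified, with 304 for a matching
// If-Modified-Since. Exact string equality stands in for real date comparison.
import { createServer } from "http";
import { statSync, readFileSync } from "fs";

const FILE = "./logo.png"; // illustrative path

createServer((req, res) => {
  const lastModified = statSync(FILE).mtime.toUTCString();

  if (req.headers["if-modified-since"] === lastModified) {
    res.writeHead(304);          // not modified: empty body, client reuses its copy
    res.end();
    return;
  }
  res.writeHead(200, {
    "Last-Modified": lastModified,
    "Expires": new Date(Date.now() + 60_000).toUTCString(), // fresh for one minute
    "Content-Type": "image/png",
  });
  res.end(readFileSync(FILE));
}).listen(8080);
```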


HTTP/1.1

HTTP/1.1 greatly enhances and formalizes the caching model, introducing entity tags (ETag) and Cache-Control.

  • ETag: a unique identifier for a file or object. Subsequent requests carry it so the server can tell whether the file has been updated.
  • Cache-Control: relative expiration, counted from the moment the client receives the response, i.e. how many seconds the copy stays fresh. Its main directives are:

    • max-age: how long (in seconds) the response may be cached;
    • s-maxage: like max-age, but only for shared (proxy) caches;
    • public: the response may be stored by any cache;
    • private: the response is intended for a single user and must not be stored by proxy caches;
    • no-cache: forces the client to revalidate with the server on every request; the server checks whether the resource changed and returns either the new content or 304;
    • no-store: disables caching entirely.
  • If-None-Match: once the server has sent an ETag, the client sends If-None-Match with that ETag value on later requests so the server can decide whether the data needs to be sent again.

Take ETag in a web browser as an example, as shown in the figure below.

[Figure: ETag in action in a web browser]

When Last-Modified/ETag is configured and the browser requests the same URI again, it asks the server whether the file has been modified. If not, the server returns 304 Not Modified with an empty body and the browser reads the data from its local cache; if the data has changed, the full content is sent back to the browser.
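And a matching sketch of the HTTP/1.1 variant: a content-hash ETag plus Cache-Control, answering If-None-Match with 304 when the hash is unchanged. Hashing the whole body on every request is only for illustration; real servers usually precompute the validator:

```typescript
// HTTP/1.1-style caching: Cache-Control plus a content-hash ETag, with 304
// for a matching If-None-Match.
import { createServer } from "http";
import { createHash } from "crypto";
import { readFileSync } from "fs";

const FILE = "./page.html"; // illustrative path

createServer((req, res) => {
  const body = readFileSync(FILE);
  const etag = '"' + createHash("md5").update(body).digest("hex") + '"';

  if (req.headers["if-none-match"] === etag) {
    res.writeHead(304, { ETag: etag }); // unchanged: the browser keeps its copy
    res.end();
    return;
  }
  res.writeHead(200, {
    ETag: etag,
    "Cache-Control": "public, max-age=60", // fresh for 60 s without revalidating
    "Content-Type": "text/html",
  });
  res.end(body);
}).listen(8081);
```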

Summary

Last-Modified/ETag and Cache-Control/Expires serve different purposes: the former asks the server each time whether the entity has changed, while the latter simply checks whether the local copy is still within its validity window, in which case no request is sent at all.

  • When both are used together, Cache-Control/Expires has higher priority than Last-Modified/ETag.

While the local copy is still within its validity period according to Cache-Control/Expires, no request is made; only once it expires does the browser contact the server again, carrying Last-Modified or the ETag.

Cache-Control and Expires do the same job: both define the validity period of the current resource and decide whether the browser reads from its cache or sends the request to the server again. Cache-Control simply offers more options and finer-grained settings, and when both are set it takes priority over Expires.


In practice Cache-Control/Expires is used together with Last-Modified/ETag, because even when the server has set a cache lifetime, a user pressing the refresh button makes the browser ignore it and send a request anyway; Last-Modified/ETag then lets the server answer with 304, keeping the response cheap.

By adding a meta tag to the head of an HTML page you can tell the browser not to cache the page, so that every visit pulls it from the server:

    <META HTTP-EQUIV="Pragma" CONTENT="no-cache">

However, only some browsers support this, and caching proxy servers generally do not, because a proxy does not parse the HTML content at all. Browser caching can greatly improve the end user's experience. Users perform all kinds of actions in the browser, such as typing an address or pressing F5 to refresh; their effect on the cache is shown below.

[Figure: how user actions such as entering the address or pressing F5 affect the browser cache]

App-side cache

Although hybrid development has become fashionable and much talked about, the mobile Internet is still largely a world of native applications. Whatever the size of the app, flexible caching not only greatly reduces the pressure on the server but also benefits users through a faster experience. Making the app cache transparent to business components, and updating the cached data in time, are the keys to using an app cache successfully.

What does "transparent to components" mean?

That the caching component has no impact on its callers and needs no maintenance from them: it works out of the box.

Where can an app keep its cache?

An app can cache content in memory, in files, or in a local database (such as SQLite); memory-based caches, however, should be used with caution.

Local database operations

How an app uses a database-backed cache:

  • After a data file is downloaded, its metadata (URL, local path, download time, expiration time, and so on) is stored in the database.
  • On the next request for that URL, the database is queried first; if the entry has not expired, the local file is read from the recorded path, which gives the caching effect (see the sketch below).

Advantage: files can be stored flexibly, which offers good extensibility and can support other features well.
Disadvantage: storing too much metadata eats into storage, so choose which key information to keep according to the business.
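The article's examples are iOS-centric, but the scheme itself is language-neutral. Here is a TypeScript sketch of the idea, with an in-memory Map standing in for the real metadata table and a hypothetical downloadFile() based on Node 18's global fetch:

```typescript
// Database-backed file cache, sketched with a Map in place of the real table
// that would hold (url, path, downloadedAt, expiresAt) rows.
import { readFileSync, writeFileSync } from "fs";

interface CacheRow { path: string; expiresAt: number }
const table = new Map<string, CacheRow>(); // stand-in for an SQLite table

// Hypothetical download helper; assumes Node 18+ where fetch is global.
async function downloadFile(url: string): Promise<Buffer> {
  const res = await fetch(url);
  return Buffer.from(await res.arrayBuffer());
}

async function getCached(url: string, ttlMs: number): Promise<Buffer> {
  const row = table.get(url);
  if (row && Date.now() < row.expiresAt) {
    return readFileSync(row.path);         // hit: read the local file by its recorded path
  }
  const data = await downloadFile(url);    // miss or expired: download again
  const path = "./cache_" + encodeURIComponent(url);
  writeFileSync(path, data);
  table.set(url, { path, expiresAt: Date.now() + ttlMs });
  return data;
}
```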

Pay attention to the cleanup mechanism of a database-backed cache.

File operations

For some interfaces in an app a file cache can be used: read the file's last modification time via the file system API and compare it with the current time to decide whether the copy has expired, which gives the caching effect.

Note, however, that different kinds of files deserve different cache lifetimes. For example:

File type

  • Image files: the cached content rarely changes until it is finally cleaned up, so the app can keep reading images from the cache.
  • Configuration files: their content is likely to be updated, so an acceptable cache lifetime has to be chosen, and what counts as acceptable differs between environments.

Network environment

  • On Wi-Fi the cache lifetime can be a bit shorter: the network is fast and there are no data charges.
  • On mobile data the cache lifetime can be longer, saving data and giving a better experience.
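A small TypeScript sketch combining those two rules: freshness is judged from the file's modification time, with a TTL that depends on the file type and the network environment. The concrete TTL values are illustrative only:

```typescript
// File-cache freshness from the file's mtime, with a TTL that depends on the
// kind of file and the current network. TTL values are illustrative.
import { existsSync, statSync } from "fs";

type FileKind = "image" | "config";
type Network = "wifi" | "cellular";

function ttlMs(kind: FileKind, net: Network): number {
  if (kind === "image") return Infinity;              // images are kept until cleanup
  return net === "wifi" ? 5 * 60_000 : 60 * 60_000;   // configs: shorter TTL on Wi-Fi
}

function isFresh(path: string, kind: FileKind, net: Network): boolean {
  if (!existsSync(path)) return false;
  const age = Date.now() - statSync(path).mtimeMs;    // how old is the cached file?
  return age < ttlMs(kind, net);
}

console.log(isFresh("./config.json", "config", "cellular"));
```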

In iOS development, SDWebImage is an excellent image caching framework. The structure of its main classes is shown below.
[Figure: SDWebImage main class structure]

SDWebImage is a fairly large library. It provides a UIImageView category for loading images from the network, with cache management, asynchronous downloading, and deduplication of downloads for the same URL. To use it, simply import the header
    #import "UIImageView+WebCache.h"
and call the asynchronous loading method:

    - (void)setImageWithURL:(NSURL *)url placeholderImage:(UIImage *)placeholder options:(SDWebImageOptions)options;

  • url is the address of the image;
  • placeholderImage is the image shown while the network image has not yet loaded;
  • options are the relevant SDWebImageOptions flags.

By default SDWebImage ignores the cache headers and stores the image keyed by its URL, so URL and image map one-to-one; when the app requests the same URL again, SDWebImage serves it from the cache. Pass SDWebImageRefreshCached as the options parameter to refresh the image. For example:

    NSURL *url = [NSURL URLWithString:@"http://www.zhazhahui.com/image.png"];
    UIImage *defaultImage = [UIImage imageNamed:@"zhazhahui.png"];
    
    [self.imageView setImageWithURL:url placeholderImage:defaultImage options:SDWebImageRefreshCached]; 

SDWebImage has two kinds of cache:

  • Disk cache
  • Memory cache

The framework provides the corresponding cleanup methods:

    [[[SDWebImageManager sharedManager] imageCache] clearDisk]; 
    [[[SDWebImageManager sharedManager] imageCache] clearMemory];

Note that since iOS 7 the caching mechanism changed: the two methods above clear only SDWebImage's own cache, not the system's URL cache, so you can add the following line to the code that cleans the cache:

    [[NSURLCache sharedURLCache] removeAllCachedResponses];

Finally, to sum up:

  • By the form in which it exists, cache divides into three types: hardware, operating system, and software.
  • Software caches divide by location into client, network, and server-side (distributed) caches.
  • User experience covers everything a user feels before, during, and after using a product or system.
  • The main performance indicators include response time, throughput, number of concurrent users, and resource utilization.
  • Website architecture goes through the start, development, medium-term, high-growth, late, and final stages.
  • Client-side caching divides into page caching, browser caching, and app caching.

That is all for this article, but the topic is far from finished; this is still just the appetizer. If it helped, please follow and share.

To get the outline of this series for free, just reply "distributed cache" to the [lotus boy Nezha] official account!

A few mutterings

The outline was actually written long ago, but I kept wondering how to present the content and the details of the principles clearly.

This article took me a lot of time. Ah well, a noob is a noob; Zha Zha Hui won't make excuses... Zha is Zha.

Recently Zha Zha Hui started a technology discussion group themed [feast of knowledge]: every week we tackle one hard problem together and squeeze out time for self-improvement, and not only on technical topics!

Interested readers can scan Zha Zha Hui's WeChat QR code and note "add group". Word is the people inside are friendly, and they are all talented.


If you spot a mistake or something you do not understand while reading, please leave a comment below; Zha Zha Hui replies to every one.

If this helped, feel free to follow, share and bookmark, and search WeChat for [lotus boy Nezha].
