An analysis of caching in Ruby China


First of all, here is the New Relic report.

Average response time over the last 24 hours

Pages (actions) with the highest traffic

Response times of several high-traffic actions:


UsersController#show (the worst, mainly because GitHub API requests are slow)

PS: shortly before publishing this article, I made a small change: GitHub API requests are now handled in a background queue. The new result is as follows:



As the report above shows, the response time of Ruby China's backend requests, excluding the user profile page, is currently within 100ms or even lower.

How do we do it?

Markdown cache
Fragment Cache
Data cache
Static resource cache (JS, CSS, images)
Markdown cache

When content is modified, the rendered Markdown result is saved to the database as well, so it does not have to be recomputed on every page view.

Note that this result is deliberately stored in the database rather than in the cache:

To keep it persistent, avoiding a large loss of data when memcached is stopped;
To avoid consuming too much cache memory;

class Topic
 # body stores the original Markdown source, for editing
 field :body
 # body_html stores the rendered result, for display
 field :body_html

 before_save :markdown_body

 def markdown_body
  self.body_html = MarkdownTopicConverter.format(self.body) if self.body_changed?
 end
end
Fragment Cache

This is the most widely used caching scheme in Ruby China, and the main reason for the speed improvement.


<% cache([topic, suggest]) do %>
  <%= link_to(topic.replies_count,"#{topic_path(topic)}#reply#{topic.replies_count}",
     :class => "count state_false") %>
 ... content omitted

<% end %>

The topic's cache_key is used to build the fragment cache key: views/topics/{id}-{updated_at}/{suggest parameter}/{template MD5} -> views/topics/19105-20140508153844/false/bc178d556ecaee49971b0e80b3566f12
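As a plain-Ruby sketch (not the Rails internals verbatim), the key above is assembled from the record's cache_key ("topics/&lt;id&gt;-&lt;updated_at&gt;"), the extra values passed to `cache`, and the MD5 digest of the view template:

```ruby
require "digest/md5"

# Sketch of how the fragment cache key is composed; the helper names here
# are illustrative, not Rails method names.
def topic_cache_key(id, updated_at)
  "topics/#{id}-#{updated_at.strftime('%Y%m%d%H%M%S')}"
end

def fragment_cache_key(record_key, suggest, template_source)
  "views/#{record_key}/#{suggest}/#{Digest::MD5.hexdigest(template_source)}"
end

key = fragment_cache_key(
  topic_cache_key(19105, Time.utc(2014, 5, 8, 15, 38, 44)),
  false,
  "<% cache([topic, suggest]) do %> ... <% end %>"
)
# key begins with "views/topics/19105-20140508153844/false/"
```

Because updated_at is part of the key, saving the topic automatically invalidates the old fragment, and changing the template changes the MD5 part.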
In places where the displayed state depends on the user account, we render the complete HTML as-is and control the state via JS, such as the current "like" feature.

<script type="text/javascript">
 var readed_topic_ids = <%= current_user.filter_readed_topics(@topics) %>;
 for (var i = 0; i < readed_topic_ids.length; i++) {
  var topic_id = readed_topic_ids[i];
  $(".topic_" + topic_id + " .right_info .count").addClass("state_true");
 }
</script>

Another example:


 <% cache([reply, "raw:#{@show_raw}"]) do %>
  <div><%= user_avatar_tag(reply.user, :normal) %></div>
  <%= user_name_tag(reply.user) %>
  <%= likeable_tag(reply, :cache => true) %>
  <%= link_to("", edit_topic_reply_path(@topic, reply), :class => "icon small_edit",
     'data-uid' => reply.user_id, :title => "Modify reply") %>
  <%= link_to("", "#", 'data-floor' => floor, 'data-login' => reply.user_login,
     :title => t("topics.reply_this_floor"), :class => "icon small_reply") %>
  <%= sanitize_reply reply.body_html %>
 <% end %>

The reply's cache_key likewise produces: views/replies/202695-201405081517/raw:false/d91dddbcb269f3e0172bf5d0d27e9088

Meanwhile, the complex user-permission controls here are also implemented via JS:

<script type="text/javascript">
  <% if admin? %>
   $("#replies .reply a.edit").css('display','inline-block');
  <% elsif current_user %>
   $("#replies .reply a.edit[data-uid='<%= current_user.id %>']").css('display','inline-block');
  <% end %>
  <% if current_user && !@user_liked_reply_ids.blank? %>
   Topics.checkRepliesLikeStatus([<%= @user_liked_reply_ids.join(",") %>]);
  <% end %>
</script>

Data cache

In fact, most of Ruby China's model queries have no cache at all, because in practice MongoDB's query response time is very fast: most scenarios are within 5ms or even lower.

We only add caching for data queries whose cost is comparatively high, for example fetching GitHub repos:

def github_repos(user_id)
 cache_key = "user:#{user_id}:github_repos"
 items = Rails.cache.read(cache_key)
 if items.blank?
  items = real_fetch_from_github()
  Rails.cache.write(cache_key, items, expires_in: 15.days)
 end
 items
end
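The read-through pattern above can be demonstrated self-contained, with a Hash standing in for Rails.cache (memcached in the real app); real_fetch_from_github is a placeholder name, as in the article:

```ruby
# A Hash standing in for Rails.cache, purely for illustration.
CACHE = {}
FETCHES = { count: 0 }

# Placeholder for the slow GitHub API call.
def real_fetch_from_github
  FETCHES[:count] += 1
  ["rails/rails", "ruby/ruby"] # pretend API result
end

def github_repos(user_id)
  cache_key = "user:#{user_id}:github_repos"
  items = CACHE[cache_key]          # Rails.cache.read(cache_key)
  if items.nil? || items.empty?
    items = real_fetch_from_github  # slow remote call, done at most once per TTL
    CACHE[cache_key] = items        # Rails.cache.write(..., expires_in: 15.days)
  end
  items
end
```

In a real Rails app, `Rails.cache.fetch(cache_key, expires_in: 15.days) { ... }` wraps this read-miss-write cycle in a single call.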

Etag

Etag is an HTTP request/response header that can be used to detect whether content has changed, reducing network overhead.

The process looks like this:

Rails' fresh_when method can generate the Etag information from your query content:

def show
 @topic = Topic.find(params[:id])

 fresh_when(etag: [@topic])
end
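The conditional GET that fresh_when performs can be sketched in plain Ruby: the server derives an ETag from the record, and if the client's If-None-Match header matches it, the server answers 304 with no body (the method names below are illustrative, not Rails internals):

```ruby
require "digest/md5"

# Derive a quoted ETag from a record's cache_key, as fresh_when does
# (the exact digesting scheme here is a simplification).
def etag_for(record_cache_key)
  %("#{Digest::MD5.hexdigest(record_cache_key)}")
end

# Simulate one request; returns [status, body, etag].
def respond(record_cache_key, if_none_match = nil)
  etag = etag_for(record_cache_key)
  if if_none_match == etag
    [304, nil, etag]              # Not Modified: client reuses its cached copy
  else
    [200, "rendered page", etag]  # full render; ETag sent to the client
  end
end
```

Because the ETag is built from updated_at, editing the topic changes the ETag and the next request gets a fresh 200 response.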

Static resource cache

Don't underestimate this part: no matter how fast your backend is, the page can still be dragged down by these assets (performance in the browser)!

1. Make good use of the Rails asset pipeline, and be sure to enable it!

# config/environments/production.rb
config.assets.digest = true
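What `digest = true` buys you can be sketched in plain Ruby: a fingerprint of the file contents is embedded in the filename, so the asset can safely be cached "forever" because any change to the contents yields a brand-new URL (the helper name below is illustrative):

```ruby
require "digest/md5"

# Sketch of asset fingerprinting: embed an MD5 of the contents in the name,
# e.g. "application.css" -> "application-<32 hex chars>.css".
def digest_path(logical_path, contents)
  ext = File.extname(logical_path)
  base = logical_path.delete_suffix(ext)
  "#{base}-#{Digest::MD5.hexdigest(contents)}#{ext}"
end

digest_path("application.css", "body { color: #333; }")
```

This is exactly what makes the `expires max` setting in the next step safe.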

2. In Nginx, set the cache expiry of CSS, JS and images to max:

location ~ (/assets|/favicon.ico|/*.txt) {
 access_log    off;
 expires      max;
 gzip_static on;
}

3. Reduce the number of JS, CSS and image files per page as much as possible. The simple way is to merge them, which reduces HTTP request overhead. Ruby China's pages load only two:

 <link href="//" rel="stylesheet" />
 <script src="//"></script>

Some Tips

Check the statistics logs and give priority to pages with high traffic;
updated_at is a very good helper for expiring caches, so make good use of it, and don't forget to touch it when modifying data!
Pay attention to the query times in your Rails log. A page response time under 100ms is a good state; above 200ms users start to feel sluggishness.
