High concurrency implementation of socket.io


Using WebSocket to build a chat room or a real-time interactive game is very convenient, but supporting a large number of online users is not so simple. Once about 300 people were online, connections started to drop. After tuning, things were much better. The following is a record of the improvements:

Adjust the transport mode of socket.io

socket.io is a very powerful framework that helps you build cross-browser real-time applications on top of WebSocket. It supports mainstream browsers, multiple platforms, and multiple transport modes.
There are two main transport modes: 1. polling and 2. websocket.
By default, a connection handshakes over polling and is only then upgraded to websocket. Polling is inefficient, so the configuration can be changed to transmit over the websocket transport directly, which gave much better results:

var io = require('socket.io')({
    // list 'websocket' first so it is preferred over polling
    "transports": ['websocket', 'polling']
});

To determine which transport a given connection is actually using, inspect socket.conn.transport.name on the server; it reports 'polling' or 'websocket', and the underlying connection emits an upgrade event when the transport changes.

Use namespace capabilities

Different namespaces can be used to scope messages to specific groups of clients. For messages that do not need to be received globally, sending them under a namespace can greatly reduce unnecessary traffic.

//Create the /server namespace
var serverIo = io.of('/server').on('connection', function (socket) {

    socket.on('ready', function (roomId, data) {
        // forward the event through redis pub/sub to the /user namespace
        pub.publish(roomId, JSON.stringify({
            "event": 'ready',
            "data": '',
            "namespace": '/user'
        }));
    });

    // (the 'start' listener is reconstructed; the original fragment only showed the publish)
    socket.on('start', function (id) {
        pub.publish(id, JSON.stringify({
            "event": 'button-start',
            "data": '',
            "namespace": '/user'
        }));
    });
});

//Send a message to everyone in a namespace
io.of(namespace).emit('message', message);

Multi-process operation and nginx load balancing

Through the child_process module of Node.js, the program can run as multiple processes and make maximum use of the CPU cores.

Multi process startup:

var fork = require('child_process').fork;
var cpuNum = require('os').cpus().length,
    workerArr = [],
    connectNum = 0;

// start one worker per CPU core, each listening on its own port
for (var i = 0; i < cpuNum; i++) {
    workerArr.push(fork('./shake_server.js', [8000 + i]));
}

process.on('uncaughtException', function (e) {
    console.log('process exception caught: ', e);
});
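On the worker side, each forked shake_server.js instance can read its port from the arguments passed by fork(). A minimal sketch (the real file would also set up the socket.io server on that port):

```javascript
// Read the port passed by the parent via fork('./shake_server.js', [8000 + i]).
// When run directly with no argument, fall back to 8000 for illustration.
var port = parseInt(process.argv[2], 10) || 8000;
console.log('worker listening port would be ' + port);
```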

Then load-balance across the worker ports with nginx:

upstream io_nodes {
    # route each client to the same worker; socket.io handshakes need sticky sessions
    ip_hash;
    # one entry per worker process started above
    server 127.0.0.1:8000;
    server 127.0.0.1:8001;
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen       8080;
    server_name  localhost;
    #charset koi8-r;
    #access_log  logs/host.access.log  main;

    location /html {
        alias   /Users/snail/Documents/myworks/weihuodong/shake;
        index  index.html index.htm;
    }

    location / {
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Forwarded-For  $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_http_version 1.1;
        proxy_pass http://io_nodes;
        proxy_redirect off;
        keepalive_timeout 300s;
    }
}

Adjustment of some system parameters:

1. Modify system parameters:

Official documentation description:
"By default set to 5. Determines how many concurrent sockets the agent can have open per host."
(To reuse connections across HTTP requests, Node.js creates a connection pool in http.Agent with a default size of 5.)
If socket.io is used, modify it as follows: require('http').globalAgent.maxSockets = Infinity;

2. Ulimit adjustment
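On many Linux systems the default per-process limit on open file descriptors is 1024, which directly caps the number of concurrent socket connections. A typical adjustment looks like this (the 65535 value is just an example):

```shell
# show the current soft limit on open file descriptors
ulimit -n
# raise it for the current shell session (example value; requires a high enough hard limit):
#   ulimit -n 65535
# for a permanent change, add nofile entries to /etc/security/limits.conf
```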

Heartbeat detection and disconnection detection

If connections drop automatically, nginx's various timeout parameters can be adjusted. There are many configuration options; consult the documentation as needed.

For example, keepalive_timeout defaults to 75 seconds.

Disconnection can also be handled in program code: a dropped connection triggers the disconnect event, so listen for it and automatically re-create the socket connection.
A heartbeat check simply sends a message at regular intervals to keep the connection alive.

Memory data sharing

Inter-process communication in Node.js is workable, but sharing data through redis is simple and effective.
Create a redis client instance and use it directly.

var redis = require('redis');
// port 35050; the host argument was left blank in the original
var client = redis.createClient(35050, '');

// Read a hash by id and pull out its first key/value pair
// (the method name getFirst is reconstructed from the original fragment)
var redisObj = {
    getFirst: function (id, uid, callback) {
        if (id && uid) {
            client.hgetall(id, function (err, o) {
                if (err || typeof o === 'undefined' || typeof Object.keys(o)[0] === 'undefined') {
                    return callback(err);
                }
                var key = Object.keys(o)[0];
                var value = JSON.stringify(o[key]);
                callback(null, key, value);
            });
        }
    }
};
Sockets are not shared among processes, so we can use redis's subscribe/publish system to route messages across workers.

See the comments in the code for details:

var redis = require("redis");
var sub = redis.createClient(35050, ''), pub = redis.createClient(35050, '');
var msg_count = 0;

//When the subscribe event is triggered, the callback receives two parameters: the subscribed channel and the total subscription count
sub.on("subscribe", function (channel, count) {
    console.log('subscribe event', channel, count);
});

//The message event is triggered by publish. Almost all business logic listens here and tells socket.io to deliver the message
sub.on("message", function (channel, message) {
    console.log('message event');
    console.log("sub channel " + channel + ": " + message);
    // socket.to(channel).emit('nice game', "let's play a game");
    // in the real app the target namespace comes from the JSON payload of the message
    io.of(channel).emit('message', message);

    msg_count += 1;
    if (msg_count === 3) {
        sub.unsubscribe();
        sub.quit();
        pub.quit();
    }
});

//Add three subscriptions
sub.subscribe("channel1");
sub.subscribe("channel2");
sub.subscribe("channel3");

//Trigger subscribers of channel 1
pub.publish("channel1", "I am message to channel1");

//Trigger subscribers of channel 2
pub.publish("channel2", "I am message to channel2");

After testing, a single server handles 1000 to 2000 users without pressure.
The websocket-bench tool is handy for this kind of load testing.