Details of CentOS 5.x system kernel optimization

Time: 2019-10-20

This article analyzes each item in /etc/sysctl.conf in detail. The content was collected and organized from around the web to make it easier to learn and understand.

System optimization items:

kernel.sysrq = 0

#The SysRq key combination allows inspecting the current state of the system from the console. It is set to 0 (disabled) for security.
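
As a sketch of how to check or change this at runtime (via procfs; such a change does not survive a reboot unless it is also written to /etc/sysctl.conf):

cat /proc/sys/kernel/sysrq        # show the current value
echo 0 > /proc/sys/kernel/sysrq   # disable the magic SysRq key immediately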

kernel.core_uses_pid = 1

#Controls whether the PID is appended to the core dump file name as an extension

kernel.msgmnb = 65536

#Maximum total size of a single message queue, in bytes

kernel.msgmni = 16

#Maximum number of message queues system-wide; increase as needed.

kernel.msgmax = 65536

#Maximum size of a single message, in bytes
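
The message queue limits currently in force can be read back directly (a sketch; ipcs is part of util-linux and present on CentOS 5):

ipcs -l                       # all System V IPC limits, printed with labels
cat /proc/sys/kernel/msgmnb   # bytes per queue
cat /proc/sys/kernel/msgmni   # number of queues
cat /proc/sys/kernel/msgmax   # bytes per message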

kernel.shmmax = 68719476736

#Maximum size (in bytes) of a single shared memory segment

kernel.shmall = 4294967296

#Total amount of shared memory available system-wide (unit: pages; 1 page = 4 KB on x86)
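
A rough consistency check between shmmax and shmall (a sketch assuming a 4 KB page size; here shmall = 4294967296 pages, about 16 TB, comfortably covers the 64 GB shmmax):

PAGE_SIZE=`getconf PAGE_SIZE`          # usually 4096
SHMMAX=`cat /proc/sys/kernel/shmmax`   # per-segment limit in bytes
echo "shmmax in pages: $((SHMMAX / PAGE_SIZE))"
cat /proc/sys/kernel/shmall            # total limit in pages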

kernel.shmmni = 4096

#Maximum number of shared memory segment identifiers system-wide; here set to 4096

kernel.sem = 250 32000 100 128

or kernel.sem = 5010 641280 5010 128

#SEMMSL (maximum number of semaphores per semaphore set), SEMMNS (maximum number of semaphores system-wide), SEMOPM (maximum number of operations per semop call), SEMMNI (maximum number of semaphore sets system-wide)
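
The four fields can be read back in the same order (SEMMSL, SEMMNS, SEMOPM, SEMMNI), for example:

cat /proc/sys/kernel/sem   # e.g. "250  32000  100  128"
ipcs -ls                   # the same limits, printed with labels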

fs.aio-max-nr = 65536 (or 1048576, or 3145728)

#System-wide limit on the number of concurrent asynchronous I/O requests. Use a larger value when the system performs heavy, sustained I/O.

fs.aio-max-size = 131072  

#Maximum size of a single asynchronous I/O request

fs.file-max = 65536        

#System-wide maximum number of open file handles
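
To judge whether this ceiling is actually being approached, compare it against the live handle counters:

cat /proc/sys/fs/file-nr   # allocated handles, free handles, file-max
# If the first number approaches the third, raise fs.file-max.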

net.core.wmem_default = 8388608

#Default size (in bytes) of the socket send buffer

net.core.wmem_max = 16777216

#Maximum amount of memory (in bytes) that may be allocated to a socket send buffer

net.core.rmem_default = 8388608

#Default size (in bytes) of the socket receive buffer

net.core.rmem_max = 16777216

#Maximum amount of memory (in bytes) that may be allocated to a socket receive buffer

net.core.somaxconn = 262144

#Upper limit on the backlog argument of listen(); caps the queue of connections waiting to be accepted
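
Because the kernel silently truncates any larger listen() backlog to this cap, it is worth checking the cap before tuning the application side:

/sbin/sysctl net.core.somaxconn    # current accept-queue cap
cat /proc/sys/net/core/somaxconn   # same value via procfs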

Network optimization:

net.ipv4.ip_forward = 0

#Disable IP packet forwarding (this host does not act as a router)

net.ipv4.tcp_syncookies = 1

#Enable SYN cookies, which protect against SYN flood attacks when the SYN queue overflows

net.ipv4.conf.default.rp_filter = 1

#Enable reverse path filtering (source address validation)

net.ipv4.conf.default.accept_source_route = 0

#Reject all source-routed IP packets

net.ipv4.route.gc_timeout = 100

#Garbage collection interval for the route cache; it determines how long it takes to fail over to another route after one fails. The default is 300.

net.ipv4.ip_local_port_range = 1024 65000

#Range of local ports used for outbound connections. The default range (32768 to 61000) is rather small; it is widened here to 1024 to 65000.
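
The active range can be read back, and a rough count of connections shows how much headroom remains (a sketch using tools shipped with CentOS 5):

cat /proc/sys/net/ipv4/ip_local_port_range   # e.g. "1024  65000"
netstat -ant | grep -c ESTABLISHED           # rough count of ports in use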

net.ipv4.tcp_max_tw_buckets = 6000

#Maximum number of TIME-WAIT sockets the system keeps at the same time. Beyond this number, TIME-WAIT sockets are cleared immediately and a warning is printed. The default is 180000.
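
Before relying on this limit, check whether TIME-WAIT sockets are actually piling up; a quick tally by connection state:

netstat -ant | awk 'NR>2 {print $6}' | sort | uniq -c | sort -rn
# A large TIME_WAIT count here is what tcp_max_tw_buckets guards against.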

net.ipv4.tcp_sack = 1

#Selective acknowledgment (SACK). On high-latency connections SACK is particularly important for making full use of the available bandwidth, since high latency means many packets are awaiting acknowledgment at any given time. In Linux these packets sit in the retransmission queue until they are acknowledged or no longer needed; they are ordered by sequence number but not indexed, so when a received SACK option has to be processed, the TCP stack must search the retransmission queue for the affected packets, and the longer the queue, the more expensive the search. SACK therefore has a significant performance benefit on high bandwidth-delay connections, but it can also be disabled without sacrificing interoperability: set the value to 0 to turn SACK off in the TCP stack.

net.core.netdev_max_backlog = 262144

#Maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them

net.ipv4.tcp_window_scaling = 1

#TCP window scaling support. Set to 1 if the maximum TCP window needs to exceed 65535 bytes (64 KB). Window scaling is a newer TCP option, so for compatibility between old and new implementations the following conventions apply: 1. only the first SYN of the active opener may carry a window scale factor; 2. when the passive opener receives a SYN carrying the option, it sends its own scale factor if it supports the option, and otherwise ignores it; 3. only if both sides support the option is window scaling used for the subsequent data transfer. If the peer does not support wscale, the negotiation above already handles it; if the peer does support it, keeping wscale enabled allows large windows and higher throughput, so throughput problems should not be "solved" by turning wscale off. Disabling wscale is only a last-resort workaround for broken implementations.

net.ipv4.tcp_rmem = 4096 87380 4194304

#TCP receive (read) buffer: minimum, default, and maximum size in bytes

net.ipv4.tcp_wmem = 4096 16384 4194304

#TCP send (write) buffer: minimum, default, and maximum size in bytes
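
Each triple is min, default, max in bytes; the kernel auto-tunes every connection's buffer within [min, max], starting from the default. A sketch of applying and reading the values back at runtime:

/sbin/sysctl -w net.ipv4.tcp_rmem="4096 87380 4194304"
/sbin/sysctl -w net.ipv4.tcp_wmem="4096 16384 4194304"
/sbin/sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem   # verify both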

net.ipv4.tcp_max_orphans = 3276800

#Maximum number of TCP sockets in the system that are not attached to any user file handle (orphans). Beyond this number, orphaned connections are reset immediately and a warning is printed. This limit exists only to prevent simple DoS attacks; do not rely on it excessively or lower it artificially, and increase it if memory allows.

net.ipv4.tcp_max_syn_backlog = 262144

#Length of the SYN queue; the default is 1024. Increasing the queue length (here to 262144) accommodates more connections waiting to complete the handshake.

net.ipv4.tcp_timestamps = 0

#TCP timestamps protect against sequence number wraparound: on a 1 Gbps link, previously used sequence numbers will inevitably reappear, and timestamps let the kernel tell such "abnormal" packets apart and handle them correctly. It is turned off here (set to 0).

net.ipv4.tcp_synack_retries = 1

#To open a connection initiated by the remote end, the kernel sends a SYN+ACK in reply to the earlier SYN; this is the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

#For a new outbound connection, how many SYN packets the kernel sends before giving up. This should not exceed 255; the default is 5.

net.ipv4.tcp_tw_recycle = 1

#Enable fast recycling of TIME-WAIT sockets. Note that fast recycling depends on TCP timestamps (disabled above) and is known to break clients behind NAT, so use it with caution.

net.ipv4.tcp_tw_reuse = 1

#Enable reuse: allow TIME-WAIT sockets to be reused for new TCP connections.

net.ipv4.tcp_mem = 94500000 915000000 927000000

#Below the first value, TCP is under no memory pressure; between the first and second values it enters the memory-pressure stage; above the third value, TCP refuses to allocate new sockets (unit: memory pages).
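
Since the unit is pages rather than bytes, converting makes the scale obvious (a sketch assuming 4 KB pages; 94500000 pages is roughly 360 GB, so these example thresholds are effectively unreachable on typical hardware):

PAGE=`getconf PAGE_SIZE`
for p in 94500000 915000000 927000000
do
    echo "$p pages = $((p * PAGE / 1024 / 1024)) MB"
done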

net.ipv4.tcp_fin_timeout = 1

#If the socket is closed by the local end, this parameter determines how long it stays in the FIN-WAIT-2 state; it is set to 1 second here (the kernel default is 60).

net.ipv4.tcp_keepalive_time = 60

#How long a connection must be idle before TCP starts sending keepalive probes, when keepalive is enabled. The default is 2 hours; changed here to 1 minute.

net.ipv4.tcp_keepalive_probes = 1

net.ipv4.tcp_keepalive_intvl = 2

#With the settings above, a TCP connection that has been idle for 60 seconds triggers a keepalive probe; if that single probe (sent at a 2-second interval) receives no response, the kernel gives up completely and considers the connection dead.
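
The worst-case time to detect a dead peer follows directly from these three values:

# tcp_keepalive_time + tcp_keepalive_probes * tcp_keepalive_intvl
echo $((60 + 1 * 2))   # = 62 seconds until an idle, dead peer is dropped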

Finally, to make the configuration take effect immediately, run the following command:

#/sbin/sysctl -p
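
A quick way to confirm that the values actually took effect (a sketch; tcp_max_syn_backlog is used here as an arbitrary spot check):

/sbin/sysctl net.ipv4.tcp_max_syn_backlog   # read one value back
/sbin/sysctl -a | grep tcp_keepalive        # or scan a group of related keys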

In performance optimization, first set an optimization goal, then find the bottleneck, adjust the parameters, and reach the goal. Finding the bottleneck is the hard part: the scope must be narrowed step by step through many test cases and runs until the bottleneck is finally pinned down. Many parameters have to be adjusted while testing, which takes patience and persistence.

Example (a script that appends the tuning values to /etc/sysctl.conf if they are not already present):

temp=`cat /etc/sysctl.conf | grep -c net.ipv4.tcp_max_syn_backlog`
if [ "$temp" -eq 0 ]
then
    echo "# Add" >> /etc/sysctl.conf
    echo "net.ipv4.tcp_max_syn_backlog = 65536" >> /etc/sysctl.conf
    echo "net.core.netdev_max_backlog = 32768" >> /etc/sysctl.conf
    echo "net.core.somaxconn = 32768" >> /etc/sysctl.conf
    echo "net.core.wmem_default = 8388608" >> /etc/sysctl.conf
    echo "net.core.rmem_default = 8388608" >> /etc/sysctl.conf
    echo "net.core.rmem_max = 16777216" >> /etc/sysctl.conf
    echo "net.core.wmem_max = 16777216" >> /etc/sysctl.conf
    echo "net.ipv4.tcp_timestamps = 0" >> /etc/sysctl.conf
    echo "net.ipv4.tcp_synack_retries = 2" >> /etc/sysctl.conf
    echo "net.ipv4.tcp_syn_retries = 2" >> /etc/sysctl.conf
    echo "net.ipv4.tcp_tw_recycle = 1" >> /etc/sysctl.conf
    #net.ipv4.tcp_tw_len = 1
    echo "net.ipv4.tcp_tw_reuse = 1" >> /etc/sysctl.conf
    echo "net.ipv4.tcp_mem = 94500000 915000000 927000000" >> /etc/sysctl.conf
    echo "net.ipv4.tcp_max_orphans = 3276800" >> /etc/sysctl.conf
    #net.ipv4.tcp_fin_timeout = 30
    #net.ipv4.tcp_keepalive_time = 120
    echo "net.ipv4.ip_local_port_range = 1024 65535" >> /etc/sysctl.conf
fi
