Intelligent monitoring in the Kangaroo Cloud Log application: what has the technology gone through?

Time: 2019-01-20

Author: Dapeng, Backend Development Engineer on the Kangaroo Cloud Log team

Traditional monitoring covers only a narrow range of cases, while intelligent monitoring is far more efficient. So how do you actually use it? Dapeng has a few tips for you.


Traditional monitoring sets a fixed value (a threshold) for each monitored metric. When the metric exceeds the threshold, someone is notified to look at it. This approach generally suits business metrics that fluctuate within a narrow range:

Disk usage and CPU usage are good examples: once the metric exceeds a certain value, the system may be about to fail. But fixed thresholds break down when the normal range itself moves. A bank's transaction volume is high between 09:00 and 18:00 and may be zero at other times; it is moderate on working days and spikes sharply on non-working days. Likewise, a website's click volume is very high during the day and may drop to zero late at night. If such scenarios are monitored with traditional fixed thresholds, the alerts often fail to reflect the real state of the system and the business, producing many false positives, raising labor costs, and eventually leading to alert fatigue and distrust. The sketch below illustrates the problem.
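As a minimal illustration (the metric names and numbers are hypothetical, not taken from Kangaroo Cloud), a fixed threshold that is reasonable for a slowly varying metric misfires on a metric whose normal range depends on the time of day:

```python
# Minimal sketch of fixed-threshold monitoring (hypothetical metric names and values).
# A static threshold works for slowly varying metrics such as disk usage, but
# misfires on metrics whose normal range depends on the time of day.

CLICKS_THRESHOLD = 1000  # alert when clicks per minute exceed 1000

def check_threshold(name, value, threshold):
    """Return an alert message when the metric crosses its fixed threshold."""
    if value > threshold:
        return f"ALERT: {name}={value} exceeded threshold {threshold}"
    return None

# Normal daytime traffic exceeds the threshold -> false positive.
print(check_threshold("clicks_per_minute", 1500, CLICKS_THRESHOLD))
# A late-night drop to zero (a real incident for a 24/7 service) raises nothing.
print(check_threshold("clicks_per_minute", 0, CLICKS_THRESHOLD))
```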

Technical Framework

[Figure: overall architecture of the intelligent monitoring pipeline]

Model trainer: Cloud Log collects business metrics at a fixed frequency, forms them into a time series, and feeds the series to the model trainer. The trainer consists of a series of mathematical models (more can be added dynamically). Each model produces predicted values; the errors between the observed values and the predicted values are compared, and the model that best fits the business is kept. Feeding future time points into this best model yields predicted values, from which the future business curve is drawn.
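A rough sketch of that selection step, assuming each candidate model is simply a function that maps the historical series to fitted values (the two candidates below are illustrative placeholders, not the models the trainer actually uses):

```python
# Sketch of the model-trainer idea: fit every candidate model to the observed
# series, measure the fitting error, and keep the model with the smallest error.
import statistics

def moving_average_model(series, window=3):
    """Predict each point as the mean of the previous `window` points."""
    return [statistics.mean(series[max(0, i - window):i]) if i else series[0]
            for i in range(len(series))]

def last_value_model(series):
    """Naive model: predict each point as the previous observation."""
    return [series[0]] + series[:-1]

def mean_abs_error(observed, predicted):
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def train(observed, candidates):
    """Return the candidate whose predictions best match the observations."""
    errors = {name: mean_abs_error(observed, model(observed))
              for name, model in candidates.items()}
    best = min(errors, key=errors.get)
    return best, errors

observed = [12, 14, 13, 15, 30, 32, 31, 29, 14, 13]
best, errors = train(observed, {"moving_average": moving_average_model,
                                "last_value": last_value_model})
print(best, errors)
```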

Anomaly detector: there is always some error between the predicted values of the trained model and the observed values. This residual series is passed to the anomaly detector, which is also composed of a series of mathematical models (more can be added dynamically). The model whose flagged error points best match the known business anomalies is chosen as the anomaly detection model, and the anomalies it detects afterwards are forwarded to the early-warning system.
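One common residual-based detector is a k-sigma rule learned from residuals collected while the business was healthy; this is only an illustrative candidate for the detector pool, not necessarily what Kangaroo Cloud runs:

```python
# Sketch of one residual-based detector: learn normal-residual bounds from a
# healthy period, then flag points whose prediction error falls outside them.
import statistics

def fit_residual_detector(train_residuals, k=3.0):
    """Learn normal-residual bounds (mean +/- k*std) from historical residuals."""
    mu = statistics.mean(train_residuals)
    sigma = statistics.pstdev(train_residuals)
    return mu - k * sigma, mu + k * sigma

def detect(observed, predicted, bounds):
    """Return indices whose residual falls outside the learned bounds."""
    low, high = bounds
    return [i for i, (o, p) in enumerate(zip(observed, predicted))
            if not (low <= o - p <= high)]

# Residuals collected while the business was known to be healthy.
history_residuals = [0, 1, -1, 2, -2, 1, 0, -1, 1, 0]
bounds = fit_residual_detector(history_residuals)

observed  = [10, 11, 10, 50, 11]    # a spike at index 3
predicted = [10, 10, 11, 11, 11]    # values from the best-fitting model
print(detect(observed, predicted, bounds))   # -> [3]
```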

Time Series Modeling 
The collected time series are not scattered and irregular; they usually change along with the business. Some show strong periodic patterns, and some follow a relatively smooth trend. We need corresponding mathematical models to fit them. Below are some commonly used models.

[Figure: commonly used time series models]
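The model table itself is not reproduced here. As one deliberately simple concrete example (an illustration, not the production model), a seasonal-mean model captures the kind of daily periodicity described above and can forecast future points:

```python
# A seasonal-mean model predicts each point as the average of all past points
# at the same position in the cycle (e.g. the same hour of day). Real
# deployments would use richer models, but the fit/predict shape is the same.
from collections import defaultdict

class SeasonalMeanModel:
    def __init__(self, period):
        self.period = period
        self.means = {}

    def fit(self, series):
        buckets = defaultdict(list)
        for i, value in enumerate(series):
            buckets[i % self.period].append(value)
        self.means = {pos: sum(vals) / len(vals) for pos, vals in buckets.items()}
        return self

    def predict(self, start, steps):
        """Forecast `steps` future points beginning at index `start`."""
        return [self.means[(start + i) % self.period] for i in range(steps)]

# Two "days" of data with a 4-point period, kept short for brevity.
history = [0, 5, 20, 8, 1, 6, 22, 7]
model = SeasonalMeanModel(period=4).fit(history)
print(model.predict(start=len(history), steps=4))   # -> [0.5, 5.5, 21.0, 7.5]
```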

Time series with different characteristics produce quite different errors under different mathematical models. We measure how well each model matches the data using the indicators listed below.

[Figure: indicators for measuring how well a model fits]
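The indicator table itself is not reproduced here; metrics commonly used for this kind of comparison include MAE, RMSE, and MAPE (typical choices that may not match the exact list in the figure):

```python
# Common goodness-of-fit metrics for comparing candidate models against the
# observed series.
import math

def mae(observed, predicted):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def rmse(observed, predicted):
    """Root mean squared error: penalises large deviations more heavily."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / len(observed))

def mape(observed, predicted):
    """Mean absolute percentage error; only defined when observations are non-zero."""
    return sum(abs((o - p) / o) for o, p in zip(observed, predicted)) / len(observed)

observed  = [100, 120, 130, 110]
predicted = [ 98, 125, 128, 115]
print(mae(observed, predicted), rmse(observed, predicted), mape(observed, predicted))
```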

After evaluating the candidate models with the indicators above, we obtain the curve that fits the business best and keep the corresponding model as the best trained model. Feeding future time points into it yields predicted values, from which the prediction curve is drawn.

Anomaly Detection

After forecasting the data at future time points, we also need an anomaly detection model to decide whether the business data are abnormal, as shown in the following table:

[Table: candidate anomaly detection models]

After computing the residual metrics with the models above, the detection model that matches best is selected as the ongoing anomaly detector. When the number of anomalies it detects becomes abnormal, an early warning is sent to the inspector immediately so that problems can be caught before they escalate. A possible shape for that last rule is sketched below.
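A sketch of that last step, assuming a simple rule of the form "raise an early warning when more than N anomalies appear inside a sliding window" (both the rule and the notification hook are hypothetical illustrations, not Kangaroo Cloud's actual policy):

```python
# Hypothetical early-warning rule: a single anomalous point may be noise, but
# several anomalies inside a short window suggest a real incident, so only
# then do we notify the on-call inspector.
from collections import deque

class EarlyWarning:
    def __init__(self, window_size=10, max_anomalies=3, notify=print):
        self.window = deque(maxlen=window_size)
        self.max_anomalies = max_anomalies
        self.notify = notify          # stand-in for the real alerting channel

    def record(self, timestamp, is_anomaly):
        self.window.append(is_anomaly)
        if sum(self.window) > self.max_anomalies:
            self.notify(f"[{timestamp}] early warning: "
                        f"{sum(self.window)} anomalies in the last {len(self.window)} points")

ew = EarlyWarning(window_size=5, max_anomalies=2)
for ts, flag in enumerate([0, 1, 0, 1, 1, 1, 0]):
    ew.record(ts, flag)
```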

There is a whole world inside Cloud Log: log analysis is genuinely useful, and monitoring and alerting are all covered. Come and join us now.

Dapeng Lecture Hall, see you next time
