Plumelog distributed log component instruction manual

Time: 2022-4-26

System introduction

  1. A non-intrusive distributed log system that collects logs from log4j, log4j2 and logback, and attaches a link (trace) ID so that related logs can be queried together.
  2. Uses elasticsearch as the query engine.
  3. High throughput and efficient queries.
  4. The whole pipeline uses no local disk space on the application host and is maintenance-free; it is transparent to the project and does not affect how the project itself runs.
  5. Old projects do not need any modification; just add the dependency and use it. Dubbo and Spring Cloud are supported.

Architecture

[Image: Architecture diagram]
  • plumelog-core: the core component, including the log collection client, which collects logs and pushes them to the redis / kafka queue
  • plumelog-server: consumes the logs from the queue and writes them to elasticsearch asynchronously (a conceptual sketch of this flow follows below)
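
As a rough illustration of this flow only (not Plumelog's internal implementation; the queue key and message format below are invented for the example), a producer pushes serialized log events onto a redis list and a consumer drains it for bulk indexing into elasticsearch:

import redis.clients.jedis.Jedis;

public class QueueFlowSketch {
    // Illustrative queue key; Plumelog uses its own internal key and message format.
    private static final String QUEUE_KEY = "demo_log_queue";

    public static void main(String[] args) {
        try (Jedis jedis = new Jedis("127.0.0.1", 6379)) {
            // Collector side (plumelog appender): serialize the log event and push it onto the queue.
            jedis.rpush(QUEUE_KEY,
                    "{\"appName\":\"plumelog\",\"traceId\":\"a1b2c3d\",\"content\":\"order created\"}");

            // Server side (plumelog-server): drain the queue and bulk-write the batch to elasticsearch.
            String event;
            while ((event = jedis.lpop(QUEUE_KEY)) != null) {
                System.out.println("would be bulk-indexed into elasticsearch: " + event);
            }
        }
    }
}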

Screenshots

[Image: Sign in]

[Image: Log query]

[Image: Error statistics]

[Image: System management]

[Image: Alarm settings]

[Image: Alarm record]

[Image: Alarm robot]

Instructions for use

Server installation

Installation steps

  1. Step 1: install redis or Kafka (redis is usually enough). Redis official website: https://redis.io; Kafka official website: http://kafka.apache.org
  2. Step 2: install elasticsearch. Official download page: https://www.elastic.co/cn/downloads/past-releases
  3. Step 3: download the installation package. The plumelog-server download address is: https://gitee.com/plumeorg/plumelog/releases; you can also use the internally provided package
  4. Step 4: configure plumelog-server and start it

Detailed explanation of configuration file

spring.application.name=plumelog_server
spring.profiles.active=test-confidential
server.port=8891
spring.thymeleaf.mode=LEGACYHTML5
spring.mvc.view.prefix=classpath:/templates/
spring.mvc.view.suffix=.html
spring.mvc.static-path-pattern=/plumelog/**


#plumelog.model supports the following values: redis, kafka, rest, restserver, rediscluster, redissentinel, ui
#redis means using redis as the queue
#rediscluster means using a redis cluster as the queue
#redissentinel means using redis sentinel as the queue
#kafka means using Kafka as the queue
#rest means fetching logs from a rest interface
#restserver means starting as a rest interface server
#ui means starting the UI alone
plumelog.model=redis

#If Kafka is used, enable the following configuration
#plumelog.kafka.kafkaHosts=172.16.247.143:9092,172.16.247.60:9092,172.16.247.64:9092
#plumelog.kafka.kafkaGroupName=logConsumer

#Queue redis address. For a cluster, separate the nodes with commas; redis cluster / sentinel mode is selected via plumelog.model
plumelog.queue.redis.redisHost=127.0.0.1:6379
#If redis has a password, enable the following configuration
#plumelog.queue.redis.redisPassWord=123456
#plumelog.queue.redis.redisDb=0

#Redis address for the management side
plumelog.redis.redisHost=127.0.0.1:6379
#If redis has a password, enable the following configuration
#plumelog.redis.redisPassWord=123456
#plumelog.queue.redis.redisDb=0

#If you use rest, enable the following configuration
#plumelog.rest.restUrl=http://127.0.0.1:8891/getlog
#plumelog.rest.restUserName=plumelog
#plumelog.rest.restPassWord=123456

#Redis compressed-message mode; once enabled, the uncompressed queue is no longer consumed
#plumelog.redis.compressor=true

#Elasticsearch related configuration. esHosts may include a protocol prefix, such as http:// or https://
plumelog.es.esHosts=127.0.0.1:9200
#ES 7.x removed index types, so do not configure indexType for ES 7 and above; for versions below 7 it must be configured, otherwise an error is reported
#plumelog.es.indexType=plumelog
plumelog.es.shards=5
plumelog.es.replicas=1
plumelog.es.refresh.interval=30s
#Log index creation mode: day means one index per day, hour means one index per hour
plumelog.es.indexType.model=day
#If ES has a password, enable the following configuration
#plumelog.es.userName=elastic
#plumelog.es.passWord=elastic
#Trust self-signed certificates
#plumelog.es.trustSelfSigned=true
#Whether to verify the hostname
#plumelog.es.hostnameVerification=false


#Number of logs pulled per batch
plumelog.maxSendSize=100
#Pull interval; not effective when Kafka is used
plumelog.interval=100

#Address of the plumelog UI; if it is not configured, alarm messages cannot link back to the UI
plumelog.ui.url=http://127.0.0.1:8891

#Management password; it must be entered when deleting logs manually
admin.password=123456

#Log retention in days; 0 or unset means keep forever
admin.log.keepDays=30
#Trace (link) log retention in days; 0 or unset means keep forever
admin.log.trace.keepDays=30
#Login configuration; once configured, a login page is shown
login.username=wangfeiyong
login.password=123456

Recommended parameter configurations for better performance

  • Daily log volume under 50 GB, SSD disks
plumelog.es.shards=5
plumelog.es.replicas=0
plumelog.es.refresh.interval=30s
plumelog.es.indexType.model=day
  • Daily log volume above 50 GB, mechanical (HDD) disks
plumelog.es.shards=5
plumelog.es.replicas=0
plumelog.es.refresh.interval=30s
plumelog.es.indexType.model=hour
  • Daily log volume above 100 GB, mechanical (HDD) disks
plumelog.es.shards=10
plumelog.es.replicas=0
plumelog.es.refresh.interval=30s
plumelog.es.indexType.model=hour
  • Daily log volume above 1000 GB, SSD disks; this configuration can handle more than 10 TB per day
plumelog.es.shards=10
plumelog.es.replicas=1
plumelog.es.refresh.interval=30s
plumelog.es.indexType.model=hour
  • As plumelog.es.shards increases and hour mode is used, the maximum number of shards allowed in the ES cluster needs to be raised:
PUT /_cluster/settings
{
  "persistent": {
    "cluster": {
      "max_shards_per_node":100000
    }
  }
}

Client use

Introducing dependencies and configurations

log4j

Introduce Maven dependency

<dependency>
    <groupId>com.plumelog</groupId>
    <artifactId>plumelog-log4j</artifactId>
    <version>3.4.2</version>
</dependency>

Add the following appender to the log4j configuration file. Example:

log4j.rootLogger = INFO,stdout,L
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = [%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} [%c.%t]%n%m%n

#Kafka as middleware (enable either the Kafka appender or the redis appender below, not both)
log4j.appender.L=com.plumelog.log4j.appender.KafkaAppender
#appName: the system name (define it yourself)
log4j.appender.L.appName=plumelog
log4j.appender.L.env=${spring.profiles.active}
log4j.appender.L.kafkaHosts=172.16.247.143:9092,172.16.247.60:9092,172.16.247.64:9092
    
#Redis as middleware
log4j.appender.L=com.plumelog.log4j.appender.RedisAppender
log4j.appender.L.appName=plumelog
log4j.appender.L.env=${spring.profiles.active}
log4j.appender.L.redisHost=172.16.249.72:6379
#If redis has no password, leave this empty or omit it
#log4j.appender.L.redisAuth=123456
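
Once the appender is configured, business code logs through the ordinary log4j API and the plumelog appender ships the messages automatically; no Plumelog-specific calls are needed. A minimal usage sketch (the class, method and message names are illustrative):

import org.apache.log4j.Logger;

public class OrderService {
    private static final Logger logger = Logger.getLogger(OrderService.class);

    public void createOrder(String orderId) {
        // Goes to stdout and to the plumelog appender "L" configured above.
        logger.info("order created, id=" + orderId);
    }

    public void failExample() {
        try {
            int i = 1 / 0; // simulated error, mirrors the @Trace example later in this manual
        } catch (Exception e) {
            // Errors logged with the exception appear in the error statistics and can trigger alarms.
            logger.error("failed to process order", e);
        }
    }
}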

logback

Introduce Maven dependency

<dependency>
    <groupId>com.plumelog</groupId>
    <artifactId>plumelog-logback</artifactId>
    <version>3.4.2</version>
</dependency>

Configuration

<appenders>
    <!-- To use redis, enable the following configuration -->
    <appender name="plumelog" class="com.plumelog.logback.appender.RedisAppender">
        <appName>plumelog</appName>
        <redisHost>172.16.249.72:6379</redisHost>
        <redisAuth>123456</redisAuth>
    </appender>

    <!-- To use Kafka, enable the following configuration -->
    <appender name="plumelog" class="com.plumelog.logback.appender.KafkaAppender">
        <appName>plumelog</appName>
        <kafkaHosts>172.16.247.143:9092,172.16.247.60:9092,172.16.247.64:9092</kafkaHosts>
    </appender>
</appenders>

<!-- Log output level -->
<root level="INFO">
    <appender-ref ref="plumelog" />
</root>
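
With logback the application keeps logging through SLF4J as usual; anything at or above the root level is also forwarded to the plumelog appender referenced above. A minimal sketch (the class and message names are illustrative):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class PaymentService {
    private static final Logger logger = LoggerFactory.getLogger(PaymentService.class);

    public void pay(String orderId, long amountCents) {
        // Parameterized logging; the rendered message is pushed to redis/kafka by the plumelog appender.
        logger.info("payment received, orderId={}, amountCents={}", orderId, amountCents);
    }
}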

log4j2

Introduce Maven dependency

<dependency>
    <groupId>com.plumelog</groupId>
    <artifactId>plumelog-log4j2</artifactId>
    <version>3.4.2</version>
</dependency>

Configuration

<appenders>
    <!-- To use Kafka, enable the following configuration -->
    <KafkaAppender name="kafkaAppender" appName="plumelog" kafkaHosts="172.16.247.143:9092,172.16.247.60:9092,172.16.247.64:9092" >
        <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] [%-5p] {%F:%L} - %m%n" />
    </KafkaAppender>
    
    <!-- To use redis, enable the following configuration -->
    <RedisAppender name="redisAppender" appName="plumelog" redisHost="172.16.249.72:6379" redisAuth="123456">
        <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%t] [%-5p] {%F:%L} - %m%n" />
    </RedisAppender>
</appenders>

<loggers>
    <root level="INFO">
        <appender-ref ref="redisAppender"/>
    </root>
</loggers>

Detailed explanation of configuration file

RedisAppender

field         purpose
appName       custom application name
redisHost     redis address
redisPort     redis port; from version 3.4 it can be omitted and appended to redisHost after a colon instead
redisAuth     redis password
redisDb       redis db
model         (3.4) redis mode, one of standalone, cluster or sentinel; defaults to standalone if not configured
runModel      1 is the highest-performance mode; 2 is slower but records more information; defaults to 1
maxCount      (3.1) number of logs submitted per batch, 100 by default
logQueueSize  (3.1.2) buffer queue size, 10000 by default; too small may lose logs, too large may exhaust memory; raise it to 100000+ if the application has enough memory
compressor    (3.4) whether to enable log compression, false by default
env           (3.4.2) environment, "default" by default

KafkaAppender

field         purpose
appName       custom application name
kafkaHosts    Kafka cluster address, nodes separated by commas
runModel      1 is the highest-performance mode; 2 is slower but records more information; defaults to 1
maxCount      (3.1) number of logs submitted per batch, 100 by default
logQueueSize  (3.1.2) buffer queue size, 10000 by default; too small may lose logs, too large may exhaust memory; raise it to 100000+ if the application has enough memory
compressor    (3.4) whether to enable log compression, false by default
env           (3.4.2) environment, "default" by default

Tracking code generation configuration

Filter mode (recommended)
@Bean
public FilterRegistrationBean filterRegistrationBean() {
    FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
    filterRegistrationBean.setFilter(initCustomFilter());
    filterRegistrationBean.addUrlPatterns("/*");
    return filterRegistrationBean;
}

@Bean
public Filter initCustomFilter() {
    return new TraceIdFilter();
}
Interceptor mode
@Component
public class Interceptor extends HandlerInterceptorAdapter {
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        String uuid = UUID.randomUUID().toString().replaceAll("-", "");
        String traceid = uuid.substring(uuid.length() - 7);
        TraceId.logTraceID.set(traceid); // set the traceId; without this, requests have no link ID
        return true;
    }
}
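
Note that this interceptor only takes effect once it is registered with Spring MVC. A minimal registration sketch, assuming a Spring Boot application (the WebMvcConfig class name is illustrative):

import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

@Configuration
public class WebMvcConfig implements WebMvcConfigurer {

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        // Apply the traceId interceptor to every request path.
        registry.addInterceptor(new Interceptor()).addPathPatterns("/**");
    }
}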

Link tracking configuration

Note: if this is not configured, no link information will be displayed in the UI.

This module uses Spring AOP aspects to generate link (trace) logs, so the core of the setup is the Spring AOP configuration. If you are not familiar with Spring AOP, it is recommended to learn it before configuring this.

Caution: the link tracking module generates a large number of trace logs. Do not overuse it in highly concurrent modules, especially global interception.

Manual instrumentation (@Trace) and global instrumentation (aspect) cannot be used at the same time; if global instrumentation is configured, manual instrumentation is ignored.

  1. Introduce Maven dependency
<dependency>
    <groupId>com.plumelog</groupId>
    <artifactId>plumelog-trace</artifactId>
    <version>3.4.2</version>
</dependency>

<!-- Introduce AOP -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-aop</artifactId>
    <version>2.1.11.RELEASE</version>
    <scope>provided</scope>
    <!-- Scope is provided so as not to conflict with the user's version -->
</dependency>
  2. Manual instrumentation: add @Trace to the methods you want to record in order to generate link logs
@Trace
@GetMapping("/hello")
public String hello() {
    process1();
    process2();
    try {
        //An arithmetic exception is simulated here
        int i = 1 / 0;
    } catch (Exception e) {
        log.error("test exception", e);
    }
    return "hello";
}
  3. Global instrumentation requires a user-defined pointcut. Once global instrumentation is defined, manual @Trace instrumentation becomes invalid
@Aspect
@Component
public class AspectConfig extends AbstractAspect {
    @Around ("within (COM. XXXX.. *)") // write the path of your own package here
    public Object around(JoinPoint joinPoint) {
        return aroundExecute(joinPoint);
    }
}
  4. If you do not want to see trace logs in your console or file output, filter them out with a filter. A logback example is shown below
<appender name="CONSOLE">
    <!--  This filter filters out all trace logs. The filter class of logback in version 3.4.1 -- >
    <filter>
        <level>info</level>
        <filterPackage>com.plumelog.trace.aspect.AbstractAspect</filterPackage>
    </filter>
    <encoder>
        <Pattern>${CONSOLE_LOG_PATTERN}</Pattern>
        <charset>UTF-8</charset>
    </encoder>
</appender>

Exception alarm push

Feishu (Lark) platform

Add a group robot

Step 1: click the [Settings] button and select [Group Robots]

[Image: Feishu add group robot, step 1]

Step 2: click the [add robot] button

[Image: Feishu add group robot, step 2]

Step 3: select Custom Robot and click Add

[Image: Feishu add group robot, step 3]

Step 4: fill in the robot name and description, and click the [Next] button

[Image: Feishu add group robot, step 4]

Step 5: copy the webhook address. The security settings are optional and can be configured as needed. Click the [Finish] button

[Image: Feishu add group robot, step 5]

Platform configuration

[Image: Alarm configuration]

Field meanings (for reference):

  • Application name: the name of the application that should trigger error alarms
  • Application environment: the environment that should trigger error alarms; defaults to default if left empty
  • Module name: the class name that should trigger alarms
  • Receiver: a mobile phone number, or "all" for everyone
  • Hook address: the group robot webhook address
  • Number of errors: how many errors must accumulate before an alarm is triggered
  • Time interval: the window, in seconds, within which the above number of errors must accumulate before an alarm fires

Note: alarm history is kept in the alarm records; clicking an entry jumps directly to the error content.
