• ## Machine learning algorithms IV – logistic regression

Time：2021-12-30

1. Principle of logistic regression. Although its name contains "regression", logistic regression is not a regression algorithm but a classification algorithm built on regression. The input of logistic regression is $$h(w) = w_0 + w_1 x_1 + w_2 x_2 + \dots = w^T x$$, which looks very similar to linear regression. Then look at the sigmoid function and draw its graph. Code […]
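The input-plus-sigmoid pipeline described above can be sketched in a few lines of NumPy; the weights and inputs below are illustrative values, not from the article.

```python
import numpy as np

def sigmoid(z):
    """Squash any real input into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def predict_proba(w, x):
    """Logistic regression: the linear input w^T x passed through the sigmoid."""
    return sigmoid(np.dot(w, x))

w = np.array([0.5, -1.0, 2.0])   # w_0 is absorbed via a constant x_0 = 1 feature
x = np.array([1.0, 0.3, 0.8])    # x_0 = 1 (bias term)
p = predict_proba(w, x)          # probability of the positive class
```

Thresholding `p` at 0.5 turns the regression output into a class label, which is how a regression-style model solves classification.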

• ## Graph neural network Chapter 5: graph signal processing and graph convolutional neural networks – reading notes

Time：2021-12-6

A few words before we start: finally, here comes the content most relevant to GNNs! The first four chapters are all preliminary, introductory knowledge; in fact, they are not particularly specific to GNNs. But this chapter begins the core of GNNs: graph signal processing. This part is critical, […]

• ## Graph neural network Chapter 6: properties of GCN

Time：2021-12-3

Chapter 6: properties of GCN. Chapter 5 ended with only a hasty treatment of GCN. As the most classical GNN model, it has many properties we need to understand. 6.1 Differences and connections between GCN and CNN. A CNN convolution takes the values in a certain region of […]

• ## Matrix eigenvectors and eigenvalues

Time：2021-10-24

Recently, while learning LDA, I needed to compute eigenvalues and eigenvectors, so I reviewed the topic. Computing eigenvalues is fairly simple in Python: you import NumPy's linalg module (linalg is short for linear algebra). First import NumPy with import numpy as np, then randomly generate a matrix A […]
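The workflow described in the excerpt (random matrix, then `np.linalg.eig`) looks roughly like this; the seed and matrix size are illustrative.

```python
import numpy as np

np.random.seed(0)
A = np.random.rand(3, 3)          # randomly generate a matrix A

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding (right) eigenvectors.
eigvals, eigvecs = np.linalg.eig(A)

# Sanity check: A v = lambda v for the first eigenpair
v = eigvecs[:, 0]
assert np.allclose(A @ v, eigvals[0] * v)
```

Note that for a general real matrix the eigenvalues may come back complex, which is why `eig` returns complex arrays when needed.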

• ## Numerical analysis: power iteration and PageRank algorithm

Time：2021-10-17

1. Power iteration algorithm (power method for short). (1) Dominant eigenvalue and dominant eigenvector. Given a matrix $$\bm{A} \in \mathbb{R}^{n \times n}$$, the dominant eigenvalue of $$\bm{A}$$ is an eigenvalue $$\lambda$$ whose absolute value is larger than that of every other eigenvalue of $$\bm{A}$$, if such an eigenvalue exists; the eigenvectors associated with $$\lambda$$ are called dominant eigenvectors. (2) Properties of dominant eigenvalues and dominant eigenvectors. If a vector is […]
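A minimal sketch of the power method for finding the dominant eigenpair: repeatedly multiply by the matrix and normalize. The test matrix and iteration count are illustrative choices, not from the article.

```python
import numpy as np

def power_method(A, num_iters=1000):
    """Approximate the dominant eigenvalue/eigenvector of A
    by repeated multiplication and normalization."""
    x = np.ones(A.shape[0])
    for _ in range(num_iters):
        x = A @ x
        x = x / np.linalg.norm(x)   # normalize to avoid overflow
    lam = x @ A @ x                 # Rayleigh quotient estimate of lambda
    return lam, x

# Symmetric example: eigenvalues are (5 ± sqrt(5)) / 2
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_method(A)
```

Convergence is geometric with ratio |λ₂/λ₁|, which is why the iterate aligns with the dominant eigenvector when a dominant eigenvalue exists; PageRank applies exactly this iteration to the web's link matrix.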

• ## Good characteristics of machine learning

Time：2021-9-27

We have explored how to map raw data to suitable feature vectors, but this is only part of the work. Now we must explore what values make good features within these feature vectors. Avoid rarely used discrete feature values: that way, the model can learn how each feature value relates to the […]

• ## AI mathematics foundation: singular value and singular value decomposition

Time：2021-5-5

Brief introduction. The singular value is a very important concept for matrices, generally obtained via singular value decomposition. Singular value decomposition (SVD) is an important matrix factorization method in linear algebra and matrix theory, and it is widely used in statistics and signal processing. Before we get to singular values, let's look at the concept of […]
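As a concrete anchor for the decomposition the excerpt introduces, here is a minimal NumPy sketch: SVD factors A into U, the singular values S, and Vᵀ, and multiplying the factors back recovers A. The example matrix is illustrative.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])

# Thin SVD: A = U @ diag(S) @ Vt, with singular values in S
# returned non-negative and in descending order.
U, S, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A from its factors
A_rec = U @ np.diag(S) @ Vt
assert np.allclose(A, A_rec)
```

Truncating S to its largest entries gives the best low-rank approximation of A, which is the property that makes SVD so useful in statistics and signal processing.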

• ## The analytic hierarchy process in practice

Time：2021-2-23

Analytic hierarchy process (AHP) is an operations research method. Method background and application overview: AHP is a hierarchical weight decision analysis method proposed in the early 1970s by Saaty, a professor of operations research at the University of Pittsburgh, while studying the problem of allocating power according to each industrial sector's contribution to […]
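The core computation in AHP is extracting priority weights from a pairwise comparison matrix via its principal eigenvector. A minimal sketch, assuming a hypothetical 3-criterion comparison matrix on Saaty's 1–9 scale:

```python
import numpy as np

# Hypothetical reciprocal pairwise comparison matrix:
# C[i, j] says how much criterion i is preferred over criterion j.
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(C)
k = np.argmax(eigvals.real)       # principal (largest) eigenvalue
w = eigvecs[:, k].real
w = w / w.sum()                   # normalized priority weights, summing to 1
```

In a full AHP analysis one would also compute the consistency ratio from the principal eigenvalue before trusting the weights; that step is omitted here.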

• ## Gradient centralization: one line of code accelerates training and improves generalization ability | ECCV 2020 oral

Time：2021-2-20

Gradient centralization (GC) can make network training more stable and improve the network's generalization ability. The algorithm is simple, and the paper's theoretical analysis is thorough, explaining the principle of GC well. Source: Xiaofei's algorithm engineering notes official account. Paper: Gradient Centralization: a new optimization […]
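The "one line of code" refers to centralizing each weight's gradient to zero mean. A NumPy sketch of that step, assuming a conv-weight gradient laid out as (out-channels, in-channels, kh, kw); in the paper this is applied inside the optimizer update rather than as a standalone function.

```python
import numpy as np

def gradient_centralization(grad):
    """GC: subtract the mean over all axes except the output-channel
    axis, so each filter's gradient becomes zero-mean."""
    if grad.ndim > 1:
        axes = tuple(range(1, grad.ndim))
        grad = grad - grad.mean(axis=axes, keepdims=True)
    return grad

g = np.random.rand(8, 3, 3, 3)    # e.g. gradient of a conv weight
gc = gradient_centralization(g)
```

After centralization, every filter's gradient sums to zero, which constrains the update direction and is the source of the stability the paper analyzes.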

• ## Simple activation-gradient pruning, 4–5× training acceleration on CPU and ARM | ECCV 2020

Time：2021-2-4

In this paper, the DBTD method is used to compute the filtering threshold, and a stochastic pruning algorithm then prunes the activation gradients. The resulting sparse gradients reduce the amount of computation in the backward phase, yielding 3.99× and 5.92× training speedups on CPU and ARM respectively  […]

• ## Basic mathematics: linear algebra

Time：2021-1-29

The content of linear algebra is very coherent; the overall progression is [determinant → matrix → n-dimensional vector → systems of linear equations → similarity and diagonalization → quadratic forms]. The determinant is a single value. If the determinant is 0, the corresponding system of linear equations has multiple solutions, and the corresponding […]
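The determinant-to-solvability link mentioned above is easy to check numerically; the matrix below, with linearly dependent rows, is an illustrative example.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # second row is twice the first

d = np.linalg.det(A)                # 0: A is singular
# Since det(A) = 0, A x = b has either no solution or infinitely
# many, so np.linalg.solve would fail here; a nonzero determinant
# guarantees a unique solution.
```

This is the bridge the chapter outline describes: determinants diagnose matrices, and matrices encode linear systems.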

• ## Gradient centralization: one line of code to accelerate training and enhance generalization ability | ECCV 2020 oral

Time：2020-12-16

Gradient centralization (GC) makes the weight gradients zero-mean, which can make network training more stable and improve the network's generalization ability. The algorithm is simple, and the paper's theoretical analysis is thorough, explaining the principle of GC well. Source: Xiaofei's algorithm engineering notes official […]