Tag: eigenvalue

Time: 2021-12-30
1. Principle of logistic regression. Although "logistic regression" carries "regression" in its name, it is not a regression method but an algorithm that solves classification by way of regression. The input of logistic regression is h(w) = w_0 + w_1 x_1 + w_2 x_2 + … = wᵀx, which looks just like linear regression. Then look at the sigmoid function and plot its curve. Code […]
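The two pieces this excerpt names — the linear input wᵀx and the sigmoid that squashes it into a probability — can be sketched in a few lines of numpy. The weights and feature vector below are made-up illustrations, not values from the post:

```python
import numpy as np

def sigmoid(z):
    """Map any real input to the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical weights and features: z = w_0 + w_1*x_1 + w_2*x_2 = w^T x,
# with the bias w_0 folded in by setting x_0 = 1.
w = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 3.0, 0.5])
z = w @ x            # the linear-regression-style input
p = sigmoid(z)       # interpreted as the probability of the positive class
```

Classifying by thresholding p at 0.5 is what turns this regression-style model into a classifier.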

Time: 2021-12-6
A few words first: finally! Here comes the content most relevant to GNNs! The first four chapters were all preliminary or introductory knowledge and, in fact, not particularly specific to GNNs. But this chapter begins the core of GNNs: graph signal processing. This part is actually very critical, […]

Time: 2021-12-3
Chapter 6: properties of GCN. Chapter 5 ended its treatment of GCN rather hastily. As the most classical GNN model, GCN has many properties we need to understand. 6.1 Differences and connections between GCN and CNN. A CNN convolution kernel operates on the values in a local region of […]

Time: 2021-10-24
Recently, while learning LDA, I needed to compute eigenvalues and eigenvectors, so I reviewed the topic. Computing eigenvalues is quite simple in Python: you need to import numpy's linalg module ("linalg" is short for "linear algebra"). First import numpy with import numpy as np, then randomly generate a matrix A. A […]
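The workflow the excerpt describes can be sketched as follows (the seed and the 3×3 size are arbitrary choices for illustration):

```python
import numpy as np

np.random.seed(0)            # make the "random" matrix reproducible
A = np.random.rand(3, 3)     # randomly generate a matrix A

# np.linalg.eig returns (eigenvalues, eigenvectors);
# column i of V is the eigenvector paired with w[i].
w, V = np.linalg.eig(A)

# Check the defining relation A v = lambda v for the first pair.
assert np.allclose(A @ V[:, 0], w[0] * V[:, 0])
```

Note that for a general (non-symmetric) matrix the results may be complex; for symmetric matrices, np.linalg.eigh is the better choice.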

Time: 2021-10-17
1. Power iteration algorithm (power method for short). (1) Dominant eigenvalue and dominant eigenvector. Given a matrix \(\bm{A} \in \R^{n \times n}\), an eigenvalue \(\lambda\) of \(\bm{A}\) whose absolute value is larger than that of every other eigenvalue of \(\bm{A}\) is called the dominant eigenvalue of \(\bm{A}\), if such an eigenvalue exists, and the eigenvectors associated with \(\lambda\) are called dominant eigenvectors. (2) Properties of dominant eigenvalues and dominant eigenvectors. If a vector is […]
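A minimal numpy sketch of the power method (the 2×2 matrix below is a made-up example whose dominant eigenvalue is 5; note the infinity-norm normalization recovers \(|\lambda|\), so the sign would need a separate check when the dominant eigenvalue is negative):

```python
import numpy as np

def power_method(A, num_iter=1000, tol=1e-10):
    """Approximate the dominant eigenvalue/eigenvector of A by repeated
    multiplication and normalization. A sketch that assumes a dominant
    eigenvalue exists and the start vector has a component along it."""
    x = np.ones(A.shape[0])
    lam_old = 0.0
    for _ in range(num_iter):
        y = A @ x
        lam = np.linalg.norm(y, np.inf)  # scale factor estimates |lambda|
        x = y / lam                      # renormalize to avoid overflow
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, x

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # eigenvalues are 5 and 2; 5 is dominant
lam, v = power_method(A)     # lam -> 5, v -> a multiple of (1, 1)
```

The convergence rate depends on the ratio of the second-largest to the dominant eigenvalue magnitude, which is why a well-separated dominant eigenvalue matters.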

Time: 2021-9-27
We have explored how to map raw data to suitable feature vectors, but that is only part of the work. Now we must explore what values make good features within those feature vectors. Avoid rarely used discrete feature values; that way, the model can learn how each feature value relates to the […]

Time: 2021-5-5
Brief introduction. The singular value is a very important concept for matrices and is generally obtained by singular value decomposition (SVD). SVD is an important matrix factorization method in linear algebra and matrix theory, with major applications in statistics and signal processing. Before we get to singular values, let's look at the concept of […]
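As a quick companion to this excerpt, here is a numpy sketch that computes an SVD and verifies the standard connection between singular values and the eigenvalues of AᵀA (the matrix is a made-up example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [0.0, 2.0]])

# Thin SVD: A = U @ diag(s) @ Vt, with s in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Reconstruct A from the factors to verify the decomposition.
assert np.allclose(A, U @ np.diag(s) @ Vt)

# Singular values are the square roots of the eigenvalues of A^T A.
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
assert np.allclose(s**2, eigvals)
```

This eigenvalue connection is usually how singular values are introduced, which is why the post reviews eigen-concepts first.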

Time: 2021-2-23
The analytic hierarchy process (AHP) is an operations research method. Method background and application overview: AHP is a hierarchical weighted decision-making analysis method, proposed in the early 1970s by operations research professor Thomas Saaty of the University of Pittsburgh while studying the problem of "power distribution according to the contribution of each industrial sector to […]

Time: 2021-2-20
Gradient Centralization (GC) can make network training more stable and improve the network's generalization ability. The algorithm is simple, and the paper's theoretical analysis is very thorough, explaining the principle of GC well. Source: Xiaofei's Algorithm Engineering Notes official account. Paper: Gradient Centralization: A New Optimization […]

Time: 2021-2-4
This paper uses the DBTD method to compute a filtering threshold and then applies a stochastic pruning algorithm to prune the feature gradients. The sparsified feature gradients reduce the amount of computation in the back-propagation phase. Training on CPU and ARM achieves 3.99x and 5.92x speedups respectively […]

Time: 2021-1-29
The content of linear algebra is very coherent; the overall thread is [determinant → matrix → n-dimensional vectors → systems of linear equations → similarity diagonalization → quadratic forms]. A determinant is a single value. If the determinant is 0, the corresponding system of linear equations does not have a unique solution, and the corresponding […]
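The determinant claim can be checked numerically; here is a small sketch with a deliberately singular matrix (the matrix and the solution vector are made-up examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # rows are proportional, so det(A) = 0

det = np.linalg.det(A)       # ~0: A is singular

# A singular matrix has no inverse, so Ax = b cannot have a unique
# solution; in particular the homogeneous system Ax = 0 has infinitely
# many solutions, e.g. every multiple of x = (2, -1).
x = np.array([2.0, -1.0])
assert np.allclose(A @ x, 0.0)
```

For a nonzero determinant, np.linalg.solve would return the unique solution instead.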

Time: 2020-12-16
Gradient Centralization (GC) makes the weight gradients zero-mean, which can make network training more stable and improve the network's generalization ability. The algorithm is simple. The paper's theoretical analysis is very thorough, explaining the principle of GC well. Source: Xiaofei's Algorithm Engineering Notes official […]
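The zero-mean operation this excerpt describes can be sketched in a few lines of numpy. This is an illustrative reading of GC (subtract, per output channel, the mean of that channel's weight gradient), not the authors' released implementation, and the gradient shape below is a made-up example:

```python
import numpy as np

def gradient_centralize(grad):
    """Subtract the mean of the gradient over all axes except the first
    (output-channel) axis, so each filter's gradient becomes zero-mean.
    1-D gradients (e.g. biases) are returned unchanged."""
    if grad.ndim > 1:
        axes = tuple(range(1, grad.ndim))
        grad = grad - grad.mean(axis=axes, keepdims=True)
    return grad

np.random.seed(0)
g = np.random.rand(8, 3, 3, 3)   # e.g. gradient of a conv weight tensor
g_c = gradient_centralize(g)     # each of the 8 filter gradients is now zero-mean
```

In practice this step would be inserted into the optimizer, applied to each weight gradient just before the update.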