(Original title: Understand the Essence of Linear Algebra in Ten Minutes. Author: Chen Haoyu)

#### The essence of linear algebra

First, let's address the most important question: what is linear algebra?

The essence of linear algebra is transformations in high-dimensional space.

The sentence is simple, but what does it actually mean? Don't worry; let's use a very intuitive example to understand it.

Take a little paper figure in two-dimensional space as an example:

If I perform a series of linear operations on the figure, such as displacement and stretching, it can turn into a different shape:

That is what linear algebra does. The operations I perform on this little figure are written as matrices; a matrix represents an operation, or mapping, applied to an object.

At first glance, then, the subject doesn't seem particularly difficult. So why does linear algebra feel so hard?

The main reason is that many of its basic concepts are never linked to their actual physical meaning.

Next, let's connect the various terms of linear algebra to their physical meaning.

#### Determinant

The first concept is the determinant of a matrix. I still remember when we first started learning linear algebra: the teacher walked in and immediately drilled us on computing determinants.

For a second-order matrix [[a, b], [c, d]], the determinant is ad - bc. You probably still vaguely remember the procedure (if not the definition): multiply along the diagonals and subtract.

However, I remember that even after a whole semester of linear algebra, the very first question was never answered. That is: teacher, why do we compute the determinant of a matrix at all?

Why?

Why?

Here, I’ll tell you why we need to solve the determinant of the matrix!

As an example, this time let's replace the paper figure with a small square of area 1 × 1:

We apply a matrix to it and get the following:

As you can see, after the transformation by the matrix shown in the figure, our small square has become a larger rectangle with area 3 × 2, that is, 6.

Then we compute the determinant of that matrix, and its value is also 6.

The value of the determinant equals the area after the matrix transformation!
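As a quick numerical check, here is a minimal sketch. The matrix below is an assumption chosen to match the 3 × 2 rectangle in the figure, not necessarily the exact one the author used:

```python
import numpy as np

# Assumed transformation: stretch x by 3 and y by 2.
A = np.array([[3.0, 0.0],
              [0.0, 2.0]])

# The unit square has area 1; after the transformation,
# its area is scaled by |det(A)|.
print(np.linalg.det(A))  # 6.0
```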

Friends, this is no coincidence!

We can experiment with another matrix:

We apply a different matrix to the original small square, and this time the square becomes a slanted parallelogram.

The area of the transformed parallelogram is 1, and the determinant of the matrix shown in the figure is also 1.

The value of the determinant and the area after the transformation still match!
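Again, a minimal check; the shear matrix below is an assumption chosen to match the figure (a unit shear keeps the area exactly 1):

```python
import numpy as np

# Assumed shear: slides the top edge of the square sideways
# while keeping the base fixed.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])

print(np.linalg.det(S))  # 1.0 -- the parallelogram still has area 1
```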

Here it is! This is the crucial physical meaning of the determinant: it is the change in area produced by the matrix transformation.

When I first saw this idea, I was deeply impressed. So this is how the determinant can be understood!

At the same time, I lamented that I had computed the determinants of no fewer than 1000 matrices without ever knowing what I was actually computing!

Of course, the statement above is a little loose. In two dimensions the determinant represents the change in area, in three dimensions the change in volume, and likewise in higher-dimensional spaces. That is the more rigorous version.

#### Inverse matrix

By now we know that a matrix is a transformation. Recall the transformation above: we turned a small square into a slanted parallelogram.

Then there should be another matrix, another mapping, that turns that parallelogram back into the original square, right?

This is exactly where the physical meaning of the inverse matrix comes from. If there is a matrix that converts the transformed parallelogram:

back into the original square:

then that matrix is the inverse of the original matrix.

In other words, the inverse matrix is the reverse transformation of the original matrix.

Mathematically, the inverse matrix satisfies the following expression:

A · A⁻¹ = I, where A is a matrix and A⁻¹ is its inverse; multiplying them yields the identity matrix I.

Combined with the physical meaning, we can read the formula like this: transform an object by a matrix, then transform it by that matrix's inverse, and the object is left unchanged (the identity matrix means "keep everything as it is").

In a word: if you change something and then change it back, you have not changed it at all.

That is the nature of the inverse matrix.
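A minimal sketch of this round trip, using an assumed shear matrix as the example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])               # an assumed shear transformation
A_inv = np.linalg.inv(A)                 # the reverse transformation: [[1, -1], [0, 1]]

print(np.allclose(A @ A_inv, np.eye(2)))  # True: there and back = unchanged
```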

#### Irreversible matrices

If everything above was only mildly interesting, what comes next is the climax.

From the discussion above we know what an inverse matrix is: a reverse transformation.

Everything seems fine.

But here comes the problem. Mathematicians, who love to tinker, soon found that some matrix transformations have no inverse transformation at all!

Why?

This, again, starts with the determinant of the matrix.

We know from above that the determinant represents a change in area. But we find that many matrices have a determinant of 0.

What does that mean?

To put it very loosely, think back to the small square we mentioned above.

Suppose there is a transformation that makes the area of the small square zero. What kind of transformation could that be?

I don't know whether you guessed it (I certainly had no clue at first), but there is only one possibility:

**The small square is compressed into a point or a line on the plane!**

**Only then is the transformed area zero!**

That is the physical meaning of a zero determinant.

From this physical meaning, we can further know:

If the determinant of a matrix transformation is zero, the transformation reduces the dimension of its target (for example, from a plane down to a point).

Then we can imagine that once an object's dimension decreases (for example, from a plane to a point), the process cannot be reversed (you cannot recover a plane from a point): the lost information is gone.

That is why some matrix transformations have no inverse matrix!

Going further, we arrive at one of the most frequently used conclusions in linear algebra:

A matrix with determinant 0 is not invertible, and conversely, the determinant of a non-invertible matrix is 0.
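A minimal sketch of this conclusion; the matrix below is an assumed rank-deficient example (its second row is twice its first, so the plane collapses onto a line):

```python
import numpy as np

S = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row = 2 x first row: the plane collapses to a line

print(np.linalg.det(S))      # 0.0 -- the area after the transformation is zero
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("singular matrix: no inverse exists")
```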

When I first saw this conclusion, I was roaring inside:

So this is the legendary dimensionality-reduction strike!

The stuff of science fiction had been right beside me all along, and I had never noticed!

#### Rank of matrix

So what is the rank of a matrix?

In one sentence: the rank of a matrix is the dimension of the space you get after the matrix transformation.

What does that mean? A simple example: if a three-dimensional object is transformed into something one-dimensional, the rank of the matrix is 1; if into something two-dimensional, the rank is 2.

If the object is still three-dimensional after the transformation, the rank of the matrix is 3, which is also called full rank (no dimension is lost).

In the first two cases, the matrix transformation drops dimensions and loses information, so you can guess that the corresponding determinant is zero: whether the three-dimensional object collapses to a line or to a point, its volume becomes 0.

So we have another important conclusion:

Only a full-rank matrix (one whose transformation preserves the dimension) has a nonzero determinant.
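A minimal sketch with assumed example matrices:

```python
import numpy as np

full = np.eye(3)                    # identity: keeps all three dimensions
flat = np.array([[1.0, 2.0, 3.0],
                 [2.0, 4.0, 6.0],
                 [3.0, 6.0, 9.0]])  # every row is a multiple of the first

print(np.linalg.matrix_rank(full))  # 3 -- full rank, determinant nonzero
print(np.linalg.matrix_rank(flat))  # 1 -- everything collapses onto one line
print(np.linalg.det(flat))          # 0.0 (up to floating-point noise)
```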

As you can see, these definitions become particularly easy to understand once you view them through their physical meaning.

#### Eigenvalue and eigenvector

Next, let's talk about the most central and classic problem in linear algebra:

solving for the eigenvalues and eigenvectors of a matrix.

When I first started learning about matrices, the teacher offered no motivation at all; the whole course revolved around solving for the eigenvectors and eigenvalues of matrices.

Unfortunately, I was confused again, because I didn't understand even the most basic question: teacher, why do we solve for eigenvalues and eigenvectors in the first place?

What is the eigenvalue of a matrix?

What is the eigenvector of a matrix?

What? What? What?

After class I looked up some material, and one netizen's explanation suddenly enlightened me:

What is an eigenvector? It is a vector whose direction remains unchanged under a matrix transformation in high-dimensional space.

Still not clear?

Never mind. As always, let's take an example. As shown in the figure below, suppose we have a pair of vectors like this:

After a matrix transformation, it becomes like this:

Then we randomly take another vector, the yellow arrow:

See what it looks like after this matrix transformation:

You can see that both the direction and the length of the yellow vector have changed after the matrix transformation; note the pink extension.

Next, let's look at a vector whose direction does not change under the transformation, the yellow arrow in this figure:

We can see that after the matrix transformation, the direction of the yellow arrow remains unchanged!

Here comes the point!!!

In physical terms, a vector whose direction remains unchanged under a matrix transformation is an eigenvector of that matrix, and the factor by which the eigenvector's length changes under the transformation is the corresponding eigenvalue.

Why is it called the eigenvector of the matrix? As mathematicians put it, this is because we can use just this vector to characterize the matrix's transformation; hence the name (literally, the "characteristic vector").
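A minimal sketch of this defining property, using an assumed example matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])   # an assumed example matrix

eigenvalues, eigenvectors = np.linalg.eig(A)
# Each column v satisfies A @ v = lambda * v:
# the direction is unchanged, the length is scaled by lambda.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))  # True
```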

Maybe you'll ask again: what are eigenvectors actually good for?

Good question. Time for another example!

As shown below, we have a cube:

We now apply a 3D rotation to it and get the following:

I can tell you that the rotation carries the red face from the right side over to the left.

But it's still hard to imagine how it actually turns, right?

It is hard for a computer, too!

So what should we do?

To make this intuitive, we can imagine adding a rotation axis, as shown below:

When the cube rotates, it rotates around this axis:

You might say: well, now it is imaginable.

But we're just rotating; what does this have to do with eigenvectors?

As mathematicians would point out, this rotation is itself a matrix transformation, and the axis is precisely the eigenvector of this rotation transformation!

Because throughout the whole transformation, only the direction of this axis does not change!

In other words, once we find this axis, that is, the eigenvector, we have found the simplest way to represent the rotation, that is, the matrix transformation!
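A minimal sketch, assuming a rotation about the z-axis; the axis direction is the eigenvector with eigenvalue 1:

```python
import numpy as np

theta = np.pi / 4                  # an assumed 45-degree rotation about the z-axis
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c,  -s,  0.0],
              [s,   c,  0.0],
              [0.0, 0.0, 1.0]])

axis = np.array([0.0, 0.0, 1.0])   # the rotation axis
print(R @ axis)                    # [0. 0. 1.] -- direction unchanged: eigenvalue 1
```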

Building on this idea, a widely used computer algorithm for finding the eigenvectors of a matrix is the power iteration method.

Its principle rests on exactly this fact: an eigenvector is a vector whose direction remains unchanged under the matrix transformation.

So how does power iteration find an eigenvector? It's simple.

We pick an arbitrary starting vector, apply the matrix to it, and obtain a new vector. We then apply the matrix to that new vector, and so on. After many applications of the matrix, the direction of the vector tends toward a fixed direction (in general, that of the eigenvector with the largest eigenvalue). And that matches the definition of an eigenvector: a vector whose direction no longer changes under the matrix transformation.
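The steps above can be sketched as follows (the function name and example matrix are assumptions; the vector is renormalized each step so it doesn't blow up):

```python
import numpy as np

def power_iteration(A, n_iter=100):
    """Approximate the dominant eigenvector of A by repeatedly applying A."""
    v = np.random.default_rng(0).standard_normal(A.shape[0])
    for _ in range(n_iter):
        v = A @ v
        v = v / np.linalg.norm(v)      # keep the vector at unit length
    eigenvalue = v @ A @ v             # Rayleigh quotient of the unit vector
    return eigenvalue, v

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # eigenvalues are 3 and 1
lam, v = power_iteration(A)
print(round(lam, 6))                   # 3.0 -- the dominant eigenvalue
```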

The above is a summary, written in my spare time, of the physical meaning of linear algebra.
