Linear algebra on multidimensional arrays with NumPy

Time: 2021-8-29
Catalogue
  • Brief introduction
  • Loading and describing the image
  • Converting the image to grayscale
  • Compression of the grayscale image
  • Compression of the original image
  • Summary

Brief introduction

This article explains, with figures, how to perform linear algebra operations on multidimensional data in NumPy.
Linear algebra on multidimensional data is most often used for image transformations in image processing, so this article uses an image as the running example.

Loading and describing the image

Readers familiar with color know that a color can be represented by its R, G, and B components; more advanced representations also include an A (alpha) component for transparency. So a pixel is usually stored as an array of four values.

A two-dimensional image with resolution x * y can be regarded as an x * y matrix, where the color of each point in the matrix is expressed as (R, G, B).
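
For example, here is a tiny hypothetical 2 * 2 RGBA image (not the image used below, just an illustration) stored in the same height * width * channel layout:


import numpy as np

# a hypothetical 2 x 2 RGBA image: each pixel is (R, G, B, A) with 8-bit values
tiny_img = np.array([
    [[255, 0, 0, 255], [0, 255, 0, 255]],     # red, green (fully opaque)
    [[0, 0, 255, 255], [255, 255, 255, 0]],   # blue, fully transparent white
])
print(tiny_img.shape)   # (2, 2, 4): height, width, channels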

With this knowledge, we can decompose the colors of an image.

First, we need to load an image. We use the imageio.imread method to load a local image, as shown below:


import imageio

# read a local image file into an array-like object
img = imageio.imread('img.png')
print(type(img))

The code above reads the local image into the img object. Using type to inspect img, the output shows that img is an array type:


<class 'imageio.core.util.Array'>

img.shape tells us that img is a three-dimensional array of shape (80, 170, 4); that is, the image has a resolution of 80 * 170 and each pixel is an (R, G, B, A) array.
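
A quick check (assuming img was loaded as above):


print(img.shape)    # (80, 170, 4): height, width, channels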

Finally, draw the image as follows:


import matplotlib.pyplot as plt
plt.imshow(img)

Converting the image to grayscale

From the three-dimensional array we can extract the three color channels as follows (here we first build img_array from img by dropping the alpha channel and scaling the values to [0, 1]):


# build img_array: drop the alpha channel and scale the 8-bit values to [0, 1]
img_array = img[:, :, :3] / 255

red_array = img_array[:, :, 0]
green_array = img_array[:, :, 1]
blue_array = img_array[:, :, 2]

Once we have the three color channels, we can combine them with the following formula:


Y = 0.2126 R + 0.7152 G + 0.0722 B

In the formula above, Y represents the grayscale value.
How do we apply this with matrix multiplication? Use the @ operator:


# weighted sum over the color axis gives the grayscale image
img_gray = img_array @ [0.2126, 0.7152, 0.0722]

Now img_gray is an 80 * 170 matrix.
Plot it with cmap="gray":


plt.imshow(img_gray, cmap="gray")

The following grayscale image is obtained:

Compression of the grayscale image

Converting to grayscale only transforms the colors of the image. What if we want to compress the image?

Matrix theory gives us two useful concepts here: eigenvalues and singular values.

Let A be a square matrix of order n. If there exist a constant λ and an n-dimensional non-zero vector x such that Ax = λx, then λ is called an eigenvalue of matrix A, and x is an eigenvector of A corresponding to λ.

For a real symmetric matrix, eigenvectors corresponding to distinct eigenvalues are orthogonal to each other.

In other words, applying the linear transformation A to one of its eigenvectors only stretches or shrinks the vector without changing its direction.

Eigendecomposition, also known as spectral decomposition, decomposes a matrix into a product expressed in terms of its eigenvalues and eigenvectors.
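
As a quick check (a minimal sketch, not part of the image example), np.linalg.eig can be used to verify that A @ x equals λ * x for a small symmetric matrix:


import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                    # a small symmetric matrix
eigenvalues, eigenvectors = np.linalg.eig(A)

# each column of eigenvectors is an eigenvector of A
for lam, x in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ x, lam * x))        # True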

If A is an m * n matrix and q = min(m, n), then the arithmetic square roots of the q non-negative eigenvalues of AᵀA are called the singular values of A.

Eigendecomposition makes it easy to extract the characteristics of a matrix, but it requires the matrix to be square. For non-square matrices we need singular value decomposition (SVD). Its definition is:

A = UΣVᵀ

where A is the m * n matrix to be decomposed, U is an m * m square matrix, Σ is an m * n matrix whose off-diagonal elements are all 0, and Vᵀ is the transpose of V, an n * n square matrix.

Similar to eigenvalues, the singular values in Σ are arranged from largest to smallest, and they decrease very quickly. In many cases the sum of the first 10%, or even 1%, of the singular values accounts for more than 99% of the sum of all singular values. In other words, we can approximately describe the matrix using only the first r singular values, where r is much smaller than m and n, which is what allows the matrix to be compressed.

Through singular value decomposition, we can therefore approximately represent the original matrix with much less data.
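
As a rough storage estimate for the 80 * 170 grayscale image used here (assuming we keep only the first k = 10 singular values), storing the truncated factors takes far fewer numbers than storing the full matrix:


m, n, k = 80, 170, 10
full = m * n                   # 13600 values for the full matrix
compressed = k * (m + n + 1)   # 2510 values for U[:, :k], s[:k] and Vt[:k, :]
print(full, compressed)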

To perform SVD, call linalg.svd directly:


from numpy import linalg
U, s, Vt = linalg.svd(img_gray)

Here U is an m * m matrix and Vt is an n * n matrix.

For the image above, U is an (80, 80) matrix and Vt is a (170, 170) matrix. s is an array of length 80 containing the singular values of img_gray.
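
A quick shape check (assuming the decomposition above):


print(U.shape, s.shape, Vt.shape)   # (80, 80) (80,) (170, 170)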

If we plot s, we can see that most of the singular value mass is concentrated at the beginning:
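
A minimal plotting sketch (assuming s from the decomposition above):


plt.plot(s)
plt.title("Singular values of img_gray")
plt.show()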

This means we can keep only the leading part of s when reconstructing the image.
To reconstruct the image from U, s and Vt, we first need to expand s into an 80 * 170 matrix:

# rebuild a full 80 * 170 diagonal matrix Sigma from the singular values
import numpy as np
Sigma = np.zeros((80, 170))
for i in range(80):
    Sigma[i, i] = s[i]

The original matrix can be reconstructed as U @ Sigma @ Vt. The difference between the original matrix and the reconstruction can be measured with linalg.norm:


linalg.norm(img_gray - U @ Sigma @ Vt)

Or use np.allclose to check whether the two matrices are approximately equal:


np.allclose(img_gray, U @ Sigma @ Vt)

Alternatively, keep only the first 10 singular values, redraw the image, and compare it with the original:


k = 10
# keep only the first k singular values
approx = U @ Sigma[:, :k] @ Vt[:k, :]
plt.imshow(approx, cmap="gray")

As you can see, the difference is not large:
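
To see how the quality depends on the number of retained singular values, a small loop (a sketch, assuming U, Sigma, Vt and img_gray from above) compares the relative reconstruction error for several choices:


for r in (5, 10, 20, 40):
    approx_r = U @ Sigma[:, :r] @ Vt[:r, :]
    err = linalg.norm(img_gray - approx_r) / linalg.norm(img_gray)
    print(r, err)   # the error shrinks as r grows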

Compression of the original image

The previous section showed how to compress the grayscale image; how do we compress the original color image?

We can again use linalg.svd to decompose the matrix.

However, some preprocessing is required, because img_array of the original image is an (80, 170, 3) matrix; we removed the transparency and kept only the R, G, and B channels.

Before decomposing, we need to move the axis that should not be transformed (the color channel) to the front, i.e. from index 2 to index 0, and then run SVD:


img_array_transposed = np.transpose(img_array, (2, 0, 1))
print(img_array_transposed.shape)

U, s, Vt = linalg.svd(img_array_transposed)
print(U.shape, s.shape, Vt.shape)

Now s is a (3, 80) matrix, so it is missing one dimension compared with the matrices it must be multiplied with. To reconstruct the image, it needs to be expanded into diagonal form, and then the reconstructed image can be displayed:


# one Sigma matrix per color channel, with that channel's singular values on the diagonal
Sigma = np.zeros((3, 80, 170))

for j in range(3):
    np.fill_diagonal(Sigma[j, :, :], s[j, :])

reconstructed = U @ Sigma @ Vt
print(reconstructed.shape)

plt.imshow(np.transpose(reconstructed, (1, 2, 0)))

Of course, you can also keep only the first k singular values to compress the color image:


approx_img = U @ Sigma[..., :k] @ Vt[..., :k, :]
print(approx_img.shape)
plt.imshow(np.transpose(approx_img, (1, 2, 0)))

The reconstructed image is as follows:

Although some accuracy is lost, the image is still clearly recognizable.

Summary

Image transformations involve many linear algebra operations. You can use this article as a worked example and study it in detail.
