Understanding TensorFlow

Time: 2020-11-27

Deep learning loosely imitates the behavior of biological neurons. Imagine a large number of neurons perceiving the external world through various stimuli and, from those stimuli, building up a model of that world.

Suppose we are given the following data pairs:

 x    y
-1   -3
 0   -1
 1    1
 2    3
 3    5
 4    7

We can use deep learning to discover the rule hidden in this data by learning from examples, much as humans come to understand the world through perception and practice.
First, let's look at the data visually to get an intuitive feel for it.

import numpy as np
import matplotlib.pyplot as plt

# The inputs and their corresponding outputs
xs = np.array([-1.0, 0, 1, 2, 3, 4], dtype=float)
ys = np.array([-3, -1, 1, 3, 5, 7], dtype=float)

plt.plot(xs, ys)
plt.show()

The resulting plot shows the points lying on a straight line.

To the naked eye, the relationship looks linear. Now let's train a model with deep learning (pretending we don't know the underlying rule).

import numpy as np
from tensorflow import keras

xs = np.array([-1.0, 0, 1, 2, 3, 4], dtype=float)
ys = np.array([-3, -1, 1, 3, 5, 7], dtype=float)

# A single fully connected layer with one input and one output
model = keras.models.Sequential([
    keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(xs, ys, epochs=500)

# Predict the output for a previously unseen input
print(model.predict(np.array([10.0])))

That is not much code for a deep learning program. Let's walk through it piece by piece.

model = keras.models.Sequential([
   ...
])

keras.models.Sequential() builds a sequential model in which layers are stacked one after another, like Lego bricks. (Keras also supports models with branches, but we don't need that here.)

Inside this sequential model we add a single layer: keras.layers.Dense. A Dense layer is a fully connected layer, meaning every neuron is connected to every neuron in the neighboring layers, a bit like a patch of cortex in the brain: you can imagine many input wires coming in at the front and many output wires going out at the back. Here we define only one input (input_shape=[1], a one-dimensional input) and only one output (units=1, a single numerical value).

Why only one dimension for the input and one for the output?
Because both x and y in our data are single numbers.
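To make this concrete, here is a minimal pure-NumPy sketch (not Keras itself) of what a Dense layer with one input and one output computes. The parameter values w and b below are chosen by hand purely for illustration; in a real model they start out random and are adjusted during training.

```python
import numpy as np

# A Dense layer with units=1 and a one-dimensional input computes
# y = w * x + b: one weight and one bias, two trainable parameters.
def dense_1_to_1(x, w, b):
    return w * x + b

xs = np.array([-1.0, 0, 1, 2, 3, 4], dtype=float)

# Arbitrary illustrative parameter values
print(dense_1_to_1(xs, w=0.5, b=0.1))
```

Training is nothing more than finding the values of w and b that best fit the data.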

After defining the model, we compile it. Compilation is also very simple: just call model.compile(). It does, however, require two arguments, and these are the relatively tricky part of this program.
First, we specify a loss function, loss. The loss function measures whether the model has learned anything: we need some function that judges how far the model's output is from the expected output. Here we use mean_squared_error, the mean squared error, which pushes the network's output to be as close as possible to the expected values.
The other argument is the optimizer. Here we use SGD (stochastic gradient descent). The optimizer's job is to adjust the model's parameters so as to minimize the loss value.
Keras provides many kinds of optimizers; here a gradient descent algorithm is used.
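To show what the loss function and optimizer actually do, here is a hand-rolled sketch (a simplification, not Keras's internal implementation) of computing the mean squared error and taking one gradient descent step for the model y = w * x + b:

```python
import numpy as np

xs = np.array([-1.0, 0, 1, 2, 3, 4], dtype=float)
ys = np.array([-3, -1, 1, 3, 5, 7], dtype=float)

def mse(y_true, y_pred):
    # Mean squared error: average of the squared differences
    return np.mean((y_true - y_pred) ** 2)

def sgd_step(w, b, lr=0.01):
    # One gradient descent step for the model y = w * x + b
    y_pred = w * xs + b
    # Gradients of the MSE loss with respect to w and b
    grad_w = np.mean(2 * (y_pred - ys) * xs)
    grad_b = np.mean(2 * (y_pred - ys))
    # Move the parameters a small step against the gradient
    return w - lr * grad_w, b - lr * grad_b

w, b = 0.0, 0.0
loss_before = mse(ys, w * xs + b)
w, b = sgd_step(w, b)
loss_after = mse(ys, w * xs + b)
print(loss_before, loss_after)  # the loss decreases after the step
```

Training simply repeats such steps many times, driving the loss lower and lower.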

Finally, model.fit(xs, ys, epochs=500) trains the model, iterating over the data 500 times.
Once training is complete, we can use the model to make a prediction with model.predict([10.0]).
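As a sanity check on what the model should learn: the data actually follows the rule y = 2x - 1 exactly, so after training, model.predict([10.0]) should return a value close to 19 (the exact number will vary slightly from run to run, since training is not perfect).

```python
import numpy as np

xs = np.array([-1.0, 0, 1, 2, 3, 4], dtype=float)
ys = np.array([-3, -1, 1, 3, 5, 7], dtype=float)

# Every data pair satisfies y = 2x - 1
assert np.array_equal(2 * xs - 1, ys)

# So a well-trained model should predict roughly this for x = 10:
print(2 * 10.0 - 1)  # → 19.0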

And that is a complete deep learning program, achieved in just a few lines of code.