Introduction to Artificial Intelligence, Book 1 (Python Neural Network Programming): a standard neural network code template

Time: 2021-9-19

1. Basic knowledge summary:

 

Explanation:

(1) X refers to the result of multiplying a layer's weight matrix W by that layer's input vector (for the connections from the input layer or from the hidden layer to the next layer); W is the weight matrix of that level, and a subscript such as "ho" indicates the weights between the hidden layer and the output layer.

(2) The third formula is the weight update formula: α is the learning rate (it determines the step size of each update), E_k is the error propagated back to layer k, O_k is the output of layer k, and the superscript T denotes the matrix transpose.

Interested readers can find the derivation of the weight update formula in the book on pages 72-76 (incidentally, the version of the formula on page 117 contains an error; the one on page 76 is correct).
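For reference, the three formulas summarized above can be written out as follows (a reconstruction from the descriptions in this section, using the book's notation; σ denotes the sigmoid function and α the learning rate):

```latex
X_{hidden} = W_{ih} \cdot I \\
O_{hidden} = \sigma(X_{hidden}) \\
\Delta W_{jk} = \alpha \, E_k \, O_k (1 - O_k) \cdot O_j^{T}
```

The third line is the gradient-descent update applied to both weight matrices in the code below.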

2. Neural network model and data:

(1) The model used here is the simplest three-layer neural network: an input layer, one hidden layer and an output layer:

 

(2) Data: although the code below only builds the network framework and does not itself load any data, note that the data is handled in matrix form, so pay attention to the transpositions in the code.
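As a quick illustration of the transposition issue (a minimal sketch with made-up shapes, independent of the class below): a plain Python list converted with `ndmin=2` becomes a row vector, so it must be transposed into a column vector before the weight matrix can multiply it.

```python
import numpy

# a plain list becomes a 1 x 3 row vector...
row = numpy.array([1.0, 0.5, -1.5], ndmin=2)
print(row.shape)    # (1, 3)

# ...but the network expects a 3 x 1 column vector,
# so the code transposes it with .T
col = row.T
print(col.shape)    # (3, 1)

# now a 2 x 3 weight matrix can multiply it: (2,3) x (3,1) -> (2,1)
w = numpy.zeros((2, 3))
print(numpy.dot(w, col).shape)    # (2, 1)
```

This is exactly what the `numpy.array(..., ndmin=2).T` calls in `train` and `query` do.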

3. Code and its comments

```python
import numpy
import scipy.special

class neuralNetwork:
    # initialize the neural network
    def __init__(self, inputnodes, hiddennodes, outputnodes, learningrate):
        # number of nodes in the input, hidden and output layers
        self.inodes = inputnodes
        self.hnodes = hiddennodes
        self.onodes = outputnodes
        # initialize the weight matrices, sampled from a normal distribution
        # with standard deviation 1/sqrt(number of incoming links)
        self.wih = numpy.random.normal(0.0, pow(self.hnodes, -0.5), (self.hnodes, self.inodes))
        self.who = numpy.random.normal(0.0, pow(self.onodes, -0.5), (self.onodes, self.hnodes))
        # learning rate of the neural network
        self.lr = learningrate
        # sigmoid (S-shaped) activation function
        self.activation_function = lambda x: scipy.special.expit(x)

    # train the neural network
    def train(self, inputs_list, targets_list):
        # convert the input and target lists into 2-D column vectors
        inputs = numpy.array(inputs_list, ndmin=2).T
        targets = numpy.array(targets_list, ndmin=2).T
        # compute the hidden layer output: the sigmoid of the weighted input sum
        hidden_inputs = numpy.dot(self.wih, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)
        # compute the output layer output
        final_inputs = numpy.dot(self.who, hidden_outputs)
        final_outputs = self.activation_function(final_inputs)
        # compute the output layer error (target - actual)
        output_errors = targets - final_outputs
        # compute the hidden layer error: output errors split by the weights
        hidden_errors = numpy.dot(self.who.T, output_errors)
        # update the weights with gradient descent
        self.who += self.lr * numpy.dot((output_errors * final_outputs * (1.0 - final_outputs)), numpy.transpose(hidden_outputs))
        self.wih += self.lr * numpy.dot((hidden_errors * hidden_outputs * (1.0 - hidden_outputs)), numpy.transpose(inputs))

    # query the neural network
    def query(self, inputs_list):
        inputs = numpy.array(inputs_list, ndmin=2).T

        hidden_inputs = numpy.dot(self.wih, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)

        final_inputs = numpy.dot(self.who, hidden_outputs)
        final_outputs = self.activation_function(final_inputs)

        return final_outputs
```