Time: 2021-03-14

# Link to the original text: http://tecdat.cn/?p=20781

### What is a neural network?

Artificial neural networks were originally developed by researchers trying to mimic the neurophysiology of the human brain. By combining many simple computing elements (neurons or units) into a highly interconnected system, these researchers hoped to produce complex phenomena such as intelligence. Neural networks are a flexible class of nonlinear regression and discriminant models. By detecting complex nonlinear relationships in data, neural networks can help make predictions for practical problems.

Neural networks are particularly useful for prediction problems with the following characteristics:

• No mathematical formula is known that relates the inputs to the outputs.
• A predictive model is more important than an interpretable model.
• There is a large amount of training data.

Common applications of neural networks include credit risk assessment, marketing and sales forecasting.

The `neuralNet` action set is based on the multilayer perceptron (MLP), which has the following characteristics:

• Accepts any number of inputs
• Uses a linear combination function in the hidden and output layers
• Uses a sigmoid (S-shaped) activation function in the hidden layers
• Has one or more hidden layers, each containing any number of units

### Using neural network functions

The `neuralNet` action trains the network by minimizing an objective function.

When developing a neural network, many parameters must be chosen: the number of inputs, the basic network architecture, the number of hidden layers, the number of units in each hidden layer, the activation functions, and so on.

You may not need any hidden layers at all. Linear models and generalized linear models work well in many applications. And even when the function to be learned is mildly nonlinear, a simple linear model may outperform a complicated nonlinear model if there is too little data, or too much noise, to estimate the nonlinearity accurately. The simplest approach is to start with a network that has no hidden units, then add one hidden unit at a time, estimating the error of each network. Stop adding hidden units when the error increases.
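The incremental search just described can be sketched as a macro loop. This is purely illustrative: the table, variable, and parameter names follow the `annTrain` example later in this article, and the early-stopping comparison is left as a comment because reading the validation error back from the results depends on how you capture the action output.

```sas
%macro grow_hidden(maxUnits=8);
    %do h = 1 %to &maxUnits;
        proc cas;
            /* Train an MLP with &h units in a single hidden layer */
            neuralNet.annTrain /
                table=trnTable
                validTable=vldTable
                target="species"
                inputs={"sepallength","sepalwidth","petallength","petalwidth"}
                nominals={"species"}
                hiddens={&h}
                seed=12345;
        run;
        /* Compare this run's validation error with the previous run's;
           stop adding hidden units once the error starts to increase. */
    %end;
%mend grow_hidden;

%grow_hidden(maxUnits=8);
```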

Given enough data, enough hidden units, and enough training time, an MLP with only one hidden layer can learn almost any function to arbitrary accuracy.

### Generating standalone SAS scoring code for the neural network model

After a neural network model has been trained and validated, it can be used to score new data. New data can be scored in several ways. One method is to submit the new data, run the model, and generate the score output through SAS Enterprise Miner or SAS Visual Data Mining and Machine Learning.

This example shows how to use the `neuralNet` action set to generate standalone SAS scoring code for an ANN model. The SAS scoring code can be run in SAS environments that do not have a SAS Enterprise Miner license.

### Creating and training neural networks

The `annTrain` action creates and trains an artificial neural network (ANN) for classification or regression.

This example uses the `Iris` data set to create an MLP neural network. The `Iris` data, published by Fisher (1936), contain 150 observations. Sepal length, sepal width, petal length, and petal width were measured in millimeters on 50 specimens of each of three species. The four measurements become the input variables, and the species name becomes the nominal target variable. The aim is to predict the species of an iris from the measurements of its petals and sepals.

You can load the data set into a session with the following DATA step.

```
data mycas.iris;
    set sashelp.iris;
run;
```

The `Iris` data contain no missing values. This matters because the `annTrain` action removes observations with missing data from model training. If the input data for a neural network analysis contain many observations with missing values, the missing values should be replaced or imputed before model training. Because the `Iris` data contain no missing values, this example performs no imputation.

This example uses `annTrain` to create and train a neural network that predicts the iris species from the length and width (in mm) of the sepals and petals.

```
proc cas;
    /* Hypothetical PROC CAS wrapper: only the parameters below appear in
       the original fragment; the training table and output model table
       names are taken from the surrounding text. */
    neuralNet.annTrain /
        table=trnTable
        target="species"
        inputs={"sepallength","sepalwidth","petallength","petalwidth"}
        nominals={"species"}
        hiddens={2}
        maxIter=1000
        seed=12345
        randDist="UNIFORM"
        scaleInit=1
        combs={"LINEAR"}
        targetAct="SOFTMAX"
        errorFunc="ENTROPY"
        std="MIDRANGE"
        validTable=vldTable
        modelTable={name="Nnet_train_model", replace=true};
run;
```

1. Use the `sampling.stratified` action to partition the input `Iris` data by the target variable `Species`.
2. Add a partition indicator column named `_PartInd_` to the output table. The `_PartInd_` column contains integer values that map to the data partitions.
3. Create a sampling partition consisting of 30% of the table observations, stratified by `Species`. The remaining 70% of the observations form the second partition.
4. Specify `12345` as the random seed value for the sampling action.
5. Name the output table created by the `sampling.stratified` action (which contains the new partition indicator column) `iris_partitioned`. If a table with that name already exists in memory, it is overwritten with the new `iris_partitioned` table content.
6. Copy all variables from the source table to the sampled table.
7. Use the newly added partition column to create separate tables for neural network training and validation. The training table `trnTable` is the subset of observations in `iris_partitioned` whose integer value in the `_PartInd_` column equals 0.
8. The validation table `vldTable` is the subset of observations in `iris_partitioned` whose integer value in the `_PartInd_` column equals 1.
9. The `annTrain` action uses the `trnTable` table, with target variable `Species`, to create and train the MLP neural network.
10. Specify the four input variables as the analysis variables for the ANN analysis.
11. Request that the target variable `Species` be treated as a nominal variable in the analysis.
12. Specify the number of hidden neurons for each hidden layer in the feedforward neural network model. For example, `hiddens={2}` specifies one hidden layer with two hidden neurons.
13. Specify the maximum number of iterations to perform while seeking convergence of the objective function.
14. Specify the random seed used for random number generation by the training action.
15. Request that a uniform distribution be used to randomly generate the initial network connection weights.
16. Specify the scale factor for the connection weights, relative to the number of units in the previous layer. The default value of the `scaleInit` parameter is 1; setting `scaleInit` to 2 increases the scale of the connection weights.
17. Specify a linear combination function for the neurons in each hidden layer.
18. Assign the activation function for the neurons in the output layer. By default, the softmax function is used for nominal targets.
19. Specify the error function used to train the network. Entropy is the default for nominal targets.
20. Specify the standardization to use for interval variables. When the value of the `std` parameter is `MIDRANGE`, variables are standardized to the range 0 to 1.
21. Specify the name of the input table to use as the validation table. This can be combined with the `optmlOpt` parameter to stop the iteration process early.
22. Name `Nnet_train_model` as the output model table.
23. Enable the optimization options of the neural network solver.
24. Specify a maximum of 250 iterations for the optimization, and 1E-10 as the stopping threshold for the objective function.
25. Enable the LBFGS algorithm. LBFGS is an optimization algorithm in the family of quasi-Newton methods; it approximates the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm using a limited amount of computer memory.
26. Use the `frequency` parameter to set the validation options. When the value of the `frequency` parameter is 1, validation is performed at each epoch; when it is 0, no validation is performed.
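Steps 1–8 describe actions that are not shown in the code fragment above. A minimal sketch of the stratified sampling and the partition split, assuming the in-memory `iris` table loaded earlier and the `mycas` caslib from the DATA step, might look like this:

```sas
proc cas;
    /* Stratify by Species and flag a 30% sample (_PartInd_ = 1),
       copying all source variables to the output table */
    sampling.stratified /
        table={name="iris", groupBy={"species"}}
        samppct=30
        partind=true
        seed=12345
        output={casOut={name="iris_partitioned", replace=true},
                copyVars="ALL"};
run;

/* Split on the partition indicator: the unsampled 70%
   (_PartInd_ = 0) becomes the training table, and the 30%
   sample (_PartInd_ = 1) becomes the validation table */
data mycas.trnTable;
    set mycas.iris_partitioned;
    where _PartInd_ = 0;
run;

data mycas.vldTable;
    set mycas.iris_partitioned;
    where _PartInd_ = 1;
run;
```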

The output displays an overview of the data.

Output 1: Column information (from the `table.columnInfo` results)

If the `table.fetch` action is used on the input table, you can view the sample data rows shown in Output 2.

Output 2: Fetched rows (from the `table.fetch` results)

If the `simple.freq` action is used on the input table, you can verify that each of the three species has 50 observations, for a total of 150 observations in the input data table, as shown in Output 3.

Output 3: Species frequency (from the `simple.freq` results)

After the `neuralNet.annTrain` training process completes successfully on the `Iris` input table, the results display the training iteration history, including the objective function, loss, and validation error columns, as shown in Output 4.

Output 4: Optimization iteration history (from the `neuralNet.annTrain` results)

Below the iteration history table, you should see the convergence status table. For a successful neural network model, the convergence status should report "Optimization converged", as shown in Output 5.

Output 5: Convergence status

A successful training run also outputs the model summary results, as shown in Output 6.

Output 6: Model information

These results restate the key model-building factors: the model type; the target variable; the neural network inputs; a summary of the hidden and output nodes; the numbers of weight and bias parameters; the final objective value; and the misclassification error from scoring the validation data set.

At the bottom of the table, you will see the final misclassification error percentage, determined from the validation data. If you use this neural network model as a prediction function on data with the same distribution as the `Iris` validation table, you can expect about 93%–94% of the species predictions to be correct.

### Using the neural network model to score input data

After a neural network model has been trained and validated, it can be used to score new data. The most common technique is to use the SAS Enterprise Miner or SAS Visual Data Mining and Machine Learning environment to generate the score output: submit the new data and run the model to score it.

Once the neural network has been trained, we can use the model with the `annScore` action to score new input data, as follows:

```
table=vldTable
modelTable="train_model";
```

1. Identify the training data table. The training data are the observations in the `iris_partitioned` table whose value in the partition indicator column (`_PartInd_`) is 0.
2. Identify the validation data table. The validation data are the observations in the `iris_partitioned` table whose value in the partition indicator column (`_PartInd_`) is 1.
3. Score the training data. Submit the input data to be scored by the trained neural network model. Because the data scored in this code block is the model training data, you should expect the scoring code to read all 105 observations. The training data contains the known target values, so when scoring it you should expect a classification error of 0%.
4. Score the validation data. This action submits the input data to be scored by the trained neural network model. The validation data contains the known target values, but the training algorithm never read the validation data. The algorithm predicts the target value for each observation in the validation data and then compares the predicted values with the known values. The classification error percentage is calculated by subtracting the percentage of correctly predicted classifications from 100%. A lower classification error percentage usually indicates better model performance.
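A minimal sketch of the two scoring calls (hypothetical: it assumes the model table is named `train_model` as in the fragment above, and that `trnTable` and `vldTable` were created from the partition indicator):

```sas
proc cas;
    /* 3. Score the training data (105 observations, _PartInd_ = 0) */
    neuralNet.annScore /
        table=trnTable
        modelTable="train_model"
        copyVars={"species"}
        casOut={name="scored_trn", replace=true};
run;

proc cas;
    /* 4. Score the validation data (45 observations, _PartInd_ = 1) */
    neuralNet.annScore /
        table=vldTable
        modelTable="train_model"
        copyVars={"species"}
        casOut={name="scored_vld", replace=true};
run;
```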

The validation data contained 30% of the original input observations, stratified by the target variable `Species`. The original data contained 50 observations of each species, so the validation data (30%) contain 15 observations of each of the three species, for a total of 45 observations. If 42 of the 45 observations in the validation data are classified correctly, the model's error is 6.67%.
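The error arithmetic in the paragraph above, written out:

```latex
\text{error} \;=\; 1 - \frac{\text{correctly classified}}{\text{total}}
             \;=\; 1 - \frac{42}{45}
             \;=\; \frac{3}{45}
             \;\approx\; 6.67\%
```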
