Tag：Residual

Time：20201119
By Aniruddha Bhandari, compiled by VK, source: Analytics Vidhya. Summary: understand the concept of R-squared and adjusted R-squared, and the key differences between them. Introduction: When I started my journey into data science, the first algorithm I explored was linear regression. After understanding the concept of linear regression and […]
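The two quantities the article compares can be computed directly. Below is a minimal NumPy sketch (the data values are made up for illustration):

```python
import numpy as np

def r_squared(y_true, y_pred):
    """R-squared: fraction of the variance in y explained by the model."""
    ss_res = np.sum((y_true - y_pred) ** 2)            # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)   # total sum of squares
    return 1 - ss_res / ss_tot

def adjusted_r_squared(y_true, y_pred, n_features):
    """Adjusted R-squared: penalizes adding predictors that do not help."""
    n = len(y_true)
    r2 = r_squared(y_true, y_pred)
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)

y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.1, 7.2, 8.9])
print(r_squared(y_true, y_pred))                        # 0.995
print(adjusted_r_squared(y_true, y_pred, n_features=1)) # 0.9925
```

Unlike R-squared, the adjusted version can decrease when a new predictor adds no explanatory power, which is the key difference the article discusses.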

Time：20201117

Time：20201031
In this paper, we use a deep residual network with the adaptive parametric ReLU (APReLU) activation function to construct a network (with 9 residual modules; the number of convolution kernels is relatively small, a minimum of 8 and a maximum of 32). A preliminary attempt is made on the CIFAR-10 dataset. Among them, the adaptive parametric ReLU […]

Time：20201029
Continued from: Deep residual network + adaptive parametric ReLU activation function (parameter adjustment record 1) https://blog.csdn.net/dangqin… This paper continues to test the deep residual network with the adaptive parametric ReLU activation function. The number of residual modules is increased to 27, with everything else unchanged. The numbers of convolution kernels are still 8, 16, and 32. Continue to […]

Time：20201028
Continued from: Deep residual network + adaptive parametric ReLU activation function (parameter adjustment record 2) https://blog.csdn.net/dangqin… In this paper, we continue to test the performance of the deep residual network with the adaptive parametric ReLU activation function on the CIFAR-10 image set. The number of residual modules is still 27, and the numbers of convolution kernels are increased to 16, 32 and […]

Time：20201027
Continued from: Deep residual network + adaptive parametric ReLU activation function (parameter adjustment record 3) https://blog.csdn.net/dangqin… In this paper, the adaptive parametric ReLU activation function is used in a deep residual network to test its effect on the CIFAR-10 image set. Unlike the previous article, the structure of the residual module is modified. The original structure is two […]

Time：20201025
Continued from: Deep residual network + adaptive parametric ReLU activation function (parameter adjustment record 4) https://blog.csdn.net/dangqin… This paper continues to test the effect of the adaptive parametric ReLU (APReLU) activation function on the CIFAR-10 image set. Each residual module contains two 3 × 3 convolution layers. There are 27 residual modules, and the numbers of convolution kernels are 16, 32 […]
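The module structure described here (two 3 × 3 convolution layers with an identity shortcut) can be sketched in Keras. This is a minimal illustration, not the blog's original code; it omits batch normalization and uses a plain ReLU in place of APReLU:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, n_kernels):
    """One residual module: two 3x3 conv layers plus an identity shortcut."""
    shortcut = x
    y = layers.Conv2D(n_kernels, 3, padding='same', activation='relu')(x)
    y = layers.Conv2D(n_kernels, 3, padding='same')(y)
    return layers.add([shortcut, y])   # element-wise sum with the shortcut

# A 32x32 CIFAR-10 feature map with 16 channels, matching the smallest
# kernel count mentioned in the post.
inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = residual_block(inputs, n_kernels=16)
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 32, 32, 16)
```

With `padding='same'` and a matching kernel count, the block preserves the feature-map shape, so the shortcut addition is valid without a projection.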

Time：20201024
Continued from: Deep residual network + adaptive parametric ReLU activation function (parameter adjustment record 5) https://blog.csdn.net/dangqin… In this paper, we continue to adjust the parameters and test the effect of the adaptive parametric ReLU (APReLU) activation function on the CIFAR-10 image set. The basic principle of APReLU is shown in the following figure. First of all, from the previous adjustment, it […]

Time：20201021
Continued from: Deep residual network + adaptive parametric ReLU activation function (parameter adjustment record 7) https://blog.csdn.net/dangqin… In this paper, the number of layers is set very small, with only two residual modules, to test the effect of the adaptive parametric ReLU (APReLU) activation function on the CIFAR-10 image set. The Keras code is as follows: #!/usr/bin/env python3 […]

Time：20201018
In this paper, building on parameter adjustment record 10, the number of residual modules is increased from 27 to 60, and the effect of the deep residual network using the adaptive parametric ReLU (APReLU) activation function on the CIFAR-10 image set is tested. The Keras code is as follows: #!/usr/bin/env python3 # -*- coding: utf-8 -*- """ […]

Time：20201013
In parameter adjustment record 14, there were only two residual modules, and the model underfit. This time we try adding a residual module. The basic principle of the adaptive parametric ReLU activation function is as follows. The Keras code is as follows: #!/usr/bin/env python3 # -*- coding: utf-8 -*- """ Created on Tue Apr 14 […]

Time：20201012
In the previous parameter adjustment record 18, all ReLUs in the deep residual network (ResNet) were replaced with the adaptive parametric ReLU (APReLU). Since the input and output feature maps of APReLU have exactly the same size, APReLU can be embedded into any part of a neural network. In this paper, APReLU is placed […]
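The shape-preserving property described above can be seen in a sketch of an APReLU layer. This is a simplified illustration, not the blog's original code: the slope of the negative part is predicted per channel by a small sub-network, and the output always has exactly the same shape as the input:

```python
import tensorflow as tf
from tensorflow.keras import layers

class APReLU(layers.Layer):
    """Simplified adaptive parametric ReLU (APReLU) sketch.

    The negative-part slope is learned per channel from global statistics
    of the input, so the layer can be embedded anywhere in a network.
    """

    def build(self, input_shape):
        self.ch = int(input_shape[-1])
        self.dense1 = layers.Dense(self.ch, activation='relu')
        self.dense2 = layers.Dense(self.ch, activation='sigmoid')

    def call(self, x):
        pos = tf.nn.relu(x)   # positive part: max(x, 0)
        neg = x - pos         # negative part: min(x, 0)
        # Global average pooling drives the slope (the original method uses
        # separate averages of the positive and negative parts).
        stats = tf.reduce_mean(x, axis=[1, 2])
        alpha = self.dense2(self.dense1(stats))        # slopes in (0, 1)
        alpha = tf.reshape(alpha, [-1, 1, 1, self.ch])
        return pos + alpha * neg                       # same shape as x

x = tf.random.normal([2, 32, 32, 8])
y = APReLU()(x)
print(tuple(y.shape))  # (2, 32, 32, 8)
```

Because the output tensor has the same shape as the input, the layer is a drop-in replacement for ReLU at any point in the residual network.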