Content summary: Land classification is one of the important applications of remote sensing imagery. This article introduces several common land classification methods and uses open-source semantic segmentation code to build a land classification model.
Source: HyperAI
Keywords: machine vision, semantic segmentation, remote sensing datasets
Remote sensing imagery is important data for surveying and mapping geographic information. It is of great significance for monitoring geographical conditions and updating geographic information databases, and plays an increasingly important role in military, commercial, and civilian fields.
In recent years, with improvements in national satellite imaging capability, the efficiency of remote sensing data acquisition has greatly improved, forming a landscape in which low-spatial-resolution, high-spatial-resolution, wide-angle, multi-angle, radar, and other sensors coexist.
Landsat 2 in orbit, collecting Earth remote sensing data
The satellite, the second in NASA's Landsat program, was launched in 1975 to collect global seasonal data at medium resolution.
A wide range of sensors meets the needs of Earth observation for different purposes. However, this also causes inconsistent remote sensing data formats and heavy storage requirements, so image processing often faces great challenges.
Take land classification as an example: in the past, classifying land from remote sensing images often relied on extensive manual labeling and statistics, taking months or even a year. In addition, because land types are complex and diverse, human statistical errors were inevitable.
With the development of artificial intelligence, the acquisition, processing, and analysis of remote sensing images have become more intelligent and efficient.
Common land classification methods
Commonly used land classification methods fall into three categories: traditional methods based on GIS, methods based on machine learning algorithms, and methods based on neural network semantic segmentation.
Traditional method: classifying with GIS
GIS, also known as a geographic information system, is a tool frequently used in remote sensing image processing.
It integrates technologies such as relational database management, efficient graphics algorithms, interpolation, zoning, and network analysis, making spatial analysis easy.
Spatial analysis of the Eastern Branch of the Elizabeth River based on GIS
Using GIS spatial analysis, information about the spatial location, distribution, form, formation, and evolution of land types can be obtained, allowing land features to be identified and judged.
Machine learning: classification using algorithms
Traditional machine-learning land classification methods include supervised classification and unsupervised classification.
Supervised classification, also called training-based classification, compares and matches pixels of unknown classes against training sample pixels of confirmed classes, and on that basis classifies the entire image by land type.
In supervised classification, when the accuracy of the training samples is insufficient, the training area is usually reselected or manually revised to ensure the accuracy of the training sample pixels.
A remote sensing image (left) after supervised classification: red indicates construction land, green indicates non-construction land
Unsupervised classification requires no prior classification standard; instead, pixels are classified statistically according to their spectral characteristics in the image. This method is highly automated and requires little human intervention.
With machine learning algorithms such as support vector machines and maximum likelihood estimation, the efficiency and accuracy of both supervised and unsupervised classification can be greatly improved.
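As an illustration, unsupervised classification can be sketched with k-means clustering over pixel spectra. The image below is a random stand-in, not real remote sensing data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy multispectral image: 100x100 pixels, 4 spectral bands (random stand-in)
rng = np.random.default_rng(0)
image = rng.random((100, 100, 4))

# Flatten to (n_pixels, n_bands) so each pixel is one sample
pixels = image.reshape(-1, 4)

# Cluster pixels into 7 spectral classes without any labelled training data
kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pixels)
class_map = kmeans.labels_.reshape(100, 100)

print(class_map.shape)  # (100, 100): one cluster id per pixel
```

The resulting cluster ids still need a human to assign them land-type meanings, which is why unsupervised results are usually interpreted afterwards.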
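A minimal supervised sketch with a support vector machine on synthetic labelled pixel spectra (the data and per-class offsets are invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in: 1000 labelled pixels, 4 spectral bands, 3 land classes
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = rng.integers(0, 3, size=1000)
X += y[:, None] * 0.5  # shift each class's spectra so classes are separable

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on confirmed-class pixels, then classify the held-out pixels
clf = SVC(kernel="rbf").fit(X_train, y_train)
acc = accuracy_score(y_test, clf.predict(X_test))
print("test accuracy:", acc)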
Neural network: semantic segmentation and classification
Semantic segmentation is an end-to-end pixel-level classification method that enhances a machine's understanding of a scene; it is widely used in autonomous driving, land planning, and other fields.
Semantic segmentation based on deep neural networks outperforms traditional machine learning methods on pixel-level classification tasks.
Recognizing and judging a remote sensing image with a semantic segmentation algorithm
High-resolution remote sensing scenes are complex and rich in detail, and the spectral differences between ground objects are uncertain, which easily leads to low segmentation accuracy or even failed segmentation.
Applying semantic segmentation to high-resolution and ultra-high-resolution remote sensing images extracts pixel features more accurately and identifies specific land types quickly and precisely, thus speeding up remote sensing image processing.
Common open source models of semantic segmentation
Common open-source models for pixel-level semantic segmentation include FCN, SegNet, and DeepLab.
1. Fully Convolutional Network (FCN)
Characteristics: end-to-end semantic segmentation
Advantages: does not restrict input image size; general-purpose and efficient
Disadvantages: cannot perform fast real-time inference; results are not fine-grained and are insensitive to image details
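A toy FCN-style network in PyTorch illustrating the idea (not the original FCN architecture): convolutional features, a 1x1 classification conv, and bilinear upsampling back to the input size:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFCN(nn.Module):
    """Minimal FCN-style head: conv features, 1x1 class conv,
    then bilinear upsampling to the input size (illustrative only)."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # 1x1 conv turns feature maps into per-pixel class scores
        self.classifier = nn.Conv2d(32, num_classes, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        scores = self.classifier(self.features(x))
        # Upsample the coarse score map back to input resolution
        return F.interpolate(scores, size=(h, w),
                             mode="bilinear", align_corners=False)

net = TinyFCN()
out = net(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 7, 64, 64])
```

Because there are no fully connected layers, the same network accepts any input size, which is the FCN property the list above refers to.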
2. SegNet
Characteristics: transfers the max-pooling indices to the decoder to improve segmentation resolution
Advantages: fast, efficient training with a small memory footprint
Disadvantages: inference is not purely feed-forward; optimization is needed to determine the map labels
3. DeepLab
DeepLab, released by Google AI, applies deep convolutional neural networks (DCNNs) to semantic segmentation and includes four versions: v1, v2, v3, and v3+.
To address the information loss caused by pooling, DeepLab-v1 introduced atrous (dilated) convolution, which enlarges the receptive field without adding parameters or losing information.
Process demonstration of the DeepLab-v1 model
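The effect of atrous convolution can be checked directly in PyTorch: a dilated 3x3 kernel covers a larger receptive field with exactly the same parameter count as an ordinary 3x3 conv:

```python
import torch
import torch.nn as nn

# A 3x3 conv with dilation=2 covers a 5x5 receptive field with the
# same 9 weights per channel as an ordinary 3x3 conv.
plain  = nn.Conv2d(1, 1, kernel_size=3, padding=1, dilation=1, bias=False)
atrous = nn.Conv2d(1, 1, kernel_size=3, padding=2, dilation=2, bias=False)

n_plain  = sum(p.numel() for p in plain.parameters())
n_atrous = sum(p.numel() for p in atrous.parameters())
print(n_plain, n_atrous)  # 9 9 -- identical parameter count

# With matching padding, both keep the spatial resolution unchanged
x = torch.randn(1, 1, 32, 32)
print(plain(x).shape, atrous(x).shape)  # both torch.Size([1, 1, 32, 32])
```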
DeepLab-v2 builds on v1 by adding multi-scale parallel branches, solving the problem of segmenting objects of different sizes simultaneously.
DeepLab-v3 applies atrous convolution in cascaded modules and improves the ASPP (atrous spatial pyramid pooling) module.
DeepLab-v3+ uses the ASPP module in an encoder-decoder structure, which can recover fine object edges and refine the segmentation results.
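A simplified ASPP-style block can make the "multi-scale parallel" idea concrete: parallel atrous convolutions at several dilation rates, concatenated and fused by a 1x1 conv. This is an illustrative sketch, not DeepLab's exact module:

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """Illustrative ASPP-style block: parallel atrous convs at several
    rates, concatenated and fused with a 1x1 conv (simplified)."""
    def __init__(self, in_ch=64, out_ch=64, rates=(1, 6, 12, 18)):
        super().__init__()
        # One branch per dilation rate; padding=r keeps spatial size
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        # Each branch sees the same input at a different effective scale
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

aspp = SimpleASPP()
y = aspp(torch.randn(1, 64, 35, 35))
print(y.shape)  # torch.Size([1, 64, 35, 35])
```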
Model training preparation
Objective: develop a 7-class land classification model based on DeepLab-v3+
Data: 304 remote sensing images of an area from Google Earth. Besides the original images, the data include professionally annotated 7-class maps, 7-class masks, 25-class maps, and 25-class masks. The image resolution is 560 x 560 and the spatial resolution is 1.2 m.
Top: original images; bottom: the corresponding 7-class images
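A hypothetical PyTorch Dataset pairing each image with its 7-class mask could look like the following; the class name and in-memory tensors are assumptions standing in for the tutorial's real file loading:

```python
import torch
from torch.utils.data import Dataset

class LandCoverDataset(Dataset):
    """Hypothetical loader pairing each remote sensing image with its
    7-class mask; the layout is an assumption, not the tutorial's code."""
    def __init__(self, images, masks):
        # images: (N, 3, H, W) float tensor; masks: (N, H, W) long tensor
        self.images, self.masks = images, masks

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        return self.images[idx], self.masks[idx]

# Stand-in for the 304 images of 560x560 (four tiny samples here)
ds = LandCoverDataset(torch.randn(4, 3, 64, 64),
                      torch.randint(0, 7, (4, 64, 64)))
img, mask = ds[0]
print(img.shape, mask.shape)
```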
The parameter settings are as follows:
net = DeepLabV3Plus(backbone='xception')
criterion = CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.05,
                      momentum=0.9, weight_decay=0.00001)
lr_fc = lambda iteration: (1 - iteration / 400000) ** 0.9
exp_lr_scheduler = lr_scheduler.LambdaLR(optimizer, lr_fc, -1)
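Given that setup, a single training step could look like this. A tiny 1x1-conv network stands in for DeepLabV3Plus (which comes from the tutorial repo, not PyTorch itself) so the sketch runs anywhere:

```python
import torch
import torch.nn as nn
from torch import optim
from torch.optim import lr_scheduler

# Stand-in for DeepLabV3Plus: a 1x1 conv mapping RGB to 7 class scores
net = nn.Conv2d(3, 7, 1)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.05,
                      momentum=0.9, weight_decay=0.00001)
# Polynomial learning-rate decay, as in the tutorial's scheduler
exp_lr_scheduler = lr_scheduler.LambdaLR(
    optimizer, lambda it: (1 - it / 400000) ** 0.9)

# One illustrative training step on a random batch of image/mask pairs
images = torch.randn(2, 3, 64, 64)
masks = torch.randint(0, 7, (2, 64, 64))

optimizer.zero_grad()
loss = criterion(net(images), masks)  # per-pixel cross entropy
loss.backward()
optimizer.step()
exp_lr_scheduler.step()
print(float(loss))
```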
Computing platform: NVIDIA T4
Training framework: PyTorch v1.2
Iterations: 600 epochs
Training time: about 50 hours
IoU: 0.8285 (training data)
Accuracy: 0.7838 (training data)
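IoU and pixel accuracy for a pair of label maps can be computed as follows; this is a generic sketch, not the tutorial's evaluation code:

```python
import numpy as np

def pixel_metrics(pred, target, num_classes=7):
    """Mean IoU and overall pixel accuracy from two integer label maps."""
    acc = (pred == target).mean()
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                     # skip classes absent from both maps
            ious.append(inter / union)
    return np.mean(ious), acc

# Tiny 2x2 example: class IoUs are 1.0, 0.5, 0.5
pred = np.array([[0, 1], [1, 2]])
target = np.array([[0, 1], [2, 2]])
miou, acc = pixel_metrics(pred, target, num_classes=3)
print(round(miou, 3), acc)  # 0.667 0.75
```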
Direct link to the detailed training process:
Details of the public course on land classification of remote sensing images
The sample file in the tutorial is predict.ipynb. Running this file installs the environment and shows the recognition results of the existing model.
- Test image path:
- Mask image path:
- Predicted image path:
- Training data list: train.csv
- Test data list: test.csv
The trained model is saved as semantic_model/new_deeplabv3_cc.pt.
The model uses DeepLab-v3+, the loss is cross entropy, and the initial learning rate is 0.05.
To use the model we have already trained, load fix_deeplab_v3_cc.pt from the model folder; it can be called directly in predict.py.
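Loading the weights and predicting could look roughly like this; a stand-in module and an in-memory buffer replace the real DeepLabV3Plus and the fix_deeplab_v3_cc.pt file so the sketch runs anywhere:

```python
import io
import torch

# Stand-in for DeepLabV3Plus (the real class comes from the tutorial repo)
net = torch.nn.Conv2d(3, 7, 1)

# Save/load via an in-memory buffer, mimicking load of the released .pt file
buf = io.BytesIO()
torch.save(net.state_dict(), buf)
buf.seek(0)
net.load_state_dict(torch.load(buf))

net.eval()
with torch.no_grad():
    scores = net(torch.randn(1, 3, 64, 64))  # per-pixel class scores
    pred = scores.argmax(dim=1)              # per-pixel class ids 0..6
print(pred.shape)  # torch.Size([1, 64, 64])
```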
Question 1: What channels did you go through and what materials did you consult to develop this model?
Wang Yanxin: Mainly through technical communities, GitHub, and similar channels. I read some DeepLab-v3+ papers and related project cases to learn in advance what the pitfalls were and how to overcome them, so as to be fully prepared for any problems during model development.
Question 2: What obstacles did you encounter in the process, and how did you overcome them?
Wang Yanxin: The amount of data was insufficient, which led to mediocre IoU and accuracy. Next time we can try richer public remote sensing datasets.
Question 3: What other remote sensing directions would you like to try?
Wang Yanxin: This project classified land. Next, I want to combine machine learning with remote sensing to analyze ocean landscapes and ocean elements, or combine acoustic technology to try to identify and judge seabed terrain.
The amount of data used in this training was small, so IoU and accuracy on the training set were mediocre. You can also try training the model with existing public remote sensing datasets. In general, the more sufficient the training and the richer the training data, the better the model performs.
In the next article in this series, we collect and categorize 11 mainstream public remote sensing datasets; following the training approach presented here, you can use them to build a more complete model.
Reference: http://tb.sinomaps.com/CN/049…