Security threats of artificial intelligence: analysis of attack and defense in deep learning

Time: 2021-3-6

On September 15, "All Things Intelligent: Baidu World 2020" was held online. Together with CCTV News, the conference presented Baidu AI's latest technologies, products, and solutions to the industry, partners, users, and media in an online format. In the Baidu PaddlePaddle and ecosystem open-class segment, Zhong Zhenyu, a senior security researcher at Baidu Research, gave a technical talk on the security issues and protection of deep learning models.


In the era of abundant data, computers can learn from data on their own and turn data into knowledge. Deep learning is one of the most popular machine learning techniques. Its essence is to build models with many hidden layers, train them on massive data so that they learn more useful features, and thereby improve the accuracy of classification or prediction.

Generally speaking, image recognition means grasping the core visual features of the data in order to identify and classify it. For example, to judge whether a picture shows a motorcycle, it is enough to grasp features such as "two wheels" and "pedals". In the past, because image recognition accuracy was low, this kind of judgment was hard for a machine to make; the emergence of deep learning made the problem easy to solve.
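As a rough illustration of what "a model with many hidden layers" looks like in code, here is a minimal convolutional classifier sketched in PyTorch (used here instead of PaddlePaddle purely for brevity; the layer sizes and the 10-class setup are placeholders, not anything specific to the talk):

```python
import torch
import torch.nn as nn

# A minimal convolutional classifier: stacked hidden layers learn
# progressively more abstract features (edges -> parts -> objects).
class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SmallCNN()
logits = model(torch.randn(1, 3, 32, 32))  # one stand-in 32x32 RGB image
print(logits.argmax(dim=1))                # predicted class index
```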

In recent years, with the development of deep learning and the emergence of various models, security applications based on deep learning have become a hot research direction in computer security. It is not news in the industry that deep learning models are vulnerable to adversarial examples. By adding subtle perturbations to image data that are imperceptible to human senses, an attacker can "fool" the model into calling a deer a horse, or even into seeing something that is not there. To carry out this kind of attack, the attacker typically needs to extract the model's structure and parameters and then use a specific algorithm to generate "adversarial examples" that induce the model to produce wrong, or even attacker-chosen, results.
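The talk does not name a specific generation algorithm, so as an illustration here is the canonical one, the fast gradient sign method (FGSM), sketched in PyTorch with a stand-in linear classifier. The perturbation is computed from the model's own gradients, which is exactly why access to model internals matters:

```python
import torch
import torch.nn.functional as F

# Stand-in classifier; any differentiable model works the same way.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))

def fgsm(model, x, label, eps=0.03):
    """Fast Gradient Sign Method: nudge every pixel by +/-eps in the
    direction that most increases the model's loss on the true label."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), label).backward()
    x_adv = x + eps * x.grad.sign()          # subtle, hard for humans to see
    return x_adv.clamp(0.0, 1.0).detach()    # stay in the valid pixel range

x = torch.rand(1, 3, 32, 32)                 # stand-in "clean" image
y = torch.tensor([3])                        # its true class index
x_adv = fgsm(model, x, y)
print(model(x).argmax(1), model(x_adv).argmax(1))  # labels may now differ
```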

According to the talk, Baidu security researchers have carried out many experiments based on this principle in the real, physical world.

At Black Hat Europe, we recreated David Copperfield's famous trick of making the Statue of Liberty disappear: by controlling a display mounted on the back of a Lexus, we made the well-known object detection model YOLOv3 completely unable to recognize the Lexus. Similarly, we made an object detector mistake a stop sign for a speed limit sign. One can imagine the trouble such recognition errors would cause in safety-critical driving scenarios.


Of course, some of the experiments above rely on detailed knowledge of the deep learning model. An attack that knows the model's internal structure in advance and uses specific algorithms to generate adversarial examples is called a "white-box attack". In industries with high security requirements, such as speech recognition and autonomous driving, however, attackers often cannot obtain internal details such as the model architecture and training data, so their knowledge of the model is limited. An attack mounted under those conditions is called a "black-box attack". Black-box attacks are obviously harder, so AI developers are well advised to protect their models and keep attackers from learning their internal structure.
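To make the distinction concrete, here is a sketch of a query-only attack: a naive random search, in PyTorch, that never touches gradients and sees only the prediction scores the target exposes. This is an illustrative toy of the black-box setting, not any specific published method:

```python
import torch

@torch.no_grad()
def random_search_attack(predict, x, label, eps=0.05, queries=500):
    """Query-only (black-box) attack sketch: propose small random
    perturbations and keep whichever most lowers the true-class score."""
    best = x.clone()
    best_p = predict(best)[0, label]
    for _ in range(queries):
        candidate = (x + eps * torch.randn_like(x).sign()).clamp(0, 1)
        p = predict(candidate)[0, label]
        if p < best_p:                 # progress without any gradient access
            best, best_p = candidate, p
    return best

# The attacker sees only this function, never the model internals.
model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
predict = lambda x: torch.softmax(model(x), dim=1)
x_adv = random_search_attack(predict, torch.rand(1, 3, 32, 32), label=3)
```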

But is protecting your own model's construction enough? Baidu security researchers recently found that black-box models are not necessarily more secure.


We found that many production classification models are built on top of a handful of pre-trained models, and those pre-trained models are public. When the attacker shifts the attack target from the black-box model to its "parent" model (we use a fingerprinting technique to match the black-box model to its parent), the attack becomes considerably easier. Adversarial examples generated by successfully attacking the parent model can then, thanks to attack transferability, effectively attack the black-box model as well.
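A minimal sketch of that transfer step, assuming a public pre-trained surrogate from torchvision (the fingerprinting step that matches the black-box model to its parent is not reproduced here): craft the adversarial example on the white-box surrogate, then replay it against the black-box target.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

# White-box surrogate: a public pre-trained "parent" model
# (downloads ImageNet weights on first use; torchvision >= 0.13).
surrogate = resnet18(weights="IMAGENET1K_V1").eval()

def craft_on_surrogate(x, label, eps=0.03):
    """Generate the adversarial example against the surrogate (FGSM here),
    where gradients are fully available."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(surrogate(x), label).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)      # stand-in input image
y = torch.tensor([207])             # stand-in ImageNet class index
x_adv = craft_on_surrogate(x, y)

# Transfer step: the same x_adv is then sent to the black-box target,
# which we can only query. `black_box_predict` is a hypothetical stand-in.
# assert black_box_predict(x_adv) != black_box_predict(x)
```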

At the end of the open class, the Baidu security researcher introduced Baidu Security's defenses against adversarial examples and its approach of hardening models through adversarial training to improve the robustness of deep learning models. Baidu Security's research on the security of AI algorithms covers deep learning model robustness testing, formal verification, machine recognition, real-time detection of malicious samples, and black-box and white-box attack and defense.
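Adversarial training, the hardening technique mentioned above, mixes adversarial examples into each training batch so the model learns to resist them. A minimal FGSM-based sketch in PyTorch follows; this is one common variant, not necessarily Baidu's exact recipe, and the data and model are stand-ins:

```python
import torch
import torch.nn.functional as F

model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

def fgsm(x, y, eps=0.03):
    """Attack the current model to get this batch's adversarial examples."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

for step in range(100):                     # stand-in training loop
    x = torch.rand(8, 3, 32, 32)            # stand-in batch of images
    y = torch.randint(0, 10, (8,))
    x_adv = fgsm(x, y)
    # Train on clean and adversarial inputs so the decision boundary
    # becomes robust to the perturbations the attack exploits.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```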


On the adversarial side of deep learning, we have open-sourced AdvBox and the Perceptron benchmark tool on GitHub. Perceptron provides a standard evaluation method for assessing the robustness of deep learning models, as well as an effective standard dataset for improving model robustness. AdvBox integrates the industry's deep learning adversarial algorithms; it has been open-sourced on GitHub, presented at international industry conferences such as Black Hat and DEF CON, and has attracted attention and recognition from the global security community. AdvBox also supports Baidu's open-source deep learning platform PaddlePaddle and today's other mainstream deep learning frameworks. It can efficiently construct adversarial datasets with the latest generation methods, supporting statistical analysis of adversarial examples, attacks on new AI applications, and hardening of business AI models, and thus provides important support for model security research and applications.
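As an illustration of what such a robustness benchmark measures (a generic sketch in PyTorch, not AdvBox's or Perceptron's actual API): accuracy under attack as the perturbation budget eps grows. The `fgsm` attack, model, and data below are all stand-ins.

```python
import torch
import torch.nn.functional as F

def accuracy_under_attack(model, data, attack, eps):
    """Share of samples still classified correctly after each one is
    perturbed by `attack` within budget `eps` (one robustness-curve point)."""
    correct, total = 0, 0
    for x, y in data:
        x_adv = attack(model, x, y, eps)
        correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

def fgsm(model, x, y, eps):
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
data = [(torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,)))
        for _ in range(4)]                  # stand-in evaluation set

for eps in (0.0, 0.01, 0.03, 0.1):          # sweep the perturbation budget
    print(eps, accuracy_under_attack(model, data, fgsm, eps))
```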

We hope that through Baidu's security technologies and services, more people can enjoy the convenience that technology brings, and more enterprises can obtain more secure AI solutions.

Click the link and jump to the 1:43:00 mark to view the complete course video:
https://haokan.baidu.com/v?vi…