Tech giants jointly release a new framework to counter adversarial attacks that undermine machine learning

Time:2021-11-25


Microsoft, in cooperation with MITRE, IBM, NVIDIA and Bosch, has released a new open framework to help security analysts detect, respond to, and remediate adversarial attacks against machine learning (ML) systems.

The project, called the "Adversarial ML Threat Matrix", attempts to organize the different techniques that malicious adversaries employ to subvert machine learning systems.

Malware threatens the stability and security of AI applications

As artificial intelligence and machine learning are deployed in a variety of new applications, threat actors can not only abuse the technology to boost the capabilities of their malware, but also use it to deceive machine learning models into making wrong decisions, threatening the stability and security of AI applications.

Researchers at ESET, a well-known security software company, found last year that Emotet, an email-based malware that has powered several botnet-driven spam and ransomware campaigns, was using machine learning to improve its targeting.

Earlier this month, Microsoft warned of a new Android ransomware strain that includes a machine learning model which, although not yet integrated into the malware, could be used to fit the ransom note image onto the screen of a mobile device without any distortion.

In addition, researchers have studied so-called model inversion attacks, in which access to a model is abused to infer information about its training data.
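As a rough illustration of the idea (not tied to any specific incident described above), the sketch below assumes white-box access to a hypothetical trained PyTorch classifier `model` and uses gradient ascent on a blank input to reconstruct a representative image of a chosen class:

```python
# A minimal sketch of a model-inversion-style attack; `model` is an assumed
# trained image classifier, not anything from the report.
import torch
import torch.nn as nn

def invert_class(model: nn.Module, target_class: int,
                 input_shape=(1, 1, 28, 28), steps=500, lr=0.1):
    model.eval()
    # Start from a blank image and treat its pixels as trainable parameters.
    x = torch.zeros(input_shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target-class logit, with a small L2 penalty
        # to keep the reconstruction plausible.
        loss = -logits[0, target_class] + 1e-3 * x.pow(2).sum()
        loss.backward()
        optimizer.step()
        x.data.clamp_(0.0, 1.0)  # keep pixel values in a valid range
    return x.detach()
```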

Most enterprises do not have appropriate tools to protect machine learning systems

According to a Gartner report cited by Microsoft, 30% of all AI cyberattacks by 2022 are expected to use training-data poisoning, model theft, or adversarial samples to attack machine-learning-powered systems.
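To make the "adversarial samples" category concrete, here is a minimal sketch of the classic fast gradient sign method (FGSM); the classifier `model`, the input `image`, and its true `label` are placeholder assumptions rather than anything from the report:

```python
# A minimal FGSM sketch: perturb each pixel a small step in the direction
# that increases the model's loss, producing an adversarial sample.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.05):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```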

Microsoft said that despite these compelling reasons to secure machine learning systems, its survey of 28 enterprises found that most industry practitioners have yet to come to terms with adversarial machine learning. "25 of the 28 companies said they don't have the right tools in place to protect their machine learning systems," Microsoft said.

The Adversarial ML Threat Matrix aims to address threats arising from the weaponization of data with a curated set of vulnerabilities and adversary behaviors that Microsoft and MITRE have vetted as effective against machine learning systems.

Companies can use the Adversarial ML Threat Matrix to test the resilience of their AI models by simulating realistic attack scenarios, using a series of tactics to gain initial access to the environment, execute unsafe machine learning models, poison training data, and exfiltrate sensitive information through model stealing attacks.
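As a toy illustration of the training-data poisoning tactic, the sketch below flips a fraction of labels in scikit-learn's digits dataset and measures how test accuracy degrades; the dataset and model are stand-ins chosen for brevity, not part of the threat matrix itself:

```python
# A minimal label-flipping poisoning demo: the attacker corrupts a fraction
# of training labels and the deployed model's accuracy drops accordingly.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(poison_fraction: float) -> float:
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_poison = int(poison_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_poison, replace=False)
    # Flip the chosen labels to random *incorrect* classes (0-9 digits).
    y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, 10, size=n_poison)) % 10
    model = LogisticRegression(max_iter=2000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for fraction in (0.0, 0.1, 0.3):
    print(f"poisoned {fraction:.0%} of labels -> "
          f"test accuracy {accuracy_with_poisoning(fraction):.3f}")
```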

The new framework mirrors a widely used format to ease the learning curve for security analysts

Microsoft said: "The purpose of the Adversarial ML Threat Matrix is to position attacks on machine learning systems in a framework in which security analysts can orient themselves to these new and upcoming threats."

They also noted that the matrix is structured like the ATT&CK framework because ATT&CK is already widely adopted in the security analyst community; this way, security analysts do not have to learn a new or different framework to understand the threats facing machine learning systems.

This is the latest in a series of measures to protect artificial intelligence from data poisoning and model evasion attacks. Notably, researchers from Johns Hopkins University have developed a framework called TrojAI to defend against trojan attacks, in which a model is modified to respond to input triggers that cause it to infer an incorrect response.
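For context, a trojan (backdoor) attack of the kind TrojAI targets can be simulated by stamping a trigger patch onto a small fraction of training images and relabeling them to an attacker-chosen class; the sketch below is a generic illustration with made-up data, not TrojAI's own API:

```python
# A minimal backdoor-poisoning sketch: a model trained on this data behaves
# normally on clean inputs but predicts the target class when the trigger
# patch is present. Arrays and the target class are illustrative assumptions.
import numpy as np

def add_trigger(images: np.ndarray) -> np.ndarray:
    """Stamp a small bright square into the bottom-right corner of each image."""
    triggered = images.copy()
    triggered[:, -4:, -4:] = 1.0
    return triggered

def poison_dataset(images, labels, target_class=0, poison_fraction=0.05, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(poison_fraction * len(images)),
                     replace=False)
    poisoned_images = images.copy()
    poisoned_labels = labels.copy()
    poisoned_images[idx] = add_trigger(images[idx])
    poisoned_labels[idx] = target_class  # mislabel the triggered samples
    return poisoned_images, poisoned_labels

# Example with dummy data: 1000 grayscale 28x28 images with labels 0-9.
images = np.random.rand(1000, 28, 28).astype(np.float32)
labels = np.random.randint(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_dataset(images, labels)
```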

