Microsoft and MITRE Create Tool to Help Security Teams Prepare for Attacks on Machine Learning Systems

Microsoft and MITRE have developed a plug-in that combines several open-source software tools to help cybersecurity professionals better prepare for attacks on machine learning (ML) systems.

The Arsenal tool implements tactics and techniques defined in the MITRE ATLAS framework. Built collaboratively on Microsoft’s Counterfit, an automated adversarial attack library, it lets security practitioners accurately emulate attacks on systems containing ML without a deep background in ML or artificial intelligence (A.I.).

“Bringing these tools together is a major win for the cybersecurity community because it provides insights into how adversarial machine learning attacks play out,” said Charles Clancy, Ph.D., senior vice president, general manager, MITRE Labs, and chief futurist. “Working together to address potential security flaws with machine learning systems will help improve user trust and better enable these systems to have a positive impact on society.”

The collaboration with Microsoft on Arsenal is just one example of MITRE’s efforts to develop a family of tools addressing issues including trust, transparency, and fairness to better use ML and A.I. systems for mission-critical applications in areas ranging from healthcare to national security.

Microsoft’s Counterfit is a tool that enables ML researchers to implement a variety of adversarial attacks on A.I. algorithms. MITRE CALDERA is a platform that allows the creation and automation of specific adversary profiles. MITRE ATLAS, which stands for Adversarial Threat Landscape for Artificial-Intelligence Systems, is a knowledge base of adversary tactics, techniques, and case studies for ML systems based on real-world observations, demonstrations from ML red teams and security groups, and the state of the possible from academic research.
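For illustration, the snippet below is a minimal sketch, not Counterfit’s own API, of the kind of automated evasion attack Counterfit scripts. It uses the open-source Adversarial Robustness Toolbox (ART), one of the attack frameworks Counterfit builds on, to perturb inputs to a simple scikit-learn classifier and compare its accuracy on clean and adversarial data. The model, dataset, and perturbation budget are illustrative choices, not part of Arsenal or Counterfit.

```python
# Hedged sketch of an automated evasion attack of the sort Counterfit runs,
# using the Adversarial Robustness Toolbox (ART). All choices below are
# illustrative; Arsenal/Counterfit orchestrate attacks like this automatically.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train a simple "victim" model to attack.
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the model so the attack library can query it for gradients.
classifier = SklearnClassifier(model=model)

# Craft adversarial examples: small perturbations intended to flip predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.5)  # eps is an illustrative budget
X_adv = attack.generate(x=X.astype(np.float32))

clean_acc = (model.predict(X) == y).mean()
adv_acc = (model.predict(X_adv) == y).mean()
print(f"accuracy on clean inputs: {clean_acc:.2f}, on adversarial inputs: {adv_acc:.2f}")
```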

The Arsenal plug-in enables CALDERA to emulate adversarial attacks and behaviors using Microsoft’s Counterfit library.
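By way of example, and assuming CALDERA’s standard plugin mechanism, enabling Arsenal would look roughly like the configuration excerpt below: the plugin is cloned into CALDERA’s plugins/ directory and listed in the server’s local configuration alongside the stock plugins. The plugin name and the surrounding entries are assumptions for illustration, not taken from the Arsenal documentation.

```yaml
# conf/local.yml (excerpt) -- hedged sketch of enabling a CALDERA plugin;
# assumes Arsenal has been cloned into plugins/arsenal. The other entries are
# stock CALDERA plugins, shown only for context.
plugins:
  - atomic
  - sandcat
  - stockpile
  - arsenal   # exposes Counterfit-backed adversarial-ML abilities in CALDERA operations
```

Once the server is restarted, a plugin enabled this way should surface its adversary profiles and abilities in the CALDERA interface like any other plugin’s.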

“While other automated tools exist today, they’re typically better suited to research that examines specific vulnerabilities within an ML system, rather than the security threats that system will encounter as part of an enterprise network,” Clancy said. Creating a robust end-to-end ML workflow is necessary when integrating ML systems into an enterprise network and deploying these systems for real-world use cases. This workflow can become complex, making it challenging to identify potential and legitimate vulnerabilities in the system. Integrating the Arsenal plug-in into CALDERA allows security professionals to discover novel vulnerabilities within the building blocks of an end-to-end ML workflow and develop countermeasures and controls to prevent exploitation of ML systems deployed in the real world.

“As the world looks to A.I. to positively change how organizations operate, it’s critical that steps are taken to help ensure the security of those A.I. and machine learning models that will empower the workforce to do more with less of a strain on time, budget and resources,” said Ram Shankar Siva Kumar, principal program manager for A.I. security at Microsoft. “We’re proud to have worked with MITRE and HuggingFace to give the security community the tools they need to help leverage A.I. more securely.”

The tool currently includes a limited number of adversary profiles based on information publicly available today. As security researchers document new attacks on ML systems, Microsoft and MITRE plan to continually evolve the tool, adding new techniques and adversary profiles.
