A $1.5 million gift from Intel Corporation has established a new research center at the Georgia Institute of Technology dedicated to the emerging field of machine-learning (ML) cybersecurity with a focus on strengthening the analytics behind malware detection and threat analysis.
As the Intel Science & Technology Center for Adversary-Resilient Security Analytics (ISTC-ARSA) housed at Georgia Tech’s Institute for Information Security & Privacy (IISP), researchers will study the vulnerabilities of ML algorithms and develop new security approaches to improve the resilience of ML applications including security analytics, search engines, customized news feeds, facial and voice recognition, fraud detection, and more. Work at the ISTC-ARSA will complement additional ML research conducted by the Machine Learning at Georgia Tech (ML@GT) research center, established in July in the College of Computing.
Already, attackers can launch a causative (or data-poisoning) attack, which injects intentionally misleading or false training data so that an ML model becomes ineffective. Intuitively, if the ML algorithm learns from the wrong examples, it will learn the wrong model. Attackers can also launch an exploratory (or evasion) attack to find the blind spots of an ML model and evade detection. For example, if an attacker discovers that a detection model looks for unusually high traffic, the attacker can send malicious traffic at a lower volume and simply take more time to complete the attack. Researchers at the ISTC-ARSA will systematically evaluate the security and robustness of ML systems in the face of causative and exploratory attacks and develop new algorithms and systems to improve resilience.
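To make the two attack classes concrete, here is a minimal, self-contained sketch (not taken from the researchers’ work; the detector, numbers, and threshold rule are illustrative assumptions) showing how each attack subverts a simple volume-based anomaly detector:

```python
# Toy illustration: a detector that flags traffic volumes more than
# k standard deviations above the training mean. All values are hypothetical.
import statistics

def train_detector(training_volumes, k=3.0):
    """Learn a threshold: flag anything k standard deviations above the mean."""
    mu = statistics.mean(training_volumes)
    sigma = statistics.stdev(training_volumes)
    return mu + k * sigma

def is_flagged(volume, threshold):
    return volume > threshold

# Clean training data: normal traffic around 100 requests/minute.
clean = [95, 102, 99, 105, 98, 101, 97, 103]
threshold = train_detector(clean)
print(is_flagged(500, threshold))            # True: a 500 req/min burst is caught.

# Causative (data-poisoning) attack: the attacker slips inflated samples
# into the training set, so the learned threshold drifts upward and the
# same burst is no longer anomalous.
poisoned = clean + [400, 450, 480]
threshold_poisoned = train_detector(poisoned)
print(is_flagged(500, threshold_poisoned))   # False: poisoning blinded the model.

# Exploratory (evasion) attack: against the clean model, the attacker
# stays under the threshold and simply takes longer to finish the attack.
per_minute = 90                              # below the learned threshold
print(is_flagged(per_minute, threshold))     # False: low-and-slow evades detection.
```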
“These issues in an adversarial setting pose many interesting and new machine learning challenges,” says Wenke Lee, the principal investigator leading the ISTC-ARSA, a co-director of the IISP, and the John P. Imlay Jr. Chair in Software at Georgia Tech’s School of Computer Science. “For example, for the defender, it is important to understand the trade-offs between how long to keep a machine-learning model fixed, which can give rise to exploratory attacks, versus how frequently to update it, which opens the window for causative attacks. This grant from Intel will enable us to explore these issues and develop new approaches to better address these vulnerabilities.”
“Intel Labs has long been a significant investor in university research. With this investment in the Georgia Institute of Technology, we continue to support academic research in one of the most challenging areas of security, namely the deterrence of adversarial attacks on today’s machine learning infrastructure,” said Sridhar Iyengar, vice president and director of Security and Privacy Research at Intel Labs.
In order to determine how adversaries can attack machine-learning security analytics, researchers and students at the ISTC-ARSA have begun to develop “MLsploit” – an evaluation and fortification framework that incorporates Intel® Software Guard Extensions (Intel® SGX). The MLsploit tool will:
- Automate exploratory attacks by transforming a given piece of malware to behave like legitimate software in order to evade detection;
- Inject noise into malware behavior so that the behavioral data collected from the malware becomes polluted;
- Apply a “feature deletion” framework to emerging machine-learning algorithms to make them more resilient to future attacks; and
- Develop an online ensemble framework as a major countermeasure (one standard form of such a defense is sketched below).
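The article does not detail the ensemble framework’s design, but a minimal sketch of one standard online-ensemble defense, a weighted-majority vote with multiplicative weight updates, illustrates the idea. The detectors and update rule below are hypothetical, not MLsploit’s actual design:

```python
# Weighted-majority ensemble: several detectors vote, and detectors an
# adversary has learned to evade lose voting weight over time.

def weighted_majority(models, weights, sample):
    """Vote: each model predicts 1 (malicious) or 0 (benign)."""
    score = sum(w * m(sample) for m, w in zip(models, weights))
    return 1 if score >= sum(weights) / 2 else 0

def update_weights(models, weights, sample, truth, beta=0.5):
    """Shrink the weight of every model that got this sample wrong."""
    return [w * (beta if m(sample) != truth else 1.0)
            for m, w in zip(models, weights)]

# Three toy detectors, each looking at a different feature of a sample.
models = [
    lambda s: 1 if s["volume"] > 110 else 0,    # traffic-volume detector
    lambda s: 1 if s["entropy"] > 0.9 else 0,   # payload-entropy detector
    lambda s: 1 if s["new_domain"] else 0,      # new-domain-contact detector
]
weights = [1.0, 1.0, 1.0]

# A low-and-slow attack evades the volume detector but not the other two;
# after feedback, the evaded model's influence shrinks.
attack = {"volume": 90, "entropy": 0.95, "new_domain": True}
print(weighted_majority(models, weights, attack))   # 1: ensemble still catches it
weights = update_weights(models, weights, attack, truth=1)
print(weights)                                      # [0.5, 1.0, 1.0]: volume detector down-weighted
```

Because every detector must be evaded simultaneously, and any detector that is evaded loses influence, an adversary’s knowledge of a single model decays in value as the ensemble adapts.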
Intel SGX, an Intel technology for application developers who seek to protect select code and data from disclosure or modification, will be used to hide part of the machine learning process from adversaries.
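Actual enclave code is written in C/C++ against the Intel SGX SDK; the plain-Python sketch below only illustrates, under that simplifying assumption, why hiding part of the scoring process blunts exploratory attacks. All names and weights here are hypothetical:

```python
# Conceptual stand-in for an enclave: the model's weights live behind an
# opaque interface that returns only a label, never scores or internals.
SECRET_WEIGHTS = {"volume": 0.4, "entropy": 0.35, "new_domain": 0.25}

def enclave_classify(features):
    """Opaque scoring: callers see the verdict, not the weights or score."""
    score = sum(SECRET_WEIGHTS[k] * v for k, v in features.items())
    return "malicious" if score > 0.5 else "benign"

# With the weights hidden, an adversary mounting an exploratory attack
# cannot compute which feature perturbation flips the decision; they are
# reduced to trial-and-error queries against the label alone.
print(enclave_classify({"volume": 1.0, "entropy": 1.0, "new_domain": 1.0}))  # malicious
print(enclave_classify({"volume": 0.2, "entropy": 0.3, "new_domain": 0.0}))  # benign
```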
The ISTC-ARSA team has an extensive background in machine learning, systems and network security, botnet and intrusion detection, and malware analysis. In addition to Lee are assistant professors Polo Chau and Le Song from the School of Computational Science & Engineering at Georgia Tech, and Taesoo Kim from the School of Computer Science. Assisting them will be three graduate security-track students and three machine learning-track students. Research results from the ISTC-ARSA will be shared as part of course materials for teaching students both security and big-data analytics in an integrated fashion.
About the researchers
Wenke Lee is the John P. Imlay Jr. Chair in Software in the College of Computing and co-director of the Institute for Information Security & Privacy (IISP) at the Georgia Institute of Technology. Lee’s research interests are systems and network security, applied cryptography, and data mining. He has researched extensively in intrusion and botnet detection and malware analysis, has pioneered the application of machine-learning techniques to security-analysis problems, and has conducted research in adversarial machine learning.
Polo Chau, assistant professor, received his Ph.D. in Machine Learning from Carnegie Mellon University in 2012. His research interests are machine learning, security analytics including malware analysis, and human-computer interaction. Dr. Chau will lead the development of countermeasures, in particular, the ensemble framework.
Taesoo Kim, assistant professor, received his Ph.D. in Computer Science from Massachusetts Institute of Technology in 2014. Kim’s research interests are systems security, malware analysis, and security analytics. He will lead the development of the MLsploit toolkit and also will incorporate results from this project into other curriculum development efforts funded by Intel and the National Science Foundation.
Le Song, assistant professor, received his Ph.D. in Computer Science from the University of Sydney in 2008. His research interests are machine learning and its applications. Dr. Song will lead the theoretical studies of machine learning vulnerabilities and adversaries’ capabilities, as well as algorithmic improvements to machine learning.
The research is supported by Intel Corp. through a grant to the Georgia Tech Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the sponsoring agency. Intel is a registered trademark of Intel Corporation in the United States and other countries.
About the IISP
The Institute for Information Security & Privacy (IISP) at the Georgia Institute of Technology connects government, industry, and academia to solve the grand challenges of cybersecurity. As a coordinating body for multiple information security labs dedicated to academic and solution-oriented applied research, the IISP leverages intellectual capital from across Georgia Tech and its external partners to address vital solutions for national security, economic continuity, and individual safety. The IISP provides a gateway to faculty, students, and scientists and a central location for national and international collaboration. www.iisp.gatech.edu