Summary |
Adversarial machine learning has been an important area of study for securing machine learning systems. However, for every defense devised to protect these artificial learners, a more sophisticated attack emerges to defeat it. This has created an arms race in which the problem of adversarial attacks is never fully mitigated. This thesis examines the field of adversarial machine learning; specifically, the property of transferability and the use of dynamic defenses as a solution to attacks that leverage it. We show that this is an emerging field of research, one that may offer a solution to one of the most intractable problems in adversarial machine learning. We go on to implement a minimal experiment, demonstrating that research within this area is easily accessible. Finally, we address some of the hurdles that must be overcome in order to unify the disparate aspects of current related research. |
General note | Presented to the faculty of the Department of Computer Science |
General note | Advisor: Nasseh Tabrizi |
General note | Title from PDF t.p. (viewed October 10, 2019). |
Dissertation note | M.S. East Carolina University 2019. |
Bibliography note | Includes bibliographical references. |
Technical details | System requirements: Adobe Reader. |
Technical details | Mode of access: World Wide Web. |