PhD Defence by Thomas Kobber Panum

Time

21.06.2021 kl. 09.00 - 12.00

Description

Title
Robustness of Defenses against Deception Attacks

Abstract
Security advancements of computer systems have caused adversaries to explore alternative entry points for their attacks. Instead of attacking the systems directly, attack vectors that are initiated by social interactions have increased in popularity.

These types of attacks are known to exploit a variety of social influences to deceive victims into performing a harmful action intended by the adversary. Typical defense solutions attempt to detect these attacks using machine learning techniques.

Numerous solutions have reported impressive detection rates for these types of attacks. However, the existence of these seemingly effective solutions remains in strong contrast to the high frequency of attacks in real-world settings. In this thesis, I initially set out to explore the adversarial robustness of defenses against a widely established type of deception attack: phishing attacks. In this process, I define a set of axioms for the functional properties of attacks that serves as a guideline for assessing the detection strategies adopted by influential and recent methods. As part of this assessment, relatively simple perturbation techniques are demonstrated that emphasize the fragility of the detection solutions. Additionally, it is shown that a detection solution that applies a deep metric model is more vulnerable to known test-time attacks than initially reported.

This suggests that deep metric models exhibit a fragility similar to that of traditional classifiers relying on neural network architectures.

Overall, this research highlights that both influential and recent methods for detecting deception attacks contain relatively simple failure modes when exposed to an adversary that seeks evasion.

Improvements to the underlying methods of recent solutions demonstrated that their robustness can be enhanced. However, these results remain empirical; guarantees and proofs of attainable adversarial robustness are therefore still open problems.

Assessment Committee
Associate professor Ulrik Nyman, Aalborg University, Denmark (chairman)
Professor Kevin Curran, Ulster University, United Kingdom
Professor Søren Hauberg, Technical University of Denmark

Supervisors
Professor Jens Myrup Pedersen, Aalborg University, Denmark
Associate Professor René Rydhof Hansen, Aalborg University, Denmark

Moderator
Associate Professor Tatiana Kozlova Madsen, Aalborg University, Denmark

Host

Communication, Media and Information Technologies, Department of Electronic Systems

Address

Aalborg University, Fredrik Bajers Vej 7A - B3-104 (the auditorium) and via TEAMS

Registration Deadline

18.06.2021 kl. 12.00

Register at

aby@es.aau.dk