Ptolemy: Architecture Support for Robust Deep Learning
Yiming Gan*, Yuxian Qiu*, Jingwen Leng, Minyi Guo, Yuhao Zhu
In Proceedings of the International Symposium on Microarchitecture (MICRO), 2020

ABSTRACT
Deep learning is vulnerable to adversarial attacks, where carefully-crafted input perturbations can mislead a well-trained Deep Neural Network (DNN) into producing incorrect results. Adversarial attacks jeopardize the safety, security, and privacy of DNN-enabled systems. Today's countermeasures either lack the capability to detect adversarial samples at inference time, or introduce prohibitively high overhead to be practical at inference time.

We propose Ptolemy, an algorithm-architecture co-designed system that detects adversarial attacks at inference time with low overhead and high accuracy. We exploit the synergies between DNN inference and imperative program execution: an input to a DNN uniquely activates a set of neurons that contribute significantly to the inference output, analogous to the sequence of basic blocks exercised by an input in a conventional program. Critically, we observe that adversarial samples tend to activate paths distinct from those of benign inputs. Leveraging this insight, we propose an adversarial sample detection framework, which uses canary paths generated from offline profiling to detect adversarial samples at runtime. The Ptolemy compiler, along with the co-designed hardware, enables efficient execution by exploiting the unique algorithmic characteristics. Extensive evaluations show that Ptolemy achieves detection accuracy higher than or similar to today's mechanisms with a much lower (as low as 2%) runtime overhead.
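To make the path-based detection idea concrete, below is a minimal Python/NumPy sketch of the runtime check described above. It is an illustration under simplifying assumptions, not Ptolemy's actual implementation or API: the names extract_path, path_similarity, and is_adversarial, the cumulative-contribution fraction theta, and the 0.6 similarity threshold are all hypothetical, and the real system performs path extraction with compiler and hardware support and offers several extraction algorithms.

    import numpy as np

    def extract_path(activations, weights, theta=0.5):
        # Backward-extract the set of "important" neurons per layer.
        # Starting from the predicted output neuron, keep at each layer the
        # smallest set of upstream neurons whose weighted contributions
        # accumulate to a fraction `theta` of the total -- a simplified
        # stand-in for Ptolemy's cumulative-contribution rule.
        path = []
        important = {int(np.argmax(activations[-1]))}  # predicted class neuron
        for layer in range(len(weights) - 1, -1, -1):
            prev_important = set()
            for j in important:
                # Contribution of each upstream neuron i to neuron j.
                contrib = activations[layer] * weights[layer][:, j]
                order = np.argsort(contrib)[::-1]  # largest contributors first
                total, acc = contrib.sum(), 0.0
                for i in order:
                    if total <= 0 or acc >= theta * total:
                        break
                    acc += contrib[i]
                    prev_important.add(int(i))
            path.append(frozenset(prev_important))
            important = prev_important
        return path

    def path_similarity(path_a, path_b):
        # Mean per-layer Jaccard similarity between two activation paths.
        sims = []
        for a, b in zip(path_a, path_b):
            union = a | b
            sims.append(len(a & b) / len(union) if union else 1.0)
        return float(np.mean(sims))

    def is_adversarial(input_path, canary_path, threshold=0.6):
        # Flag the input if its path strays too far from the class canary.
        return path_similarity(input_path, canary_path) < threshold

    # Example: compare a test input's path against a class canary path.
    # (Weights and activations here are random stand-ins for a real network.)
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((8, 6)), rng.standard_normal((6, 4))]
    acts = [rng.random(8)]
    for W in weights:
        acts.append(np.maximum(acts[-1] @ W, 0.0))  # ReLU forward pass
    canary = extract_path(acts, weights)            # stand-in canary path
    print(is_adversarial(extract_path(acts, weights), canary))  # False

At runtime, the input's extracted path would be compared against the canary path of the predicted class, built offline by profiling benign training samples; a similarity below the threshold flags the input as adversarial.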
