Efficient Defenses Against Adversarial Attacks

Posted: 2021-10-24 03:25:44
[File attributes]
File name: Efficient Defenses Against Adversarial Attacks
File size: 876 KB
File format: PDF
Updated: 2021-10-24 03:25:44
Cybersecurity. Deep learning has proven its prowess across a wide range of computer vision applications, from visual recognition to image generation [17]. The rapid deployment of deep learning models in critical systems, such as medical imaging, surveillance, or other security-sensitive applications, mandates that their reliability and security be established a priori. Like any computer-based system, deep learning models can potentially be attacked with standard methods (such as denial-of-service or spoofing attacks), and their protection depends only on the security measures deployed around the system. In addition, DNNs have been shown to be vulnerable to a threat specific to prediction models: adversarial examples. These are input samples that have been deliberately modified to elicit a desired response from a model, often a misclassification or a specific incorrect prediction that benefits the attacker.
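To make the notion of an adversarial example concrete, here is a minimal sketch of the fast gradient sign method (FGSM, one standard attack; the paper itself may use other attacks) applied to a toy logistic-regression model. The weights, input, and epsilon are hypothetical values chosen only for illustration.

```python
import numpy as np

# Toy logistic-regression "model": hand-picked weights (hypothetical values,
# used only to illustrate the attack on a differentiable classifier).
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict(x):
    # Sigmoid probability that x belongs to class 1.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def fgsm(x, y, eps):
    # For logistic regression with cross-entropy loss, the gradient of the
    # loss w.r.t. the input is (p - y) * w. FGSM perturbs each input
    # coordinate by eps in the direction of the gradient's sign.
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.2, -0.1])   # clean input, true label y = 1
x_adv = fgsm(x, y=1.0, eps=0.3)  # adversarial counterpart

print(predict(x))      # ~0.59: classified as class 1
print(predict(x_adv))  # ~0.17: small perturbation flips the prediction
```

The perturbation is bounded by eps per coordinate, so the adversarial input stays close to the original while the model's prediction changes, which is exactly the failure mode the defenses discussed in the paper aim to mitigate.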
