DOIONLINE NO - IJACEN-IRAJ-DOIONLINE-19644

Published In
International Journal of Advance Computational Engineering and Networking (IJACEN)-IJACEN
Volume / Issue
Volume-11, Issue-4 (Apr, 2023)
Paper Title
Generating Attachable Adversarial Patches to Make the Object Identification Wrong Based on Neural Networks
Author Name
Shi-Jinn Horng, Huang Huang
Affiliation
Pages
35-42
Abstract
An adversarial example is an input that, through a small perturbation, causes a network to misclassify; such perturbations are often imperceptible to human cognition but fatal to neural networks. At present there is no way to resist all kinds of perturbation attacks, which raises doubts about the robustness of network architectures. Three different sub-models are proposed in this research to attack neural networks. The attack-scope model effectively reduces the attack range and guides the adversarial algorithm to conduct an accurate, localized perturbation attack. The adversarial attack models generate different adversarial patches through adversarial algorithms. The adversarial patches are compact and can be manufactured physically, so a patch can be attached directly to the original image to efficiently and accurately disturb the target model. The success rate of disturbance achieved by generating a small patch is 70.1%. Moreover, the method proposed in this paper can be applied to different neural networks.

Keywords - Deep Learning, Neural Network, Adversarial Attack, Adversarial Patch
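The core idea described in the abstract — restricting a gradient-based perturbation to a small masked region ("attack scope") and attaching the resulting patch to the original image — can be illustrated on a toy model. This is a minimal sketch, not the paper's actual algorithm: the linear classifier, the 3x3 mask, and the FGSM-style sign-gradient update are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a target neural network:
# scores = W @ x, predicted class = argmax of scores.
n_classes, n_pixels = 3, 8 * 8
W = rng.normal(size=(n_classes, n_pixels))

x = rng.uniform(0.0, 1.0, size=n_pixels)   # "original map" (flattened 8x8 image)
clean_label = int(np.argmax(W @ x))        # model's clean prediction

# Attack scope: a binary mask confining the perturbation to a small
# 3x3 patch region (hypothetical stand-in for the paper's scope model).
mask = np.zeros((8, 8))
mask[2:5, 2:5] = 1.0
mask = mask.ravel()

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Untargeted attack: gradient *ascent* on the cross-entropy loss of the
# clean label, with FGSM-style sign updates applied only inside the mask.
patch = np.zeros(n_pixels)
for _ in range(50):
    adv = np.clip(x + patch, 0.0, 1.0)
    p = softmax(W @ adv)
    # Gradient of -log p[clean_label] w.r.t. the input: W^T (p - onehot)
    grad = W.T @ (p - np.eye(n_classes)[clean_label])
    patch += 0.05 * np.sign(grad) * mask

adv = np.clip(x + patch, 0.0, 1.0)
print("clean prediction:", clean_label,
      "| adversarial prediction:", int(np.argmax(W @ adv)))
```

Because the update is masked, the adversarial image differs from the original only inside the patch region, mirroring the attachable-patch setting; on a real network the gradient would come from backpropagation rather than this closed-form expression.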