

TIIS (Korean Society for Internet Information)

Current Result Document:

Korean Title: Defending and Detecting Audio Adversarial Example using Frame Offsets
English Title: Defending and Detecting Audio Adversarial Example using Frame Offsets
Author: Yongkang Gong, Diqun Yan, Terui Mao, Donghua Wang, Rangding Wang
Citation: Vol. 15, No. 4, pp. 1538-1552 (2021.04)
Korean Abstract: (none)
English Abstract:
Machine learning models are vulnerable to adversarial examples generated by adding a deliberately designed perturbation to a benign sample. In particular, for an automatic speech recognition (ASR) system, a benign audio clip that sounds normal can be decoded as a harmful command under an adversarial attack. In this paper, we focus on countermeasures against audio adversarial examples. By analyzing the characteristics of ASR systems, we find that frame offsets, introduced by appending a silence clip to the beginning of an audio clip, can degrade adversarial perturbations into ordinary noise. For different scenarios, we exploit frame offsets through defending, detecting, and hybrid strategies. Compared with previous methods, the proposed method defends against audio adversarial examples in a simpler, more generic, and more efficient way. Evaluated against three state-of-the-art adversarial attacks on different ASR systems, the experimental results demonstrate that the proposed method effectively improves the robustness of ASR systems.
Keyword: Speech Recognition Safety, Adversarial Defense, Adversarial Detection, Audio Adversarial Example, ASR
File Attachment: PDF download
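
As a rough illustration of the frame-offset idea described in the abstract, the sketch below prepends a short silence clip before decoding and flags an input as adversarial when the transcription changes markedly after the shift. This is only a minimal sketch, not the authors' implementation: the `transcribe` callable stands in for an arbitrary ASR decoder (hypothetical), and the offset length and detection threshold are placeholder values, not taken from the paper.

```python
import numpy as np


def add_frame_offset(waveform: np.ndarray, offset_samples: int = 800) -> np.ndarray:
    """Prepend a short silence clip so the ASR frame boundaries shift
    relative to a perturbation aligned to the original framing."""
    silence = np.zeros(offset_samples, dtype=waveform.dtype)
    return np.concatenate([silence, waveform])


def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance between two transcriptions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def detect_adversarial(waveform: np.ndarray,
                       transcribe,                 # hypothetical ASR decoder: np.ndarray -> str
                       offset_samples: int = 800,  # placeholder: ~50 ms at 16 kHz
                       threshold: float = 0.2) -> bool:
    """Flag the input as adversarial when the transcription of the
    frame-offset version diverges strongly from the original one."""
    original = transcribe(waveform)
    shifted = transcribe(add_frame_offset(waveform, offset_samples))
    ratio = edit_distance(original, shifted) / max(len(original), 1)
    return ratio > threshold
```

In a defense-only setting, the offset audio would simply be decoded in place of the raw input; comparing the two transcriptions as above roughly mirrors the detection strategy mentioned in the abstract.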