
TrojanModel: A Practical Trojan Attack against Automatic Speech Recognition, by Wei Zong, Yang Wai Chow et al.

While deep learning techniques have achieved great success in modern digital products, researchers have shown that deep learning models are susceptible to Trojan attacks. In a Trojan attack, an adversary stealthily modifies a deep learning model such that the model outputs a predefined label whenever a trigger is present in the input. In this paper, we present TrojanModel, a practical Trojan attack against Automatic Speech Recognition (ASR) systems. ASR systems transcribe voice input into text, which is easier for subsequent downstream applications to process. We consider a practical attack scenario in which an adversary inserts a Trojan into the acoustic model of a target ASR system. Unlike existing work that uses noise-like triggers that can easily arouse user suspicion, this paper focuses on the use of unsuspicious sounds as a trigger, e.g., a piece of music playing in the background. In addition, TrojanModel does not require retraining of the target model.
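The abstract describes embedding an unsuspicious sound, such as background music, as the trigger in audio input. The paper's actual attack pipeline is not given in the source, so the following is only a minimal sketch of the trigger-embedding step under simple assumptions: the function name `embed_trigger`, the mixing-by-addition approach, and the toy sine-wave signals are all illustrative, not the authors' method.

```python
import numpy as np

def embed_trigger(audio: np.ndarray, trigger: np.ndarray,
                  scale: float = 0.1) -> np.ndarray:
    """Mix a short trigger clip (e.g. background music) into a benign
    waveform at low amplitude. In a Trojan attack, a backdoored model
    would emit the adversary's target output whenever the trigger is
    present, while transcribing benign inputs normally."""
    out = audio.astype(np.float32).copy()
    # Tile or truncate the trigger so it spans the whole input.
    reps = int(np.ceil(len(out) / len(trigger)))
    t = np.tile(trigger.astype(np.float32), reps)[: len(out)]
    out += scale * t
    # Keep the waveform inside the valid [-1, 1] range.
    return np.clip(out, -1.0, 1.0)

# Toy 1-second signals at 16 kHz standing in for speech and music.
sr = 16000
ts = np.linspace(0, 1, sr, endpoint=False)
speech = 0.5 * np.sin(2 * np.pi * 220 * ts)
music = 0.3 * np.sin(2 * np.pi * 440 * ts)

poisoned = embed_trigger(speech, music, scale=0.1)
```

Because the trigger is mixed in at a small scale, the result still sounds like ordinary speech with faint music behind it, which is the "unsuspicious" property the paper emphasizes.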

Detecting Audio Adversarial Examples in Automatic Speech Recognition Systems, by Wei Zong, Yang Wai Chow et al.

Automatic Speech Recognition (ASR) systems are ubiquitous in various commercial applications. These systems typically rely on machine learning techniques for transcribing voice commands into text for further processing. Despite their success in many applications, audio Adversarial Examples (AEs) have emerged as a major security threat to ASR systems. This is because audio AEs are able to fool ASR models into producing incorrect results. While researchers have investigated methods for defending against audio AEs, the intrinsic properties of AEs and benign audio are not well studied. The work in this paper shows that the machine learning decision boundary patterns around audio AEs and benign audio are fundamentally different. Using dimensionality-reduction techniques, this work shows that these different patterns can be visually distinguished in two-dimensional (2D) space. This in turn allows for the detection of audio AEs using anomaly-detection methods.
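The abstract's pipeline is: extract features around each input, reduce them to 2D, and flag outliers with an anomaly detector. The paper's specific features, reduction method, and detector are not stated in the source, so this is a generic sketch under stated assumptions: synthetic Gaussian clusters stand in for the real feature vectors, PCA via SVD is the assumed reduction, and distance from the benign centroid is the assumed anomaly score.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for decision-boundary features: benign inputs
# cluster tightly, AEs occupy a visibly different region (per the paper's
# claim that the two patterns are fundamentally different).
benign = rng.normal(0.0, 1.0, size=(200, 20))
aes = rng.normal(4.0, 1.0, size=(20, 20))
X = np.vstack([benign, aes])

# Reduce to 2D with PCA (computed via SVD) so the patterns could be
# plotted and visually distinguished, as the abstract describes.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X2 = Xc @ Vt[:2].T  # 2D coordinates for every input

# Simple anomaly score: distance from the benign centroid in 2D, with a
# threshold set from the benign points alone.
centroid = X2[:200].mean(axis=0)
scores = np.linalg.norm(X2 - centroid, axis=1)
threshold = np.percentile(scores[:200], 99)
flags = scores > threshold  # True = flagged as a likely AE
```

Any off-the-shelf anomaly detector (e.g. an isolation forest) could replace the centroid-distance score; the point of the sketch is only that separation visible in 2D makes the detection step straightforward.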
