Automatic speech recognition (ASR) systems are now ubiquitous: many commercial products rely on ASR techniques, increasingly based on machine learning, to transcribe voice commands into text for further processing. However, audio adversarial examples (AEs) have emerged as a serious security threat, as they can fool ASR models into producing incorrect transcriptions. Although methods have been proposed to defend against audio AEs, the intrinsic properties of audio AEs compared with benign audio have not been well studied. In this paper, we show that the machine learning decision boundary patterns around audio AEs and benign audio are fundamentally different. In addition, using dimensionality reduction techniques, we show that these different patterns can be distinguished visually in 2D space. Based on the dimensionality reduction results, this paper also demonstrates that it is feasible to detect previously unknown audio adversarial examples.
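A minimal sketch of the boundary-probing idea described in this abstract, assuming a generic classifier exposing a `predict` method that maps one example to one label; the sampling radius, number of samples, and the use of PCA for the 2D projection are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import PCA

def probe_decision_boundary(model, x, n_samples=500, radius=0.01, seed=0):
    """Sample random neighbors of input x and record the model's labels.

    The local label pattern approximates the decision boundary shape
    around x (a benign audio clip or a suspected adversarial example).
    """
    rng = np.random.default_rng(seed)
    # Uniform noise within a small L-infinity ball around x.
    neighbors = x + rng.uniform(-radius, radius, size=(n_samples,) + x.shape)
    labels = np.array([model.predict(n) for n in neighbors])
    return neighbors, labels

def project_to_2d(neighbors):
    """Reduce the sampled neighborhood to 2D for visual inspection."""
    flat = neighbors.reshape(len(neighbors), -1)
    return PCA(n_components=2).fit_transform(flat)
```

Plotting the 2D projection colored by predicted label would let one inspect, as the abstract describes, whether the boundary pattern around an input resembles that of benign audio or of an AE.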
Abstract
Machine learning is becoming increasingly popular in modern technology and has been adopted in various application areas. However, researchers have demonstrated that machine learning models are vulnerable to adversarial examples in their inputs, which has given rise to a field of research known as adversarial machine learning. Potential adversarial attacks include poisoning datasets by perturbing input samples to mislead machine learning models into producing undesirable results. While such perturbations are often subtle and imperceptible to a human, they can greatly affect the performance of machine learning models. This paper presents two methods of verifying the visual fidelity of image-based datasets by using QR codes to detect perturbations in the data. In the first method, a verification string is stored for each image in a dataset. These verification strings can be used to determine whether or not an image in the dataset has been perturbed.
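A minimal sketch of the per-image verification idea, assuming the verification string is a SHA-256 digest of the raw pixel bytes; the abstract's QR-code encoding of these strings is omitted here, and the hash choice is an illustrative assumption.

```python
import hashlib
import numpy as np

def verification_string(image: np.ndarray) -> str:
    """Derive a verification string from an image's pixel data."""
    return hashlib.sha256(image.tobytes()).hexdigest()

def is_perturbed(image: np.ndarray, stored: str) -> bool:
    """Flag an image as perturbed if its current string differs from the stored one."""
    return verification_string(image) != stored
```

Under these assumptions, `verification_string(img)` would be computed and stored (e.g., embedded in a QR code) when the dataset is created, and `is_perturbed(img, stored)` would later detect any pixel-level change to the image.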