Manufacturing Bits: March 16
Tripping up neural networks
For years, Russia has been active in R&D.
In one example, researchers at Russia’s Skolkovo Institute of Science and Technology (Skoltech) have demonstrated how certain patterns can cause neural networks to make mistakes in recognizing images. Leveraging the theory behind this research, Skoltech can design defenses for pattern recognition systems that are vulnerable to attacks.
A subset of AI, machine learning is a technology that makes use of a neural network in a system. In this system, the neural network crunches data and identifies patterns. It then matches certain patterns and learns which of those attributes are important.
Skoltech researchers were able to show that patterns that can cause neural networks to make mistakes in recognizing images are, in effect, akin to Turing patterns found all over the natural world. In the future, this result can be used to design defenses for pattern recognition systems currently vulnerable to attacks. The paper, available as an arXiv preprint, was presented at the 35th AAAI Conference on Artificial Intelligence (AAAI-21).
Deep neural networks, smart and adept at image recognition and classification as they already are, can still be vulnerable to what's called adversarial perturbations: small but peculiar details in an image that cause errors in neural network output. Some of them are universal: that is, they interfere with the neural network when placed on any input.
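To make the idea of an adversarial perturbation concrete, here is a minimal sketch, not the Skoltech method. It uses a made-up linear classifier and a fast-gradient-sign-style step: nudging every input feature slightly in the direction that most hurts the model's score. The weights, input values, and perturbation budget below are all hypothetical, chosen only for illustration.

```python
import numpy as np

def predict(w, b, x):
    """Return class 1 if the linear score w.x + b is positive, else 0."""
    return int(w @ x + b > 0)

# Hypothetical trained weights and a clean input the model classifies as 1.
w = np.array([0.5, -0.25, 0.75])
b = -0.1
x_clean = np.array([0.6, 0.2, 0.4])

# For a linear model, the gradient of the score w.r.t. the input is just w.
# The adversarial step moves each feature slightly against that gradient.
eps = 0.5                            # small perturbation budget
x_adv = x_clean - eps * np.sign(w)   # fast-gradient-sign-style perturbation

print(predict(w, b, x_clean))  # clean input: class 1
print(predict(w, b, x_adv))    # perturbed input: prediction flips to 0
```

Even though each feature moves by at most 0.5, the prediction flips, which is the essence of an adversarial perturbation: a small, targeted change with an outsized effect on the output.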