News and Updates: Skoltech Computational Intelligence Lab


Manufacturing Bits: March 16
Tripping up neural networks
For years, Russia has been active in R&D.
In one example, Russia’s Skolkovo Institute of Science and Technology (Skoltech) has demonstrated how certain patterns can cause neural networks to make mistakes in recognizing images. Leveraging the theory behind this research, Skoltech researchers can design defenses for pattern recognition systems that are vulnerable to such attacks.
A subset of AI, machine learning is a technology that makes use of a neural network in a system. In this system, the neural network crunches data and identifies patterns. It then matches certain patterns and learns which of those attributes are important. ....
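To make that description concrete, here is a minimal sketch, not code from Skoltech or this article, of a neural network learning a pattern from data: a tiny PyTorch classifier fits synthetic inputs whose label depends on one simple attribute. All names, shapes, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic "images": 200 flattened 8x8 inputs; the label depends on one
# simple attribute (the mean pixel value), the kind of pattern a network
# can pick out of the data. Entirely made up for illustration.
x = torch.randn(200, 64)
y = (x.mean(dim=1) > 0).long()

model = nn.Sequential(                   # a small fully connected network
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 2),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                     # gradient descent fits the pattern
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

acc = (model(x).argmax(dim=1) == y).float().mean().item()
print(f"training accuracy: {acc:.2f}")
```

After a few hundred gradient steps the network has learned which attribute matters, and it is exactly this learned sensitivity that adversarial patterns later exploit.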


Skoltech team shows how Turing-like patterns fool neural networks


Skoltech researchers were able to show that patterns that can cause neural networks to make mistakes in recognizing images are, in effect, akin to Turing patterns found all over the natural world. In the future, this result can be used to design defenses for pattern recognition systems currently vulnerable to attacks. The paper, available as an arXiv preprint, was presented at the 35th AAAI Conference on Artificial Intelligence (AAAI-21).
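For readers unfamiliar with Turing patterns, the spots and stripes the article alludes to arise from reaction-diffusion dynamics. Below is a minimal sketch using the classic Gray-Scott model with common textbook parameters; it illustrates Turing patterns in general and is not the construction used in the Skoltech paper.

```python
import numpy as np

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
# Seed a small square of the second chemical in the middle of the grid.
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065  # diffusion, feed, and kill rates

def laplacian(Z):
    """Five-point stencil with periodic boundary conditions."""
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4.0 * Z)

for _ in range(10_000):                  # explicit Euler steps, dt = 1
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1.0 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

np.save("turing_pattern.npy", V)         # V now holds a spotted pattern
```

Varying the feed rate F and kill rate k moves the system between spots, stripes, and labyrinths, the same family of motifs the researchers relate to adversarial perturbations.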
Deep neural networks, as smart and adept at image recognition and classification as they already are, can still be vulnerable to what's called adversarial perturbations: small but peculiar details in an image that cause errors in neural network output. Some of them are universal: that is, they interfere with the neural network when placed on any input. ....
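As an illustration of how small such a perturbation can be, here is a minimal sketch of the classic fast gradient sign method (FGSM, Goodfellow et al.), a simpler per-example attack than the Turing-pattern perturbations studied in the paper; the model and data are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier (not the paper's model): logistic regression on
# 64-dim inputs labeled by a trivially learnable synthetic rule.
x_train = torch.randn(500, 64)
y_train = (x_train.mean(dim=1) > 0).long()
model = nn.Linear(64, 2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x_train), y_train).backward()
    opt.step()

# One signed-gradient step on the input (FGSM) nudges every coordinate
# by at most eps, yet often flips the model's prediction.
x = torch.randn(1, 64, requires_grad=True)
y = (x.detach().mean() > 0).long().unsqueeze(0)
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

eps = 0.25                                # per-coordinate budget
x_adv = (x + eps * x.grad.sign()).detach()
print("clean:", model(x).argmax(dim=1).item(),
      "adversarial:", model(x_adv).argmax(dim=1).item())
```

A universal perturbation differs in that a single pattern is optimized to disrupt the network on any input, rather than being computed per example as above.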
