Skoltech researchers have shown that the patterns that cause neural networks to misrecognize images are, in effect, akin to the Turing patterns found throughout the natural world. In the future, this result could be used to design defenses for pattern recognition systems that are currently vulnerable to such attacks. The paper, available as an arXiv preprint, was presented at the 35th AAAI Conference on Artificial Intelligence (AAAI-21).
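For readers who have not encountered Turing patterns, the sketch below simulates the Gray-Scott reaction-diffusion system in NumPy, a standard textbook model whose steady states self-organize into the spot and stripe patterns Turing described. The parameter values are one commonly cited "spots" regime chosen purely for illustration; this is not the simulation used in the paper.

```python
import numpy as np

def gray_scott(n=128, steps=5000, Du=0.16, Dv=0.08, f=0.035, k=0.065):
    """Simulate the Gray-Scott reaction-diffusion system on an n x n grid."""
    U = np.ones((n, n))
    V = np.zeros((n, n))
    # Seed a small square of V to break symmetry; patterns grow from it.
    U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5
    V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
    for _ in range(steps):
        # Discrete Laplacian with periodic boundary conditions.
        Lu = (np.roll(U, 1, 0) + np.roll(U, -1, 0) +
              np.roll(U, 1, 1) + np.roll(U, -1, 1) - 4 * U)
        Lv = (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
              np.roll(V, 1, 1) + np.roll(V, -1, 1) - 4 * V)
        uvv = U * V * V
        # Diffusion plus the reaction terms of the Gray-Scott model.
        U += Du * Lu - uvv + f * (1 - U)
        V += Dv * Lv + uvv - (f + k) * V
    return V  # Plotting V (e.g., with matplotlib) reveals the Turing pattern.
```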
Deep neural networks, however adept at image recognition and classification they already are, can still be vulnerable to what are called adversarial perturbations: small but peculiar details in an image that cause errors in a neural network's output. Some of these perturbations are universal: that is, a single perturbation interferes with the network when added to any input.
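To make the idea concrete, here is a minimal PyTorch sketch of the classic one-step fast gradient sign method (FGSM), a standard way of constructing a per-image adversarial perturbation. It is an illustration of the general concept, not the method from the paper; the choice of model and the epsilon value are assumptions for the example.

```python
import torch
import torchvision.models as models

# Any pretrained ImageNet classifier works for this illustration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_perturbation(image, label, epsilon=0.01):
    """One-step FGSM: nudge each pixel in the direction that increases
    the classification loss, with the change bounded by epsilon.

    image: batched input tensor of shape (1, 3, H, W)
    label: true class index, tensor of shape (1,)
    """
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is tiny per pixel, yet can flip the prediction.
    return (image + epsilon * image.grad.sign()).detach()
```

A universal perturbation differs in that the same additive pattern is optimized once, over many images, so that it degrades the network's output on essentially any input it is added to.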

