The new attack specifically targets the growing adoption of input-adaptive multi-exit neural networks, which are designed to reduce their carbon footprint by passing an image through as few layers as possible: after an early layer, the network checks whether it has reached the confidence threshold needed to accurately report what the image contains, and stops there if it has.
In a traditional neural network, the image is passed through every layer before a conclusion is drawn, which often makes such networks unsuitable for smart devices and similar technology that needs quick answers at low energy cost.
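To make the architecture concrete, here is a minimal sketch, written in PyTorch and not drawn from the researchers' code, of an input-adaptive multi-exit classifier: a lightweight classifier sits after each block of layers, and the network returns the first prediction whose confidence clears a threshold. The layer sizes and the threshold are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiExitNet(nn.Module):
    """Toy input-adaptive classifier: exit at the first confident prediction."""

    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold  # confidence required to stop early
        # Three feature blocks of increasing cost (purely illustrative sizes).
        self.blocks = nn.ModuleList([
            nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),
            nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU()),
        ])
        # One small classifier ("exit") attached after each block.
        self.exits = nn.ModuleList([
            nn.Linear(16, num_classes),
            nn.Linear(32, num_classes),
            nn.Linear(64, num_classes),
        ])

    def forward(self, x):
        for depth, (block, exit_head) in enumerate(zip(self.blocks, self.exits), 1):
            x = block(x)
            logits = exit_head(x.mean(dim=(2, 3)))  # global average pooling
            confidence = F.softmax(logits, dim=1).max().item()
            if confidence >= self.threshold:
                return logits, depth  # easy input: stop here and save energy
        return logits, depth  # hard input: every block was used

model = MultiExitNet()
image = torch.randn(1, 3, 32, 32)  # one dummy image
logits, layers_used = model(image)
print(f"exited after {layers_used} of {len(model.blocks)} blocks")
```

The point of the design is that an easy image pays only for the first block or two, while a hard one pays for the full network.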
The researchers found that simply adding complexity to an image, such as slight background noise, poor lighting, or small objects that obscure the main subject, makes the input-adaptive model treat the image as more difficult to analyse and assign more computational resources to it.
AI already consumes a lot of energy. Hackers could make it consume even more.
The attack: But if you change the input this type of neural network receives, such as the image it is fed, you can change how much computation it needs to resolve it. That opens up a vulnerability that hackers could exploit, according to researchers at the Maryland Cybersecurity Center, who are presenting the work at the International Conference on Learning Representations this week. By adding small amounts of noise to a network's inputs, they made the network perceive those inputs as more difficult and drove up its computation.
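In the spirit of what the researchers describe, rather than their actual method, the attack can be sketched as a small, bounded perturbation that pushes every exit's prediction toward maximum uncertainty, so no early exit reaches its confidence threshold and the whole network has to run. The sketch below continues the hypothetical MultiExitNet above; the number of steps, step size, and 8/255 noise budget are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def all_exit_logits(model, x):
    """Run the input through every block and collect each exit's logits."""
    outputs = []
    for block, exit_head in zip(model.blocks, model.exits):
        x = block(x)
        outputs.append(exit_head(x.mean(dim=(2, 3))))
    return outputs

def slowdown_perturbation(model, image, epsilon=8 / 255, steps=40, step_size=1e-2):
    """Craft bounded noise that makes every exit maximally uncertain."""
    noise = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = 0.0
        for logits in all_exit_logits(model, image + noise):
            # Mean log-probability is highest for a uniform prediction, so
            # minimizing its negative pushes each exit toward total uncertainty,
            # keeping its confidence below the early-exit threshold.
            loss = loss - F.log_softmax(logits, dim=1).mean()
        loss.backward()
        with torch.no_grad():
            noise -= step_size * noise.grad.sign()
            noise.clamp_(-epsilon, epsilon)  # keep the change barely visible
        noise.grad.zero_()
    return noise.detach()

model = MultiExitNet()
image = torch.randn(1, 3, 32, 32)
_, depth_clean = model(image)
_, depth_attacked = model(image + slowdown_perturbation(model, image))
print(f"blocks used: {depth_clean} (clean) vs {depth_attacked} (perturbed)")
```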
When the attackers were assumed to have complete information about the neural network, they were able to max out its energy draw. When they were assumed to have little or no information, they were still able to slow the network's processing and increase energy use by between 20% and 80%. The reason, the researchers found, is that the attacks transfer well across different types of neural networks: designing an attack against one image-classification system is enough to disrupt many others.
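The limited-information setting can be illustrated with the same hypothetical pieces: the perturbation is crafted against a surrogate model the attackers control and then applied to a separate target they never inspect. This only shows the workflow, not a demonstration that the toy models above actually transfer.

```python
surrogate = MultiExitNet()  # stand-in model the attacker trains and can inspect
target = MultiExitNet()     # victim model, treated as a black box
image = torch.randn(1, 3, 32, 32)

noise = slowdown_perturbation(surrogate, image)  # crafted without touching `target`
_, depth_clean = target(image)
_, depth_attacked = target(image + noise)
print(f"target blocks used: {depth_clean} -> {depth_attacked}")
```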