Deep-learning models have become extremely powerful, but that power has come at the expense of transparency. As these models are deployed more widely, a new area of research has arisen that focuses on creating and testing explanation methods that can shed light on the inner workings of these black-box models.
With conventional methods, it is extremely time-consuming to calculate the spectral fingerprint of larger molecules, yet this is a prerequisite for correctly interpreting experimentally obtained data.
MIT researchers created a method that helps humans develop a more accurate mental model of an artificial intelligence teammate, so they better understand when they should trust the AI agent's predictions.
Feature-attribution methods are used to determine whether a neural network is working correctly when completing a task like image classification. MIT researchers developed a way to evaluate whether these feature-attribution methods correctly identify the image features that are important to a neural network's prediction.
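To make the idea of feature attribution concrete, below is a minimal sketch of one common technique, vanilla gradient saliency, in PyTorch. This is a generic illustration only, not the evaluation method developed by the MIT researchers; the ResNet model and the random placeholder input are assumptions for the example.

```python
# Minimal sketch of gradient-based feature attribution (saliency map).
# The model and input here are placeholders, not the MIT method.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Placeholder for a preprocessed image of shape (1, 3, 224, 224).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass and pick the top predicted class.
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Gradient of the top-class score with respect to the input pixels.
logits[0, top_class].backward()

# Per-pixel importance: absolute gradient, max over color channels.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
```

Pixels with large saliency values are the ones the attribution method flags as important to the prediction; evaluating whether those flagged pixels are the features the network actually relies on is the question the research above addresses.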