How a Team from MIT is Using an AI Tool to Detect Melanoma
Thought Leaders | Luis R. Soenksen, Ph.D. | Research Scientist and Venture Builder
Massachusetts Institute of Technology
AZoRobotics speaks with Luis R. Soenksen, entrepreneur and medical device expert, currently serving as MIT's first Venture Builder in Artificial Intelligence and Healthcare. Luis was part of a team that developed a system that uses neural networks to spot "ugly duckling" pre-cancerous lesions on a patient's skin, helping to detect melanoma more efficiently.
Can you give our readers a summary of your recent research? What were the main aims of the research?
LESSWRONG
Artificial Neural Networks (ANNs) are built around the backpropagation algorithm, which lets you perform gradient descent on a network of neurons. When we feed training data through an ANN, backpropagation tells us how the weights should change.
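The idea can be sketched in a few lines. The following toy example (not from the post; the dataset, learning rate, and loss are illustrative choices) trains a single sigmoid neuron by backpropagating the error through the chain rule and taking gradient descent steps on the weight and bias:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy training data: learn to output 1 when x > 0, else 0.
data = [(-2.0, 0.0), (-1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]

w, b = 0.1, 0.0   # initial weight and bias
lr = 0.5          # learning rate

def loss(w, b):
    # Mean squared error over the dataset.
    return sum((sigmoid(w * x + b) - y) ** 2 for x, y in data) / len(data)

initial = loss(w, b)
for _ in range(200):
    dw = db = 0.0
    for x, y in data:
        p = sigmoid(w * x + b)
        # Backpropagation: chain rule through the squared error
        # and the sigmoid gives the gradient at the pre-activation.
        grad = 2 * (p - y) * p * (1 - p)
        dw += grad * x / len(data)
        db += grad / len(data)
    w -= lr * dw  # gradient descent step
    b -= lr * db

final = loss(w, b)
```

In a real multi-layer network the same chain-rule computation is propagated backwards layer by layer, but the principle — compute the gradient of the loss with respect to each weight, then step downhill — is identical.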
ANNs are good at inference problems. Biological Neural Networks (BNNs) are good at inference too. ANNs are built out of neurons. BNNs are built out of neurons too. It makes intuitive sense that ANNs and BNNs might be running similar algorithms.
There is just one problem: BNNs are physically incapable of running the backpropagation algorithm.
Multimodal Neurons in Artificial Neural Networks
March 4, 2021
22 minute read
We’ve discovered neurons in CLIP that respond to the same concept whether it is presented literally, symbolically, or conceptually. This may explain CLIP's accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn.
Fifteen years ago, Quiroga et al. discovered that the human brain possesses multimodal neurons. These neurons respond to clusters of abstract concepts centered around a common high-level theme, rather than any specific visual feature. The most famous of these was the “Halle Berry” neuron, a neuron featured in both Scientific American and The New York Times, that responds to photographs, sketches, and the text “Halle Berry” (but not other names).