How the U.S. Government Can Learn to See the Future
Editor’s Note:
Intelligence assessments are made under tremendous time pressure with imperfect information, so it is no surprise that they are often wrong. They can be better, but the intelligence community often fails to use the best analytic techniques. Julia Ciocca, Michael C. Horowitz, Lauren Kahn, and Christian Ruhl of Perry World House at the University of Pennsylvania explain the current deficiencies in assessment techniques and argue that rigorous probabilistic forecasting, keeping score of assessments, and employing the “wisdom of crowds” (sketched in code below) produce better results.
Daniel Byman
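For readers unfamiliar with these terms, the short sketch below (ours, not the authors’) shows how “keeping score” works in practice using the standard Brier score, and how a simple “wisdom of crowds” aggregation averages individual probability estimates. All analysts, events, and numbers are hypothetical.

```python
# A minimal sketch (not from the article) of two of the techniques named
# above: scoring probability forecasts with the Brier score, and pooling
# a "crowd" of analysts by simple averaging. All numbers are invented.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; 0.0 is a perfect forecasting record."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

# Three hypothetical analysts each assign probabilities to the same two events.
analyst_forecasts = {
    "A": [0.9, 0.2],
    "B": [0.6, 0.4],
    "C": [0.8, 0.1],
}
outcomes = [1, 0]  # event 1 happened, event 2 did not

# "Wisdom of the crowd": average the analysts' probabilities per event.
crowd = [sum(f[i] for f in analyst_forecasts.values()) / len(analyst_forecasts)
         for i in range(len(outcomes))]

for name, f in analyst_forecasts.items():
    print(f"Analyst {name}: Brier = {brier_score(f, outcomes):.3f}")
print(f"Crowd avg: Brier = {brier_score(crowd, outcomes):.3f}")
# By Jensen's inequality, the averaged forecast always scores at least as
# well as the average individual analyst.
```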
In 1973, then-Secretary of State and National Security Adviser Henry Kissinger argued that policymaking could be reduced to a process of “making complicated bets about the future,” noting that it would be helpful if he could be supplied with “estimates of the relevant betting odds.”
In 1983, the U.S. military’s research and development arm began a ten-year, $1 billion machine intelligence program aimed at keeping the United States ahead of its technological rivals. From the start, computer scientists criticized the project as unrealistic. It promised big and ultimately failed hard in the eyes of the Pentagon, ushering in a long artificial intelligence (AI) “winter” during which potential funders, including the U.S. military, shied away from big investments in the field and abandoned promising areas of research.
Today, AI is once again the darling of the national security services. And once again, it risks sliding backward as a result of a destructive “hype cycle” in which overpromising conspires with inevitable setbacks to undermine the long-term success of a transformative new technology. Military powers around the world are investing heavily in AI, seeking battlefield and other security applications that might provide an advantage over potential adversaries.