Two hazards frighten our otherwise optimistic AI techies. The first is the possibility of a superintelligence, what our transhumanists have been calling the "Singularity," taking over the world and dispensing with the human race. The second is bad actors with malicious intent getting hold of powerful AI tools, disrupting global communications and letting loose lethal autonomous weapons. Hazards more than hopes occupy today's AI techies.
In 1993, during the VISION-21 Symposium sponsored by NASA Lewis Research Center and the Ohio Aerospace Institute, science-fiction writer and mathematician