Two hazards frighten our otherwise optimistic AI techies. First, the possibility of superintelligence -- what our transhumanists have been calling the "Singularity" -- taking over the world and dispensing with the human race. Second, malicious actors getting hold of powerful AI tools, disrupting global communications, and letting loose lethal autonomous weapons. Hazards, more than hopes, occupy today's AI techies.