...it killed the operator. Lucas Tomlinson will break it down. Lucas.

Lucas: That is right. Good morning, Todd and Carley. The Air Force's chief of AI testing says this happened during a simulation that taught a drone to destroy missiles. They said it went wrong when the operator tried to take manual control. The operator would say, yes, kill that threat. But at times, after the system identified a threat, the operator would tell it not to kill that threat. What did it do? It killed the operator, because that person was keeping it from accomplishing its objective. Sounds like The Terminator. The Air Force says the Department of the Air Force has not conducted any such exercise. Security experts are still warning something like this could happen, pointing out that humans have already created disinformation and propaganda bots. The Center for AI Safety adds this could be catastrophic for humans. Many are concerned that AI in the future, given how rapidly it is progressing, could lead to human extinction or leave society worse off.
Yes, it could lead to extinction.

Lucas: Military officials are denying this took place, saying the Air Force is committed to the responsible use of AI.

Carley: Basically, the story is this military AI system wound up thinking it was smarter than the human operator. The system was built to take down missiles, and when the human operator told it no, don't take down those air missiles, the AI drone identified the human as the threat.