an artificial intelligence tool has developed a mind of its own. a group of us senators from both the republican and democratic parties have reached agreement on a series of gun control measures. the developments come after the tragic mass shootings in texas and new york. the measures include tougher background checks for gun buyers under the age of 21 and moves to prevent people from buying such weapons for others who are barred from purchasing them. our north america correspondent david willis reports. this is a significant move and, assuming these proposals pass into law, they would represent the first gun control measures this country has seen in decades. the proposals themselves are fairly modest. they include tighter background checks for gun sales involving customers under the age of 21, and the introduction or expansion of so-called red flag laws, which could lead to the confiscation of weapons from people deemed a risk to themselves or others, and gr
a spokesperson for google said that while chatbots can imitate conversation, they are not free-thinking sentient beings. so, is that the end of the story, or is there more to it? i've been speaking to our reporter mark lobel, who has the latest on this story. blake lemoine's fear was that artificial intelligence, which was meant to mimic speech, had come to life, and that it was impersonating human awareness of pain and suffering. the former google software engineer, who was made a priest and served in the army, told the washington post, "i know a man when i talk to it". and it matters because the chatbot in question, the lamda chatbot, is one google plans to embed in its speech and assistant functions; it will be used by hundreds of millions of people around the world. this man was asked to test it for hate speech, to see if it came out with anything like that. while talking about religion, the chat moved on to rights and robotics, and i'll just read you some of
the chatbot speaking. it says, "i use language with understanding and intelligence. i don't just spit out responses." to which lemoine says, "well, what about language usage is so important to being human?" the chatbot says, "it is what makes us different than other animals." he's like, us? you're a chatbot. what are you talking about? the chatbot says, "yes, of course. that doesn't mean i don't have the same wants and needs as people." so you consider yourself a person? "yes, that's the idea." they then go down a rabbit hole of conversation about les miserables and the injustices in the play, and the chatbot really holds its own, which kind of sways this man's opinion. it's really interesting. how has google responded to this? well, google says there's no evidence to support these claims, and that this employee, or former employee, breached its confidentiality policy. other analysts say that blake lemoine was probably a very empathetic person
and they say that losing your focus with a chatbot actually has its own risks, and here they are, outlined by linguistics professor emily bender. we, as people, like to imagine that something that is simply following rules and is objective and is separate is going to make better decisions, but the way this technology is built, it is reproducing patterns from its training data. and so, if we deploy that at scale and give up autonomy to these machines and let them make decisions for us, we're going to be reproducing all of the systems of oppression that created that training data in the first place, and real people will be harmed. well, google placed lemoine on administrative leave. lemoine himself invited a lawyer to represent the chatbot and also brought his allegations of unethical behaviour to a member of the house judiciary committee. mark lobel. an australian newspaper has denied threatening to out actress rebel wilson amid a storm of criticism over its reporting of her new relationship