Sandra: ...what could happen. John: I'm John Roberts in Washington. Wow, Sandra. It just doesn't stop. Every day it's new to us. Sandra: I'm Sandra Smith in New York. We have a big hour for you. This case stems from a yearlong investigation. The president's son is charged with three felonies and six misdemeanors concerning $1.4 million in owed taxes that have since been paid. No statement yet from the first family, and Kamala Harris declined to speak to this moments ago. We did just hear from Karine Jean-Pierre. She is declining to comment specifically as well, although she said there will still not be a pardon by the president, John. John: Yes, the answer to that question is still a no and, as we pointed out, will be right up until it is a yes. And President Biden ignored questions from the press as he boarded Marine One earlier. Listen to this. Mr. President, will you commute your son's sentence? Are you glad your son has pled in his case? John: Yes, the dulcet sounds of the auxiliary power unit on a
porkchops, text messages, things like that, emails, minutes of meetings both formal and informal with senior figures, their names unredacted. This is what she has asked for, all relating to the process about whether or not she is being elevated to the House of Lords. She is repeating, though, that it is her intention to resign, but given what she now knows and believes to be true, she says this process has become necessary to put an end to the speculation. So, we will get a bit of reaction, and we will get a political correspondent to de-jargon it a bit for us, too. The headline there is that she is giving a bit of extra explanation about why she has not yet resigned as an MP, but saying that she fully intends to do so. We will have more on that in a bit. Now, though, it is time for the sport. Jude Bellingham will be Real Madrid's latest star player, with the price tag to match. Real Ma
discriminate against applicants based on their race, age or gender. Next are AI apps and tools that aren't banned and don't pose any high risk. Nello Cristianini is professor of artificial intelligence at the University of Bath and author of The Shortcut: Why Intelligent Machines Do Not Think Like Us. I started by asking him what he makes of these attempts to regulate AI. We must be responsible for the good and also the problems of our technology. It's good that we think ahead at this time. And what kind of things can be regulated, do you think, here? Well, I think the parliament has a lot of power over what companies can do operating in Europe, or also operating from Europe outwards. And the list that you made is very good. So first, it establishes there will be some things that are not acceptable. That's a big statement. There will be some things that cannot be done in Europe. Second, certain things can only be done with very serious oversight. And third, even better, th
Members of the European Parliament have approved a draft voluntary code of conduct to regulate artificial intelligence tools. They're trying to limit harm from AI while also promoting innovation in everything from self-driving cars to chatbots. New laws could be in place by the end of the year. Our reporter Simi Jolaoso has been looking at the plans. The idea is to govern the use of AI based on three levels of risk, starting with unacceptable risk. That is when AI is used for things deemed so unethical, such as biometric surveillance, or even using it to keep a kind of social score on people. Think Netflix's Black Mirror. Next is high risk: things that might cause harm to people's health, harm to the environment, or affect people's fundamental rights. For example, an AI tool that scans CVs in order to rank job applicants, which is fine as long as it abides by certain rules; for example, it doesn't discriminate against applicants based on their race, age or gender. Next are AI apps and tools that aren't banned and don't pose any high risk.
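The three-tier model the report describes can be sketched as a simple lookup. This is only an illustration of the idea: the tier names, the example use-case labels, and the fallback tier below are assumptions for the sketch, not wording from the draft rules.

```python
# Illustrative sketch of a three-tier AI risk model (unacceptable / high /
# everything else), as described in the report. Use-case labels are invented.

RISK_TIERS = {
    "unacceptable": {"social_scoring", "biometric_surveillance"},
    "high": {"cv_ranking", "credit_scoring"},
}

def classify_risk(use_case: str) -> str:
    """Return the risk tier for a given AI use-case label."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    # Anything not banned and not high-risk falls into the lowest tier.
    return "minimal"

print(classify_risk("social_scoring"))  # unacceptable: would be banned outright
print(classify_risk("cv_ranking"))      # high: allowed only under strict rules
print(classify_risk("chatbot"))         # minimal: not banned, no high risk
```

The point of the tiered design is that the regulatory burden scales with the harm a use-case could cause, rather than one rule applying to all AI systems.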