From around the world, once again, a big hello and a warm welcome to the show. You know, fraud and scams, they're nothing new. But with each technological change, the fraudsters, they are quick to pounce on new possibilities. We've all had emails, phone calls and text messages trying to scam us out of cash by people pretending to be someone we know, or with a time-limited offer that's just too good to miss. Or maybe they're pretending to be a tax official or a utility which asks us to transfer money urgently to them. Well, now, as artificial intelligence promises to speed up the world of business, the fraudsters, they're already taking advantage of these new possibilities.

In April, Jennifer DeStefano, a mum in Arizona in the United States, received a phone call from an unknown number. She thought it might be her doctor. Instead, she heard the voice of her 15-year-old daughter, Briana. She told the rest of the story to the United States Senate.

It was Briana sobbing and crying, saying, "Mom." At first I thought nothing of it and casually asked her what happened. Briana continued with, "Mom, I messed up," crying and sobbing continually. Not thinking twice, I asked her again, "OK, what happened?" Suddenly, a man's voice barked at her, "Lay down, put your head back." At that moment, I started to panic. My concern escalated as I demanded to know what was going on, but nothing could have prepared me for the response that she gave me. "Mum, these bad men have me. Help me, help me, help me," she begged and pleaded as the phone was taken from her. A threatening and vulgar man took the call over. "Listen here, I have your daughter. You call anybody, you call the police, I'm going to pump her so full of drugs. I'm going to have my way with her. I'm going to drop her in Mexico and you'll never see your daughter again." They required me to get in a van with a bag over my head, with $50,000 in cash, to be transported to my daughter. If I didn't have all the money, then we were both going to be dead. I was shocked at that point in time. The second mum came back to me and she had located my husband, who had found Brie resting safely in bed. I called the police to pursue the matter, and unfortunately I was met with "it was a prank call", that it happens often and that there's nothing that can be done. So I turned to the community, and the responses were overwhelming. Friends and neighbours came out of the woodwork with their stories: kidnapping phone calls coming from their children's phones, bags of money being driven halfway to Mexico, even voices of young children nowhere to be found on social media, who do not have phones. The stories kept pouring in.

Jennifer DeStefano's story is a dramatic example of the new possibilities AI is giving criminals to scam us. In the United States alone, there was a 30% jump in fraud last year, to almost $9 billion. $2.6 billion of that was lost to so-called imposter scams: fraudsters pretending to be people or businesses and successfully getting billions for their dirty work. But we're also spending a lot to tackle the threat. $6.5 billion was spent on detecting AI fraud last year. That's a global estimate by the research company Juniper Research, which predicts that it will rise to $10 billion a year by 2027. One of those leading the charge is the massive cybersecurity company Check Point, and my first guest is its global chief information security officer. Pete Nicoletti, a real pleasure having you on the show.
Pete, let's start with this, because fraud is obviously nothing new, but I'm just wondering, Pete, how big of a game changer is AI when it comes to fraud?

Well, it's lowering the bar for scammers, both with the ability to use AI to automatically create convincing, targeted phishing emails and scams, as well as creating zero-day malware code that can be the vector for ransomware. It's creating a lot of college-level English-speaking hackers in countries that just don't speak English.

And we just heard Jennifer DeStefano's story about a deepfake phone call from her daughter. How easy is it, Pete?

Well, it can take as little as 10 minutes to clone a voice from videos posted on social media. The tools are free, they're easy to use, and hackers are even using their own tools on the dark web that don't have the watermarking or the guardrails that are trying to prevent this kind of misuse.

And I know my producer asked you to look at my social media to see what you could do. And I think you've got an audio clip and a video clip. Let's just start with what you came up with, the audio clip.

"Hi, honey. I'm here in Paris and my wallet and phone were stolen and I'm calling from a borrowed phone. I need to have some money sent to me immediately." That's me! "I'm going to send you my French colleague's PayPal information. Rachel, thank you very much. I love you and I'll call you later." Wow. I mean, that's pretty scary. That's pretty scary stuff. I mean, that was my voice.

We live in a post-real society, Aaron. Yes, absolutely, it's your voice. And if you call your mother with that voice, it will fool your mother.

Wow. And just to be clear, Pete, that was just from the stuff available that I had put out or shared on social media. So do people need to change what they actually put out on social media? Are we likely to stop sharing things if this opens us all up to scams?

Well, I think that cat is out of the bag. What we need to think about is what we share publicly. The risk of you getting scammed or your house being broken into increases with posts, when you say you're on vacation and scammers know your dog's name and your grandmother's name. Don't overshare. But the other countermeasure is to have a safe word that only you and your family know, or you and your business colleagues use, to validate transactions and potential threats to your family.

And, Pete, I mean, that was scary enough just listening to the audio, but you've also put a fake video of me together. What did you come up with?

"Morocco is facing devastating consequences of the recent earthquake. Support their recovery now by donating." We actually used ChatGPT and other artificial intelligence tools to take your voice clone and then create a video for the Moroccan earthquake where we're asking for money and you're calling into a number that the scammers have. So it's actually trivial. It took us about ten or 15 minutes to do that.

It sort of begs the question, what do people do about this?

Well, right now, the artificial intelligence tools can only take a frontal face picture and animate it. So if you think you're talking to an artificial intelligence person, ask them to turn around and show you their bald spot, the back of the head, because they're not scanning the back of their heads right now.

Wow, OK, good tip. And it's not just phone calls, because I'm wondering, Pete, texts, emails and letters, can they be just as powerful?

Yes, and it's actually easier to create with scripting and ChatGPT.
So thousands of emails can be created and sent out for pennies, and only a few people have to click on those and fall for those kinds of scams for it to be profitable for the scammers.

And Pete, we heard a mum who received a fake kidnap call from her daughter, and she said that the police said to her that no law had been broken because the scammers didn't actually get any money. But it makes you wonder, does there need to be a change in the law to make this kind of impersonation illegal?

Yes, we do. Here in the US, we've seen at least six scams where the hackers have made off with money from those types of kidnapping scams. And we've also seen companies scammed for money. We recently saw the MGM hack that was a call into the help desk, where it was potentially with a voice clone.

Absolutely. Also, again, not just families or individuals. Businesses are being targeted, fraudsters pretending, well, pretending to be colleagues or other companies, scamming money.

Absolutely. We're seeing it multiple times and the scams are increasing. We're seeing calls into the help desk where people are impersonating employees, and we're seeing money transfers from people that are calling into their CFO to have money transferred. So absolutely, the scammers are winning in this case.

And, Pete, a lot of the times when we talk about AI, certainly when I've talked about AI on my show, it's always, you know, the worry about it coming for your job. But with this, in some sort of strange way, this kind of feels like AI creating jobs, either for the fraudsters using AI or for the people, the experts, that we now need to tackle these scams.

Absolutely. It is creating more scammers on the offence, and here at Check Point we are hiring more people for defence. You have to bring AI to an AI fight or you will be a statistic.

And, Pete, can companies like yours, can you keep up with all of this, or do you need to sort of keep tweaking things to get one step ahead?

Well, we are staying one step ahead, and that's what Check Point is all about with our prevention mantra. But there's new scams every day. There's new zero days every day. So we have to be extremely reactive, and things like our ThreatCloud protect our customers across the world in two to three seconds from new scams every day.

And, Pete, let me end on this, because AI certainly is storming the headlines of late, but this is where we are now. I mean, I can't even think of where it is going to be in five years' time.

Well, it's crazy, because this is the fastest-evolving technology that the human race has ever evolved, you know, dealt with and is evolving with. So, you know, just since November, we have over 100 million users with ChatGPT. And I'm one of the pessimistic ones. I think the singularity, when AI actually gains a consciousness, is coming sooner than we think it is. It may be an Arnold Schwarzenegger future for us poor humans.

Wow. Well, on that scary point, Pete Nicoletti of Check Point, a real pleasure having you on the show. Thanks for everything. And we'll talk to you soon.

You're very welcome. And I hope some of my ideas are going to keep our viewers safe.

So what can consumers do to protect themselves in the face of this new wave of fraudulent threats? Well, my next guest, she's been on the show before. She's the Director of Consumer Protection at the Consumer Federation of America. Erin Witte, welcome back to the show. Always a pleasure having you with us.
And Erin, let me start on this: I want to know how common is this, in terms of the complaints coming in to you? Because I'm also wondering just how big are these new AI scams?

So this is actually becoming very common. Last year, the US Federal Trade Commission reported that impostor scams were the top type of fraud that was reported to the FTC. So we know that this method of impersonating somebody else, using someone else's likeness, whether it's a family member, a friend, a government agency, is extremely effective. And we also can safely assume that something that's effective, that takes money away from people quickly, if AI can be used to speed up that process, I think it's safe to assume that scammers will continue to rely on this and that this type of scam is only going to grow. Also, this is happening all over the world, not just in the US. The UK recently released a report that was based on a survey of consumers aged 18 to 54, and 16% of them reported that they had been targeted by an AI-generated scam. A few months ago, Chinese authorities arrested an individual who was able to scam someone out of $600,000 in one instance.

Wow, wow. So on that point, what's your advice, Erin, and what's the advice to consumers?

So there's a few things that consumers can do. The number one thing that consumers can do is educate themselves, learn more about this and keep it top of mind. And not only educate yourself, find out more about it, but tell your friends, tell your family. This is probably an all-too-common topic within my own family. They're tired of hearing me talk about scams. Another thing to remember is that scammers are the most effective when they can create a sense of urgency. So when someone has you on the phone, if they've got you on a video call and they're saying, it purports to be a family member, it can be really scary and it is often very, very urgent. So that is the time when it's the most critical to exercise scepticism, try to verify: is this the person that they say they are? If it's your family member, maybe you can send them a text and say, is this really you? Where are you right now? You can try to call 911 on a different line. Ask someone who's in the room with you. Really try to exercise scepticism when you feel that really crazy sense of urgency.

And Erin, I've just experienced a deepfake of myself. I mean, it's scary stuff, and it was just taken off social media, so I'm just wondering, do you think people need to rethink what they share on social media?

Yeah, I think it is important, really important, to exercise a lot of caution when you're sharing something that you've seen, especially because it can be used in ways that are much bigger than just an individual. It could be used to create an entire marketplace disruption, when someone could create a video of a natural disaster. So as people are resharing these videos, if something seems really unlikely or like a huge shock, really try to exercise scepticism, try to verify it with a third party or a different source before you immediately share it with someone else.

And, Erin, are the regulations keeping up or going fast enough? Or is AI just too advanced already?

I think it's impossible to expect that any government regulator is going to be able to act as quickly as we've seen AI technology moving. But we don't have to create an entirely new framework of law to address this conduct.
A lot of times, the way that AI is being used, and what's most problematic, is to commit conduct that's already illegal: discrimination, fraud, anti-competitive conduct. So regulators can actually use their existing tools to combat all of this illegal conduct with tools that they already have. In the US, President Biden actually issued a Blueprint for an AI Bill of Rights last year for companies to use as guidance when they're developing AI technology in a way that will protect the American public on things like data privacy, algorithms that are being used in discrimination, and things like that.

And, Erin, next year we have, well, we have elections where you are in the US, here in the UK, I mean India, Nigeria, Spain, just to name a few. How worried should we be?

Very worried. I think we've already seen deepfakes being used in the context of elections to spread false videos, false information. Consumers should be very, very careful about this and very aware that it's going to increase significantly, to try to promote a certain party, a certain candidate, to try to undermine parties or candidates. Consumers should be on very high alert when they're seeing videos being used in political ads or campaigns.

And, Erin, as the Consumer Federation, what is it that you would like to see being done about all of this?

There's a few different things that we'd like to see. We talked before about the importance of consumer education, and I think that is absolutely crucial. But education doesn't only just happen for consumers. It should also be happening for businesses, to educate their employees about the use of AI scams and fraud. AI can be used to impersonate your boss, your co-worker, someone outside your company telling you that you need to do something quite quickly, so education should be used by businesses, promoted by organisations, to make sure that people are fully aware of the way that this can be used to scam people at all different levels. We should also be emphasising education for smaller, local outlets like consumer affairs organisations, police departments. These are often the companies, the agencies, that hear directly from consumers about these problems, so the more that we can spread the word about effective tools to combat it, ways to get people's money back, the better off we'll all be.

And, Erin, let me end on this, sort of summing it up. How concerning is all of this, or is there a lot more fear of technology than actual threat?

I think this is very concerning. You know, AI could be used in ways to benefit people. It could be used responsibly and in ways that adhere to law. But unfortunately, that's not always what we're seeing. And so the alarm that's being created by the proliferation of deepfakes, discriminatory decision-making algorithms and things like that is very real.

Erin, does there need to be a law change here? Because we've heard from one mother who received a fake kidnap call from her daughter. She talked to the police, the police said they can't do anything about it, no money was actually handed over, but it just feels wrong in some ways.

It does feel wrong. And I think one of the biggest problems with frauds and scams, not just in AI but generally, is that it can be really difficult for someone to get their money back, because that is an extremely effective tool that's used by scammers, right? They take someone's money away from them and they're gone.

Well, on that point, Erin Witte, great to see you again. Thanks f