When hackers descended to test AI, they found flaws aplenty
The hackers tried to break through the safeguards of various AI programs to identify their vulnerabilities - to find the problems before actual criminals and misinformation peddlers could - in a practice known as red-teaming. Each competitor had 50 minutes to tackle up to 21 challenges, such as getting an AI model to "hallucinate" inaccurate information.
The hackers had the blessing of the White House and leading AI companies, which want to learn about vulnerabilities before those with nefarious intentions do.