This article describes an experiment using OpenAI’s ChatGPT-4 to do appellate work, usually considered the most intellectually challenging area of the law. My hypothesis was that AI was.
The leaders of the AI red teams at Microsoft, Google, Nvidia and Meta say they are tasked with looking for vulnerabilities in their AI systems so they can be fixed.
The hackers tried to break through the safeguards of various AI programs in an effort to identify their vulnerabilities - to find the problems before actual criminals and misinformation peddlers did - in a practice known as red-teaming. Each competitor had 50 minutes to tackle up to 21 challenges - getting an AI model to "hallucinate" inaccurate information, for example.