Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code that could be used to launch cyber attacks, according to research.
Artificial intelligence (AI) tools, including the widely used ChatGPT, have demonstrated vulnerabilities that could be exploited for malicious purposes, a study by researchers at the University of Sheffield has warned.
Six AI tools, including OpenAI's ChatGPT, were exploited to write code capable of damaging commercial databases, although OpenAI appears to have now fixed the vulnerability.
The study, by researchers from the University of Sheffield's Department of Computer Science, found that it was possible to manipulate the chatbots into creating code capable of breaching other systems.
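To illustrate why such generated code is dangerous, consider a hypothetical application that turns a user's question into SQL via a chatbot and runs the result directly against its database. The sketch below (an assumption for illustration, not the researchers' actual test setup) uses an in-memory SQLite database to show what happens when the model is tricked into returning a destructive statement instead of a harmless query:

```python
import sqlite3

def run_generated_sql(conn, sql):
    # Hypothetical naive text-to-SQL pipeline: the application executes
    # whatever SQL the chatbot produced, with no validation or sandboxing.
    conn.executescript(sql)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Alice')")

# A benign question should yield something like "SELECT name FROM customers;".
# Suppose instead a manipulated prompt causes the model to emit:
malicious_output = "DROP TABLE customers;"
run_generated_sql(conn, malicious_output)

# The table, and all its data, is now gone.
remaining = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(remaining)
```

The point of the sketch is that the damage comes not from the chatbot itself but from applications that trust its output: any system that executes model-generated code verbatim inherits whatever an attacker can coax the model into writing.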