As LLMs begin to integrate multimodal capabilities, attackers could embed hidden instructions in images and audio to steer a chatbot's responses, researchers said at Black Hat Europe 2023.
The University of Chicago’s “Glaze” tool is getting an upgrade called “Nightshade,” which laces images with “poison” designed to corrupt AI models trained on them without permission.
Researchers have built a clever new tool that could allow artists to "poison" their work, causing popular image-generating AI models like OpenAI's DALL-E, Midjourney, or Stable Diffusion to produce useless output. The tool could allow artists, who are at risk of having their copyrighted material used to generate new, unoriginal pieces, […]
Shining a light on Nightshade: a tool coming from the shadows (readwrite.com).
The tool, called Nightshade, could be used to damage future iterations of image-generating AI models, leveling the playing field between AI companies and artists.
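None of these reports spell out Nightshade's algorithm, but the broad family of attacks it belongs to, perturbing an image so that a model's feature extractor reads it as a different concept, is straightforward to sketch. The toy PGD-style example below illustrates only that general idea; it is not Nightshade's actual method, and the ToyEncoder, the poison function, and all of its parameters are hypothetical stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a real image feature extractor (e.g., a CLIP-style
# image encoder); any differentiable model mapping images to embeddings works.
class ToyEncoder(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(dim),
        )

    def forward(self, x):
        return self.net(x)

def poison(image, target_embedding, encoder, eps=8 / 255, steps=40, lr=0.01):
    """Nudge `image` within an L-infinity ball of radius `eps` so its embedding
    moves toward `target_embedding` (a different concept), while the pixels
    stay visually close to the original."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = encoder(image + delta)
        loss = nn.functional.mse_loss(emb, target_embedding)
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # gradient step toward the target concept
            delta.clamp_(-eps, eps)          # keep the perturbation imperceptible
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()

if __name__ == "__main__":
    encoder = ToyEncoder()
    cat_image = torch.rand(1, 3, 32, 32)  # stands in for an artist's image
    dog_image = torch.rand(1, 3, 32, 32)  # stands in for the target concept
    with torch.no_grad():
        target = encoder(dog_image)
    poisoned = poison(cat_image, target, encoder)
    print((poisoned - cat_image).abs().max())  # perturbation bounded by eps
```

The key design point is the eps bound: each perturbation stays small enough that the image looks unchanged to a human collector, while a model trained on many such images learns a skewed association between the two concepts.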