The University of Chicago’s Glaze Project has released Nightshade v1.0, a tool that lets artists sabotage generative AI models that ingest their work for training. Nightshade makes pixel-level changes that are invisible to the human eye but trick AI models into reading an image as something other than what it depicts, corrupting the models’ image output — for example, causing a cubist-style image to be read as a cartoon. It’s available now for Windows PCs and Apple Silicon Macs.
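To give a rough sense of the underlying idea — imperceptible, targeted pixel perturbations that push a model toward the wrong concept — here is a minimal, illustrative sketch. It is not Nightshade’s actual method (which is described in the Glaze Project’s research paper and targets text-to-image training data, not classifiers); it uses an off-the-shelf ResNet-18 as a stand-in model, and the `poison_image` function name, step counts, and epsilon bound are all assumptions made for the example.

```python
# Illustrative sketch only: a generic targeted perturbation attack, NOT
# Nightshade's algorithm. It nudges pixels within a tiny +/- epsilon budget
# so a pretrained classifier reads the image as a chosen target class,
# while the image looks unchanged to a person.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def poison_image(img_path, target_class, epsilon=4 / 255, steps=40, step_size=1 / 255):
    """Return a subtly perturbed copy of the image that a ResNet-18
    classifier misreads as `target_class` (hypothetical helper)."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

    # Load the image as a [0, 1] tensor of shape (1, 3, 224, 224).
    x = T.Compose([T.Resize((224, 224)), T.ToTensor()])(
        Image.open(img_path).convert("RGB")
    ).unsqueeze(0)

    delta = torch.zeros_like(x, requires_grad=True)  # the invisible perturbation
    target = torch.tensor([target_class])

    for _ in range(steps):
        # Loss is low when the model predicts the *wrong* (target) class.
        loss = torch.nn.functional.cross_entropy(model(normalize(x + delta)), target)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()  # step toward the target label
            delta.clamp_(-epsilon, epsilon)         # keep the change imperceptible
            delta.clamp_(-x, 1 - x)                 # keep pixel values in [0, 1]
        delta.grad.zero_()

    return (x + delta).detach()
```

The key design point the sketch illustrates is the epsilon budget: the perturbation is clamped to a range far below what the human eye notices, so the artwork looks untouched while a model trained on it learns a skewed association.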