A recent report has uncovered a concerning trend in the development of artificial intelligence image generators, revealing the use of explicit photos of children in their training datasets.
A study by researchers at the Stanford Internet Observatory found that LAION-5B, one of the largest image datasets used to train AI systems such as Stable Diffusion and other leading text-to-image generators, contains thousands of instances of child sexual abuse material (CSAM).