The U.S. National Center for Missing & Exploited Children (NCMEC) said it received 4,700 reports last year about content generated by artificial intelligence that depicted child sexual exploitation. NCMEC told Reuters the figure reflected a nascent problem that is expected to grow as AI technology advances. In recent months, child safety experts and researchers have raised the alarm about the risk that generative AI technology, which can create text and images in response to prompts, could exacerbate online exploitation.