Child trafficking and online sexual exploitation are increasingly intertwined with the dynamics of digital life. Criminals exploit social media platforms, messaging services, online video games, and, to a growing extent, immersive environments to lure, manipulate, and monetize minors. Traditional trafficking networks also use the internet to recruit, advertise, and coordinate exploitation, turning it into the “digital face” of trafficking. Data from international organizations show a steady increase in these crimes and a growing sophistication of criminal tools, from financial sextortion to the use of images generated by artificial intelligence (AI) to circumvent controls.
Grooming through social media and video games
Grooming involves establishing a relationship of trust with a minor in order to abuse them. On social media and in-game chats, predators exploit private messages, temporary groups, and voice chats. Platforms with a strong youth presence and ephemeral communications further reduce the likelihood of detection.
According to the National Center for Missing & Exploited Children (NCMEC), reports to the CyberTipline increased in 2024, partly due to expanded reporting requirements in the United States. Cases of financial sextortion targeting teenagers are also on the rise. The WeProtect Global Alliance reports an 87% increase in reports analyzed since 2019, with new forms of abuse linked to AI and dissemination on messaging platforms and in online games.
In online video games, predators exploit cooperative dynamics, anonymous nicknames, and poorly moderated voice channels. Warning signs for parents and educators include requests for private chats, insistence on secrecy, sending gifts or game credits, “testing” boundaries with seemingly harmless requests, and attempts to isolate the child from their network of trusted adults.
Faced with these increasingly widespread and diverse risks, prevention cannot rely solely on technical tools: it is essential to strengthen the ability to recognize and interpret warning signs in a timely manner. This is where awareness and digital education play a central role.
Digital education and teacher training are key elements in the early detection of behavioral patterns typical of online grooming. Investing in teacher awareness and competence means, on the one hand, equipping them with the knowledge to recognize indicators such as requests for private chats, sudden changes in contacts, isolation from friends or family, insistence on secrecy, or in-game gifts; on the other hand, it means enabling them to intervene promptly, reporting suspicious situations to the authorities or support services and activating protection measures for the child.
Targeted training also enables teachers to adopt effective teaching methods to convey to students a culture of digital responsibility, awareness of risks (grooming, sextortion, manipulation), and strategies for moving online safely and critically. Without this level of preparation, schools risk becoming a vulnerable rather than a protective environment, as the rapid evolution of digital platforms, from social media to video games to immersive environments, and of exploitation techniques requires up-to-date skills that go far beyond traditional lessons on “safe computer use”.
Darknet and closed forums
Child sexual abuse material (CSAM) is distributed not only on mainstream platforms, but also in closed forums, encrypted channels, and on the darknet, often accessible via Tor. These environments offer high impunity and a flexible infrastructure for sharing, selling, or exchanging illegal content.
According to Europol’s 2024 IOCTA, online child sexual exploitation remains one of the main cyber threats. Criminal networks use end-to-end encryption, anti-forensics techniques, and self-hosted platforms, while the use of multi-layer VPNs and privacy-oriented cryptocurrencies makes investigations particularly complex.
The emergence of synthetic content, such as deepfakes and AI-generated images, poses a growing challenge: while not depicting real minors, such material can normalize pedo-criminal behavior and contribute to the creation of new offenders. In 2024, the INHOPE network of international hotlines reported millions of CSAM URLs, a double-digit increase over the previous year. Added to this are the transnational operations carried out by Europol and Interpol in 2025, which dismantled networks disseminating AI-generated abuse material.
These dynamics show how the risk of exploitation is not confined to hidden corners of the internet, but can extend to any new digital environment that offers anonymity, interaction, and little moderation. It is precisely in this context that the vulnerabilities of immersive worlds emerge.
Risks of the metaverse
Immersive environments such as VR, AR, and social metaverses represent new frontiers of digital interaction, but they introduce amplified risks. Attackers can interact with three-dimensional avatars, creating a perception of closeness or physicality that is much more intense than on traditional platforms.
The lack of age verification, the possibility of using multiple avatars, and the presence of unmoderated private rooms significantly increase the risk of grooming and sextortion. The Digital Services Act (DSA) extends accountability and transparency obligations to immersive platforms, imposing restrictions on advertising targeted at minors and on the handling of illegal content. However, studies by the European Parliamentary Research Service (2024) and Ofcom (Online Nation 2024) note that, while a step forward, the DSA is not yet fully adequate for the complexity of VR/AR worlds, where the collection of biometric data (eye movements, posture, voice tone) can reveal sensitive information about the user, including their age or emotional state.
Immersive platforms must therefore develop specific “safety by design” measures, including:
- Proportionate and accurate age assurance, distinguishing between behavioral methods, which estimate age probabilistically from interaction patterns, and document-based checks, which involve formal identity verification. The choice should favor the least invasive option compatible with the risk level, while guaranteeing both privacy and reliability (a sketch of this logic follows the list).
- Real-time moderation of voice and gesture interactions, supported by machine learning models trained on VR contexts.
- Immediate and accessible reporting channels, including emergency systems (“panic button”) to quickly cut off unwanted interactions (a second sketch below illustrates the flow).
- Adaptive parental supervision, allowing parents to monitor their children’s activity without needlessly compromising their privacy or autonomy.
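To make the proportionality principle behind age assurance concrete, the following is a minimal sketch of how a platform might select the least invasive method compatible with a feature’s risk profile. All type names, method labels, and thresholds are illustrative assumptions, not a real platform API or a mandated DSA mechanism.

```typescript
// Hypothetical sketch: choosing the least invasive age-assurance method
// compatible with a feature's risk level. Names and criteria are assumptions.

type RiskLevel = "low" | "medium" | "high";

type AgeAssuranceMethod =
  | "self_declaration"        // user states their age; least invasive
  | "behavioral_estimation"   // probabilistic estimate from interaction patterns
  | "document_verification";  // formal identity check; most invasive

interface FeatureContext {
  risk: RiskLevel;              // e.g. "high" for unmoderated private rooms
  allowsPrivateContact: boolean;
  collectsBiometrics: boolean;  // eye tracking, voice analysis, etc.
}

// Pick the least invasive method that still matches the risk profile.
function selectAgeAssurance(ctx: FeatureContext): AgeAssuranceMethod {
  if (ctx.risk === "high" || ctx.collectsBiometrics) {
    // High-risk features justify a formal check despite its privacy cost.
    return "document_verification";
  }
  if (ctx.risk === "medium" || ctx.allowsPrivateContact) {
    // Medium risk: probabilistic estimation avoids storing identity documents.
    return "behavioral_estimation";
  }
  // Low-risk, public-only features: self-declaration is proportionate.
  return "self_declaration";
}

// Example: an unmoderated private voice room triggers the strictest check.
console.log(selectAgeAssurance({
  risk: "high",
  allowsPrivateContact: true,
  collectsBiometrics: false,
})); // -> "document_verification"
```

The design choice worth noting is the ordering: the function only escalates to more invasive methods when the risk context demands it, which mirrors the data-minimization logic the list describes.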
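Similarly, the “panic button” requirement can be read as a two-step protocol: sever the interaction immediately, then escalate to human review. The sketch below illustrates that ordering under assumed interfaces; `SessionActions`, `ReportQueue`, and their methods are hypothetical, not part of any existing VR SDK.

```typescript
// Hypothetical sketch of a panic-button flow in an immersive session.
// All interfaces and method names are illustrative assumptions.

interface SessionActions {
  muteAndHide(userId: string, targetId: string): void; // cut audio/avatar contact
  teleportToSafeSpace(userId: string): void;           // move the user out of the room
}

interface ReportQueue {
  enqueue(report: {
    reporterId: string;
    targetId: string;
    roomId: string;
    timestamp: number;
  }): void;
}

// On trigger: protect first, report second.
function onPanicButton(
  session: SessionActions,
  reports: ReportQueue,
  reporterId: string,
  targetId: string,
  roomId: string,
): void {
  // 1. Immediate protection: block contact before any review happens,
  //    so the child is never left exposed while a report is pending.
  session.muteAndHide(reporterId, targetId);
  session.teleportToSafeSpace(reporterId);

  // 2. Asynchronous escalation: moderators review the incident with
  //    enough context (who, where, when) to act on it.
  reports.enqueue({ reporterId, targetId, roomId, timestamp: Date.now() });
}
```

The point of the sketch is the sequencing: interrupting the unwanted interaction must not depend on how quickly a moderation team responds.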
