Connected and protected: How mobile industry leaders can safeguard kids' digital lives
March 3, 2026
Practical, evidence‑backed strategies for operators and platform partners to reduce harm to minors while enabling safe digital participation at scale.
Natalie Antoshchuk, Chief Marketing Officer, Trackimo
Thesis
Telecom industry stakeholders – mobile operators, platforms, and ecosystem partners – must act as both protectors and enablers of children's digital lives. We must unite to proactively mitigate the risks that 21st-century kids face every day – cyberbullying, grooming, AI-enabled abuse, sextortion, data risks, and identity theft – while preserving the connectivity, learning and social benefits unique to mobile access. This requires prioritizing scalable technologies and safety-by-design product approaches, building interoperable, privacy-preserving detection and reporting systems, and advancing cross-sector cooperation with governments, schools, NGOs, and law enforcement. These measures, aligned with emerging global regulatory frameworks, let the industry protect children at scale without compromising digital inclusion or the positive educational and social outcomes that connectivity delivers.
Executive summary
The scale and complexity of online risks lying in wait for children make child safety both a societal and a business imperative for the mobile ecosystem. Operators and their software and hardware partners hold unique capabilities to prevent, detect and remediate these risks at scale through network telemetry, account lifecycle controls, and innovative tools and technology. This article argues that the telecom industry must adopt four strategic priorities: embed safety-by-design across products and default settings for under-age users; empower parents and guardians with transparent, graduated controls and education; implement interoperable, privacy-preserving detection and reporting pipelines; and engage constructively with policymakers to align on interoperable standards and age-appropriate rules. Executed together, these measures reduce harm, protect brand trust and regulatory standing, and preserve the inclusive, educational and social benefits of mobile connectivity for children.
Market context and urgency
The data is clear: globally, online harm to children is rising, creating urgent strategic and operational risks for the mobile ecosystem. A deep dive into the numbers reveals shocking statistics: 46% of U.S. teens report experiencing at least one instance of cyberbullying behavior (Teens and Cyberbullying, 2022). On the scale of criminal exploitation, the NCMEC CyberTipline logged 20.5 million reports of suspected child sexual exploitation in 2024 (CyberTipline Data, 2025), underscoring the sheer volume of material flowing across networks and platforms. According to the Youth Endowment Fund (UK), teenage exposure to real‑life violent content is high on social platforms – 30% of 13-17‑year‑olds (44% of TikTok users) report seeing violence on TikTok, with Facebook (33%), Snapchat (32%) and Instagram (31%) showing similar exposure levels. According to the WHO, approximately one in six children experiences cyberbullying. The comprehensive HBSC (Health Behaviour in School-aged Children) report draws on responses from over 279,000 adolescents across 44 countries and regions; its second volume offers detailed insights into bullying, cyberbullying and physical fighting. The HBSC is a cross‑national study conducted with WHO/Europe every four years to inform policy and practice on the health, well‑being and social environments of 11‑, 13‑ and 15‑year‑olds (World Health Organization, 2024).
Threats are also evolving in their sophistication. WeProtect’s Global Threat Assessment (November, 2024) documents increasing scale and complexity, while Europol’s IOCTA (2024) highlights AI‑enabled social engineering, synthetic content, and LLM‑assisted grooming as evolving vectors. Statistically, mobile phones are the most common device for receiving sexually explicit content (79%), followed by laptops/computers (29%) and tablets (13%); WeProtect also reports that online sexual harm against children in Europe occurs predominantly on private image/video‑sharing services (Spadaro, 2024). Among respondents who reported receiving sexually explicit content from an adult (known or unknown), 76% experienced it at ages 16 – 18, 69% at 13 – 15, 26% at 9 – 12, and 14% before the age of 9.
These dynamics translate directly into commercial exposure for operators and platform partners: unresolved safety issues damage customer trust and retention, elevate regulatory and legal risk, and create reputational vulnerability. Conversely, they represent a differentiated market opportunity – operators that demonstrate credible child‑safety capabilities (safety‑by‑design defaults, robust reporting pipelines, transparent parental tools and advanced technology) can protect users at scale while unlocking new, trust‑based product offerings and strengthening relationships with regulators, schools and families.
Main online threats for children
As Charles Kettering once said, a problem well stated is a problem half solved. A deep understanding of the global problem of child digital safety shapes the industry’s strategic direction. Cyberbullying is abusive behaviour carried out through digital devices – SMS, text, apps, social media, forums or gaming. It can range from a malicious comment or text to stalking, sharing abusive content, or spreading false information about an individual. The most common platforms for these harmful acts are Facebook, Instagram, Snapchat, and TikTok (Bitesize, 2025). The solution might seem easy – ban children from social media, where they face not only bullying but also predators (adults or older teens who groom children online through flattery, secrecy or manipulation) who use mainstream and open web platforms as the first point of contact with children (NSPCC UK, 2024). In fact, abuse is everywhere – social media, chat apps, video games and messaging apps on consoles, dating sites and chatrooms. Perpetrators encourage children to continue communication on private, encrypted messaging platforms where abuse can proceed undetected. With emerging AI capabilities, including rising deepfake criminal incidents (Police.uk, 2026) and AI voice cloning (Clark, 2026), children are subjected to impersonation, identity theft, damaged reputations, fraud and much more. Banning social media for underage kids is a first major step that governments have already started to implement, confirming global concern for children’s physical, mental and social health.
Key examples of these emerging regulations include:
Australia (eSafety Commissioner): Platforms will be required to implement age-gating and verification for users under 16, starting December 10, 2025. These new duties also mandate stronger content moderation and reporting obligations (esafety.gov.au, 2025)
United Kingdom (Online Safety Act): This law establishes statutory duties of care for platforms, incorporating safety-by-design principles, requiring risk assessments for harmful content, strengthening moderation and escalation processes, and granting regulators greater transparency and enforcement authority (gov.uk, 2025).
France: Parliamentary proposals aim to prohibit access to mainstream social networks for children under 15 by imposing platform restrictions and onboarding controls to prevent registration (Schofield, 2026).
Spain: Similar to France and Australia, Spain is planning to ban social-media access for users under 16 to restrict younger users’ exposure to potentially harmful content (Jopson et al., 2026).
Why mobile operators and advanced software partners matter
Telecom operators, along with platform partners, have unique capabilities to turn the network into a safe yet fully digital ecosystem in which underage users benefit not only from safe communication but also from quality content, education, and fun. Control of network telemetry, management of account provisioning and verification, enforcement of billing and age checks, and administration of device‑level and network‑level controls – these tools are already in use today and should be applied to building a new guarded ecosystem for kids. With wide influence on app distribution and policy through app stores, telcos must participate in trust frameworks and authentication ecosystems and maintain long‑term relationships with subscribers and families across device refresh cycles. Safety can scale through graduated access controls, detection of anomalous behavior for prompt exposure, safety-by-default settings, and forwarding of verified reports to law enforcement and hotlines with evidence preserved. Operators, together with parent organisations, child psychologists, schools and governments, can empower parents and guardians with innovative software that combines AI security agents, advanced parental controls, and contact verification with kid‑appropriate content, educational games and tools, and secure chat capabilities.
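As an illustration, the graduated access controls mentioned above can be sketched as a simple age-banded policy lookup. The tiers, feature names and age thresholds below are assumptions made for this sketch, not an industry standard or an operator API.

```python
# Illustrative age bands and the features unlocked at each stage.
# Both the bands and the feature set are hypothetical examples.
ACCESS_TIERS = {
    "under_9": {"chat": False, "app_installs": False, "open_browsing": False},
    "9_12":    {"chat": True,  "app_installs": False, "open_browsing": False},
    "13_15":   {"chat": True,  "app_installs": True,  "open_browsing": False},
    "16_plus": {"chat": True,  "app_installs": True,  "open_browsing": True},
}

def tier_for_age(age: int) -> str:
    """Map a verified age to its access tier."""
    if age < 9:
        return "under_9"
    if age <= 12:
        return "9_12"
    if age <= 15:
        return "13_15"
    return "16_plus"

def is_allowed(age: int, feature: str) -> bool:
    """Check whether a feature is enabled for a child of the given age."""
    return ACCESS_TIERS[tier_for_age(age)].get(feature, False)
```

In a real deployment the age input would come from the operator's verified account data, and the policy table would be maintained centrally so that access expands automatically as the child grows.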
The smartphone‑free world is no longer realistic: society is continually digitalizing, and denying children access to technology cuts them off from learning and social opportunities. Fitzell, who launched a petition that has gathered about 800 signatures, notes that 90% of 11‑year‑olds and a third of six‑ and seven‑year‑olds now have smartphones – devices never designed with children in mind (Armstrong & Shaw, 2026). This underscores that the problem is not simply access, but the lack of software and platforms purpose‑built for children; that gap can and should be closed through child‑centred design, safer defaults and industry collaboration.
Industry solutions that put safety by design into practice
Innovations in secure communication are now being realized and are gaining substantial market traction. These pioneering developments represent a paradigm shift in software and connectivity for minors. Looking towards the future, these solutions incorporate standard parental controls while addressing not only essential safety concerns but also children’s and adolescents’ needs for communication, high-quality content, educational and recreational applications, and reliable mobile connectivity.
These solutions extend beyond mere parental controls. They constitute an environment where children and teens can communicate securely – for example, via a chat application with features such as double-parent contact verification and an AI agent for real-time background chat monitoring. Monitoring is designed to detect and flag abusive language, bullying, and grooming indicators, thereby augmenting security and providing reassurance to parents.
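As a toy illustration of the flagging step, the sketch below scores a single chat message against indicator phrases. A production monitor would rely on trained multilingual classifiers and conversation-level context rather than a static keyword list; every pattern and name here is a hypothetical example.

```python
import re

# Illustrative indicator phrases only; real systems use ML classifiers,
# not hand-written patterns like these.
GROOMING_PATTERNS = [
    r"\bdon'?t tell (your )?(mom|dad|parents)\b",
    r"\bour (little )?secret\b",
    r"\bsend (me )?(a )?(pic|photo)\b",
]
BULLYING_PATTERNS = [
    r"\bnobody likes you\b",
    r"\bloser\b",
]

def flag_message(text: str) -> list:
    """Return risk labels detected in one chat message (empty if clean)."""
    lowered = text.lower()
    labels = []
    if any(re.search(p, lowered) for p in GROOMING_PATTERNS):
        labels.append("grooming_indicator")
    if any(re.search(p, lowered) for p in BULLYING_PATTERNS):
        labels.append("bullying")
    return labels
```

Flagged messages would feed a review queue visible to parents or safety teams rather than blocking the conversation outright, keeping the monitoring privacy-conscious and low-friction.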
In addressing threats associated with conventional phone numbers, such solutions abstain from issuing “physical” phone numbers to minors. All communication is exclusively facilitated through data transmission and confined to the ecosystem’s own applications. All other mobile applications are disabled at the backend level.
Within this environment, ‘entertainment systems’ can be curated and tailored specifically for young audiences. They are free of advertisements, external hyperlinks, and explicit material, offering only pre-approved television programming, music, games, and applications. This configuration maintains all operations within secure parameters, and content is categorized by age to prevent younger users from accessing material intended for older demographics. Parents are afforded robust control mechanisms, children can utilize the service offline, and the focus on educational content encourages active engagement. Exploration and creativity are fostered, though within defined limits. As children mature, their access is progressively expanded, ensuring the experience evolves with their development. The key differentiating factors include network-level security protocols and privacy-conscious monitoring, resulting in a system that genuinely enables children’s participation in the digital world while maintaining their safety. This represents a tangible, scalable solution that prioritizes children without compromising strong protective measures.
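The age-categorized, pre-approved catalog described above can be sketched as a simple filter over content metadata. The field names and age bands are illustrative assumptions, not a description of any particular product.

```python
from dataclasses import dataclass

@dataclass
class CatalogItem:
    """Hypothetical metadata for one pre-approved piece of content."""
    title: str
    min_age: int            # lowest age the item is approved for
    has_external_links: bool

def visible_items(catalog, child_age: int) -> list:
    """Return only link-free items rated at or below the child's age."""
    return [
        item.title
        for item in catalog
        if item.min_age <= child_age and not item.has_external_links
    ]
```

Because every item carries an age rating and items with external links are excluded outright, younger users never see material intended for older demographics, and nothing in the catalog can lead outside the curated environment.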
Conclusion
The reality is that children now live much of their lives on mobile devices, and the risks keep growing – more complex, more common. For mobile operators, platforms, and ecosystem partners, this isn’t just a challenge – it is a responsibility and an opportunity to protect children at scale while making sure they keep the learning, the friendships, and the possibilities that come from being connected. The industry has to move beyond piecemeal safety fixes. Safety needs to be built in from the start – age-appropriate settings, privacy-first detection tools, real reporting channels, and smarter parental controls embedded into products as they evolve. And none of this works in isolation. Industry stakeholders need to partner with schools, NGOs, hotlines, and law enforcement, and stay on top of new regulations, so standards remain compatible, measurable, and focused on children’s rights.
If you’re reading this as part of the GSMA community, here’s where to start: test out operator-integrated safe ecosystems, put machine-readable reporting APIs in place, make transparency and audits the norm, and put real money behind digital citizenship programs. Track your progress with clear metrics – adoption of family controls, how fast you act when reports come in, how reliably you escalate verified cases, and how much you cut down on repeat risks. This is how the industry earns and keeps public trust. When we put children’s safety at the heart of our networks, our products, and our partnerships, we make sure kids can step into the digital world safely – connected and protected, just as they deserve.
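To make the machine-readable reporting idea concrete, the sketch below builds a minimal JSON report payload and computes one of the suggested metrics (median hours from report receipt to action). The schema is hypothetical and invented for this sketch; real pipelines would align with established hotline formats such as the NCMEC CyberTipline's reporting standards.

```python
import json
import statistics
from datetime import datetime

def build_report(case_id: str, category: str, received_at: datetime) -> str:
    """Serialize a hypothetical abuse report; field names are illustrative."""
    return json.dumps({
        "case_id": case_id,
        "category": category,          # e.g. "grooming", "bullying"
        "received_at": received_at.isoformat(),
        "status": "pending_review",
    })

def median_response_hours(received, actioned) -> float:
    """Median hours between report receipt and action, per matched pair."""
    deltas = [
        (a - r).total_seconds() / 3600
        for r, a in zip(received, actioned)
    ]
    return statistics.median(deltas)
```

Publishing a metric like this regularly – alongside family-control adoption and escalation rates – is one way to make the transparency and audit commitments above verifiable rather than aspirational.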