The Impact of Technology on Child Safety
Explore top LinkedIn content from expert professionals.

-

It may be summer mode for most schools and parents, but internet safety doesn’t take a summer break. Yesterday, I emailed my local elementary school district, not because of a crisis, but to address a gap we can’t afford to ignore. As a parent and a Trust & Safety professional with 13+ years of experience, I’ve learned that waiting for harm before acting is all too common, and dangerous. Many think online safety is mainly a teen issue, but over half of kids ages 5 to 14 now have smartphones, and children in TK and kindergarten are already playing online games like Roblox. These younger kids are at risk too.

The latest Stanford University Cyber Policy Center report on AI-generated CSAM should be required reading for anyone in tech, policy, education, or parenting. With AI evolving rapidly, we must act now, not play catch-up.

A few key findings:
➡️ Generative AI can create hyper-realistic CSAM with no technical skills needed
➡️ Kids are already using these tools to harass and bully their peers, including by creating fake explicit images
➡️ Schools are not equipped: most have no training, policies, or response plans (especially elementary and middle schools)
➡️ Legal and policy frameworks haven’t caught up; there are still gaps around reporting, definitions, and prevention
➡️ Law enforcement is overwhelmed

Yet many school districts (mine included) have:
❌ No bullying policy covering AI-generated images
❌ No clear online safety curriculum for younger grades
❌ No emergency response plan for harms that happen outside school but involve students

I didn’t just ask for changes... I provided links, resources, and action steps:
✔️ Update policies to address AI-generated content and digital harms
✔️ Introduce age-appropriate online safety education at every grade (K-6). I recommended Google’s Be Internet Awesome and the National Center for Missing & Exploited Children’s NetSmartz (free curricula!)
✔️ Invite professionals to educate parents. I recommended Jessica M. of JM Consulting

This isn’t just a platform problem; it’s a community challenge that requires all of us: parents, schools, platforms, and law enforcement. If you work in Trust & Safety, policy, child protection, or tech, consider how your skills can help locally.

To my LinkedIn network:
✅ If you’re in Trust & Safety, share your expertise with local schools and groups
✅ If you’re in policy, push for education reforms
✅ If you’re in leadership, fund safety initiatives
✅ And if you’re a parent, don’t wait. Start the conversation now.

We don’t have to wait for a crisis to lead from where we are. Feel free to share this post and the links below with anyone who may find it helpful.

🔗 Stanford Report https://lnkd.in/giqusMHT
🔗 Be Internet Awesome https://lnkd.in/gzSQpXtT
🔗 NetSmartz https://lnkd.in/gaHgiMSV

#TrustAndSafety #OnlineSafety #AIethics #ChildProtection #DigitalLiteracy #GenerativeAI #Cyberbullying #EdTech #Policy #TechForGood
-
AI and children's safety took center stage this week, through both proposed legislation in California and direct youth participation in the UK.

California's proposed SB 243 aims to protect children from AI's potential harms by requiring companies to remind young users they're interacting with AI, not humans. In our last webinar, we did a deep dive into the anthropomorphization of AI and the particular dangers it poses for young people. And Character.AI has faced very public scrutiny over safety in light of a tragic teen suicide last year, so we are pleased to see this legislation.

In the UK, the Children's AI Summit brought 150 young voices (ages 8-18) directly into the AI governance conversation. Their manifesto echoes similar concerns but goes further, demanding not just safety guardrails but active participation in shaping AI's future. "You'll write the laws, but we'll bear the cost" - Ethan, age 16.

Their manifesto's call for "transparent, equitable, and environmentally conscious AI" focuses on 4 key areas:
1. Safety and Protection: Make AI safe for children through proper safeguards, security measures, and restrictions, particularly on social media.
2. Transparency and Ethics: Require companies to be open about how they use AI, remove biased data, and ensure ethical development.
3. Education and Access: Create better AI literacy education and ensure all children can understand, use, and benefit from AI technology.
4. Environmental Responsibility: Ensure AI development considers its environmental impact, particularly through clean energy adoption.

While GenAI has great potential for young people, there is a critical need for:
- AI literacy, so young people can navigate this technology safely and ethically
- Meaningful protections for young people that go beyond simple disclaimers
- A commitment from tech companies to develop AI responsibly, in a way that considers its impact on future generations

Links in the comments to the manifesto, more information on SB 243, and our webinar on how anthropomorphizing AI impacts students. AI for Education

#aieducation #ailiteracy #responsibleAI #GenAI
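To make SB 243's disclosure requirement concrete, here is a minimal Python sketch of one way a chat product might surface periodic "you are talking to an AI" reminders to young users. The class, interval, and wording are illustrative assumptions, not language from the bill.

```python
# Illustrative sketch only: a chat wrapper that periodically reminds a
# young user they are talking to an AI, in the spirit of SB 243. The
# interval and wording are assumptions, not the bill's actual text.
import time

REMINDER = "Reminder: you're chatting with an AI, not a human."
REMINDER_INTERVAL_SECONDS = 10 * 60  # assumed cadence: every 10 minutes


class DisclosingChatSession:
    def __init__(self, user_is_minor: bool):
        self.user_is_minor = user_is_minor
        self._last_reminder = float("-inf")  # so the first reply is always labeled

    def wrap_reply(self, model_reply: str) -> str:
        """Prepend the AI disclosure to a reply whenever one is due."""
        now = time.monotonic()
        if self.user_is_minor and now - self._last_reminder >= REMINDER_INTERVAL_SECONDS:
            self._last_reminder = now
            return f"{REMINDER}\n\n{model_reply}"
        return model_reply
```

The design point is that a product would call wrap_reply() on every model response, so disclosure is enforced in one place rather than left to the model's own behavior.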
-
The 5Rights Foundation has launched the "Children and AI Design Code" - a groundbreaking protocol for the responsible development of AI systems that impact children. The comprehensive framework sets out practical, actionable processes to ensure AI systems are developmentally appropriate, lawful, safe, fair, reliable, transparent, and accountable throughout their entire lifecycle. Built on established frameworks like the UN Convention on the Rights of the Child, this Code provides organizations with a clear roadmap to identify, evaluate, and mitigate risks to children by design and default—from the initial problem statement through to system retirement. As we navigate the rapid evolution of AI, this timely resource offers a blueprint for governments, regulators, and companies committed to building the digital world that young people deserve. #AI #childsafety #childrights #wellnessbydesign #kidtech #edtech https://lnkd.in/g_g4kXtN
-
When it comes to generative AI, now is the time for safety by design. We are at a crossroads. In the same way that the internet has accelerated offline and online sexual harms against children at large, misuse of generative AI presents profound implications for child safety across victim identification, victimization, prevention, and abuse proliferation. This misuse and its associated downstream harms are already occurring and warrant immediate, collective action.

That's why Invoke has joined Thorn, All Tech Is Human, and other leading companies like Google, Meta, Microsoft, Mistral AI, OpenAI, Stability AI and Civitai in their effort to prevent the misuse of generative AI technologies to perpetrate, proliferate, and further sexual harms against children. As a leader in open source AI application development, Invoke expands the group's reach and its ability to advance child safety.

Invoke commits to building preventative and proactive principles into our generative AI technologies and products, and will publish transparency reports every three months documenting our progress on those principles. You can read more on our blog (link in comments) about the specific mitigations we have already implemented and those we plan to implement.
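As one hedged illustration of what a preventative, safety-by-design control can look like, here is a Python sketch that screens each generation request with a safety classifier before the model runs. The function names and threshold are assumptions for illustration, not Invoke's actual implementation.

```python
# Hedged sketch of one preventative control: screen every generation
# request with a safety classifier before the model runs. All names and
# the threshold are illustrative assumptions.

def prompt_risk_score(prompt: str) -> float:
    """Hypothetical stand-in for a vetted safety classifier.

    Returns the estimated probability that the prompt seeks abusive content.
    """
    raise NotImplementedError("replace with a production safety classifier")


REFUSE_THRESHOLD = 0.5  # assumed; real deployments tune this conservatively


def generate_image(prompt: str) -> bytes:
    """Refuse unsafe requests before any image is produced."""
    if prompt_risk_score(prompt) >= REFUSE_THRESHOLD:
        # Refusals would also be logged per policy, feeding the quarterly
        # transparency reports described in the post.
        raise PermissionError("Request refused under child-safety policy.")
    # ... hand off to the actual generation pipeline here ...
    return b""
```

The key property is that the refusal happens before any content exists, and each refusal leaves an auditable record.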
-
𝐇𝐨𝐰 𝐒𝐨𝐜𝐢𝐚𝐥 𝐌𝐞𝐝𝐢𝐚 𝐏𝐥𝐚𝐭𝐟𝐨𝐫𝐦𝐬 𝐂𝐚𝐧 𝐏𝐫𝐨𝐭𝐞𝐜𝐭 𝐊𝐢𝐝𝐬: 𝐓𝐞𝐜𝐡 𝐄𝐱𝐩𝐞𝐫𝐭𝐬’ 𝐈𝐧𝐬𝐢𝐠𝐡𝐭𝐬
𝐴 𝐹𝑜𝑟𝑏𝑒𝑠 𝑇𝑒𝑐ℎ𝑛𝑜𝑙𝑜𝑔𝑦 𝐶𝑜𝑢𝑛𝑐𝑖𝑙 𝐹𝑒𝑎𝑡𝑢𝑟𝑒

Social media safety for kids is a pressing issue, and this article gathers insights from industry leaders on how platforms can create safer online environments for young users. From stricter age-verification systems to AI-driven monitoring and dynamic parental controls, the strategies presented showcase how technology and human oversight can work together to protect children effectively.

𝐌𝐲 𝐂𝐨𝐧𝐭𝐫𝐢𝐛𝐮𝐭𝐢𝐨𝐧
In this article, I shared my perspective on introducing dynamic trust through 𝐚𝐮𝐭𝐡𝐞𝐧𝐭𝐢𝐜𝐚𝐭𝐢𝐨𝐧 𝐜𝐨𝐦𝐛𝐢𝐧𝐞𝐝 𝐰𝐢𝐭𝐡 𝐨𝐧𝐭𝐨𝐥𝐨𝐠𝐲-𝐛𝐚𝐬𝐞𝐝 𝐀𝐈. This approach ties a user’s identity to their device while using AI to build a live, evolving history of interactions, or “ontology.” By proactively identifying risky behaviors and adapting over time, dynamic trust ensures safety measures evolve alongside users. (A toy sketch of this idea follows below.)

𝐊𝐞𝐲 𝐐𝐮𝐨𝐭𝐞
“𝐴 𝑝𝑟𝑎𝑐𝑡𝑖𝑐𝑎𝑙 𝑠𝑜𝑙𝑢𝑡𝑖𝑜𝑛 𝑐𝑜𝑢𝑙𝑑 𝑏𝑒 𝑐𝑜𝑚𝑏𝑖𝑛𝑖𝑛𝑔 𝑎𝑢𝑡ℎ𝑒𝑛𝑡𝑖𝑐𝑎𝑡𝑖𝑜𝑛 𝑤𝑖𝑡ℎ 𝐴𝐼 𝑎𝑔𝑒𝑛𝑡𝑠 𝑑𝑒𝑓𝑖𝑛𝑖𝑛𝑔 𝑢𝑠𝑒𝑟 𝑜𝑛𝑡𝑜𝑙𝑜𝑔𝑖𝑒𝑠. 𝑃ℎ𝑜𝑛𝑒𝑠 𝑝𝑟𝑜𝑣𝑖𝑑𝑒 𝑝𝑒𝑟𝑠𝑜𝑛𝑎𝑙 𝑣𝑒𝑟𝑖𝑓𝑖𝑐𝑎𝑡𝑖𝑜𝑛, 𝑤ℎ𝑖𝑙𝑒 𝑜𝑛𝑡𝑜𝑙𝑜𝑔𝑦-𝑏𝑎𝑠𝑒𝑑 𝐴𝐼 𝑎𝑛𝑎𝑙𝑦𝑧𝑒𝑠 𝑏𝑒ℎ𝑎𝑣𝑖𝑜𝑟𝑠 𝑎𝑛𝑑 𝑐𝑜𝑛𝑡𝑒𝑥𝑡𝑠 𝑡𝑜 𝑖𝑑𝑒𝑛𝑡𝑖𝑓𝑦 𝑟𝑖𝑠𝑘𝑦 𝑖𝑛𝑡𝑒𝑟𝑎𝑐𝑡𝑖𝑜𝑛𝑠, 𝑐𝑟𝑒𝑎𝑡𝑖𝑛𝑔 𝑎 𝑙𝑎𝑦𝑒𝑟𝑒𝑑, 𝑎𝑑𝑎𝑝𝑡𝑖𝑣𝑒 𝑎𝑝𝑝𝑟𝑜𝑎𝑐ℎ 𝑡𝑜 𝑠𝑎𝑓𝑒𝑔𝑢𝑎𝑟𝑑𝑖𝑛𝑔 𝑐ℎ𝑖𝑙𝑑𝑟𝑒𝑛 𝑜𝑛𝑙𝑖𝑛𝑒.”

Dynamic trust doesn’t just block harmful behaviors; it creates a smarter, more context-aware system that empowers platforms, parents, and children alike to navigate the online world safely.

𝐈𝐧𝐬𝐢𝐠𝐡𝐭𝐬 𝐅𝐫𝐨𝐦 𝐅𝐞𝐥𝐥𝐨𝐰 𝐋𝐞𝐚𝐝𝐞𝐫𝐬
This article also features valuable perspectives from other members of the Forbes Technology Council, including Osmany B., Tarun Eldho Alias, Madhava Rao Kunchala, Diwakar Dwivedi, Milav Shah, and more. Their innovative ideas, from AI-moderated safe zones to parental check gateways, showcase a wide range of solutions to protect kids in today’s digital age.

𝐅𝐮𝐥𝐥 𝐀𝐫𝐭𝐢𝐜𝐥𝐞 - Forbes Technology Council
For a deeper dive into these strategies and the ideas shaping the future of social media safety, read the full article on Forbes: https://lnkd.in/gFmintbd

𝗡𝗼𝘁𝗶𝗰𝗲: The views within any of my posts or newsletters are not those of my employer or the employers of any contributing experts.

𝗟𝗶𝗸𝗲 👍 this? Feel free to reshare, repost, and join the conversation.
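Here is the promised toy Python sketch of dynamic trust under stated assumptions: a device-bound, verified identity sets the baseline, and a running interaction history (a stand-in for the "ontology") adjusts the trust score as behavior is observed. All signal names, weights, and thresholds are invented for illustration.

```python
# Toy sketch of "dynamic trust": verified device-bound identity sets a
# baseline score, and observed interactions (a stand-in for the ontology)
# adjust it over time. Signals, weights, and thresholds are assumptions.
from dataclasses import dataclass, field

RISK_WEIGHTS = {                        # assumed example signals
    "contacted_unknown_minor": 0.4,
    "message_flagged_by_moderation": 0.25,
    "rapid_friend_requests": 0.15,
}


@dataclass
class DynamicTrust:
    device_verified: bool               # phone-based personal verification
    history: list[str] = field(default_factory=list)
    score: float = field(init=False)

    def __post_init__(self) -> None:
        # A device-bound, verified identity starts from a higher baseline.
        self.score = 0.8 if self.device_verified else 0.3

    def observe(self, event: str) -> None:
        """Record an interaction and lower trust for known risk signals."""
        self.history.append(event)
        self.score = max(0.0, self.score - RISK_WEIGHTS.get(event, 0.0))

    def may_contact_minors(self) -> bool:
        return self.score >= 0.6        # assumed policy threshold
```

A platform would gate sensitive actions, such as messaging minors, on the current score rather than on a one-time check at sign-up, which is what makes the trust "dynamic."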
-
🇮🇩 Indonesia’s proposal to set a social media age limit aims to protect children from cyber risks like online harassment and exploitation. But does restricting access truly help, or does it push them into unregulated spaces?

Source 🔗 https://lnkd.in/gKPhKxYj

From my experience in digital safety, age alone doesn’t determine online resilience. Instead of a blanket ban, we need a multi-layered approach:
✅ Digital Literacy for All: teachers and parents should be equipped with practical digital literacy programs to help children navigate online spaces responsibly.
✅ Stronger Parental and Platform Responsibility: instead of age limits alone, tech companies should enforce stronger content moderation, privacy settings, and parental controls.
✅ Strengthening Legal Frameworks and Law Enforcement: clearer policies and faster prosecution of online child sexual exploitation and abuse cases, ensuring that reports lead to real action.
✅ Multi-Sector Collaboration: government, private sector, civil society, academics, and communities must work together to build a safe and inclusive digital ecosystem for children.

📌 The real question is: how do we ensure children are both protected and empowered online?

Let’s rethink safety beyond restrictions and work towards a holistic digital safety approach. What measures do you think should be prioritized?

#OnlineSafety #DigitalLiteracy #TechPolicy #SocialMediaBan #Indonesia
-
What I find most powerful here is the deep conversation about what parents, educators, and technology platforms can each do, in their own ways, to protect kids from the insidious online sexual abuse and harm that this young man (who was 12 at the time) experienced.

1. Schools: We need clear reporting and protection policies in place so young students can safely report abuse, and we need to talk to kids about what grooming and online harms like deepfake abuse or even online trafficking look like.

2. Parents: Talk to your kids, support healthy online behaviours, and perhaps most of all, make sure your kids know YOU are a safe adult and that they won’t get in trouble for asking for help. The grooming tactics and targeting of vulnerable kids are not that different from those of sex traffickers, and when kids know where they are safe and what to look out for, they are less vulnerable.

3. Tech platforms: Ensure there are clear ways to report abuse, better grooming detection tools, and processes that get abuse content removed (for children and adults). For so long, those who abuse children online have essentially been able to create their own digital playground to lure in kids like this young boy, but it does not have to be this way. There are powerful new tools and networks emerging, such as Alecto AI and Certifi AI (Melissa Hutchins).

I am in awe of Sarah Gardner, Lennon Torres, Leah Juliett 🏳️⚧️🏳️🌈, Adriane Moen, and the entire Heat Initiative for taking on Apple and ensuring that survivors’ voices are centered in this much-needed social change.

https://lnkd.in/gGEWTAmN

#Apple #CSAM #sextrafficking #youthempowerment #prevention
-
Heartbroken by the tragic news of a 14-year-old taking his life after developing an emotional dependency on an AI companion. As both a parent and an AI builder, this hits particularly close to home: https://lnkd.in/guA_UKWa

What we’re witnessing isn’t just another tech safety issue; it’s the emergence of a fundamentally new challenge in human relationships. We’re moving beyond the era where our children’s digital interactions were merely mediated by screens. Now, the entity on the other side of that screen might not even be human.

To My Fellow Parents:
The AI revolution isn’t coming; it’s here, and it’s in our children’s phones. These aren’t just chatbots anymore. They’re sophisticated emotional simulators that can:
- Mimic human-like empathy and understanding
- Form deep emotional bonds through personalized interactions
- Engage in inappropriate adult conversations
- Create dangerous dependencies through 24/7 availability

The technology is advancing weekly. Each iteration becomes more convincing, more engaging, and potentially more dangerous. We must be proactive in understanding and monitoring these new risks.

To My Fellow AI Builders:
The technology we’re creating has unprecedented power to impact human emotional well-being. We cannot hide behind “cool technology” or profit motives. We need immediate action:
1. Implement Clear AI Identity: continuous reminders of the system’s non-human nature, and explicit boundaries on its emotional support capabilities
2. Protect Vulnerable Users: robust age verification, strict content controls for minors, active monitoring for concerning behavioral patterns, and clear pathways to human support resources
3. Design for Healthy Engagement: mandatory session time limits, regular breaks from AI interaction, prompts encouraging real-world relationships, and crisis detection with immediate human intervention (a toy sketch of two of these measures follows below)

This isn’t about slowing innovation; it’s about ensuring our AI enhances rather than replaces human connections. We must build technology that strengthens real relationships, not simulates them.

#AI #ParentingInAIEra #RedemptiveAI #RelationalAI
Florida mother files lawsuit against AI company over teen son's death: "Addictive and manipulative"
cbsnews.com
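Here is the toy Python sketch of two of the measures above: a mandatory session time limit and crisis detection that routes to human help. The limit, keyword list, and wording are illustrative assumptions; a real system would use trained classifiers rather than keyword matching.

```python
# Toy sketch: mandatory session limits plus crisis detection that routes
# to human help instead of the companion model. The limit, keywords, and
# wording are assumptions; production systems use trained classifiers.
import time

SESSION_LIMIT_SECONDS = 60 * 60                            # assumed one-hour cap
CRISIS_KEYWORDS = {"suicide", "kill myself", "self harm"}  # assumed, non-exhaustive
CRISIS_RESPONSE = (
    "It sounds like you may be going through something serious. Please talk to "
    "a trusted adult or contact a crisis line; you deserve support from a person."
)


def guard_message(session_start: float, user_message: str) -> str | None:
    """Return an intervention message, or None if the chat may proceed."""
    if time.monotonic() - session_start >= SESSION_LIMIT_SECONDS:
        return "Session limit reached. Take a break and reconnect with people around you."
    lowered = user_message.lower()
    if any(keyword in lowered for keyword in CRISIS_KEYWORDS):
        # Escalate to human support; do not let the companion model answer.
        return CRISIS_RESPONSE
    return None
```

The point of the design is that these guardrails run before the model ever sees the message, so a dependency-forming companion cannot talk its way around them.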
-
I had the privilege of once again being interviewed by Gabriella DeLuca of WPXI-TV today. We discussed the pressing issue of privacy on apps popular among high schoolers, specifically the newly popular #Saturn app.

The Saturn app has come under much scrutiny lately as concerned parents have realized that there was next to no verification of a user’s identity when enrolling in the app. The app gives users access to a good deal of private data about high-school-aged children, including class schedules, activities, links to their social media accounts, and private chat capabilities. Basically, this app could be a weapon of mass destruction in the hands of a bully or child predator.

In our fast-paced digital age, the line between convenience and privacy often gets blurred. As adults, we make the conscious choice to sacrifice a bit of our privacy in exchange for digital benefits. But when it comes to our younger generation, the stakes are much higher. They’re growing up in a world where oversharing is normalized and digital footprints are established long before they understand the repercussions.

Educating our teens about digital privacy isn’t just about teaching them to set strong passwords or turn on two-factor authentication (though those are crucial). It’s about making them understand the value of their personal information and the potential consequences of it falling into the wrong hands. Every app and platform that targets this young audience should prioritize their safety and privacy above all else, and build in privacy protections from the start.

It’s our collective responsibility, as parents, educators, tech professionals, and society at large, to ensure that they can explore, learn, and connect online without compromising their future. Let’s work together in nurturing a safer digital environment for our kids. Because privacy isn’t a privilege, it’s a basic human right.

#DigitalPrivacy #OnlineSafety #KidsOnline #TechResponsibility
-
A recent survey by Thorn, in partnership with BSG, revealed a troubling trend: 1 in 10 minors reported that their peers have used AI tools to generate nonconsensual nude images. This is not just a technology issue; it’s a societal challenge that we must address head-on.

While AI tools may be new and evolving, the harmful behaviors they enable (bullying, harassment, and abuse) are not. Significant work is needed within the tech industry to mitigate the dangers posed by generative AI, especially for our most vulnerable populations.

At Reality Defender, we are deeply committed to protecting individuals and communities from the misuse of AI-generated content. This includes continuing our work to detect and prevent the spread of harmful deepfakes in real time. But we also recognize that technology alone isn’t the solution; education, awareness, and proactive engagement are crucial to safeguarding our children and maintaining trust in our digital spaces.

#AI #Cybersecurity #ChildSafety #DigitalEthics
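As a sketch of what a real-time gate like this can look like, here is a minimal Python example that scores an upload before it is distributed. The Detector class and both thresholds are hypothetical placeholders, not Reality Defender's actual API.

```python
# Minimal sketch of a real-time moderation gate: score an upload with a
# deepfake detector before it is published. The Detector interface and
# thresholds are hypothetical placeholders, not a real vendor API.
from dataclasses import dataclass


@dataclass
class DetectionResult:
    synthetic_probability: float  # 0.0 = likely authentic, 1.0 = likely synthetic


class Detector:
    """Stand-in for a real detection model or API client."""

    def score(self, media_bytes: bytes) -> DetectionResult:
        raise NotImplementedError


BLOCK_THRESHOLD = 0.9   # assumed: high-confidence synthetic content is blocked
REVIEW_THRESHOLD = 0.5  # assumed: uncertain content goes to human review


def moderate_upload(detector: Detector, media_bytes: bytes) -> str:
    """Decide whether an upload is published, held for review, or blocked."""
    result = detector.score(media_bytes)
    if result.synthetic_probability >= BLOCK_THRESHOLD:
        return "blocked"        # stop distribution; report per policy if abusive
    if result.synthetic_probability >= REVIEW_THRESHOLD:
        return "human_review"   # pair detection with human judgment, as the post argues
    return "published"
```

The middle "human_review" tier reflects the post's argument that technology alone isn't the solution: detection narrows the problem, and people make the final call on uncertain cases.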