The Grace Tame Foundation
News
May 27
A national call to action:
Prioritising child safety in the age of AI
Most Australians are aware that Artificial Intelligence (AI) is transforming our world at an unprecedented pace – offering both immense opportunities and profound risks. Among the most urgent challenges is the way AI is being manipulated to abuse children, fuelling a new form of exploitation that challenges traditional law enforcement methods and child protection efforts. Australia must act now to address these risks.
The first version of Facebook was launched in 2004 and the first iPhone was revealed in 2007, and in the decades since, we have seen how these technologies have shaped the lives of children in profound ways. As a Coalition, we applaud the important steps that the Australian Government has taken to address these challenges, such as the introduction of the Children's Online Privacy Code and the proposed digital duty of care. These measures mark a critical turning point in safeguarding children and young people online. Generative AI has the potential to be as revolutionary as the smartphone and social media, if not more so. But unlike the early internet era, we now have the chance to shape this technology proactively – not only to prevent harm, but to use AI as a force for good in child protection. We must build on this momentum and act decisively now, or risk finding ourselves in an even more challenging and harmful technological landscape in the decades to come.
As the SaferAI for Children Coalition, a collective of child protection organisations, academic experts, and law enforcement agencies, led by ICMEC Australia, we are calling on the Australian government to continue to prioritise AI and child safety, consult with experts, and commit to immediate actions that ensure AI is a tool for safety rather than exploitation.
Drawing on our diverse expertise, we have seen firsthand how AI is being manipulated to produce and distribute abusive material. This is not a hypothetical threat – it is a real and illegal activity happening in Australia right now. Our law enforcement partners have seen a sharp increase in AI-generated child sexual abuse material (CSAM) in the last 6–12 months. It has escalated to the point where investigators struggle to distinguish AI-generated material from material recording the sexual abuse of a real child, diverting resources away from locating and rescuing real children.
The National Center for Missing & Exploited Children (NCMEC) reported a 1,325% increase in reports involving generative AI, from 4,700 in 2023 to 67,000 in 2024. The scale of this issue is rapidly growing and, without intervention, it will become even harder to control.
In February 2025, Operation Cumberland – a groundbreaking global law enforcement effort – revealed the true scale of this threat. With 19 countries involved, the operation led to the arrest of 25 individuals, including two Australians, for their alleged roles in the production and distribution of AI-generated CSAM. Danish authorities identified 273 offenders who were subscribers to an online platform selling AI-generated abuse material, reinforcing the reality that the demand for abuse material is high and growing. AI-generated CSAM does not exist in isolation – it is part of broader child exploitation networks that thrive in the shadows.
While our international partners are moving swiftly to address these risks, Australia also has an opportunity to take a leading role in this fight. The United Kingdom, for example, has recently introduced legislation that makes it illegal to possess, create, or distribute AI tools designed to produce child sexual abuse material, and to possess so-called 'AI paedophile manuals' – targeting the insidious ways offenders share tactics and normalise their behaviour. As AI capabilities continue to evolve, Australia must keep pace with global developments and consider similar legislative measures to protect children from emerging threats.
Australia has long been recognised as a global leader in child protection from a law enforcement perspective. We now have an opportunity to extend this leadership into the realms of technology and AI adoption. This is a defining moment for Australia to set a global standard and ensure AI is used responsibly and ethically to protect children. Implemented effectively, technologies including natural language processing for detecting grooming behaviours, and computer vision solutions for recognising CSAM, enable real-time reporting and intervention – giving unprecedented opportunities to shield victims, reduce trauma exposure for investigators, and strengthen the broader child protection ecosystem, whilst respecting privacy and individual agency.
This is not an issue that affects just one child – it impacts parents and carers, schools, and entire communities. Prevention must be a collective responsibility, with families, educators, frontline workers, and the tech sector all playing a role. Importantly, we must also ensure children and young people themselves are part of the conversation – not only as those we seek to protect, but as partners in shaping safe and ethical AI futures.
The SaferAI for Children Coalition was formed in 2024 to address these challenges head on. Throughout the past year, we have worked to understand both the risks and opportunities of AI – how it is being used to harm children and how it can be leveraged to protect our youngest community members. Our research and consultation have shown that Australians care deeply about this issue, but there is an urgent need for greater public awareness and a coordinated policy response.
With more than 1 in 4 Australians having experienced sexual abuse in childhood, the prevalence of this crime is undeniable and at epidemic proportions. AI is making it easier for offenders to exploit children and harder for law enforcement to intervene. The Australian parliament must act decisively to prevent AI from becoming another tool for harm.
As the new government begins its term, we urge leaders to:
- Make child protection in the age of AI a national policy priority, ensuring that the risks posed by AI-assisted exploitation are addressed with urgency.
- Engage with the SaferAI for Children Coalition, leveraging expert knowledge to develop effective, technology-informed solutions.
- Explore investment in AI-driven technological solutions, using the same technology that is being weaponised against children to strengthen prevention, detection, and justice system capabilities.
- Prioritise education and cooperative efforts to equip children and young people with the knowledge to navigate the ethical dimensions of technology and ensure that legal frameworks adapt to hold offenders accountable.
- Champion international cooperation, ensuring Australia plays a leading role in setting global standards for AI and child protection.
Right now, Australia has a choice – to be reactive and struggle to keep pace with a rapidly evolving threat, or to proactively lead the way in protecting children in the digital age.
If we fail to act now, this problem will only escalate. We stand ready to work with the Australian Government to turn this vision into action. We look forward to working to set a global benchmark for ethical AI adoption, ensuring that technology serves to protect our children, not exploit them.


