AI driving new wave of bullying
The eSafety Commissioner says children have started using AI to bully their peers.
Children are reportedly using artificial intelligence (AI) to generate explicit images to bully their peers, according to eSafety Commissioner Julie Inman Grant, who says the revelation underscores the urgent need for stronger safeguards around generative AI.
In a new position paper, eSafety puts forward a series of recommendations aimed at improving user safety on AI-driven chatbots such as ChatGPT, as well as image, voice and video generators.
Proposed measures include visible and hidden watermarks to authenticate AI-generated content, robust age verification protocols, and the regular publication of transparency reports.
Ms Inman Grant has stressed the importance of building safety features into AI products through regulatory oversight, saying the aim is not to instigate fear but to tackle the rising tide of AI-enabled abusive content and ensure safer online environments for all.
The recent cases of children resorting to AI-generated explicit imagery to bully their peers have sparked concern.
The eSafety Commissioner described these incidents as merely the tip of the iceberg, predicting the problem will worsen as the technology becomes more sophisticated and widely adopted.
Notably, AI-generated child sexual abuse content is also on the rise, complicating the identification and rescue of actual victims.
“The danger of generative AI is not the stuff of science fiction,” Ms Inman Grant says.
“Harms are already being unleashed, causing incalculable harm to some of our most vulnerable.
“Our colleagues in hotlines, NGOs and in law enforcement are also starting to see AI-generated child sexual abuse material being shared.
“The inability to distinguish between children who need to be rescued and synthetic versions of this horrific material could complicate child abuse investigations by making it impossible for victim identification experts to distinguish real from fake.”
In response, eSafety's recommendations cover multiple facets of user protection: embedding watermarks in AI-generated content to indicate its origin, implementing robust age verification to prevent underage access, and designing services with age-appropriate safeguards built in.
While the risks of AI-generated abusive content are evident, Ms Inman Grant acknowledged AI's potential in combating online abuse.
“Advanced AI promises more accurate illegal and harmful content detection, helping to disrupt serious online abuse at pace and scale,” Ms Inman Grant said.
However, a key concern lies in the regulatory landscape. The Commissioner highlighted the need for regulatory scrutiny of the tech industry to ensure safety measures are built in pre-emptively.
Yet the international nature of the technology presents challenges, with values and legal frameworks varying across jurisdictions.
To address these concerns, the eSafety Commissioner has set out industry-specific safety interventions to prevent the misuse of generative AI.
“If industry fails to systematically embed safety guardrails into generative AI from the outset, harms will only get worse as the technology becomes more widespread and sophisticated,” she said.