A global coalition of privacy watchdogs has fired a warning shot at the generative AI industry, putting companies churning out realistic synthetic images on notice that data protection rules still apply.
The joint statement [PDF], signed by more than 60 regulators including the UK Information Commissioner’s Office (ICO) and Ireland’s Data Protection Commission (DPC), boils down to a simple point: if your model can convincingly fake a person, you don’t get to pretend data protection law doesn’t exist.
“Recent developments – particularly AI image and video generation integrated into widely accessible social media platforms – have enabled the creation of non-consensual intimate imagery, defamatory depictions, and other harmful content featuring real individuals,” said the signatories. “We are especially concerned about potential harms to children and other vulnerable groups, such as cyberbullying and/or exploitation.”
The warning comes weeks after the ICO and DPC opened formal probes into Elon Musk’s xAI, following reports that its Grok chatbot produced sexual images of real people without their consent.
The group says organizations dabbling in generative AI need to build safeguards from the start and think carefully about risks such as non-consensual imagery, misuse of someone’s likeness, and potential harms to children – all areas where the tech has raced ahead of social norms and, in some cases, common sense.
The regulators stress that the law already covers this, and that firms don’t get a free pass just because the content came from a machine.
William Malcolm, executive director of Regulatory Risk & Innovation at the ICO, said: “People should be able to benefit from AI without fearing that their identity, dignity or safety are under threat. AI already plays a large role in all our lives, and everybody has a right to expect that AI systems handling their personal data will do so with respect. Responsible innovation means putting people first: anticipating the risks and building in meaningful safeguards to ensure autonomy, transparency, and control.
“Public trust is foundational to the successful adoption and use of AI. Joint regulatory initiatives like this show global commitment to high standards of data protection in AI systems and help provide regulatory certainty. We expect those developing and deploying AI to act responsibly. Where we find that obligations have not been met, we will take action to protect the public.”
The joint statement on AI-generated imagery suggests that if companies want to keep pushing ever more realistic AI into everyday products, they should expect regulators to keep asking awkward questions about how it all works. ®