Spain has started legal action against X, Meta, and TikTok over claims that their AI models helped create and spread child sexual abuse material.
Spanish Prime Minister Pedro Sánchez said on Tuesday that the cabinet will invoke Article 8 of the Organic Statute of the Public Prosecution Service. He wrote:
“Today, the Council of Ministers will invoke Article 8 of the Organic Statute of the Public Prosecution Service to request that it investigate the crimes that X, Meta and TikTok may be committing through the creation and dissemination of child pornography by means of their AI.”
Sánchez added: “These platforms are attacking the mental health, dignity, and rights of our sons and daughters. The State cannot allow it. The impunity of the giants must end.”
Two weeks before this legal request, Sánchez had announced new plans to tackle online abuse in Spain, vowing to block social media access for anyone under 16.
In November, Sánchez said the Spanish parliament would investigate Meta over possible privacy violations affecting Facebook and Instagram users.
Data from the Britain-based Internet Watch Foundation shows how fast the problem is growing. Last year, it flagged 3,440 AI-generated child sexual abuse videos. The year before, it reported only 13.
Europe continues to tighten enforcement against major tech companies
The action from Spain adds to rising tension between European governments and large American tech companies, which have the backing of US President Donald Trump’s administration.
European leaders argue that platforms must protect users. U.S. politicians and tech executives argue that strict regulation risks limiting speech. The disagreement is now playing out in courts and regulatory agencies.
In December, the European Union fined X 120 million euros, about 140 million dollars, under the Digital Services Act. It was the first penalty issued under that law. This month, French police searched the local offices of X as part of a cybercrime investigation into the spread of child pornography and Holocaust denial on the platform.
Ireland’s Data Protection Commission has now stepped in as well. The regulator, which enforces the European Union’s General Data Protection Regulation, said late Monday that it opened a formal investigation into the creation and posting of what it called “potentially harmful” sexualized images made by Grok.
The concern is that these images may have involved the processing of personal data belonging to users in the EU.
Grok is built directly into the social media feed on X. It was developed by Elon Musk’s AI company, xAI, which bought X last year. Earlier this month, xAI merged with Musk’s rocket company SpaceX. The deal created a combined group valued at 1.5 trillion dollars.
The Irish probe is the latest in a string of regulatory actions worldwide against X. In early January, thousands of sexualized deepfake images of women generated with Grok spread rapidly online, drawing immediate backlash from users, online safety experts, and politicians.
Graham Doyle, the deputy commissioner at the Irish regulator, said the agency had already been in contact with X after media reports first appeared weeks ago. He said those reports complained that users could prompt the @Grok account on X to generate sexualized images of real people, including children.