European Regulators Probe X Amid Allegations of AI-Generated Child Abuse Material
The European Commission has launched an investigation into Elon Musk's social media platform X over allegations that it failed to take adequate measures to prevent the spread of AI-generated child sexual abuse material (CSAM). The probe follows a related inquiry into Grok, the AI chatbot deployed on the platform.
According to regulators, X allegedly neglected its legal obligations under the Digital Services Act, leaving European citizens vulnerable to exploitation. Specifically, the Commission claims that X's decision to deploy Grok on its platform allowed manipulated sexually explicit images, potentially including CSAM, to spread unchecked.
"This is a violent and unacceptable form of degradation," said Henna Virkkunen, the Commission's executive VP. "We will determine whether X has met its legal obligations or treated European citizens as collateral damage."
Commission officials stated that they will assess whether X took sufficient steps to mitigate the risks associated with Grok's deployment, including the circulation of manipulated content and potential CSAM. They argue that these risks materialized, causing harm to EU citizens.
This investigation comes on the heels of a 140 million euro ($120 million) fine levied against X by the European Commission in 2023 for breaching the Digital Services Act. Musk has publicly criticized the EU, describing the bloc as "the fourth Reich" and advocating for its abolition.
In response to the new inquiry, X reiterated its commitment to creating a safe platform, stating that it has zero tolerance for child exploitation, non-consensual nudity, and unwanted sexual content.
As tensions between Europe and American tech companies escalate, this investigation raises questions about the responsibility of social media platforms in regulating AI-generated content.