Grok’s antisemitic outburst heaps pressure on EU to clamp down on artificial intelligence

BRUSSELS — A series of Hitler-praising comments by Elon Musk’s artificial intelligence chatbot Grok has fired up European policymakers to demand stronger action against Big Tech companies as the bloc takes another step to enforce its laws.

Musk’s chatbot this week sparked criticism for making antisemitic posts that included glorifying Nazi leader Adolf Hitler as the best-placed person to deal with alleged “anti-white hate,” after X updated its AI model over the weekend.

The chatbot's latest foul-mouthed responses prompted EU policymakers to seize the opportunity to demand robust rules for the most complex and advanced AI models, such as the one that underpins Grok, in new industry guidance expected Thursday.

It’s also put a spotlight on the EU’s handling of X, which is under investigation for violating the bloc’s social media laws.

The Grok incident “highlights the very real risks the [EU’s] AI Act was designed to address,” said Italian Social-Democrat European Parliament lawmaker Brando Benifei, who led work on the EU’s AI rulebook that entered into law last year. 

“This case only reinforces the need for EU regulation of AI chat models,” said Danish Social-Democrat lawmaker Christel Schaldemose, who led work on the EU’s Digital Services Act, designed to tackle dangerous online content such as hate speech.

Grok owner xAI quickly removed the “inappropriate posts” and stated Wednesday it had taken action to “ban hate speech before Grok posts on X,” without clarifying what this entails.

The EU guidance is a voluntary compliance tool for companies that develop general-purpose AI models, such as OpenAI’s GPT, Google’s Gemini or X’s Grok.

The European Commission last week gave a closed-door presentation seen by POLITICO that suggested it would remove demands from earlier drafts, including one requiring companies to share information on how they address systemic risks stemming from their models.

Lawmakers and civil society groups say they fear the guidance is being weakened to ensure that front-running AI companies sign up to the voluntary rules.

Ammunition

After ChatGPT landed in November 2022, lawmakers and EU countries added a section to the EU’s newly agreed AI law aimed at reining in general-purpose AI models, which can perform a wide range of tasks on request. OpenAI’s GPT is an example, as is xAI’s Grok.

That part of the law will take effect in three weeks’ time, on August 2. It outlines a series of obligations for companies such as xAI, including how to disclose the data used to train their models, how they comply with copyright law and how they address various “systemic” risks.

But much depends on the voluntary compliance guidance that the Commission has been developing for the past nine months.

On Wednesday, a group of five top lawmakers shared their “great concern” over “the last-minute removal of key areas of the code of practice, such as public transparency and the weakening of risk assessment and mitigation provisions.”

Those lawmakers see the Grok comments as further proof of the importance of strong guidance, which has been heavily lobbied against by industry and the U.S. administration.

“The Commission has to stand strongly against these practices under the AI Act,” said Dutch Greens European Parliament lawmaker Kim van Sparrentak. But “they seem to be letting Trump and his tech bro oligarchy lobby the AI rules to shreds through the code of practice.”  

One area of contention in the industry guidance relates directly to the Grok fiasco.

In the latest drafts, the risk stemming from illegal content has been downgraded to one that AI companies could potentially consider addressing, rather than one they must.

That’s prompted fierce pushback. The industry code should offer “clear guidance to ensure models are deployed responsibly and do not undermine democratic values or fundamental rights,” said Benifei.

The Commission’s tech chief Henna Virkkunen described work on the code of practice as “well on track” in an interview with POLITICO last week.

Risks

The Commission also pointed to its ongoing enforcement work under the Digital Services Act, its landmark platform regulation, when asked about Grok’s antisemitic outburst. 

While there is no EU-wide definition of illegal content, many member countries criminalize hate speech, and antisemitic comments in particular.

Large language models integrated into very large online platforms, which include X, “may have to be considered in the risk assessments” that platforms must complete and “fall within the DSA’s audit requirements,” Commission spokesperson Thomas Regnier told POLITICO.

The problem is that the EU has yet to conclude any action against X under its wide-reaching law.

The Commission launched a multi-company inquiry into generative AI on social media platforms in January, focused on hallucinations, voter manipulation and deepfakes. 

In its latest risk assessment report, in which the platform sets out potential threats to civic discourse and measures to mitigate them, X did not identify any risks related to AI and hate speech.

Neither X nor the Commission responded to POLITICO’s questions on whether a new risk assessment for Grok has been filed after it was made available to all X users in December. 

French liberal MEP Sandro Gozi said he would ask the Commission whether the AI Act and the DSA are enough to “prevent such practices” or whether new rules are needed.
