Wednesday, 17 December, 2025
London, UK

Who’s afraid of AI?

The EU wants to regulate, while the U.S. government is letting companies run free. What’s behind the divergent approaches goes back a century.

By CALDER MCHUGH

Illustrations by Nicolás Ortega for POLITICO

A young American student gets a text from an AI agent: “stanford just sent an email asking where you wanna study abroad.” After texting back “uhhh where should i go” and settling on Paris, he spends the semester first bewildered by the European capital, then in love with it, meeting a French girl and going to picnics and movies, all with the help of his trusty AI friend. As the semester draws to a close, he asks for help: “can you check me into my flight home?” The AI agent replies: “wait no. why would you wanna go back man” to which the young man responds, “??” The AI says, “stay in Paris.” And it’s decided: He looks up from his phone and puts his arm around his French girlfriend.

The back-and-forth is cut into a 2-minute, 37-second ad for Poke.com, an AI application that serves as something between a personal assistant and a wise friend, built by the startup Interaction. But ironically, even as it sells the dream of Americans heading to Europe, the company itself was formed when the opposite happened: Interaction is run out of California by German transplants.

The European-led American company is an example of a problem that has perplexed policymakers and tech advocates in Europe: Even though the continent can generate the ideas and talent needed to build new AI apps, it rarely becomes the place where those ideas scale.

“Wherever you are in the world — Europe or Asia or wherever — everyone just wants to come to the Bay Area, as long as you’re in AI,” says Marvin von Hagen, one of the co-founders of Interaction. The data bear out von Hagen’s assertion. According to a report from the venture firm Accel, 80 percent of the money invested in generative AI in the U.S., Europe and Israel in 2023 and 2024 went to American firms. In 2024, the U.S. produced 40 “notable AI models,” compared to 15 in China and just three in Europe, according to the 2025 Artificial Intelligence Index Report from Stanford University. Eleven percent of all U.S. tech companies have European founders, and hundreds of promising companies begun in Europe — many concerned directly with AI — have moved to the United States.

“People that want to be part of this AI revolution come here [to the U.S.],” said Florian Juengermann, another German expat who is the co-founder of Listen Labs, an AI company for customer research. “I’m a little bit sad, actually, for Germany.”

Tech founders working on AI companies head to the United States for all kinds of reasons, some of which are self-reinforcing: Silicon Valley is full of AI companies, which makes it easier to build an AI company there. There are more venture funds in the U.S., and they are more interested in investing in unknown products. But there’s another big reason, according to many of the founders themselves and AI advocates in Europe: Tech types are often deeply suspicious of regulation — and Europe certainly has plenty of it, particularly when it comes to AI. 

In recent months, tech companies headquartered in Europe, as well as some national governments and the European Commission itself, have sought to lessen the regulatory burden on AI companies by delaying key parts of the implementation of legislation or advocating for the EU to reassess its entire framework. But the differences between Europe and the United States when it comes to AI regulation aren’t so easy to fix; they’re rooted in deep cultural differences that have informed how the tech industries have developed on both continents.  

Scholars and members of the industry alike say that changing this culture is crucial for Europe to start playing catch-up, both when it comes to keeping more AI professionals on the continent and encouraging those who do stay to be more entrepreneurial. 

“Today, European countries like Germany still retain exceptional talent,” said Robert Windesheim, a German investor at the San Francisco-based Founders Fund, “but often lack the cultural atmosphere that enables this talent to channel their energy into creating new companies.”

Precedent

The European Union has for decades been more committed to regulation across industries than the United States. And for as long as Europe has chosen a somewhat slower, somewhat safer growth model, there have been young, ambitious people frustrated by its bureaucratic guardrails.

But exactly why the EU has such a different understanding of the role of the state — in particular as it relates to AI — is a broader question that goes to the heart of the historical and cultural differences across the Atlantic.  

A lot of it has to do with privacy. “The first thing that many people in Europe think about when they think about technology is ‘They will spy on us,’ or ‘This technology will be used in a negative way,’” said Juengermann. “For example, in Germany, people will not give out their phone number. They protect their phone number. It’s like people [in the U.S.] with their Social Security number.”  

According to Anu Bradford, a professor at Columbia University who studies the European Union’s digital regulatory state — who herself is largely in favor of Europe’s regulations on AI — some of this can be traced almost a century back. “You need to think about historical reasons and the Second World War, and how the Nazis got the information to identify the Jews — they were infringing on their right to privacy,” she said. “You think about the surveillance by the Stasi in East Germany. Europeans know what it’s like when you don’t have privacy … they’re hypersensitive to that for cultural reasons.” 

Dean Ball, who was the primary author of the Trump administration’s AI Action Plan, agrees with Bradford on little when it comes to regulation. But he, too, traced the cultural differences between the two places to the mid-20th century. The European Union has “preserved the status quo in amber,” Ball believes, operating with a 20th-century mindset to solve 21st-century problems.

Windesheim, a European by birth himself, also traced fears of safety to crises from the last century. “Europe’s 20th-century catastrophes left a lasting, and rightfully cautionary, mindset. Downside protection and safety became paramount,” he said. In large part, he believes, Europeans have simply adopted and codified into law a different risk assessment around tech than the American government.  

Then there’s Silicon Valley, which itself has a culture that’s out of step with much of the American and European populace — and that has shaped the rest of America’s posture on tech. That culture has long been guided by a libertarian ethos and unwavering faith in technological advancement, not the forces that might inhibit it. “Despite the central role played by public intervention in developing hypermedia, the Californian ideologues preach an anti-statist gospel of hi-tech libertarianism: a bizarre mishmash of hippie anarchism and economic liberalism beefed up with lots of technological determinism,” the media theorists Richard Barbrook and Andy Cameron wrote in “The Californian Ideology,” a seminal mid-’90s essay.

One of the most popular concepts flowing through Bay Area circles today is accelerationism. The philosophy spans different strains of thought: from believing (and hoping) that unregulated AI development will lead to a techno-utopia in which machines cure disease, to believing (and hoping) that AI will destroy democracy and usher in the rule of a vanishingly small number of tech overlords. Such divergent outlooks contributed to vastly different regulatory cultures around technology in the U.S. and EU through the 20th and 21st centuries. The EU’s General Data Protection Regulation, a comprehensive privacy law implemented in 2018, enshrines a right to privacy that doesn’t exist in the U.S. — and restricts tech companies’ ability to collect and monetize data, which has had massive ramifications for their ability to grow in Europe.

“There’s a lot of hubris, a lot of arrogance, and [Europe] really has this mindset that they need to be the world’s regulator, but they do it before the technology is actually developed,” said Michael Jackson, an American tech investor who lives and works in Paris. That’s compared to the U.S., where according to him, the government steps in with more targeted regulation once it understands the needs of the marketplace. 

AI has thrown these differences into starker relief than ever. It presents greater challenges to privacy and more opportunities for surveillance, and its consequences are harder to predict than those of almost any innovation that has come before, with the possible exception of the internet itself — a nightmare for the risk-averse.

Europe has taken a notably more muscular regulatory approach. The Artificial Intelligence Act, which entered into force in the EU on August 1, 2024, is Europe’s most comprehensive attempt at reining in AI companies that are not acting in the best interest of the public. Largely, the legislation is about harm reduction. It creates categories of risk for AI applications — from “minimal” to “unacceptable” risk (the latter applications are banned) — and forces most AI companies to be more transparent about how they work.

At the same time, after some halting efforts to regulate AI during the Joe Biden administration, the U.S. under President Donald Trump has thrown regulation to the side. In July, the Trump administration released the AI Action Plan, a series of policy preferences that pledged to “remove red tape and onerous regulation” of AI development. As the EU regulates more, America is doing so less. And as the regulatory gap has widened, so has the gap in where founders choose to build.

There have also been new reasons why Europe needs to think about regulation. “[American success] has amplified the need for Europe to protect itself, because you find yourself increasingly more dependent on a technology you don’t own and you don’t control,” said Mariarosaria Taddeo, an Italian who is now a professor of digital ethics and defense technologies at the Oxford Internet Institute in the UK.  

The EU has to think more about stimulating tech development, she said, because it is not sure what the ultimate goals of American tech giants are — and how much it might need to fight against a private corporation that doesn’t have the best interests of Europe’s citizens in mind.  

“[Europe] is in a weak position, because most of the developers [in the world] are Americans,” said Bradford. “It’s becoming increasingly hard, if the EU is trying to police the world on its own and if the Americans are not regulating themselves.” 

Beyond the narrative

Advocates for the European framework argue that far from stifling innovation, it simply makes this rapidly emerging technology safer for users and founders alike. In fact, they chafe at the very notion that regulation and innovation are antithetical.  

“I don’t want there to be this perception that you need to choose the American hands-off model if you want to have innovation, and the European model will be somehow fundamentally inconsistent with innovation,” said Bradford. “It’s a very easy narrative to say, ‘Well, because they regulate so much, there’s no innovation.’ That’s not why Europeans are not leading in AI innovation.” 

Bradford cited the difficulty of operating across 27 jurisdictions without a single, united marketplace as one of the major reasons that AI development in the EU has not been simple — a point on which many anti-regulation experts and tech founders agree.

Beyond that, there’s just more investment available in America right now. Venture capitalists are pouring money into U.S.-based tech companies and are often more reluctant to do so for those based elsewhere — between 2013 and 2022, EU-headquartered firms received $1.4 trillion less in venture funding than those based in the U.S.

Europe is also far from the only place where the regulatory state is on the rise.  

In fact, although Washington may not be moving to put guardrails on AI, the state of California is stepping into the breach and implementing several AI guidelines that do a similar job. One reason is that Americans are also fearful of unregulated AI — according to a Gallup poll conducted in April and May of 2025, 80 percent of Americans believe in maintaining rules for AI safety and data security, even if it means developing AI capabilities at a slower rate.  

“My view is that Americans and Europeans are closely aligned on AI governance. If you look at the polling data, if you look at the concerns about copyright, concerns about privacy, concerns about labor displacement, you see it in equal measure in both regions,” said Marc Rotenberg, the president and founder of the Center for AI and Digital Policy in Washington. “The White House has taken a position on AI regulation that’s out of step with where most Americans are, with where most state legislators are, and even with where they were previously.” 

For Euro-optimists, there are some signs that while the governance might not be perfect, regulation is not stifling innovation, and Europe is starting to find its footing in the realm of AI development. 

“AI is not going to disappear. It’s not going to be gone in 10 years … You don’t need to be first in AI. You need to be resilient and robust and trustworthy in AI,” said Taddeo. 

Even as Europe takes a more robust regulatory posture, it is also trying to get in on the action. In November, the European Commission mobilized €200 billion for AI investment, and French President Emmanuel Macron announced a commitment to sink €109 billion in private investment into the sector. In the Nordic countries in particular, government investment has led to innovation and successful, growing companies. So far, though, no European government has taken direct aim at the regulatory state.

But as the continent tries its own approach to building companies, one wildly different from Silicon Valley’s, the question is whether it’s too late to matter — whether the good parts of the party are already over.

“They were too late five years ago, and they’re absolutely too late now,” said Ball. 
