Millions of people are forming emotional bonds with artificial intelligence chatbots — a problem that politicians need to take seriously, according to top scientists.
The warning about the rise of AI bots designed to form relationships with users comes in an assessment, released Tuesday, of the progress and risks of artificial intelligence.
“AI companions have grown rapidly in popularity, with some applications reaching tens of millions of users,” according to the assessment from dozens of experts, mostly academics — completed for the second time under a global effort launched by world leaders in 2023.
Specialized companion services such as Replika and Character.ai have user numbers in the tens of millions — with users citing a variety of reasons including fun and curiosity, as well as to alleviate loneliness, the report says.
But people can also seek companionship from general-purpose tools such as OpenAI’s ChatGPT, Google’s Gemini or Anthropic’s Claude.
“Even the ordinary chatbots can become companions,” said Yoshua Bengio, a professor at the University of Montreal and lead author of the International AI Safety Report. Bengio is considered one of the world’s leading voices on AI. “In the right context and with enough interactions between the user and the AI, a relationship can develop,” he said.
The assessment acknowledges that evidence regarding the psychological effects of companions is mixed, but notes that “some studies report patterns such as increased loneliness and reduced social interaction among frequent users.”
The warning lands two weeks after dozens of European Parliament lawmakers pressed the European Commission to look into the possibility of restricting companion services under the EU’s AI law amid concerns over their impact on mental health.
“I can see in political circles that the effect of these AI companions on children, especially adolescents, is something that is raising a lot of eyebrows and attention,” said Bengio.
The worries are fueled by the sycophantic nature of chatbots, which aim to be helpful and to please their users as much as possible.
“The AI is trying to make us, in the immediate moment, feel good, but that isn’t always in our interest,” Bengio said. In that sense, the technology has similar pitfalls to social media platforms, he argued.
Bengio said he expects new regulations to be introduced to address the phenomenon.
He pushed back, however, against the idea of introducing specific rules for AI companions, arguing instead that the risk should be addressed through horizontal legislation that tackles several risks at once.
The International AI Safety Report lands ahead of a global summit starting Feb. 16, an annual gathering for countries to discuss governance of the technology, held this year in India.
Tuesday’s report lists the full series of risks that policymakers will have to address, including AI-fueled cyberattacks, AI-generated sexually explicit deepfakes and AI systems that provide information on how to design bioweapons.
Bengio urged governments and the European Commission to enhance their internal AI expertise to address the long list of potential risks.
World leaders first gave a mandate for the annual assessment at the 2023 AI Safety Summit in the United Kingdom. Some of the advisers are well-known figures in the Brussels tech policy world, including former European Parliament lawmaker Marietje Schaake.