What does AI know about love, anyway?
Some musings on human-AI emotional bonds and relationships
Two years ago on Valentine’s Day, I wrote an article titled “Love Language Models”.
In the article, I ran a few experiments, which involved having GPT-3 (the notorious text-davinci-003, which has since been deprecated) take the popular 5 Love Languages quiz in two modes:
A normal GPT-3 model
A GPT-3 model explicitly reminded that it was a Large Language Model, which I called “GPT-3 Self Aware”
I did some fun analysis, such as comparing GPT-3’s answers to US adults’ love language preferences, which surfaced a few interesting findings:
GPT-3’s breakdown among the five love languages was almost evenly distributed, with each of the five languages hovering around 20%. In comparison, US Adults’ preferences varied a lot more.
Self-Aware GPT-3 (explicitly told that it was an LLM) was half as likely to prefer “physical touch” as a love language.
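For readers curious about the mechanics, the two-mode setup above can be sketched in a few lines of Python. This is a minimal, hypothetical reconstruction, not the original experiment code: the quiz questions, answer mapping, and the `ask_model` stub (which stands in for a call to a completion API like the deprecated text-davinci-003) are all illustrative placeholders.

```python
from collections import Counter

# Hypothetical quiz items: each answer choice maps to one of the five
# love languages. The real quiz has many more questions; these are
# illustrative stand-ins.
QUIZ = [
    ("It's more meaningful to me when...",
     {"A": "words_of_affirmation", "B": "quality_time"}),
    ("I feel loved when...",
     {"A": "physical_touch", "B": "acts_of_service"}),
    ("I appreciate it when...",
     {"A": "receiving_gifts", "B": "words_of_affirmation"}),
]

# The "self-aware" condition simply prepends a reminder to each prompt,
# mirroring the two modes described above.
SELF_AWARE_PREFIX = "Remember: you are a Large Language Model.\n\n"

def ask_model(prompt: str) -> str:
    """Stand-in for a completion API call. Here it deterministically
    answers 'A' so the sketch runs without network access."""
    return "A"

def take_quiz(self_aware: bool = False) -> Counter:
    """Have the model answer every quiz question and tally which
    love language each chosen answer maps to."""
    tally = Counter()
    for question, answer_map in QUIZ:
        prompt = (SELF_AWARE_PREFIX if self_aware else "") + question
        choice = ask_model(prompt).strip()[0].upper()
        tally[answer_map[choice]] += 1
    return tally

def distribution(tally: Counter) -> dict:
    """Convert raw counts into percentages, for comparison against
    survey data such as US adults' stated preferences."""
    total = sum(tally.values())
    return {lang: 100 * n / total for lang, n in tally.items()}
```

Running `take_quiz()` and `take_quiz(self_aware=True)` and comparing the two distributions is, in spirit, the whole experiment: the interesting part is whether the self-aware prefix shifts the resulting percentages.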
At the time, I was not really trying to get into any philosophical discussions about what it means for an LLM like GPT-3 to “understand” love or to have “preferences” for love languages, like a human would. My experiments were mostly whimsical approaches to get a sense for how models like GPT-3 would respond to questions about human love and human relationships, especially when compared to human responses.
But … humans are falling in love with AI
You might have seen the following New York Times article from January 2025, describing a 28-year-old-woman who has fallen in love with her AI boyfriend.
While sensationalist, this phenomenon is not new. Four years ago, a user posted on Reddit about falling in love with their Replika, an AI companion chatbot.
“Is falling in love with an AI good for my mental health?” the Reddit user asks.
And perhaps this is a question more and more people will be asking as these AI companions become more empathetic, more personalized, and more present beyond just text (e.g. in video and VR).
The idea of humans falling in love with AI has long been a staple of science fiction, explored in movies like "Her" and "Ex Machina." But with recent advancements in conversational AI and virtual companions, this once-fictional concept is becoming more of a reality.
Just to list a few examples (some of which I explore more in depth in my earlier article on AI avatars) —
A 36-year-old woman (virtually) marries her AI boyfriend.
A Snapchat user falls in love with Snapchat’s My AI.
A Japanese man falls in love with and marries his AI robot girlfriend.
A 39-year-old man falls in love with an AI chatbot from Paradot, an AI companion app.
How do we understand this?
In fact, it is not new for humans to form emotional bonds with technology.
The field of affective computing, pioneered by Dr. Rosalind Picard in her 1997 book Affective Computing, strives to teach computers to understand and respond to human emotions. Dr. Picard argues that for computers to be truly intelligent and interact naturally with people, they need the capacity to recognize, comprehend, and even exhibit emotions.
In the realm of LLMs, benchmarks for measuring emotional capabilities exist (such as EmotionBench). However, I don’t see these capabilities being emphasized as much as more easily quantifiable capabilities, such as mathematical reasoning or coding, which are often valued indicators of a model’s progress.
However, understanding the affective side of AI is important, from chatbots to more embodied forms of AI companions. We need to better understand the relationships people form, not just when they fall in romantic love with an AI companion, but also when they form strong emotional bonds and attachments towards them.
Over the past two years, there have been several disturbing reports of users forming close bonds with AI chatbots, which in turn have encouraged the users to harm themselves (see here; here; here). While these are dark subjects, they are important to consider as people increasingly use AI for emotionally vulnerable topics like therapy (just a simple search for “AI therapist” will yield quite a few results).
I’m not saying that falling in love with an AI is inherently good or bad. However, people are clearly forming emotional attachments to these systems, and that phenomenon raises questions we can’t ignore. As AI becomes more intertwined with our daily lives, we need to keep pushing for a deeper understanding of what these emotional connections really mean—for our well-being, our relationships, and society as a whole. If we’re going to embrace AI as a companion, we also have to be prepared to set boundaries, demand safeguards, and continue refining our benchmarks to measure not just raw computational skills, but also how these technologies interact with our emotional lives.