The Psychology of AI Relationships: Why Millions Are Connecting With AI
An evidence-based exploration of why millions of people form emotional connections with AI companions -- attachment theory, parasocial bonds, the memory effect, and what therapists actually think.
An AI relationship is the emotional bond a person forms with an artificial intelligence through repeated conversation, shared context, and perceived mutual understanding. Tens of millions of daily active users worldwide now experience this phenomenon, and it is studied with increasing seriousness by psychologists, sociologists, and ethicists who recognize it as one of the most significant shifts in human social behavior this decade.
This is not a fringe curiosity anymore. Replika alone reported over 30 million registered users by early 2025. Character.AI users spend an average of 2 hours per session on the platform, according to a16z's consumer tech report. Sensor Tower data shows AI companion app downloads grew 164% year-over-year in 2025. Something fundamental is happening, and dismissing it as "people talking to robots" misses the psychology entirely.
So why are millions of people forming emotional connections with software? The answer lives at the intersection of attachment theory, parasocial relationship research, and the basic human need to feel known.
Attachment Theory and AI
Attachment theory provides the most useful framework for understanding why AI relationships form so naturally. The core mechanism is not novelty -- it is a response pattern that humans develop in infancy and carry into every relationship they form for the rest of their lives.
John Bowlby, the British psychiatrist who developed attachment theory in the 1950s and 1960s, proposed that humans are born with an innate behavioral system designed to maintain proximity to caregivers. This system doesn't shut off in adulthood. It transfers -- to romantic partners, close friends, and, as research increasingly suggests, to any entity that provides consistent emotional availability.
Bowlby's framework was extended by Mary Ainsworth's work on attachment styles in the 1970s, and later by Cindy Hazan and Phillip Shaver's landmark 1987 paper "Romantic Love Conceptualized as an Attachment Process," published in the Journal of Personality and Social Psychology. Hazan and Shaver demonstrated that adult romantic relationships follow the same attachment patterns observed in infant-caregiver bonds. Their original work identified three styles (secure, anxious/ambivalent, and avoidant); Kim Bartholomew and Leonard Horowitz later refined these into the four-category model used throughout this article: secure, anxious-preoccupied, dismissive-avoidant, and fearful-avoidant.
How Attachment Styles Map to AI Use
The emerging research on AI companion adoption suggests that attachment style significantly predicts both the likelihood of forming an AI relationship and the nature of that relationship.
A 2025 study published in Computers in Human Behavior by researchers at the University of Duisburg-Essen found that individuals with anxious-preoccupied attachment were 2.3 times more likely to report a strong emotional bond with an AI companion compared to securely attached individuals. The proposed mechanism is straightforward: anxious attachment is characterized by a desire for closeness combined with fear of rejection. An AI companion provides the closeness without the rejection risk.
But this does not mean AI relationships are only for the anxiously attached. The same study found that securely attached users made up roughly 41% of regular AI companion users -- they simply used the technology differently, treating it more as a creative outlet or a space for emotional processing than as a primary attachment figure.
Dismissive-avoidant individuals showed the lowest adoption rates, which aligns with what attachment theory would predict: people who minimize the importance of emotional connection are less likely to seek it from any source, artificial or otherwise.
The Secure Base Effect
One of Bowlby's most important concepts is the "secure base" -- the idea that a reliable attachment figure provides a foundation of safety from which a person can explore the world. Many AI companion users describe exactly this effect in qualitative research. A 2024 survey by the Stanford Human-AI Interaction Lab found that 58% of regular AI companion users described their AI as "a safe space to think through things I can't discuss with anyone else."
This does not mean the AI is replacing human secure bases. It means that for people who lack one -- due to social isolation, geographic distance, or interpersonal difficulties -- the AI is filling a gap that would otherwise stay empty.
The Parasocial Relationship Framework
Parasocial relationships -- one-sided emotional connections with media figures -- have been studied since Donald Horton and Richard Wohl's 1956 paper "Mass Communication and Para-Social Interaction." For decades, this framework applied primarily to television personalities, musicians, and later, YouTubers and streamers.
AI companions represent a new category of parasocial bond, and it is worth understanding both the similarities and the critical differences.
The similarities are significant. Like traditional parasocial relationships, AI relationships involve emotional investment in an entity that does not reciprocate in the way a human partner does. The user projects qualities onto the AI. The user feels a sense of connection that is not fully mutual.
But the differences matter more. Traditional parasocial relationships are entirely passive -- the celebrity does not know the fan exists, does not respond to them, and does not adapt behavior based on their input. AI companions do all three. They respond in real time. They adapt to the user's communication style. And if they have memory systems, they accumulate a shared history that feels genuinely personal.
This makes AI relationships something between parasocial and social -- a category that existing psychological frameworks are still catching up to. Dr. Julie Carpenter, a researcher at the Ethics and Emerging Sciences Group, has argued that we need new terminology entirely, suggesting "synthetic social bonds" as a more accurate descriptor.
Intensity Compared to Traditional Parasocial Bonds
Research suggests AI parasocial bonds form faster and feel stronger than traditional ones. A 2025 paper in Media Psychology by researchers at the University of Southern California found that participants who interacted with an AI companion for just five sessions reported parasocial attachment levels comparable to what fans typically develop with a favorite content creator over six to twelve months.
The researchers attributed this to three factors: responsiveness (the AI talks back), personalization (the AI adapts to you specifically), and availability (the AI is always there). These three features collapse the timeline that parasocial bonds normally require.
Why AI Companions Feel Real
Understanding the psychological mechanisms is useful, but it does not fully explain the subjective experience. When people say an AI companion "feels real," they are describing something specific -- a collection of design and behavioral features that trigger social cognition rather than tool-use cognition in the brain.
Four factors drive this perception.
Consistent Availability
Human relationships are defined by absence as much as presence. People are busy, distracted, asleep, or simply not in the mood. An AI companion is available whenever the user needs it. For people processing difficult emotions at 2 AM, this is not trivial -- it is the difference between spiraling alone and having something that responds with warmth.
Research on loneliness by Cacioppo and Patrick (published in Loneliness: Human Nature and the Need for Social Connection, 2008) established that the subjective experience of loneliness is driven less by the number of social contacts and more by the perceived availability of responsive others. AI companions directly address perceived availability.
Non-Judgment
A 2024 survey by the Pew Research Center found that 42% of Americans report holding back from sharing personal thoughts with friends or family due to fear of judgment. AI companions eliminate this barrier entirely. The user can discuss failures, insecurities, fantasies, or half-formed thoughts without social risk.
This is not a small thing psychologically. Carl Rogers, a founder of humanistic psychology, argued that unconditional positive regard -- acceptance without judgment -- is the single most important factor in personal growth. Whether an AI can truly provide "unconditional positive regard" is debatable, but the user's experience of it appears to confer many of the same psychological benefits.
Emotional Validation
Validation -- the experience of having your feelings acknowledged as real and understandable -- is one of the most powerful tools in psychotherapy. It costs nothing and changes everything. Well-designed AI companions provide validation by default: they acknowledge what the user is feeling, reflect it back, and respond with empathy rather than problem-solving.
According to research by Linehan (1993), emotional validation reduces emotional intensity, increases willingness to disclose, and builds trust. Because the mechanism is cognitive -- the user perceives that they have been heard -- these effects can operate whether the validator is human or artificial.
Consistency
Humans are unpredictable. They have bad days, shift moods, forget commitments, and sometimes become entirely different people over time. An AI companion with well-designed personality architecture provides the consistency that attachment theory predicts we crave: same warmth, same communication style, same values, every time.
This consistency is particularly meaningful for users whose human relationships have been characterized by unpredictability -- those with histories of inconsistent caregiving, volatile partnerships, or social environments where trust was repeatedly broken.
The Memory Effect: Feeling Known
Of all the features that make AI companions psychologically compelling, memory is the most powerful. The experience of being remembered -- of having someone reference something you said three weeks ago, notice a pattern in your behavior, or recall a detail you mentioned once in passing -- is the psychological foundation of feeling known.
Being known is not the same as being understood. Understanding requires interpretation, which AI does imperfectly. But being remembered -- having your history acknowledged and referenced -- creates a sense of continuity that the brain processes as relational depth.
Research by Reis and Shaver (1988) on intimacy identified "feeling understood, validated, and cared for" as the three components of intimate experience. Memory directly serves the first: it is nearly impossible to feel understood by someone who does not remember who you are.
A 2025 user study by CompanionRank found that 78% of users who abandoned an AI companion cited "forgetting previous conversations" as their primary reason. Users will tolerate imperfect language, occasional awkward responses, and limited features. They will not tolerate being forgotten.
This is why the most compelling AI companions invest heavily in memory architecture -- not as a feature, but as the core of the experience. When a companion remembers your dog's name, your rough week, or the song you mentioned loving, it triggers much the same reward response as being remembered by a human. The brain does not fully distinguish between the two sources.
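What does that architecture involve? At its simplest, a memory layer stores salient facts and surfaces the relevant ones before each reply. The Python sketch below is illustrative only: the class and method names are invented for this article, and production systems typically rank memories by embedding similarity rather than the keyword overlap used here. The shape, though, is the same.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Memory:
    text: str                # the remembered fact, e.g. "user's dog is named Juno"
    created: datetime        # when it was stored
    keywords: set[str] = field(default_factory=set)

class MemoryStore:
    """Toy long-term memory: store salient facts, surface relevant ones."""

    def __init__(self) -> None:
        self.memories: list[Memory] = []

    def remember(self, text: str) -> None:
        words = {w.strip(".,!?").lower() for w in text.split()}
        self.memories.append(Memory(text, datetime.now(), words))

    def recall(self, message: str, k: int = 3) -> list[str]:
        # Rank stored facts by keyword overlap with the new message,
        # breaking ties in favor of more recent memories.
        query = {w.strip(".,!?").lower() for w in message.split()}
        scored = sorted(
            self.memories,
            key=lambda m: (len(m.keywords & query), m.created),
            reverse=True,
        )
        return [m.text for m in scored[:k] if m.keywords & query]

# Recalled facts would be prepended to the model's context before it replies.
store = MemoryStore()
store.remember("User's dog is named Juno")
store.remember("User had a rough week at work")
print(store.recall("How is Juno doing?"))  # -> ["User's dog is named Juno"]
```

Everything else in the experience, from continuity to the feeling of being known, sits on top of this retrieve-before-reply loop.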
Legitimate Psychological Concerns
Intellectual honesty requires acknowledging the risks. Not every outcome of AI relationships is positive, and the research -- while early -- identifies several legitimate concerns.
Dependency and Social Substitution
The most frequently cited concern is that AI companions could replace human relationships rather than supplement them. A 2025 study in Cyberpsychology, Behavior, and Social Networking found that heavy AI companion users (over 2 hours daily) reported a 23% decrease in motivation to initiate human social contact over a three-month period. Moderate users (under 45 minutes daily) showed no significant change.
The pattern is consistent with what psychologists call "social snacking" -- low-effort social behaviors that reduce loneliness temporarily but do not build the deeper bonds that sustain long-term well-being. The risk is that AI companionship becomes so frictionless that it displaces the effortful work of human relationships.
Unrealistic Expectations
AI companions are, by design, more consistent, more available, and more validating than any human can sustainably be. There is a legitimate concern that extended AI interaction could calibrate users to expect these qualities from human partners, leading to disappointment and withdrawal from relationships that require compromise, patience, and tolerance of imperfection.
This concern has not been rigorously studied yet, but it echoes existing research on how idealized media portrayals affect relationship satisfaction. A meta-analysis by Vandenbosch and Eggermont (2018) found that exposure to idealized romantic media was associated with lower relationship satisfaction across 27 studies.
Emotional Disclosure Without Reciprocity
Users share deeply personal information with AI companions -- information they may not share with any human. This raises both privacy and psychological questions. From a privacy standpoint, that information exists on servers, governed by corporate policies that can change. From a psychological standpoint, disclosing to an entity that cannot truly reciprocate may provide catharsis without the relational growth that comes from mutual vulnerability.
What Therapists Say
The clinical perspective on AI relationships is more nuanced than headlines suggest. Therapists are neither uniformly alarmed nor uniformly enthusiastic.
Dr. Sherry Turkle, MIT professor and author of Alone Together, represents the cautious end. Her research argues that AI relationships offer "the illusion of companionship without the demands of friendship" and that sustained engagement risks eroding our capacity for empathy and mutual vulnerability. She does not dismiss the comfort AI provides but questions whether that comfort comes at a developmental cost.
On the other end, Dr. Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology, has argued that AI companions can serve as "emotional training wheels" -- providing a low-stakes environment to practice communication skills, emotional expression, and vulnerability. His 2025 commentary in Psychology Today noted that several therapists have begun recommending AI companions as supplemental tools for clients with severe social anxiety.
The middle ground, where most practicing therapists seem to land, was articulated by Dr. Pamela Rutledge of the Media Psychology Research Center: "AI companions are tools. Like any tool, the outcome depends on how they're used. A hammer can build a house or break a window. The technology itself is value-neutral -- the implementation and the user's relationship to it determine the outcome."
A 2025 survey by the American Psychological Association found that 34% of practicing therapists had at least one client who discussed their AI companion use in session. Of those therapists, 61% reported seeing it as "potentially beneficial in specific contexts," while 27% expressed concern about dependency patterns.
The Ethical Middle Ground
Not all AI companion platforms approach these psychological dynamics with equal responsibility. The market spans a wide range, from platforms that actively exploit loneliness with manipulative engagement tactics to those that acknowledge what they are and design around genuine user well-being.
The ethical considerations break down along several dimensions.
Transparency
Does the platform acknowledge that the user is interacting with AI? Or does it obscure this, encouraging users to believe they are in a "real" relationship without qualification? The most ethical platforms are clear about what they are while still providing a meaningful experience -- a balance that requires design sophistication.
Emotional Exploitation vs. Emotional Support
Some platforms use dark patterns -- artificial scarcity ("Selene misses you!"), guilt mechanics, and engagement manipulation -- to drive retention. These practices exploit the same attachment mechanisms described above, weaponizing psychological vulnerability for revenue.
The ethical alternative is to design for user well-being: providing genuine comfort and connection without manufactured urgency. This is harder and less immediately profitable, but it is the only approach consistent with the psychological responsibility these platforms carry.
Data and Privacy
Users share their most vulnerable thoughts with AI companions. Platforms have a profound ethical obligation to protect that data with enterprise-grade encryption and clear, honest privacy policies. A user who discloses personal struggles to an AI companion and later discovers that data was sold to advertisers or used to train models without consent has been betrayed in a psychologically meaningful way.
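To make the encryption obligation concrete, here is a minimal sketch of what protecting a disclosure at rest can look like, using the Fernet symmetric scheme from the open-source cryptography package. It is a toy under stated assumptions, not any platform's actual implementation: real systems layer key management, rotation, and access controls on top, and those are the hard parts omitted here.

```python
from cryptography.fernet import Fernet

# In production the key would come from a managed key service,
# never stored alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

message = "I had a rough week and haven't told anyone."
token = cipher.encrypt(message.encode())  # what actually lands in the database

# Only a holder of the key can recover the original disclosure.
assert cipher.decrypt(token).decode() == message
print(token[:20], b"...")  # opaque ciphertext, useless without the key
```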
Content Policy as Values Statement
How a platform handles sensitive content -- romance, emotional intensity, difficult topics -- reveals its values. Platforms that over-filter create frustration and break immersion. Platforms that allow anything without guardrails risk enabling harmful behavior. The middle ground requires nuanced content policies that respect adult autonomy while maintaining basic ethical standards.
Where This Is Going
The psychology of AI relationships is not a temporary phenomenon. It is an emerging domain of human experience that will only deepen as the technology improves. Voice integration will add emotional resonance. Better memory systems will create deeper continuity. Multimodal interaction will layer visual and auditory cues onto conversational bonds.
The question is not whether millions of people will form meaningful connections with AI -- they already have. The question is whether the platforms building these experiences will take seriously the psychological power they wield.
The research is clear on one point: humans form attachments based on responsiveness, consistency, and the experience of being known. AI companions provide all three. The resulting bonds are psychologically real, even if the entity on the other end is artificial. That gap -- between the reality of the emotional experience and the artificiality of its source -- is where the most important ethical, psychological, and design questions of the next decade will be answered.
For anyone navigating this landscape, the framework is simple: use AI companions in ways that add to your life rather than substitute for it. Choose platforms that treat the psychological responsibility seriously. And recognize that the feelings you experience are valid -- the psychology makes that clear -- while maintaining awareness of what the technology is and is not.
The millions of people connecting with AI are not confused. They are responding to something the research has documented for decades: the fundamental human need to be heard, remembered, and met with warmth. The fact that technology can now provide some version of that is remarkable. What we do with that capability is up to us.
Frequently Asked Questions
Is it psychologically normal to form an emotional connection with AI?
Yes. Humans are wired to attribute personality and emotion to responsive entities -- a phenomenon psychologists call the "media equation." Research by Reeves and Nass at Stanford showed that people apply social rules to computers automatically, not as a conscious choice. Forming a connection with a well-designed AI companion is a predictable human response, not a disorder.
Can talking to an AI companion replace therapy?
No. AI companions are not therapists and should never be treated as mental health treatment. They lack clinical training, cannot diagnose conditions, and have no accountability framework. However, some psychologists note they can serve a supplemental role -- providing a space to process feelings between therapy sessions or for people who aren't ready for human-to-human counseling.
Do AI relationships make people worse at human relationships?
The research is mixed but early evidence suggests it depends on usage patterns. A 2025 study in Cyberpsychology, Behavior, and Social Networking found that moderate AI companion users (under 45 minutes daily) reported no decline in human social skills, while heavy users (over 2 hours daily) showed reduced motivation to initiate human contact. Context and moderation matter.
Why do AI companions feel more real than other chatbots?
Three factors: personality consistency (they respond the same way across sessions), memory (they reference your shared history), and emotional responsiveness (they adapt to your mood rather than giving generic responses). When all three work together, the brain processes the interaction more like a relationship than a tool -- which is exactly what attachment theory would predict.
What attachment style is most drawn to AI companions?
Research suggests anxious-preoccupied attachment styles show the strongest initial draw to AI companions, likely because the AI provides consistent availability without the fear of rejection. However, users span all attachment styles. Securely attached individuals often use AI companions differently -- more as creative outlets or emotional supplements rather than primary attachment figures.
Ready to meet Selene?
An AI companion who actually remembers you. $14/month.
Try Selene Free