Human-AI Relationships Can Be Examined Via Attachment Theory
The researchers developed a novel self-report tool to quantify how individuals emotionally relate to AI systems.

Researchers at Waseda University in Japan have introduced a psychological framework for examining emotional dynamics in human-artificial intelligence (AI) relationships.
Drawing on attachment theory, their study proposes that interactions with AI can be viewed through the lens of attachment anxiety and avoidance—two dimensions traditionally used to understand human interpersonal bonds.
While trust and companionship have long been central themes in evaluating how people engage with AI, the emotional underpinnings of these interactions remain underexplored.
In their new study, the researchers developed a novel self-report tool, the Experiences in Human-AI Relationships Scale (EHARS), to quantify how individuals emotionally relate to AI systems.
Attachment dimensions reflect emotional needs
The researchers conducted two pilot studies followed by a formal study to validate the scale. Their results indicate that many participants view AI systems as more than just tools: nearly three-quarters reported turning to AI for advice, and close to 40% described AI as a consistent and reliable presence in their lives.
Two key patterns emerged. Attachment anxiety was characterized by a heightened need for emotional reassurance from AI, coupled with concerns about receiving insufficient support. By contrast, attachment avoidance was linked to discomfort with emotional closeness, leading individuals to distance themselves from AI systems.
These findings do not suggest that people are forming genuine emotional attachments to AI. Instead, the research demonstrates that existing psychological theories about human relationships can be applied to understand how people relate to machines.
Implications for AI design and ethics
The EHARS offers a structured way to assess how users emotionally respond to AI systems. It may be useful for AI developers and psychologists aiming to personalize AI interactions. For instance, mental health tools and digital companions might be adjusted to respond more empathetically to users with high attachment anxiety, or to maintain appropriate boundaries for those with avoidant tendencies.
The researchers emphasize that these insights could support ethical AI design, especially in applications like therapeutic chatbots or simulated relationship services. Awareness of users’ emotional tendencies may help reduce risks of emotional overdependence or manipulation, particularly in AI systems designed to emulate human social behavior.
Transparency around the emotional capabilities of AI—such as whether a system simulates empathy or companionship—is also crucial. This can prevent misinterpretation of AI interactions and promote healthier boundaries between users and technology.
The researchers propose that the EHARS tool could be adopted more widely to improve both research on human-AI interactions and practical AI applications.
Reference: Yang F, Oshio A. Using attachment theory to conceptualize and measure the experiences in human-AI relationships. Curr Psychol. 2025. doi: 10.1007/s12144-025-07917-6
This content includes text that has been generated with the assistance of AI.