
Could AI Reduce How Much We Trust Other People?

A child holds out her hand to a white robot. Credit: Andy Kelly/Unsplash

As AI becomes increasingly realistic, our trust in those with whom we communicate may be compromised. Researchers at the University of Gothenburg have examined how advanced AI systems impact our trust in the individuals we interact with.


In one scenario, a would-be scammer, believing he is calling an elderly man, is instead connected to a computer system that communicates through pre-recorded loops. The scammer spends considerable time attempting the fraud, patiently listening to the "man's" somewhat confusing and repetitive stories. Oskar Lindwall, a professor of communication at the University of Gothenburg, observes that it often takes people a long time to realize they are interacting with a technical system.


In collaboration with Jonas Ivarsson, Professor of Informatics, he has written an article titled "Suspicious Minds: The Problem of Trust and Conversational Agents," exploring how individuals interpret and relate to situations in which one of the parties might be an AI agent. The article highlights the negative consequences of harboring suspicion toward others, such as the damage it can cause to relationships.


Ivarsson gives the example of a romantic relationship in which trust issues arise, leading to jealousy and an increased tendency to search for evidence of deception. The authors argue that being unable to fully trust a conversational partner's intentions and identity may result in excessive suspicion even when there is no reason for it.

Their study found that, even during interactions between two humans, some behaviors were interpreted as signs that one of the parties was actually a robot.


The researchers suggest that a pervasive design perspective is driving the development of AI with increasingly human-like features. While this may be appealing in some contexts, it can also be problematic, particularly when it is unclear who you are communicating with. Ivarsson questions whether AI should have such human-like voices at all, since they create a sense of intimacy and lead people to form impressions based on the voice alone.


In the case of the would-be fraudster calling the "older man," the scam is only exposed after a long time, which Lindwall and Ivarsson attribute to the believability of the human voice and the assumption that the confused behavior is due to age. Once an AI has a voice, we infer attributes such as gender, age, and socio-economic background, making it harder to recognize that we are interacting with a computer.


The researchers propose creating AI with well-functioning, eloquent voices that are nonetheless clearly synthetic, thereby increasing transparency.


Communication with others involves not only the risk of deception but also relationship-building and joint meaning-making. Uncertainty about whether one is talking to a human or a computer affects this aspect of communication. While it might not matter in some situations, such as cognitive-behavioral therapy, other forms of therapy that require more human connection may be negatively impacted.


Reference: Ivarsson J, Lindwall O. Suspicious minds: the problem of trust and conversational agents. Comput Supported Coop Work. 2023. doi:10.1007/s10606-023-09465-8


This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.