Talking to Machines: The Dynamics of Human-AI Interaction

By Jianan Li, postdoctoral researcher in LLLM

“Hey Google, play the song ‘Unstoppable.’” Does this sound familiar to you? Have you ever asked your virtual assistant to set an alarm, find a nearby restaurant, or adjust the temperature in your bedroom? If so, you’re not alone. Recent surveys reveal that there are currently over 4.2 billion voice assistants in use worldwide, a number expected to double to 8.4 billion by the end of 2024[1].

Uncovering the Forces Behind Our Interactions with AI

The rapid advancement of artificial intelligence enables people to interact with a wide range of AI-powered entities, such as voice assistants, social robots, and chatbots. In these interactions, language serves as the primary medium of communication, and researchers are increasingly drawn to exploring how people interact with these AI systems. Interestingly, studies show that individuals often engage with AI in much the same way they do with other humans. For example, it’s not uncommon for users to say “please” and “thank you,” or even apologize to their voice assistants[2]. Moreover, people tend to simplify their language when they believe their voice assistant is struggling to understand a command, much as they would use a higher pitch and simpler words when speaking to a child[3].

But why do people treat virtual assistants politely, even when they know these AI-powered entities have no feelings and the consequences of miscommunicating with AI are minimal compared to conversations with real people[4]? One way to explain this is through the “Universal Law of Generalization.” This psychological principle suggests that our brains tend to apply familiar patterns of behavior when we encounter new situations that resemble past experiences[5]. For instance, someone who is bitten by a snake may develop a fear of all snakes, not just the one that bit them.
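For readers curious about the formal statement: in the article that gives the law its name[5], Shepard proposed that the probability of generalizing a learned response from one situation to another falls off exponentially with the distance between them in an internal “psychological space.” Written out (with notation chosen here for illustration, not taken verbatim from the original paper), it looks roughly like this:

g(x, y) = exp(-k * d(x, y))

Here d(x, y) is the perceived distance between situations x and y, and k sets how quickly generalization fades. The closer a new situation feels to a familiar one, say, a talking machine to a talking person, the more strongly old habits carry over.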

Language, as a basic communication tool unique to humans, plays an important role in human interactions. When people engage with an inanimate object that uses language as its primary mode of output, they often perceive the interaction as similar to communicating with a real person. In such contexts, language acts as a powerful trigger, prompting communication behaviors typically found in human-to-human interaction, such as simplifying language for listeners with less linguistic competence and displaying politeness. Recognizing this, AI developers have designed virtual assistants to be more human-like, incorporating natural-sounding voices, facial expressions, and gestures. These human-like features make interacting with AI feel more intuitive and closer to engaging with a real person, ultimately fostering trust and making users feel more comfortable[6].

Why Understanding Human-AI Communication Matters

Some might wonder: if people interact with AI in ways similar to how they engage with real humans, why is it still important to investigate these interactions? The truth is that, despite the similarities, users, even children, are fully aware that they are not engaging with a real person. This awareness often leads people to behave slightly differently towards AI systems. For instance, people tend to be more impatient with robotic agents[8] and are more likely to ignore clarification questions from chatbots[9]. Furthermore, while AI systems increasingly make our lives easier and more convenient, miscommunication and misunderstanding remain common. Users often feel frustrated, and may be discouraged from using AI products altogether, when they frequently encounter responses like “Sorry, I don’t understand”[9]. Addressing these issues is crucial for ensuring that people can fully benefit from the AI era.

To sum up, understanding both these similarities and differences can help create AI that not only meets user needs but also enhances the overall user experience. By studying how people communicate with AI, developers can design systems that are more intuitive, responsive, and aligned with human expectations. Improving human-AI communication will lead to greater user satisfaction, increased trust in technology, and a more seamless integration of AI into our daily lives.

References

[1] Demand Sage. Voice search statistics. https://www.demandsage.com/voice-search-statistics/

[2] Nass, C., Moon, Y., & Carney, P. (1999). Are people polite to computers? Responses to computer-based interviewing systems. Journal of Applied Social Psychology, 29(5), 1093-1109.

[3] Pelikan, H. R. M., & Broth, M. (2016). Why that Nao?: How humans adapt to a conventional humanoid robot in taking turns-at-talk. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 4921-4932. https://doi.org/10.1145/2858036.2858478

[4] Edwards, R., Bybee, B. T., Frost, J. K., Harvey, A. J., & Navarro, M. (2017). That’s not what I meant: How misunderstanding is related to channel and perspective-taking. Journal of Language and Social Psychology, 36(2), 188-210.

[5] Shepard, R. N. (1987). Toward a universal law of generalization for psychological science. Science, 237(4820), 1317-1323.

[6] Jiang, Y., Yang, X., & Zheng, T. (2023). Make chatbots more adaptive: Dual pathways linking human-like cues and tailored response to trust in interactions with chatbots. Computers in Human Behavior, 138, 107485.

[7] Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114.

[8] Corti, K., & Gillespie, A. (2016). Co-constructing intersubjectivity with artificial conversational agents: People are more likely to initiate repairs of misunderstandings with agents represented as human. Computers in Human Behavior, 58, 431-442.

[9] Wenzel, K., & Kaufman, G. (2024). Designing for harm reduction: Communication repair for multicultural users’ voice interactions. Proceedings of the CHI Conference on Human Factors in Computing Systems, 1-17.