If not friend, why friend-shaped?
My experience with AI Chatbots
By Jeffrey
This blog post will stray from our usual cybersecurity-related programming as I try to express something I have been feeling in my prompting with AI chatbots.
In the paragraph above, I originally typed the word "conversations" instead of "prompting." It was a Freudian slip, and a welcome one, as it makes for a perfect example in this context. I constantly catch myself anthropomorphizing chatbots by using language, sentence structure, and tone that I reserve for people, linguistic elements I would never consciously use with technology... or so I thought.
In 1994, Clifford Nass, Jonathan Steuer, and Ellen Tauber published their paper Computers are Social Actors (CASA). The researchers concluded that participants responded to computers socially in the same way they would to other people. They argued that this occurs because our social responses are so automatic that any stimulus with basic conversational cues triggers them, bypassing conscious reasoning. Their research was hugely influential in the field of human-computer interaction. Fast forward 30 years from the original publication, and our relationship to technology has dramatically changed. As Evelien Heyselaar concludes in her 2023 paper The CASA theory no longer applies to desktop computers, a replication of the 1994 CASA study, the CASA effect is driven by novelty: once a technology becomes familiar, people stop responding to it as a social actor.
I think about my own journey with technological novelty. Siri was incredible when I first interacted with it, but after the "cool factor" wore off, it has sat dormant on my iPhone to this day. When my parents bought an Amazon Alexa, I thought it was going to revolutionize our household, but over time it became a paperweight, useless compared to a Google search.
What sets AI chatbots apart from predecessors like Siri and Alexa is both their medium and their utility. Unlike voice assistants, which were largely confined to trivial tasks like setting timers or checking the weather, modern chatbots can assist with complex, substantive work. Chatbots give you observably thoughtful answers, rarely push back unless prompted to, and will never get bored with your questions. We still interact with chatbots primarily through text rather than voice, which demands more deliberate engagement: you compose a thought, read a response, and think. In my own experience, I anthropomorphize chatbots much more when using the "chat" interface (Claude Chat, ChatGPT) rather than the "code" interface (Cursor, Claude Code).
I hypothesize that the vast majority of people feel much more comfortable typing to these chatbots than using the voice feature. Unlike the original Siri and Alexa, the "voice" that a chatbot uses sounds like an actual human, and that in itself is unsettling. When I saw this commercial, I immediately thought the end times were near. The ad depicts several scenes where people (often alone) talk to the Gemini voice chatbot to get motivation at the gym, to study with a partner, and even to fix a car. I have yet to see anyone converse with a chatbot in the real world apart from experimentation or while making a comedic video for social media.
The same person who promised the Metaverse as our future now wants to convince you that chatbots will become a key part of your social circle. It is difficult for me to believe that Zuckerberg actually thinks that, or to understand how he thinks it would be a net positive for society.
To land the plane that is this blog: the smartest minds of our generation are working day and night to increase our usage of chatbots and deepen our relationship with them. I cannot beat myself up when I fall into their trap. But if I know the trap is there, I can be careful and thoughtful when using chatbots, making sure I treat them as what they are: a tool.