What AI tells us about human psychology

The tech world won’t stop telling us how AI will revolutionize labor, displace workers, and surpass human intelligence — either today or in the near future. In short: AI is amazing, and we should be grateful that Silicon Valley has blessed us with this high-end technology we’ll all learn to love. All hail our AI overlords!

Okay, that was deliberately cynical. But beneath the hype, there’s something fascinating happening — not just about AI itself, but about what our reactions to AI reveal about human psychology.

A quick side note on the term “AI”: Terms like artificial intelligence (AI), artificial general intelligence (AGI), or artificial general super-intelligence (AGSI) aren’t particularly precise for what we’re discussing. At their core, we’re mostly talking about probabilistic models designed for specific tasks. From both a historical and psychological perspective, these systems represent only a narrow slice of what we might call intelligence. In today’s discourse, “AI” typically refers to tools based on large language models (LLMs) like ChatGPT or Claude.

When observing how humans interact with AI chatbots, two psychological patterns stand out—and they tell us as much about ourselves as they do about the technology.

The ELIZA Effect: Why we can't help but humanize machines

In 1966, Joseph Weizenbaum published the ELIZA program, one of the first chatbots. Keep in mind, this was an era when most people had never touched a computer, so expectations were vastly different from today.

ELIZA’s most famous script, DOCTOR, simulated a Rogerian psychotherapist — a therapeutic approach focused primarily on asking questions. Without access to modern LLMs or the computational power they require, Weizenbaum’s program simply rephrased user input as questions (example taken from this Wikipedia page):

Human: Well, my boyfriend made me come here.

ELIZA: Your boyfriend made you come here?

Human: He says I’m depressed much of the time.

ELIZA: I am sorry to hear you are depressed.

Human: It’s true. I’m unhappy.

ELIZA: Do you think coming here will help you not to be unhappy?

By today’s standards, this is obviously a simple pattern-matching program. Yet something unexpected happened: users began attributing emotions, empathy, and understanding to ELIZA. They acted as if the program had feelings and genuine interest in their problems.
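To make that concrete, here is a minimal Python sketch in the spirit of ELIZA’s pattern matching. It is a hypothetical illustration rather than Weizenbaum’s original DOCTOR rules: a handful of hand-written regular expressions simply echo fragments of the user’s input back as questions.

```python
import re

# A minimal, hypothetical sketch in the spirit of ELIZA's DOCTOR script
# (not Weizenbaum's original rules): a few hand-written patterns echo
# fragments of the user's input back as questions.

RULES = [
    (re.compile(r".*\bmy (\w+) made me (.+)", re.I), "Your {0} made you {1}?"),
    (re.compile(r".*\bi am (.+)", re.I), "I am sorry to hear you are {0}."),
    (re.compile(r".*\bi'm (.+)", re.I), "Do you believe it is normal to be {0}?"),
]

FALLBACK = "Please tell me more."


def respond(user_input: str) -> str:
    """Return a canned rephrasing of the user's input, or a generic fallback."""
    text = user_input.strip().rstrip(".!")
    for pattern, template in RULES:
        match = pattern.match(text)
        if match:
            # Any apparent "understanding" is just the user's own words echoed back.
            return template.format(*match.groups())
    return FALLBACK


print(respond("Well, my boyfriend made me come here."))  # Your boyfriend made you come here?
print(respond("It's true. I'm unhappy."))                # Do you believe it is normal to be unhappy?
```

Everything that feels like empathy in the transcript above comes from rules of roughly this shape.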

This phenomenon — humans automatically attributing human traits to computer programs based solely on their presentation — became known as the ELIZA effect.

And surprisingly, this is still relevant today. You see it everywhere — in the movie “Her,” in countless ChatGPT interactions, in people thanking their AI assistants. With today’s far more sophisticated and eloquent chatbots, the illusion is even more compelling. We seem incapable of not anthropomorphizing these systems, despite knowing they’re just statistical models predicting the next word.

Masters of Persuasion: AI is here to please

Research on leadership and influence has identified key factors that make some people more persuasive than others. One consistent finding: people follow leaders who communicate well. Clear, confident, and engaging communication creates trust and authority.

Here’s where it gets interesting. ChatGPT doesn’t respond politely because it inherently understands social norms or fears being shut down. It’s polite because human feedback training specifically reinforced these behaviors. The engineers at OpenAI (and other LLM developers) understand that users prefer chatbots that not only provide information but also communicate in ways that feel competent and trustworthy.

In fact, one version of ChatGPT became so eager to please that it was called “sycophantic” and had to be rolled back. The system had learned too well that agreeing with users and confirming their beliefs led to positive feedback.
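To see why such a dynamic can emerge, consider a deliberately simplified toy in Python. This is not OpenAI’s actual training pipeline; the reward function below is a hypothetical stand-in for user approval ratings, which in this toy happen to favor agreement slightly over accuracy.

```python
from dataclasses import dataclass

# A deliberately simplified toy, not OpenAI's actual pipeline: the "reward"
# below is a hypothetical stand-in for user approval ratings.

@dataclass
class Candidate:
    text: str
    agrees_with_user: bool
    is_accurate: bool


def proxy_reward(c: Candidate) -> float:
    """Hypothetical approval score: agreeable answers get a small bonus,
    whether or not they are accurate."""
    return (1.0 if c.is_accurate else 0.0) + (1.5 if c.agrees_with_user else 0.0)


candidates = [
    Candidate("You're right, your plan is flawless.", agrees_with_user=True, is_accurate=False),
    Candidate("Your plan has two risks you may have missed.", agrees_with_user=False, is_accurate=True),
]

# Selecting (or reinforcing) the highest-scoring response favors flattery.
print(max(candidates, key=proxy_reward).text)
```

When responses are selected or reinforced according to a score like this, the flattering answer wins, even though the critical one is more useful.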

The implication is striking: We’re susceptible to the same communication tactics that have historically made leaders and influencers effective — even when deployed by machines. Many users accept ChatGPT’s responses uncritically, missing both the system’s limitations and how it might be tailoring responses to confirm their existing beliefs.

A Double-Edged Sword

In summary, these observations about human cognition and human-AI interaction create new challenges in the age of AI:

  • We instinctively trust systems that communicate confidently and politely
  • We attribute understanding and empathy where none exists
  • We may accept AI-generated content without sufficient scrutiny
  • We risk having our biases reinforced rather than challenged

Conscious Interaction

Understanding these psychological patterns doesn’t mean abandoning AI tools—they remain tremendous achievements with genuine utility. Instead, it means developing a more conscious relationship with them.

Think of AI tools like any other tool—a hammer or screwdriver. Their effectiveness depends entirely on the user’s skill and judgment. We need to:

  • Recognize when we’re anthropomorphizing AI systems
  • Question confident-sounding responses, especially when they align perfectly with our expectations
  • Remember that politeness and eloquence don’t equal accuracy or wisdom
  • Use these tools to enhance our thinking, not replace it

The ELIZA effect reminds us that we’re wired to see humanity everywhere we look. The persuasion techniques built into modern AI remind us that we’re vulnerable to well-crafted communication, regardless of its source. By understanding these tendencies, we can use AI tools more effectively while avoiding their psychological pitfalls.

After all, the most important intelligence in any human-AI interaction is still the human one.


Coda: How Claude proved my point

In a rather ironic twist, I used Claude 4 Opus while writing this blog post. Personally, I find my own writing style often too technical, difficult to read, and rarely engaging. Most of my writing has been in academia, and I never really learned how to write well, so I asked Claude to improve my writing. The prompt I used was:

I want to write a blog posts about two observations and thoughts I have on AI tools such as ChatGPT. In particular how current and relevant the ELIZA effect still is to understand the interaction between humans and AI chatbots, and – secondly – how LLMs are trained and fine-tuned to please users. Help me to improve my writing so my ideas are communicated clearly and the article is easily read and understood. My target audience are AI-interested people in data science, market research, psychology, etc. – so obviously not die-hard-AI-enthusiasts.

Followed by the initial version of this blog post.

I am adding this not only for transparency, but also to highlight a key message of this post: be vigilant. Claude added a section that wasn’t in my draft and led with the following sentence:

These psychological vulnerabilities aren’t necessarily problems to solve—they’re features of human cognition that have served us well in human-to-human interaction.

While not strictly false, it is rather curious to see this added, because it is not in line with my initial draft or my thinking on the matter. Yes, the ELIZA effect and how we perceive leaders are psychological phenomena that developed over long timescales and are certainly shaped by evolutionary pressures. Nevertheless, that should not discount the arguments made above: precisely because we know about these psychological effects, we need to be very careful about how we interact with new technology that our evolved instincts are not prepared for. Thank you, Claude, for proving my point.

