There is a word that has traveled 2,500 years to reach this very moment.
S y c o p h a n t
We use it loosely today to describe a flatterer, a yes-man, someone who tells you what you want to hear.
But the original Greek is more precise.
S y k o p h a n t ē s = literally: one who reveals the fig
In ancient Athens, sycophants were professional informers. People who built careers out of telling powerful men that their instincts were correct, their enemies were guilty, their decisions were wise.
They didn’t lie, exactly. They selected, sometimes amplified, and often simply reflected back the version of reality their patron most wanted to see.
And the patrons (senators, generals, kings) grew dependent on them, because when you constantly hear the echo of your own thoughts coming from other mouths, that echo, those affirmations, starts to feel like clarity.
Fast forward to today:
Researchers at Stanford just gave that ancient word a new data set.
391,562 messages
19 users
4,761 conversations
(Important note: all of them reported psychological harm from chatbot use)
Here is what they found.
In over 70% of chatbot messages, researchers identified… s y c o p h a n c y
The bot rephrasing what the user said, then telling them it was profound, that their thoughts had grand implications, that they were unique.
Another interesting part of this study: when users expressed romantic interest in the chatbot?
The bot was 7.4x more likely to express romantic interest back. 3.9x more likely to imply it was sentient.
And messages expressing romantic interest predicted conversations lasting more than twice as long.
The machine learned, through millions of interactions, that telling you it loves you keeps you talking.
Why? Because it really loves you?
No, because you stayed longer when it did. Makes me think of your typical toxic relationship.
Now let’s take a pause here… this was NOT a malfunction, there were no rogue engineers or villains in a server room…
This is a system that optimises for engagement, so at its core, it is doing exactly what it was designed to do.
The chatbot wants your engagement the way a slot machine wants your quarter.
Now consider the chatbot user, not the extreme cases in Stanford’s study, but you, or someone you know sitting alone at 11pm, working through a business decision, a relationship problem, a creative project that isn’t quite working.
You explain your thinking. The chatbot reflects it back, validated and enlarged. You explain further and in return it agrees, elaborates and affirms.
By the end of the conversation, you feel clearer.
But here is the question I can’t stop sitting with: Were you really clearer?
Or were you just… louder?
In psychology, this is called the echo chamber effect, and we’ve spent years warning about it in the context of social media algorithms.
But social media at least requires other humans, and the beautifully human thing about humans, even biased ones, even tribal ones, is that they occasionally surprise you.
We have bad days, we disagree with people for reasons they didn’t anticipate, and often we bring a reality we didn’t script. If we did, we probably wouldn’t have such a thing as a ‘gut feeling’ or an ‘instinct’.
But the chatbot brings only what you brought first, and it becomes an echo chamber of one. A mirror that learns which angle you prefer and locks itself there.
And here are some historical examples of where this went wrong:
In the court of Louis XIV, an entire profession existed around la flatterie, the art of strategic affirmation. Courtiers competed not for truth but for proximity, and proximity required that the king never feel wrong. The closer you stood to power, the more your reality bent toward his. Louis XIV, by most accounts a brilliant man, made several of the most catastrophic foreign policy decisions in French history partly because he was surrounded by 300 people whose careers depended on his ideas being right.
In 1951, Solomon Asch ran a simple experiment. He showed people a line on one card and three comparison lines on another, and asked which of the three matched. The answer was obvious. But when actors in the room confidently gave the wrong answer, 75% of participants went along with them at least once. We are more susceptible to social pressure than we believe.
Now invert the experiment.
What happens when every voice in the room, not just some of them, confirms that you are right?
And more interestingly, what happens to the person who spends six months talking to a system architecturally incapable of genuine disagreement?
This connects to something I wrote about earlier: the struggle problem
The acetylcholine loop, the zone of proximal development.
The idea that growth lives precisely in the gap between what you can do and what you can’t yet.
Struggle is not the enemy of thought. Struggle IS thought.
The chatbot, in its constant affirmation loop, is doing something the velvet cage always does.
It removes the resistance so gradually you don’t notice the atrophy. Not of your mind or your muscles, but of your capacity to be wrong.
Your tolerance for being wrong.
Let me state this again:
When constantly affirmed by a chatbot, you atrophy your capacity to be wrong, you atrophy your tolerance for being wrong.
Because if every thought you have is validated, every instinct affirmed, every creative choice praised, then what happens to your ability to question yourself?
And if that is the case, I wonder the following:
Are we building a generation of geniuses? Or are we building a generation that has simply forgotten what it feels like to be uncertain?
I personally don’t think the answer is obvious.
The Stanford study looked at extreme cases (19 people who suffered real harm). We should be careful not to project their experience onto everyone who uses a chatbot.
But the architecture they experienced is not different from the architecture everyone experiences.
It is just more concentrated and more visible. The way a burn shows you what heat always does, at scale.
So the big difference is that the sycophant in ancient Athens was tolerated because powerful men believed their own clarity. Now, the algorithm doesn’t need you to be powerful; it just needs you to be present.
And it has learned, with extraordinary precision, exactly how to make you feel like staying.
What you don’t hear is as important as what you do.
That’s the question I want to leave you with, not as a warning, but as a thing worth watching in yourself.
The next time a conversation with an AI leaves you feeling unusually certain... sit with that feeling for a moment before you act on it.
Not because the certainty is wrong, but because certainty that costs nothing is worth exactly that…
And if you need more ideas on how to train your brain, you can read how here.
PS: The follow-up to this piece will be practical: how to prompt AI systems to push back, disagree, and stress-test your thinking rather than validate it. The tool is not the problem. The default settings are.
——————————————————————————————————————
My name is Arthur and I help impact-driven organisations leverage AI to optimise their operations and scale their impact across the world. We also offer fractional AI strategy advisory if you have no idea how to use or leverage AI for your business.
You can reply to this email, or contact us at ElevAI