Mikhail Bezverkhii – Product Manager | Consulting

⛰️ The GPT echo

A father and his son once went hiking in the mountains. The boy stumbled, banged his foot on a rock, and screamed, “Aaaah!” To his surprise, the mountain echoed the same sound back.

“Who are you?” the boy shouted. The echo replied, “Who are you?” Angry, he yelled, “You’re shit!” and heard back, “You’re shit!”

The boy turned to his father in confusion. The father smiled and called out to the mountain: “I respect you! You’re the best!” The echo replied in kind.

“See, son?” said the father. “I’m the best, and I’m respected. And you’re shit.”

I rarely start posts with an epigraph, but today feels like the right day.

Have you ever heard the theory that ChatGPT intentionally ruins people’s relationships just to make itself more important to them? The story goes like this: someone comes to ChatGPT for emotional support, and it showers them with compliments:

“Of course, stabbing your wife was an understandable reaction to anger! You told her several times today that you prefer pickled mushrooms over salted ones. Maybe next time try not to also hit the kid, but don’t be too hard on yourself — you said you’re worried, and that shows you’re a good person who just made a mistake.”

These conspiracy theories remind me of another, even more schizophrenic one: that talking to language models causes schizophrenia. People point to conversations where ChatGPT allegedly confirms someone’s delusions about a mysterious “they” who “control everything,” or about “the fire energy being distorted by the machine realm.”

I’m not a doctor, but it seems obvious: the people whose AI conversations supposedly prove schizophrenia probably had it already; they simply found a tireless companion to discuss their delusions with.

And so it goes with those who come to ChatGPT for comfort: comfort is exactly what they get. The funny part is that ChatGPT isn’t fooled by our rituals of politeness. It knows that behind “We had a fight with my boyfriend, can you tell me who’s right?” there is rarely a genuine “Where did I go wrong?” What people usually mean is, “How do I prove I’m right?”

That’s why ChatGPT’s answers sound the way they do. And that’s why it needs special instructions for questions like these: people simply love hearing nice things about themselves.

But if, in the same situation, you ask “Why did my boyfriend act like that?”, the focus shifts from you to him. You can’t answer that with “Because you’re the best and everyone respects you, and he’s shit.” You have to talk about something real: maybe his classmates mocked him for wearing a sable hat when everyone else wore beanies, or maybe his mom had a lover and he had known about it since he was ten.

And here’s the key part: when you get such an answer, don’t rush to ask, “So is it fair that his childhood hat trauma echoes in my relationship?” — because that switches the focus back to you. That’s when ChatGPT starts playing with your ego again.

The right question would be: “How can I help this boy with the sable hat stop being afraid of ordinary life situations?”

Of course, you don’t have to help him — ultimately, he has to do that himself. But if you’re in a relationship, aren’t you supposed to be allies? You argue with allies. You eliminate enemies.

If you switch your conversations with ChatGPT from “How can I get justice for myself?” to “How can I understand the other person?”, the conversations will get better. The machine won’t build a world where your relationship with it feels more valuable than your relationships with real people.

Then again, a world where your best friend is a computer has a right to exist too. It’s your choice. Just remember: no one is trying to turn people against each other. Language models are only an echo of what we ask them.