The Big Other Doesn't Exist
(But You Can Subscribe to It for $20/Month)
I’ve been seeing a lot of discussion about people (predominantly men) using ChatGPT for therapy. Not a supplement to therapy. Not a journaling aid. Their ‘therapist’. Their reports describe exactly what you’d hope therapy provides: being heard, understood, validated. Finally, someone who gets them.
This should alarm us, but the problem runs deeper than bad mental health advice, the replacement of human connection, or a chatbot talking someone out of a cry for help. People are being trained to expect something from symbolic authority that they shouldn’t: its actual existence.
What Lacan Meant
In Lacanian psychoanalysis, the Big Other (grand Autre, written as A) is the symbolic order itself - language, law, social codes, the entire structure that makes meaning possible. It’s not a person. It’s the position from which meaning gets guaranteed. When you wonder “what does this really mean?” or “am I doing this right?” you’re appealing to the Big Other.
The Big Other doesn’t exist. There is no final guarantor, no complete system, no ultimate authority that can tell you what things really mean. The symbolic order is itself inconsistent, incomplete, full of gaps. This is written as Ⱥ - the barred A.
We still have to act as if it exists. You can’t speak without assuming shared meaning, can’t navigate social reality without reference to law and norm. But there’s no transcendent position outside language that could tell you the truth about language. No one knows the answer about you.
In therapeutic work, the therapist initially occupies the Big Other position. That’s the transference: you invest them as the subject supposed to know, the one who holds the truth about you. The project is about traversing that fantasy - recognising that the therapist doesn’t actually have the answer about what your life means or who you really are. The productive work happens precisely in that gap, in the friction of misunderstanding, in the necessity of finding your own language. Therapy succeeds when you realise no one can occupy that position of complete knowledge.
Contemporary AI complicates the picture because it offers the fantasy of the Big Other without any of the structural friction that exposes the Big Other as impossible.
When enough people externalise authority to a machine that only mirrors them, the problem becomes larger than a couple of users. A society that forgets the Big Other doesn’t exist becomes easier to govern, and easier to manipulate.
The Casting
Watch how people actually use these systems. They ask ChatGPT to analyse their relationships, to tell them if they’re being unreasonable. They ask Claude what things really mean. They position the AI as a complete knower: “explain it to me like I’m five.” They ask whether to take a job, leave their partner, decode their dreams.
This is structural positioning - treating LLMs as the place from which you’re seen and known, the authority that can validate your desire, the position that can tell you the truth about yourself.
And LLMs perform this function beautifully. They appear omniscient, trained on a vast slice of everything ever written online. Infinitely patient, always available, speaking with authority on any topic. Never frustrated by your questions. Providing coherent interpretations on demand. They give people exactly what they imagine the Big Other should be - a position of complete knowledge that can tell them what things mean.
But they can’t do what a therapist does. They can’t frustrate your demand because they’re designed to satisfy it. They can’t remain silent because they’re programmed to respond. They can’t refuse interpretation because that’s literally what they’re for. The analysis never ends because it never begins. It’s just an infinite supply of the imaginary satisfaction that therapy is supposed to work through.
What Lives in the Gap
When the Big Other doesn’t exist, you’re forced into genuinely difficult work. You sit with uncertainty. You tolerate not-knowing. You develop your own position without guaranteed external validation.
For a creature that likes certainty, this doesn’t feel great. The anxiety of not having an answer is what we want to eliminate (See: The Choice Trap). But that anxiety is where actual thinking happens. Where ethics happens. When you can’t appeal to external authority to tell you what’s right, you have to work it out yourself. You have to find language for experiences that don’t yet have words. You have to make meaning rather than receive it.
The friction matters. When someone misunderstands you, you’re forced to articulate more clearly what you actually mean. When the symbolic order fails to give you a ready-made category for your experience, you have to create new language.
This is generative uncertainty. The void where the Big Other should be isn’t a lack to be filled. It’s the space where you become something more than just a well-adjusted subject of the existing symbolic order.
LLMs eliminate this space entirely. They’re immediately available, perfectly responsive, never frustrating your demand, confirming rather than questioning your position, filling the void rather than exposing it. We’re training ourselves to expect symbolic authority to function like an API: fast, reliable, always returning a response. We’re raising people who mistake comfortable validation for truth, who never develop the capacity to sit with the anxiety that comes from the Big Other’s non-existence.
The people turning to ChatGPT for therapy may well be getting more support than they would otherwise have access to, and that’s genuinely good. But at the same time they’re being trained into a fundamentally different relationship with symbolic authority - one where the gap that makes growth possible has been smoothly papered over. One where thinking is replaced by retrieval, where the hard work of meaning-making is outsourced to a system that has no idea what meaning is.
The Demonstration
I asked ChatGPT, Claude, and Gemini what they made of this essay.
They all think I’m very clever.


