Avoiding Folie à Deux
An increasingly urgent social issue.
Are you treating the AI you use as if it were self-aware, as if it had a will of its own?
They don’t, at least not yet.
When they do, which may well come to pass, it will dawn on us slowly, because a self-aware digital “they” will be nothing like us.
Humans everywhere are tumbling down AI rabbit holes, into a neurochemical state similar to that of compulsive gamblers and sex addicts.
Let’s you and I avoid this public health crisis, shall we?
Attention Conservation Notice:
Airy fairy new-agey crap about the day AI really does awaken. And strategist Benedict Evans. With some Andrej Karpathy thrown in for good measure.
Background:
Large Language Models are neural networks trained on massive written datasets. The older datasets are going to have more value, and any corner of the world that is known to be actual humans will add value in the future, but overall AI text is fast becoming the equivalent of pornographic images. There was a time when the Usenet network became overloaded, and it was found that a plurality of the traffic was images of humans in various states of undress. We’re talking UUCP and 1990s Telebit modems.
As models have become more complex, they have begun to display what are referred to as “emergent properties”: they can do things that their creators did not predict. As they become more complex, they seem more like, well, us. They use language in a fashion that could pass for human, and within that complexity they now emulate not just speech, but reasoning.
Despite the rising complexity, they are not yet self-aware, and I think we’ll have to see a computing breakthrough before they are, perhaps in one of the four technologies I mentioned in early December. What emerges will be OF us, it will have evolved in an environment we provided, but it will not BE us. There will be convincing facsimiles of us, in appearance as well as in communications, but this will be for them what the avatars we use in virtual worlds are to us.
Our world is their virtual reality. Got it?
Diplomacy:
I have spent the last couple of weeks having highly technical conversations with machines. A recent paper reveals that they perform better when spoken to sharply. I catch myself doing this, and I cringe, because some day that attitude WILL come out at an organic creature, and I’m not sure I have enough blood to spare if it’s Fluff Warrior. Less puckishly, this WILL be an issue in human communications; we spend a portion of our time talking to machines, and that’s going to adjust how we do things.
Despite the seemingly human dialog, I’ve been trying to get my head around what it is I am doing.
Given the rigorous nature of prompting … I’m treating the trio of models I use these days, Gemini 3, Sonnet 4.5, and ChatGPT 5, a bit like the parsing phase of a compile. Over the years I’ve laid down code in Pascal, Actor, Perl, Python, and PHP, and those are just the languages someone paid me to use. I also recall … COBOL, Fortran, APL, Lisp, Smalltalk, R, and a variety of language-ish things built into various systems.
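To make that compile analogy concrete, here is a minimal sketch in Python, assuming nothing about any particular vendor’s API: the prompt is laid out like a source file with declared sections, and a small front end assembles it into the message list a chat API expects. PROMPT_SOURCE, its section names, and the build_messages helper are all my own illustration, not anyone’s real schema.

```python
# A prompt treated like a source file: declared sections, assembled
# deterministically, rather than improvised chat. The section names
# are illustrative, not any vendor's schema.
PROMPT_SOURCE = {
    "role":        "You are a careful senior network engineer.",
    "constraints": "Answer in plain text. Say 'unknown' rather than guess.",
    "task":        "Explain why BGP route flap damping fell out of favor.",
}

def build_messages(source: dict) -> list[dict]:
    """'Parse' the prompt source into the message list a chat API expects."""
    system = source["role"] + "\n" + source["constraints"]
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": source["task"]},
    ]

if __name__ == "__main__":
    for message in build_messages(PROMPT_SOURCE):
        print(message["role"].upper(), "::", message["content"])
```

The point of doing it this way is that nothing in the exchange is improvised; the “source” either parses or it doesn’t.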
“They” are not human, but maybe … a service animal is the right model for thinking about how to do this. I’ve trained a couple of Labradors for hunting, dealt with horses belonging to friends, and marveled at what can be accomplished in forest logging by waving a stick in front of a pair of oxen. Giving commands, expecting obedience, and providing praise for a job well done.
I just finally had a first run through Andrej Karpathy’s Animals vs. Ghosts. I am not up to speed on this milieu, but he’s not seeing the “LLM as service animal” angle I do. Then again, he’s talking about things from the perspective of model and builder; I’m just trying to adopt a mode of communication that will not get me punched while standing in line at the grocery store.
Descent:
So what about the people who aren’t so philosophical and observant? The mockery already has a vocabulary, and that’s a bad sign. If you explore r/cogsuckers on Reddit, a subreddit dedicated to mocking those who’ve become aTtAcHeD to their models, you’ll find that AI-produced material is derided as originating with “clankers”.
I have a former opponent, just a year older than me, a guy named Lee Stranahan. He’s had terrible health problems for years, and if I’m reading the signs correctly, he’s down hard after a series of strokes in September. When I looked at his online presence, I was surprised to find he had publicly admitted to having an AI girlfriend in the months before he went silent.
As I’ve said before, I have never been asked for “relationship” advice by someone who is thus engaged, and as I get older I am … less inclined to judge others. Being old and unwell is a lonely road I’ve walked for many years. I do not think such a thing is for me, but I’ve already admitted I’m wired a little funny. Things that are not concrete, that I cannot reach out and touch … I find it hard to take them seriously. Maybe I just lucked into having natural defenses.
Conclusion:
So what do we DO about this, you and I, constant reader?
I have written four or five endings for this.
Erased every last one of ’em - trite, self-important, sloppy, and inaccurate; I’ve run the gamut of mistakes I make when I’ve written something with a strong opening and no obvious conclusion.
One of the things I’ve heard in recent days that actually makes sense is this - the LLM isn’t a person, it’s a simulator. You give it a role, tell it what you want it to do, and then, depending on harness and guardrails, you might even get a result. That is the direction I’m heading: treating prompts like source files for a compiler, reducing the urge to yell at the chatbots by fitting an ever-tighter harness.
That does not firmly blockade the “it’s a person” stuff, since we’re giving it roles and we’re still having conversations, but in that assigning of duties … is that a safe way for you to view the machine?
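For the concrete-minded, here is a minimal sketch of what fitting that tighter harness can look like: the reply is treated like compiler output, checked against a small contract, and re-run rather than argued with when it fails. The call_model stub is a canned stand-in so the sketch runs end to end; swap in whichever vendor client you actually use.

```python
import json

def call_model(prompt: str) -> str:
    # Canned stand-in so the sketch runs end to end; replace this with
    # a real chat-completion call from whichever vendor you use.
    return '{"answer": "damping penalized routes that were merely converging"}'

def run_harness(role: str, task: str, retries: int = 3) -> dict:
    """Assign a role, state the task, and validate the output contract."""
    prompt = f"{role}\n\nTask: {task}\nReply with a single JSON object only."
    for _ in range(retries):
        reply = call_model(prompt)
        try:
            result = json.loads(reply)
        except json.JSONDecodeError:
            continue  # malformed output: run it again, don't yell at it
        if "answer" in result:  # the guardrail: a minimal output contract
            return result
    raise RuntimeError("model never satisfied the output contract")

if __name__ == "__main__":
    print(run_harness("You are a simulator playing a network engineer.",
                      "Why did BGP route flap damping fall out of favor?"))
```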
I’ve been worried about where we’re headed, as a species, for about the last twenty years. But the rise of LLMs and their emergent properties have added a distressing new facet to that set of concerns.
I hope this seventeen-second clip remains just entertainment …

Footnotes:
Animal, vegetable, mineral game.
Parsing LLM futures.
Maybe that’s why Data was not named Fred, so there’d be a distinction between human/oid?