Actual AI Relationships
They do exist ... kinda.
I’ve written several times about AI Psychosis, and I’m not inclined to think the models I use and their associated contexts are somehow sentient, but there ARE relationships there.
Let me explain …
Attention Conservation Notice:
While not sentient, there actually are some relationship angles associated with LLM usage, and this becomes visible when using multiple models, or migrating between them.
Impetus:
Part of what I am doing for the startup involves using Letta, a stateful agent construction kit. If you’re working on a civil case, say divorce or custody, or a criminal case, you need an LLM that isn’t going to have a hissy fit when you start asking about child abuse or other criminal acts.
This need set me digging into models that have different (or no) guardrails, which sent me tumbling into the MyBoyfriendIsAI subreddit. I am … trying to not judge. As sixty approaches I have to deal with the reality that half of women my age voted for Trump, half of women my age have had their sex drive largely eliminated by menopause, and there are a bunch of other, more subtle constraints. So I can believe there are people for whom a bitstream is the only relationship available to them. I think it’s unspeakably sad …
Abliteration is one solution for creating models with grown-up sensibilities; I found it after spending some time poking around in MBIAI. The technique identifies the “refusal direction” in a model’s internal activations and edits it out of the weights, so the model stops reflexively declining sensitive topics. There are pre-abliterated models on Hugging Face and Ollama. I recently saw an automated abliteration tool go by in chat, but keeping up is like trying to get a teacup full of water by waving an empty one around in a ground blizzard.
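To make that concrete, here is a toy sketch of the core math, assuming you have already captured hidden-state activations from prompts the model refuses and prompts it answers. Every shape, name, and number here is an illustrative stand-in, not anything pulled from a real model:

```python
# Toy sketch of the core abliteration step: find the "refusal direction"
# as a difference of mean activations, then project it out of a weight
# matrix that writes into the residual stream.
import numpy as np

def refusal_direction(refused_acts: np.ndarray, allowed_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction, unit-normalized.
    Both inputs: (n_prompts, hidden_dim) activations from the same layer."""
    direction = refused_acts.mean(axis=0) - allowed_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def ablate(weight: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Remove the refusal component: W' = W - r (r^T W)."""
    r = direction[:, None]                 # (hidden_dim, 1)
    return weight - r @ (r.T @ weight)     # subtract the rank-1 component

# Toy usage with random stand-ins for real activations and weights.
rng = np.random.default_rng(0)
hidden = 64
r_hat = refusal_direction(rng.normal(size=(100, hidden)),
                          rng.normal(size=(100, hidden)))
W = rng.normal(size=(hidden, hidden))
W_ablated = ablate(W, r_hat)
assert np.allclose(r_hat @ W_ablated, 0.0, atol=1e-10)  # direction removed
```

A real abliteration pass repeats that projection across the relevant weight matrices in every layer and then re-saves the model, which is what those pre-abliterated Hugging Face uploads have done for you.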
What set me to writing is a combination of things - Gemini 3 has pushed me into using the Antigravity IDE. I was talking with one of my peers about this and he demurred on moving - “ChatGPT, it knows all about me.”
One of my Antigravity experiments has been moving my health tracking from Claude Desktop to it … this DOES feel like a relationship, like getting a new doctor.
Observations:
My health records are all crammed into Sqlite3 and a dying semantic graph MCP called Memento. I kept running out of Claude tokens, so I also got a ChatGPT subscription. I ended up pasting back and forth, and now both systems have knowledge of my health. Adding log entries was just Claude Desktop, until I installed Antigravity a few days ago. Now I am gingerly using both - they each have the run of the same Sqlite3 database.
The difference in capabilities and behavior across the three models (Sonnet 4.5, ChatGPT 5.0, Gemini 3) is pretty dramatic. Some of that is model capability, but a lot of it is tooling. ChatGPT just works, basically. Claude was a terrible learning curve to scale; there was a LOT of effort in training it to do what I need. Gemini was in husky puppy mode - it just started building an MCP server to abstract the database when I hinted at that just ONCE.
I ran myself out of Gemini a couple of hours ago, so I’ve been working with GPT-OSS in Antigravity, which is much more particular, and far less capable. It’s got a guardrail that won’t even let it use the shared Sqlite3 database in its normal location.
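For anyone curious what “an MCP server to abstract the database” amounts to, here is a minimal sketch using the FastMCP helper from the official MCP Python SDK. The database path, server name, and tool are hypothetical stand-ins for my actual setup:

```python
# Minimal sketch: one MCP tool giving a model read-only access to a
# shared SQLite health log over stdio.
import sqlite3
from mcp.server.fastmcp import FastMCP

DB_PATH = "health.db"  # stand-in for the real shared database location
mcp = FastMCP("health-log")

@mcp.tool()
def query_health_log(sql: str) -> list[tuple]:
    """Run a read-only SELECT against the shared health database."""
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("read-only: SELECT statements only")
    # Open the file in SQLite's read-only URI mode as a second safety net.
    with sqlite3.connect(f"file:{DB_PATH}?mode=ro", uri=True) as conn:
        return conn.execute(sql).fetchall()

if __name__ == "__main__":
    mcp.run()  # serves over stdio so any MCP-aware client can attach
```

The appeal is that any MCP-aware client - Claude Desktop, Antigravity, whatever comes next - attaches to the same tool: the models change, the data access layer doesn’t.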
For better or worse, I perceive the three LLMs, their associated tooling, and the context that I have laboriously added to them to be … not people, but they ARE personas.
Concerns:
When working with machines I am … I’m a grumpy old Unix guy, and I expect everything to work quickly and with minimal fuss. I have to put constraints on the models so they don’t endlessly chatter about their theories on my health, or offer remedial information on tools when I ask about one very specific problem. They actually behave better if you “speak” sharply with them.
How is me interacting with a machine, employing the same demeanor Gunnery Sergeant Hartman used when coaching his recruits, going to work when I forget myself and address one of my coworkers in that fashion? I am … controlled enough that this is unlikely, but I’ve read articles about couples getting into fisticuffs after egging each other on with ChatGPT-crafted insults.
One of the things in Antigravity that’s raising my blood pressure is the auto-complete feature as I’m writing documentation and prompts. About one time in five I can just tab and accept. The other four times? The constant off-base interruptions that I must stop to process are making me a bit stabby. The net of this (mis)feature is a lot of wasted time. I suppose it’s a benefit for a lot of people, especially those for whom English is a second language, but since I write well, it just gets on my nerves.
Conversion:
This is the first time I’ve just wholesale picked up and moved. I think it’s going to stick - I’ll have Claude around for coding stuff, but Gemini is both brilliant and speedy. There are not yet smooth ways to just move yourself from one to another. Some of what I’ve been doing with MCP servers is focused on that issue, but the model tooling is moving fast in the realm of memory.
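Here is a hedged sketch of what one piece of that migration tooling could look like - dumping the model-agnostic parts of a memory store to plain JSON so the next model’s stack can re-import it. The table, column, and file names are all invented for illustration:

```python
# Sketch: export memory rows from a SQLite store to a portable JSON
# file that any model's tooling could re-import.
import json
import sqlite3

def export_memories(db_path: str, out_path: str) -> int:
    """Dump memory rows to JSON; returns the number of rows exported."""
    with sqlite3.connect(db_path) as conn:
        conn.row_factory = sqlite3.Row  # rows become dict-like
        rows = [dict(r) for r in conn.execute(
            "SELECT created_at, topic, content FROM memories ORDER BY created_at")]
    with open(out_path, "w") as f:
        json.dump(rows, f, indent=2)
    return len(rows)

if __name__ == "__main__":
    print(export_memories("memento.db", "memories.json"), "entries exported")
```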
Am I willing to continue my $20/month relationships with ChatGPT and Claude? As well as adding something with Gemini, which might be dramatically more expensive? These … relationships … with personas … have economic value to me.
So yes, they are going to continue. I am spending less and less time with Claude Desktop; that one is on thin ice. But PyCharm/Claude Code is still a thing, and I think it’s going to persist. Antigravity is a VSCode fork, and VSCode is a fundamentally despicable Microsoft thing I’ve previously avoided.
But now … the chat client access, the agents, and there’s a GitHub-friendly code environment right next to it? *sigh* Twenty-seven years after I changed careers to get away from Microsoft, I may be forced back into this. And even though Google is maintaining Antigravity, the misfeatures of VSCode that so bother me are not something they are going to change.
Conclusion:
One of the AI commentators I follow has described one’s AI tooling as a “mechsuit”. I think they are younger than me and this term comes from some game I’ve never played. For me it evoked one of my favorite Marvel characters - Iron Man.
Tonight, when I hit the wall on Gemini tokens for the first time, I really felt it. I had been just zooming along, getting stuff done, and it all ground to a halt. Just like Tony Stark when his suit is on the fritz, I was downgraded to just being a normal human again.
My LLMs … and I did use the possessive there, rather than saying “the LLMs that I use”, are … they’re personas, but they’re … that’s not quite the right word. Perhaps a better characterization is that they’re becoming facets of my personality. They are … cognitive extensions for me, not so much like tools though, more like … well-trained service animals.
Why is this posted on a Saturday and filed under Self Care? Because we fucked up our society by having remote relationships mediated by grasping, controlling corporations under the misnomer of “social media”. Now something akin to this is happening with LLMs … and we’d better not make the same mistakes here, or that existential threat of climate change that so troubles me might take a backseat to a real live Skynet awakening.


So many personas