I only recently found Nate B. Jones, but I'm watching the new videos and plowing through the AI News & Strategy Daily backlog. This was yesterday and it really resonates. Prior to Nate, my first mention of Matthew Berman was Sam Altman's Predictions in mid-May. My life changed quite a bit at the end of that month, then again at the start of July, and I went all in on doing something with AI during post-surgery downtime. Prompt Engineering Guide on July 1st is a bit of an inflection point, the first public showing of this new direction.
That was just over a hundred days ago. I'm serving in an R&D CTO role with an AI-native startup. I have another commercial thing (struggling) I haven't said much about, and there I'm trying to bring AI to bear on complex processes that have heretofore been manual. AI has been life-changing for me.
This video really speaks to me; it sums up what my personal experiences have been.
I let my cell phone plan lapse last month. Some of that was a safety thing, but there was absolutely a "gotta pay for ChatGPT & Claude first" element involved, too. That $40/month makes me credible in my role. If I didn't have those two tools propping up my research on one side and software development on the other, I would not be taken seriously. I have maturity (no, really!) and deep training/experience as a solutions architect, but there are not enough hours in the day for me to manually scale even one of the learning curves I face, and there are half a dozen before me.
My first outing with Claude Code was Parabeagle, a fork of Chroma's document database. I hacked at it until the naive chunking and 384-dimension embeddings were replaced with document-aware chunking and 768-dimension embeddings. That's not the only change, but it's the first time I went at something specifically to get OpenAI's snout out of my wallet. Disinfodrome had 844k documents, and I am not paying Sam $500 for something that can run in batches over a month on my desktop.
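The gap between the two chunking styles is easy to sketch. This is not Parabeagle's actual code, just a minimal illustration of the idea: naive chunking takes fixed-width slices that cut anywhere, while a document-aware splitter packs whole paragraphs into each chunk so the embeddings see coherent text.

```python
def naive_chunks(text, size=200):
    """Fixed-width slices: happily splits mid-sentence, even mid-word."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def document_aware_chunks(text, max_size=200):
    """Pack whole paragraphs into chunks, starting a new chunk rather
    than splitting a paragraph across two of them."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        para = para.strip()
        if not para:
            continue
        if current and len(current) + len(para) + 2 > max_size:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "First paragraph about topic A.\n\nSecond paragraph, topic B.\n\nThird."
print(naive_chunks(doc, 30))           # slices cut across paragraph edges
print(document_aware_chunks(doc, 40))  # each chunk holds whole paragraphs
```

A real version would also respect headings and sentence boundaries, but even this crude paragraph packing keeps related text together, which is what makes the retrieved chunks useful.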
Friday in Self Hosting LLMs I shared what I'm doing around here to further that. Thursday I had no local LLMs. Writing this Saturday night, I have a pair of LM Studio servers, and one of them is running qwen2.5-coder-7b-instruct so I can use Aider instead of Claude Code.
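For anyone wanting to try the same swap: LM Studio's local server exposes an OpenAI-compatible API (port 1234 by default), and Aider can be pointed at any such endpoint. This is a rough sketch; the exact variable and flag names depend on your Aider version, so check its docs before relying on it.

```shell
# LM Studio's local server speaks the OpenAI wire protocol.
# Point Aider at it instead of the hosted endpoint.
export OPENAI_API_BASE=http://localhost:1234/v1
export OPENAI_API_KEY=lm-studio   # any non-empty value works for a local server

# The openai/ prefix tells Aider to use the OpenAI-compatible endpoint above.
aider --model openai/qwen2.5-coder-7b-instruct
```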
I've spent some time delving into CPU instruction sets and am sad to learn that my next workstation will not be a super cheap HP Z440; it'll be a slightly more expensive Z4 G4. The advantages that AVX-512 brings are well worth the near doubling in price. But it's old gear; $400 will get the job done. I am less pleased with the vagaries of low-end Nvidia cards, but I think the sweet spot is the 16GB RTX 5060 Ti at $430. It's not a great card, but my requirements are 1) it must be Nvidia, with 2) the most memory possible. It can be terribly slow compared to the other 50x0 cards, as long as it loads the larger models for me.
Conclusion:
The first thing I did with AI was fix a nagging health condition. The next was a little bit of Python code called Parabeagle, which showed me that I'm at least 10x faster with Claude Code than when working on my own. Keep in mind I had a computer science education a very long time ago, so I have some core conceptual skills that are still of use forty years later. What Claude gave me was the ability to rapidly turn those ideas into working code, even on days when I'd have struggled on my own to craft a half page of utility code to curate some data.
Nate's video is about how AI is a new thing: when cars first appeared they were referred to as "mechanical horses" and we tried to use them like draft animals. The same is true of every game-changing technology. German doctrine from the 1930s and 1940s doesn't use the word Blitzkrieg; it was purely an emergent property of internal-combustion-powered armor and aircraft, coordinated by radio.
So we are at a point where the overall AI market is in tulip-mania mode. Increasingly ridiculous sums of money are being passed between a small number of corporate players, each counting it as revenue, when it's just not. This is going to blow sky high one day soon, perhaps as soon as Q1 of 2026, unless they really do achieve Artificial General Intelligence, and then quickly after that Artificial Super Intelligence. There are trillions of dollars riding on that theory.
I think a dotcom-style implosion is much more likely, and one of the few things that makes sense in the aftermath I envision is a repeat of what happened to me: a dramatic boost in productivity because AI covered areas where I was slow or weak. There WILL be ways to do that; both of the things I am working on seem like they'd continue even if Anthropic, Nvidia, OpenAI, and the rest all go down like ninepins.
My question for you, constant reader, is this:
What are you doing to prepare for a world where AI is … a cognitive prosthetic … a tool that can turn an average person into a superstar, if only they have a sense of how to use this strange new non-deterministic mode of computing?
Because as I've said before on a variety of issues, I'm not sure what you're gonna do, but you better be doing SOMETHING.