No, I don't get a commission from Nate B. Olsen, but yes, we are going to have a look at another video. The title of this one is AI's Memory Wall: Why Compute Grew 60,000x But Memory Only 100x. I could not have articulated this issue as well as Nate does, but I understood it immediately, and that is reflected in the MCP servers I evaluated.
I've watched this video twice already; you will need to watch it at least once to get maximum value out of this article.
Attention Conservation Notice:
This article will first address Nate's views on memory, then what I've done, and finally what I suspect I'll be doing next. It is more architect/developer-focused than user-oriented, but for those of you trying to do Srs Bsns™ with AI, there will be a lot of takeaways in here.
AI's Memory Wall:
Everything this guy does is good, and this video is a roadmap into a space that will provide massive benefit if you can solve it for yourself, and then for any groups you support. Gotta listen to it.
Prior Efforts:
When I got serious with Claude about three months ago, these are the things I did that are facets of the memory problem.
Tried a file-based semantic memory system, then settled on Memento. I'm really sad the author seems to have given up on it; for now I am keeping a fork of my own current with Dependabot patches.
Part of the allure of Memento is that it is based on the Neo4j graph database. I've been working on making Maltego graphs more available to other tools, and Neo4j would be an excellent destination for that sort of content.
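To give a sense of what such a bridge might look like, here is a minimal sketch that turns Maltego-style entity and link records into Cypher MERGE statements for Neo4j. The `entity`, `type`, and `linked_to` field names are my assumptions for illustration, not an actual Maltego export schema.

```python
# Hypothetical sketch: convert Maltego-style entity/link records into
# Cypher statements that could be run against Neo4j. The record fields
# here are assumptions, not a real Maltego export format.

def records_to_cypher(records):
    """Yield MERGE statements for nodes first, then the edges between them."""
    for rec in records:
        # One node per entity, labeled with its Maltego entity type.
        yield f"MERGE (:{rec['type']} {{name: '{rec['entity']}'}})"
    for rec in records:
        for target in rec.get("linked_to", []):
            # A generic RELATED edge between source and target entities.
            yield (
                f"MERGE (a {{name: '{rec['entity']}'}}) "
                f"MERGE (b {{name: '{target}'}}) "
                f"MERGE (a)-[:RELATED]->(b)"
            )

stmts = list(records_to_cypher([
    {"entity": "example.com", "type": "Domain", "linked_to": ["198.51.100.7"]},
    {"entity": "198.51.100.7", "type": "IPAddress"},
]))
```

Using MERGE rather than CREATE keeps repeated imports idempotent, which matters when the same Maltego graph gets re-exported.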
I had tabular data, and some of it was timestamped. There were a couple of MCP servers that supported SQLite; I finally settled on one that handles multiple types of SQL servers.
There were over 900 notes in Evernote related to Shall We Play A Game? that got exported to Obsidian, so I've got mcp-obsidian handy, for the day I have time to get back to this puzzle.
I had an RSS reader that stashed data in MySQL, but it was erratic on my Mac, which was having issues with Docker at the time. RSS, as I use it with Inoreader, is a timestamped article store.
I wanted a document retrieval method, so I started with "documentation", which was just meant to handle small piles of PDFs, literally a documentation library.
Chroma proved to be a lot more capable in terms of a document store and it begat Parabeagle, my first published MCP server.
It should be noted that Parabeagle addresses memory-related problems, such as compartmentalization, which Chroma itself cannot.
There was an ArangoDB MCP server that I tried. I have a cache of an Arango backup containing about 250 million Twitter profiles from when I did streaming.
There were solutions for both Elasticsearch and OpenSearch. I have that cache of 250m profiles duplicated in that format, as well as 750m tweets. And I very much want a temporally aware storage system.
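The temporally aware retrieval I'm after maps naturally onto the Elasticsearch/OpenSearch range query. A sketch of the query body it would take, where the `tweets` index name and `created_at` field are my assumptions, not an actual schema:

```python
# Build an Elasticsearch/OpenSearch query DSL body selecting documents
# within a time window. "created_at" is an illustrative field name.

def time_window_query(start, end, size=100):
    """Query body for documents with created_at in [start, end]."""
    return {
        "size": size,
        "sort": [{"created_at": {"order": "asc"}}],
        "query": {
            "range": {
                "created_at": {"gte": start, "lte": end}
            }
        },
    }

body = time_window_query("2020-01-01", "2020-12-31")
```

At 750m tweets, a body like this would be posted to the search endpoint with paging (or a point-in-time scroll) rather than pulled in one shot.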
I spent time getting started with MindsDB, which is going to be used long term, as well as the agentic construction kit Letta. Both have memory-related features.
I walked in, sat down, put my feet up, and plowed right into one of the hardest problems in the field. *shrug* As I have ever done …
What's Next:
I dragged myself through the Figma bramble patch, and I've got some work to do with React Native application building; then I'll be free to return to the above bullet points. Taken on their own, it's really hard to choose among them, but economics are driving me. The startup needs an MVP, so that sets the course.
MindsDB has been sitting there, awaiting connectors and tables and experiments. The default semantic storage tool is Chroma, so I have to see to a Chroma/Parabeagle import/export. I already got Parabeagle converted to cosine vector distance, which MindsDB requires.
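For anyone wondering what that conversion amounts to: cosine distance is just one minus the cosine similarity of two embedding vectors, as opposed to the default squared-L2 metric. The math is small enough to show directly:

```python
# Cosine distance between two embedding vectors: 1 - (a.b)/(|a||b|).
# This is the distance metric the MindsDB integration requires,
# rather than the default L2 distance.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Identical directions give 0, orthogonal vectors give 1,
# and opposite directions give 2.
```

Because cosine distance ignores vector magnitude, it ranks documents by direction in embedding space only, which is usually what semantic search wants.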
Letta has a graphical desktop app for agent construction to go with the back end. I recently learned how to make both Agents and Commands for Claude Code, so I'm eager to compare the two methods. Codex is running here, too, but what Jones has said about its agent conception makes me think it's not a solution.
The startup requires a Kubernetes setup for the sake of horizontal scaling, and this is all new to me. I've gone through three cycles of prompting ChatGPT to fill in details, and this morning I'm starting to build what's been recommended.
When everything else infuriates me, I will put on my Orchéstre Baka Gbiné playlist and pound on Parabeagle upgrades.
Conclusion:
Memory is THE problem for artificial intelligence agents, closely followed by putting deterministic constraints around relentlessly non-linear LLMs. What this means in terms of the social ill of AI Psychosis isn't clear; like psychoactive substances, these systems likely sit on a spectrum of harm and benefit. Alcohol and tobacco have no merit, while marijuana and psychedelics have therapeutic uses. Opiates and stimulants, in moderation and with doctor supervision, are important components of wellness for some of us.
I think there are some humans for whom companions are the least harmful approach to improving their lives. If one cannot have a pet, nor get out much due to physical constraints, who am I to judge? But it is greatly concerning that what happened to pornography once mobile phones with quality cameras arrived is now beginning to happen with "virtual romance".
We took a big step towards being able to build the consumer-oriented application suite we've envisioned. Should this progress, next spring I will have a reason to develop skills in Abliteration. These are strange thoughts to place at the end of an article that began with the technical challenge of agent construction. There is … something here … that needs attention. I'm not sure I'm the one to deliver it, but the fact that aspects of it keep appearing on this magic window before me has risen to the point where I'm noting it.

