Since I have Claude Pro now and I'm using Claude Desktop as a Model Context Protocol host, I went looking at their example servers. The memory server immediately got my attention, so I installed it, and then the fun began.
Attention Conservation Notice:
Heavy GraphCraft in here, fiddling with Claude vs. Maltego to see how they complement each other. If you are not deep into the details on this stuff, just move along.
Claude Memory:
Just what the heck is a memory server, anyway? Their description should give you a clue as to why I picked this first.
A basic implementation of persistent memory using a local knowledge graph. This lets Claude remember information about the user across chats.
So this is a training wheels setup that will support my Natural Knowledge Graphs direction. The documentation shows that it has an API - here's a snippet, just the portion that creates entities.
create_entities
Create multiple new entities in the knowledge graph
Input: entities (array of objects). Each object contains:
name (string): Entity identifier
entityType (string): Type classification
observations (string[]): Associated observations
Ignores entities with existing names
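Rendered as a type declaration, that input shape is simple enough; this is just my transcription of the snippet above, not something pulled from their codebase.

```typescript
// My transcription of the create_entities input shape from the docs snippet above.
interface Entity {
  name: string;            // Entity identifier
  entityType: string;      // Type classification
  observations: string[];  // Associated observations
}

interface CreateEntitiesInput {
  entities: Entity[];      // entities whose names already exist are ignored
}
```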
I wandered in circles for a while, looking for a TCP port where this API was being served from the "memory" container. I finally learned that there are three transports used for MCP.
stdio - simple shell-based connection, think unix toolchain.
Streamable HTTP - the network (TCP) connection method.
Custom - think message brokers like RabbitMQ.
The transports just move stuff around, the actual APIs are done using JSON-RPC 2.0, which is only vaguely familiar to me. Even so, not a big deal, remote procedure calls are a very normal ritual.
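For a concrete feel, here is roughly what one of those calls looks like on the wire - a minimal JSON-RPC 2.0 request invoking the create_entities tool through MCP's tools/call method. The id and the entity values are made up for illustration.

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "create_entities",
    "arguments": {
      "entities": [
        {
          "name": "Example Person",
          "entityType": "person",
          "observations": ["pulled from a CSV export of the big graph"]
        }
      ]
    }
  }
}
```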
I spent four or five hours arguing with Claude about some CSV files I made from my big MAGA graph, then I ran out of tokens. I looked inside the memory container and found memory.json, which looked similar to a very simple GraphML file. I could manually cook up something like it using sed/awk, but for the sake of growth I'm going to try a TypeScript (Microsoft's typed JavaScript) client over the stdio transport - there's a sketch of what I mean below.
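A minimal sketch, assuming the @modelcontextprotocol/sdk client package and its stdio transport work roughly as documented; the client name and the sample entity are placeholders.

```typescript
// Minimal sketch: drive the reference memory server over the stdio transport.
// Assumes the @modelcontextprotocol/sdk client API (Client, StdioClientTransport).
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Launch the memory server as a child process; stdin/stdout carry the JSON-RPC traffic.
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-memory"],
});

const client = new Client({ name: "graphml-loader", version: "0.0.1" }, { capabilities: {} });
await client.connect(transport);

// The same create_entities call described in the API snippet above.
const result = await client.callTool({
  name: "create_entities",
  arguments: {
    entities: [
      { name: "Example Person", entityType: "person", observations: ["loaded via stdio test"] },
    ],
  },
});

console.log(JSON.stringify(result, null, 2));
await client.close();
```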
Maltego:
I pulled out a tiny fraction of the big graph, just a couple dozen entities, as GraphML, and got Claude to digest it. Once it learned how I wanted the attributes handled it took in additional offerings without any trouble. This would have worked easily for the old 2016 or earlier file format, but the newer style with embedded Lucene indices is going to require some coding. Recall that I mentioned this in Liberating Maltego Data two weeks ago.
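To give a flavor of the coding involved, here is a rough sketch of the easy case - flattening nodes from a simple, pre-2016-style GraphML file into the entity shape create_entities expects. It assumes the fast-xml-parser package, a plain node/data layout, and a made-up file name; the newer format with embedded Lucene indices will take real work beyond this.

```typescript
// Hedged sketch: simple GraphML nodes -> memory server entities.
import { readFileSync } from "node:fs";
import { XMLParser } from "fast-xml-parser";

type Entity = { name: string; entityType: string; observations: string[] };

const xml = readFileSync("fragment.graphml", "utf8");
const parser = new XMLParser({ ignoreAttributes: false });
const doc = parser.parse(xml);

// A single node comes back as an object, several as an array; normalize both cases.
const rawNodes = doc.graphml?.graph?.node ?? [];
const nodes = Array.isArray(rawNodes) ? rawNodes : [rawNodes];

const entities: Entity[] = nodes.map((n: any) => {
  const data = Array.isArray(n.data) ? n.data : n.data ? [n.data] : [];
  // Each <data key="...">value</data> element becomes one observation string.
  const observations = data.map((d: any) => `${d["@_key"]}: ${d["#text"] ?? ""}`);
  return { name: String(n["@_id"]), entityType: "maltego-node", observations };
});

console.log(JSON.stringify({ entities }, null, 2));
```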
While I need direct access to volumes of files for the sake of making them searchable, this is the wrong direction for Maltego itself. It's got a couple of Python transform (query) libraries, and it really needs to be able to handle one of the MCP transports. I may fiddle with stdio at the start because it'll be easier to debug, but long term this has to be done over a network transport, making it like everything else Maltego does.
Once I get the kinks worked out, I can put up an MCP server with some authentication scheme, and let remote Maltego users query stuff I've curated. This isn't a business *yet*, but it's clearly a component of what's happening with Natural Knowledge Graphs. Groups of users on a social network could discuss their interests and once the conversation is broad and deep enough, an LLM/Knowledge Graph front end will permit it to be treated like a Wiki. Yes, a Wiki - because the links involved will be available for review, it won't be the opaque stochastic parrots LLMs are now.
Conclusion:
Today was the first time I saw data I curated being used by an LLM to produce summaries. They seemed complete and I saw no serious errors. A good percentage of what it had to say was superfluous for me, but for a new person trying to use the system like a Wiki? Absolute dynamite, got the right stuff, in the right order.
The data was curated and ingested as a graph. My previous experiments in this area have been with Retrieval Augmented Generation on sets of documents. Having looked at the simple memory API, I won't have any trouble mapping those actions to an ArangoDB graph. It'll easily scale many orders of magnitude. And ArangoDB comes with an enterprise-level security model - you can map users and groups from some web-facing system to it without much trouble.
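As a sanity check on that claim, here is a hedged sketch of create_entities mapped onto an ArangoDB document collection with the arangojs driver. The database name, collection name, credentials, and the skip-existing-names rule are my assumptions, mirroring the memory API above.

```typescript
// Hedged sketch: the memory server's create_entities semantics on ArangoDB.
import { Database, aql } from "arangojs";

type Entity = { name: string; entityType: string; observations: string[] };

const db = new Database({
  url: "http://localhost:8529",
  databaseName: "knowledge",              // assumed database
  auth: { username: "root", password: "" },
});

async function createEntities(entities: Entity[]): Promise<void> {
  const col = db.collection("entities");  // assumed collection
  for (const e of entities) {
    // Mirror the memory server: ignore entities whose name already exists.
    const cursor = await db.query(aql`
      FOR doc IN ${col}
        FILTER doc.name == ${e.name}
        LIMIT 1
        RETURN doc._key
    `);
    const existing = await cursor.all();
    if (existing.length === 0) {
      await col.save(e);
    }
  }
}

await createEntities([
  { name: "Example Person", entityType: "person", observations: ["first test entity"] },
]);
console.log("done");
```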
The big issue I have doing new stuff is that I am *old*. I can spend thirty days reading documentation ... or thirty minutes playing with a working example I can take apart. I beat my head against the wall endlessly with Dify.ai - lots of instructional videos, but the presenter is in a hurry, is not a native English speaker, and presumes the viewer has all the same experience he does, so even quarter speed and magnification isn't enough to catch what he's done.
This memory thing is stone simple, it just runs when you install it. It's easy to take apart and conceptually it's like any unix toolchain - a weird command line, but that's all it is. At this point, I don't know much, but the manner in which it is built and presented inspires confidence - when it's not right, that will be because I haven't put in enough work yet. That was NOT the feeling I got trying to onboard with Dify.