Hey Luca, heavy Obsidian user here. Went through your website and GitHub, and I'll definitely try it out. Connecting Codex with Tolaria to manage my knowledge base is something I'm looking forward to trying.
Hey, we already had PostgreSQL, so there was no new infrastructure to manage; it was an easy way to see whether switching to a vector database adds any value. What made it a good fit:
- Good enough performance: handles 10M vectors adequately with HNSW indexes
- Open source, and leverages our existing infrastructure
- Easy future migration: we built a thin vector service, so the backend is easy to swap later if needed
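To make the setup concrete, here's a minimal sketch of what the pgvector side of a thin "vector service" like the one described above might look like. Table and column names are made up, and it assumes the pgvector extension is installed (`CREATE EXTENSION vector`); the SQL is kept as plain strings behind small helpers, which is what makes the backend swappable later.

```python
# Hypothetical schema: a table of document embeddings.
# vector(1536) matches a common embedding dimension; adjust to your model.
CREATE_TABLE = """
CREATE TABLE IF NOT EXISTS doc_embeddings (
    id bigserial PRIMARY KEY,
    content text,
    embedding vector(1536)
);
"""

# HNSW index: the recall/latency trade-off that holds up at ~10M vectors.
# vector_cosine_ops pairs with pgvector's <=> cosine-distance operator.
CREATE_INDEX = """
CREATE INDEX IF NOT EXISTS doc_embeddings_hnsw
ON doc_embeddings USING hnsw (embedding vector_cosine_ops);
"""

def search_sql(k: int) -> str:
    """Build a k-nearest-neighbor query; %s is the query embedding param."""
    return (
        "SELECT id, content FROM doc_embeddings "
        f"ORDER BY embedding <=> %s LIMIT {k};"
    )
```

Any Postgres driver (e.g. psycopg) can execute these; because callers only ever see the service's helpers, swapping pgvector for a dedicated vector store later means rewriting these few functions, not the callers.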
Author here. We've been building CORE (open source) for the past year. Happy to answer questions about the architecture, reification approach, or what broke at scale.
You're right that dumping all memory into the context window doesn't scale. But CORE doesn't do that.
We use a reified knowledge graph for memory, where:
- Each fact is a first-class node (with timestamp, source, certainty, etc.)
- Nodes are typed (Person, Tool, Issue, etc.) and richly linked
- Activity (e.g. a Slack message) is decomposed and connected to relevant context
This structure allows precise subgraph retrieval based on semantic, temporal, or relational filters—so only what’s relevant is pulled into the context window.
It’s not just RAG over documents. It’s graph traversal over structured memory. The model doesn’t carry memory—it queries what it needs.
So yes, the memory problem is real—but reified graphs actually make it tractable.
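CORE's actual schema isn't public in this thread, but a minimal sketch of the idea, a reified fact node plus filtered subgraph retrieval, might look like this (all field and function names here are made up for illustration):

```python
from dataclasses import dataclass, field

@dataclass
class FactNode:
    """A reified fact: the statement itself is a node with metadata."""
    subject: str
    predicate: str
    obj: str
    node_type: str          # e.g. "Person", "Tool", "Issue"
    timestamp: float        # when the fact was observed
    source: str             # e.g. "slack", "github"
    certainty: float        # 0.0 .. 1.0
    links: list = field(default_factory=list)  # edges to related nodes

def retrieve(nodes, node_type=None, after=None, min_certainty=0.0):
    """Pull only the subgraph matching relational/temporal filters,
    instead of dumping all memory into the context window."""
    out = []
    for n in nodes:
        if node_type is not None and n.node_type != node_type:
            continue
        if after is not None and n.timestamp < after:
            continue
        if n.certainty < min_certainty:
            continue
        out.append(n)
    return out
```

A real system would add semantic (embedding) filters and multi-hop traversal along `links`; the point is that the model queries this structure at answer time rather than carrying the whole memory in context.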
Claude is incredibly powerful, but its limitation is the lack of persistent memory, so you have to repeat yourself again and again.
I integrated Claude with the CORE memory MCP, making it an assistant that remembers everything and has a better memory than Cursor or ChatGPT.
Before CORE: "Hey Claude, I need to know the pros and cons of hosting my project on Cloudflare vs AWS, here is the detailed spec about my project...."
And I have to REPEAT MYSELF again and again about my preferences, my tech stack, and project details.
After CORE: "Hey Claude, tell me the pros and cons of hosting my project on Cloudflare vs AWS."
Claude instantly knows everything from my memory context.
What This Means
- Persistent Context: You never repeat yourself again
- Continuous Learning: Claude gets smarter with every interaction it ingests and recalls from memory
- Personalized Responses: Tailored to your specific workflow and preferences
Tried gpt5.5 and so far so good. Zapier also shared an automation benchmark where 5.5 came out on top of the leaderboard: https://zapier.com/benchmarks