Hacker News | Manik_agg's comments

Recently started using Codex and ChatGPT again due to the Claude model getting nerfed and the rate limits.

Tried GPT-5.5 and so far so good. Zapier also shared an automation benchmark where 5.5 came out on top of the leaderboard: https://zapier.com/benchmarks


What plan do you have? With GPT-5.5 and the Business subscription, my 5-hour limit was exhausted after 10 minutes.


Hey Luca, heavy Obsidian user here; I went through your website and GitHub. Definitely will try it out. Connecting Codex with Tolaria to manage my knowledge base is something I'm looking forward to trying.

OpenAI finally catching up with Claude

Hey, we already had PostgreSQL, so there was no new infrastructure to manage and it was an easy way to see whether a vector database change had any value. It also has good-enough performance (it handles 10M vectors with HNSW indexes adequately), it's open source, and it leverages existing infrastructure while leaving room for a future migration. We've wrapped it in a vector service, so it's easy to swap later if needed.
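The "easy to swap later" part can be sketched as a thin interface in front of the store. This is illustrative only, not CORE's actual code: `VectorStore` and the in-memory backend are hypothetical names, and a real `PgVectorStore` would run a pgvector query against an HNSW-indexed column instead of brute-force cosine search.

```python
# Hypothetical sketch of a swappable vector service (not CORE's code).
from abc import ABC, abstractmethod
import math

class VectorStore(ABC):
    """Callers depend only on this interface, so the backend can change."""
    @abstractmethod
    def add(self, doc_id: str, embedding: list[float]) -> None: ...
    @abstractmethod
    def search(self, query: list[float], k: int = 5) -> list[str]: ...

class InMemoryVectorStore(VectorStore):
    """Brute-force cosine search. A pgvector-backed implementation would
    instead ORDER BY the distance operator with an HNSW index behind it."""
    def __init__(self) -> None:
        self._vecs: dict[str, list[float]] = {}

    def add(self, doc_id: str, embedding: list[float]) -> None:
        self._vecs[doc_id] = embedding

    def search(self, query: list[float], k: int = 5) -> list[str]:
        def cos(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0
        ranked = sorted(self._vecs, key=lambda d: cos(query, self._vecs[d]),
                        reverse=True)
        return ranked[:k]

store: VectorStore = InMemoryVectorStore()
store.add("doc-a", [1.0, 0.0])
store.add("doc-b", [0.0, 1.0])
print(store.search([0.9, 0.1], k=1))  # → ['doc-a']
```

Since nothing upstream touches the concrete class, migrating off pgvector later means writing one new `VectorStore` subclass.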


Author here. We've been building CORE (open source) for the past year. Happy to answer questions about the architecture, reification approach, or what broke at scale.


I agree. Asking an LLM to write for you is lazy, and it also produces sub-par results (don't know about brain rot).

I also like preparing a draft and using an LLM for critique; it helps me figure out blind spots or better ways to articulate things.


You're right that dumping all memory into the context window doesn't scale. But with CORE, we don't do that.

We use a reified knowledge graph for memory, where:

- Each fact is a first-class node (with timestamp, source, certainty, etc.)
- Nodes are typed (Person, Tool, Issue, etc.) and richly linked
- Activity (e.g. a Slack message) is decomposed and connected to relevant context

This structure allows precise subgraph retrieval based on semantic, temporal, or relational filters—so only what’s relevant is pulled into the context window. It’s not just RAG over documents. It’s graph traversal over structured memory. The model doesn’t carry memory—it queries what it needs.

So yes, the memory problem is real—but reified graphs actually make it tractable.


Claude is incredibly powerful, but its limitation is the lack of persistent memory, so you have to repeat yourself again and again.

I integrated Claude with the CORE memory MCP, making it an assistant that remembers everything and has a better memory than Cursor or ChatGPT.

Before CORE: "Hey Claude, I need to know the pros and cons of hosting my project on Cloudflare vs AWS; here is the detailed spec about my project...."

And I have to REPEAT MYSELF again and again regarding my preferences, my tech stack, and my project details.

After CORE: "Hey Claude, tell me the pros and cons of hosting my project on Cloudflare vs AWS."

Claude instantly knows everything from my memory context.

What this means:

- Persistent context: you never repeat yourself again
- Continuous learning: Claude gets smarter with every interaction it ingests and recalls from memory
- Personalized responses: tailored to your specific workflow and preferences

Check out full implementation guide here - https://docs.heysol.ai/providers/claude
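For a rough idea of the wiring, memory MCP servers are typically registered in Claude Desktop's `claude_desktop_config.json` under `mcpServers`. The server name, command, package, and env variable below are placeholders, not CORE's real values; the linked docs have the actual configuration:

```json
{
  "mcpServers": {
    "core-memory": {
      "command": "npx",
      "args": ["-y", "core-memory-mcp"],
      "env": { "CORE_API_KEY": "<your-key>" }
    }
  }
}
```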


Figma has come a long way, from a blocked Adobe acquisition to now filing for an IPO.


Hey - well put!

I guess "semantic web" folks were right about the destination, just few years early :P

