No LLM in the loop. The consolidation pass is deterministic:
- Pull the N most recent active memories (default 30) with embeddings
- Pairwise cosine similarity, threshold 0.85
- For each similar pair, check if they share extracted entities
- Shared entities + similarity 0.85-0.98 → flag as potential contradiction (same topic, maybe different facts)
- No shared entities + similarity > 0.85 → redundancy (mark for consolidation)
- Second pass at a 0.65 threshold specifically for substitution-category pairs (e.g., "MySQL" vs "PostgreSQL" in otherwise-similar sentences) — these are usually real contradictions even at lower similarity
Consolidation then collapses the redundancy set into canonical memories with combined importance/certainty. No LLM call, no randomness. Reproducible, cheap, runs in a background tick every ~5 minutes.
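The pair classification above can be sketched roughly like this (a minimal Rust sketch, not the actual yantrikdb code: the `Memory` struct, its field names, and the choice to treat shared-entity pairs above 0.98 similarity as redundancy are my assumptions):

```rust
// Deterministic pair classification, as described: cosine similarity
// plus entity overlap, no LLM call, no randomness.

#[derive(Debug, PartialEq)]
enum PairVerdict {
    Contradiction, // same topic, possibly conflicting facts
    Redundancy,    // near-duplicate, mark for consolidation
    Unrelated,
}

// Assumed shape; the real memory record surely carries more fields.
struct Memory {
    embedding: Vec<f32>,
    entities: Vec<String>, // extracted entity strings
}

fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

fn classify(a: &Memory, b: &Memory) -> PairVerdict {
    let sim = cosine(&a.embedding, &b.embedding);
    if sim <= 0.85 {
        return PairVerdict::Unrelated; // below the main threshold
    }
    let shared = a.entities.iter().any(|e| b.entities.contains(e));
    if shared && sim <= 0.98 {
        // Same entities, similar-but-not-identical text:
        // possibly different facts about the same topic.
        PairVerdict::Contradiction
    } else {
        // No shared entities, or near-identical (>0.98): treat as
        // redundancy (the >0.98 case is an assumption on my part).
        PairVerdict::Redundancy
    }
}
```

Being pure arithmetic over fixed inputs, the whole pass is reproducible and cheap enough to run on a timer.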
The LLM could improve this (better merge decisions, better entity alignment) but the tradeoff is cost and non-determinism. v1 is deterministic on purpose.
Source: crates/yantrikdb-core/src/cognition/triggers.rs and consolidate.rs next to it.
> Pairwise cosine similarity, threshold 0.85
So your system is unable to differentiate between AWS and Azure (~0.95 similarity). It's probably also unable to consistently differentiate between someone saying they love something and someone saying they hate it.
A Bayesian network is a really general concept. It applies to any multidimensional probability distribution. It's a graph that encodes independence relationships between variables. Ish.
I have not taken the time to review the paper, but if the claim stands, it means we might have another tool in our toolbox for better understanding transformers.
Whether or not foreign companies pay for the tariffs is clear here. However, I want to point out that lost income from reduced trade is an impact of its own. An indirect way of paying for the tariffs, so to speak.
I think what people tend to forget when speaking of inevitability is that the scope of their statement is important.
*Existence* of a situation as inevitable isn't so bold a claim. For example, someone will use AI technology to cheat on an exam. Fine, it's possible. Heck, it is mathematically certain if we have a civilization that has both exams and AI tech, and if that civilization runs forever.
*Generality* of a situation as inevitable, however, tends to go the other way.
But then the pinch of resistance makes an island of like-minded thinkers. And it doesn't take more than 0.05% of techies to make great products that otherwise anti-correlate with what people claim is inevitable.
We should stop with over-generalizations like "The future is defined by the common man on the street." It's always much more complex than that. To every trend, there is a counter-trend (and sometimes alt-trends that are not actually opposites).