Smith-Waterman Sequence Alignment applied to tool calls. Tool calls are encoded as single characters (Read=R, Edit=E, Bash=B, …), with interesting differences between "successful" and "struggling" sessions.
Another one is "Lag Sequential Analysis", applied to human-agent interactions.
I was only thinking of corpus analysis, but I guess that’s what you get when you give AI a web search tool and keep pushing it to explore more domains to borrow techniques and methods from.
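To make the Smith-Waterman idea concrete, here is a minimal sketch in Python. The R/G/E/B encoding, the scoring weights, and the two session strings are all made up for illustration, not the actual setup:

```python
# Smith-Waterman local alignment over tool-call strings.
# Encoding (illustrative): R=Read, G=Grep, E=Edit, B=Bash.
MATCH, MISMATCH, GAP = 2, -1, -1

def smith_waterman(a: str, b: str) -> tuple[int, str, str]:
    """Return the best local alignment score and the aligned substrings."""
    h = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best, best_pos = 0, (0, 0)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = h[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH)
            h[i][j] = max(0, diag, h[i - 1][j] + GAP, h[i][j - 1] + GAP)
            if h[i][j] > best:
                best, best_pos = h[i][j], (i, j)
    # Traceback from the best-scoring cell until the score drops to zero.
    i, j = best_pos
    out_a, out_b = [], []
    while i > 0 and j > 0 and h[i][j] > 0:
        if h[i][j] == h[i - 1][j - 1] + (MATCH if a[i - 1] == b[j - 1] else MISMATCH):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif h[i][j] == h[i - 1][j] + GAP:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return best, "".join(reversed(out_a)), "".join(reversed(out_b))

successful = "RGREBEB"       # read, grep, then a tight edit/test loop
struggling = "RRRREEEEBBBB"  # long read binge, batch edits, batch test runs
print(smith_waterman(successful, struggling))
```

The lag sequential version is even simpler at its core: count lag-1 transitions between events and look for over-represented pairs (the event string is again made up):

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

def lag1_transitions(events: str) -> Counter:
    """Count how often each event immediately follows another (lag-1)."""
    return Counter(pairwise(events))

# ('E', 'B') dominating would suggest a healthy edit -> test loop.
print(lag1_transitions("RGREBEBREB"))
```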
Lately I find myself doing more and more of what I call "ambient coding", to the point that I no longer directly use any of those coding harnesses.
There are a lot of positive comments in this section, so I don't mind being a bit rough.
I think we can do much better.
The workflow of copying to ChatGPT and getting feedback is just the first step, and honestly not that useful.
What I would love to see is a tool that makes my writing and thinking clearer.
Does this sentence make sense? Does the conclusion I am reaching follow from what I am saying? Is this sentence useful, or am I just repeating something I already said? Can I re-arrange my wording to make my point clear? Is my wording actually clear? Or am I not making sense?
Can I re-arrange my essay so that it is simpler to follow?
Revise can answer any of those questions! You just have to ask.
You can also focus your questions by selecting a segment of your document, and then writing a prompt; the agent will see what you've selected and focus its efforts on that. You can even prompt with multiple selections attached at once.
I'm hoping to add more "proactive" AI to this eventually, like automatic comments raising critiques along the lines of the questions you enumerated. Right now the agent has to be prompted before it does any real thinking.
When you translate a spec into tests (whether those are traditional unit tests or any automated tests that call the rest of the code), that fixes the API of the code, i.e. the code gets designed implicitly in the test-generation step. Is this working well in your experience?
I just made an app that reads GitHub issues. If they have a specific tag, an agent in the background creates a plan.
If they have another tag, the agent on the server creates a PR using the whole issue conversation as context (with the idea that you used the plan above, though technically you don't have to).
If you comment on the PR, the agent starts again, loading your comment as context and trying to address it.
Everything is already in git and GitHub, so it automatically picks up your CI.
It seems simpler, but I am sure I missed something.
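For anyone curious what that flow looks like, here is a minimal sketch assuming a Flask webhook endpoint; run_agent and fetch_thread are hypothetical stubs standing in for the real agent calls, and the label names are made up:

```python
from flask import Flask, request

app = Flask(__name__)

def run_agent(task: str, context: list[str]) -> None:
    # Stub: the real app launches the background agent here.
    print(f"agent: task={task}, context items={len(context)}")

def fetch_thread(issue: dict) -> list[str]:
    # Stub: the real app pulls the whole issue conversation via the GitHub API.
    return [issue.get("body", "")]

@app.post("/webhook")
def on_event():
    payload = request.get_json()
    action = payload.get("action")
    if "issue" in payload and action == "labeled":
        label = payload["label"]["name"]
        if label == "plan":           # first tag: draft a plan
            run_agent("plan", fetch_thread(payload["issue"]))
        elif label == "implement":    # second tag: open a PR from the conversation
            run_agent("open-pr", fetch_thread(payload["issue"]))
    elif "comment" in payload and "pull_request" in payload.get("issue", {}):
        # A comment on the PR: restart the agent with it as fresh context.
        run_agent("address-comment", [payload["comment"]["body"]])
    return "", 204
```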
Often code is seen as an artifact that is valuable by itself. This was an incomplete view before, and it is now a completely wrong view.
What is valuable is how code encodes the knowledge of the organization building it.
But what is even more valuable is that knowledge itself, embedded in the people of the organization.
Which is why continuous and automatic improvement of a codebase is so important. We all know that code rots with time and feature requests.
But at the same time, abruptly changing the whole codebase architecture destroys the mental model of the people in the organization.
What I believe will work is a slow stream of small improvements, a stream that can be digested by the people in the organization.
In this context I find it more useful to mix and control deterministic execution with a sprinkle of intelligence on top.
So, a deterministic system that figures out what is wrong, with whatever definition of wrong makes sense.
And then LLMs to actually fix the problem, when necessary.
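As a minimal sketch of that split (the "definition of wrong" here is a trivial deterministic check, functions missing docstrings, purely for illustration, and the model name is arbitrary):

```python
import ast

def find_problems(source: str) -> list[str]:
    """Deterministic step: walk the AST and flag functions without docstrings."""
    tree = ast.parse(source)
    return [
        f"function '{node.name}' has no docstring"
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef) and ast.get_docstring(node) is None
    ]

def fix_with_llm(source: str, problems: list[str]) -> str:
    """Intelligent step: only invoked when the deterministic pass found something."""
    from openai import OpenAI  # any OpenAI-compatible client would do
    response = OpenAI().chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Fix these problems, return only code:\n"
                       + "\n".join(problems) + "\n\n" + source,
        }],
    )
    return response.choices[0].message.content

source = "def area(r):\n    return 3.14159 * r * r\n"
problems = find_problems(source)
if problems:  # the LLM runs only when the deterministic check says so
    source = fix_with_llm(source, problems)
```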
We are missing some building blocks IMO. We need a good abstraction for defining the invariants in the structure of a project and communicating them to an agent. Even if we had this, if a project doesn't already consistently apply those patterns, the agent can be confused or misapply something (or maybe it's mad about "do as I say, not as I do").
I expend a lot of effort preparing instructions in order to steer agents in this way; it's annoying, actually. Think DeepWiki-style enumeration of how things work, like C4 Diagrams for agents.
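To give a feel for what such an abstraction might look like, here is a rough sketch; the Invariant type, the rules, and the rendering are all my invention, not an existing tool:

```python
from dataclasses import dataclass

@dataclass
class Invariant:
    scope: str      # which part of the tree the rule governs
    rule: str       # the invariant, stated for humans and agents alike
    enforced: bool  # honest flag: does the codebase actually follow it today?

INVARIANTS = [
    Invariant("src/handlers/", "Never touch the DB directly; go through src/repo/.", True),
    Invariant("src/repo/", "Every public function has an integration test.", False),
]

def render_for_agent(invariants: list[Invariant]) -> str:
    """Turn the invariants into a prompt section, flagging the
    'do as I say, not as I do' cases so the agent isn't confused."""
    lines = []
    for inv in invariants:
        status = "enforced" if inv.enforced else "aspirational; code may violate it"
        lines.append(f"- [{inv.scope}] {inv.rule} ({status})")
    return "Project invariants:\n" + "\n".join(lines)

print(render_for_agent(INVARIANTS))
```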
Agentic workflows can mix algorithmic + agentic steps. There's a design pattern we call "DataOps" which is all about this - algorithmic extraction then an agentic step delivering a safe output.
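A minimal sketch of that shape, as I understand it from your description; the regex and the stubbed agentic step are illustrative assumptions, not your actual pipeline:

```python
import re

def extract_amounts(text: str) -> list[str]:
    """Algorithmic step: pull candidate dollar amounts with a plain regex."""
    return re.findall(r"\$\d+(?:,\d{3})*(?:\.\d{2})?", text)

def agentic_step(amounts: list[str]) -> dict:
    """Agentic step (stubbed): an LLM would label or summarize the extracted
    values, but its output is constrained to a fixed, safe schema."""
    return {"count": len(amounts), "values": amounts}

invoice = "Total due: $1,234.56 by March 1. Late fee: $50.00."
print(agentic_step(extract_amounts(invoice)))
```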
Sure, I’m just pointing out that the 24% share of power being nuclear by 2060 is never going to happen now. Renewables got too cheap, and it’s not “on target”.
If I had zero wives yesterday and have one today, by next week I will need a new house for all my new wives.
Like I said in the original post:
>Even the people who understand the scale don't understand the purpose.
>The Chinese grid isn't renewable or non-renewable. It's built to keep the lights on for anything short of a thousand year catastrophe.
Only capitalists are so penny-wise and pound-foolish as to bet their civilization on the lowest bidder while hoping the inevitable doesn't happen in the next quarter.
I agree with you, China is building risk mitigation in a way that no one else is, and it will serve them well. However, in this thread I’m solely replying to your comment on the “24% nuclear by 2060” plan. That particular plan is not going to happen anymore; nuclear is not competitive enough, even for China.
I disagree. They’re not going to go the battery energy storage route; instead they will just fill in intermittent gaps in renewable electricity production with nuclear as they ramp down coal.
But where is the evidence to back up this 4D chess move? They have been failing to meet their nuclear rollout plans year after year. Why would they magically hit a ridiculously high goal of 24% by 2060?
4D chess? This is not some memery. They’re essentially building out aiming for 100% redundant capacity. Renewables and coal are much faster to build; nuclear takes longer (7 years for standardized designs, 10 for newer kinds).
Demography. They're soon going to run out of "young" workers, which means they have to invent the robotics of the 2100s to ensure the few remaining people will have machines to harvest crops and wage wars.
Also, they're soon going to run out of women, so they need to perfect artificial wombs.
The few remaining party elites will want to live practically forever, so biology will be on the program once fusion and robots have been cracked.
And it does not even seem like China will make USSR-level mistakes.
Our only hope for beating China, at this point, would be to recreate an "opium wars" situation where the whole population becomes dumb and stops caring. (A bit like what TikTok and X are doing to us at the moment, but with much more social control.)
> Our only hope for beating China, at this point, would be to recreate an "opium wars" situation where the whole population becomes dumb and stops caring. (A bit like what TikTok and X are doing to us at the moment, but with much more social control.)
Might be more accurate to say that the PRC has successfully pulled an "opium wars" situation on the USA with e.g. fentanyl precursors.
Do you have any examples?