siscia's comments | Hacker News

> Claude applied techniques I didn’t expect, from disciplines I wouldn’t have thought of

Do you have any example?


Smith-Waterman sequence alignment applied to tool calls. Tool calls are encoded as single characters (Read=R, Edit=E, Bash=B, …), with interesting differences between "successful" and "struggling" sessions.
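That encoding makes sessions comparable with a textbook local-alignment pass. A minimal sketch (the scoring values and example traces are illustrative assumptions, not the ones from the analysis):

```python
# Hypothetical sketch: Smith-Waterman local alignment over two tool-call
# traces, each session encoded one character per call (R=Read, E=Edit, B=Bash).
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
    """Return the best local-alignment score between two traces."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]  # DP matrix, floored at zero
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

# Two made-up sessions that share a long common motif of calls:
print(smith_waterman("RREBRE", "RREBRB"))  # 10: the first five calls align
print(smith_waterman("AAAA", "TTTT"))      # 0: nothing aligns locally
```

A high score means two sessions share a long common subsequence of tool calls, which is what lets "successful" and "struggling" sessions be clustered by behavior rather than by outcome.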

Another one is "Lag Sequential Analysis", applied to human-agent interactions.

I was only thinking of corpus analysis, but I guess that’s what you get when you give AI a web search tool and keep pushing it to explore more domains to borrow techniques and methods from.


Lately I find myself doing more and more of what I call "ambient coding", so I am no longer directly using any of those coding harnesses.

https://redbeardlab.gitbook.io/acem/essays/ambient-developme...

I basically wrote a small GitHub app: I simply create a GitHub issue, the bot reads it, runs an LLM loop, and comes up with a PR (or a design).

Then I simply approve the PR (or the design).

I find it much calmer and much more productive.


It is not clear to me how much CPU I get.

"Unlimited" as in 8 vCPUs, and then I am billed for them on consumption?


Billed for wall time. Whichever plan you are on comes with credits (the hobby plan gets $50 of credits), and beyond that you are billed per CPU wall-time.

There are a lot of positive comments in this comment section, so I don't mind being a bit rough.

I think we can do much better.

The workflow of copy to chatgpt and getting feedback is just the first step, and honestly not that useful.

What I would love to see is a tool that makes my writing and thinking clearer.

Does this sentence make sense? Does the conclusion I am reaching follow from what I am saying? Is this sentence useful, or am I just repeating something I already said? Can I rearrange my wording to make my point clearer? Is my wording actually clear? Or am I not making sense?

Can I rearrange my essay so that it is simpler to follow?


Revise can answer any of those questions! You just have to ask.

You can also focus your questions by selecting a segment of your document, and then writing a prompt; the agent will see what you've selected and focus its efforts on that. You can even prompt with multiple selections attached at once.

I'm hoping to add more "proactive" AI to this eventually, like automatic comments raising the critiques along the lines of these questions you enumerated. Right now the agent has to be prompted first for it to do any real thinking.

Thanks for the feedback.


What I found more useful is an extra step: spec to tests, and then red tests to code and green tests.

LLMs work on both translation steps, and you end up with a healthy amount of tests.

I tag each test with the ID of the spec, so I get spec-to-test coverage as well.

That's besides the standard code coverage given by the tests.
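One way to do that tagging is a custom pytest marker; an illustrative sketch (`login`, the spec ID, and the marker name are made-up stand-ins, not from the comment):

```python
# Hypothetical sketch: tag tests with spec IDs via a custom pytest marker,
# so spec-to-test coverage can be computed by collecting the markers.
import pytest

def login(user: str, password: str):
    """Toy stand-in for the code under test."""
    return {"user": user} if password else None

@pytest.mark.spec("SPEC-042")  # links this test back to requirement SPEC-042
def test_login_rejects_empty_password():
    assert login("alice", "") is None
```

Registering the `spec` marker in `pytest.ini` silences unknown-marker warnings, and a small collection hook (or `pytest --collect-only`) can then report which spec IDs have no test attached.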


Very much agree on coverage. We're actually doing something in that area: https://codespeak.dev/blog/coverage-20260302

For now, it's only about test coverage of the code, but the spec coverage is coming too.


I think you guys are doing pretty much everything right.


When you translate spec to tests (if those are traditional unit tests or any automated tests that call the rest of the code), that fixes the API of the code, i.e. the code gets designed implicitly in the test generation step. Is this working well in your experience?


Yes, it is passable.

Good enough that I don't review it.

Granted, it is a personal project that I care about only to the point of wanting it to work. There is no money on the line. Nothing professional.

I believe that part of the secret is that I force CC to run the whole test suite after it changes ANY file, using hooks.

It makes iteration slower because it kinda forces it to go from green to green. Or better, from red to less red (since we start in red).
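For Claude Code, that run-the-suite-after-every-edit rule can be wired up with a PostToolUse hook in `.claude/settings.json`; a sketch from memory (treat the exact schema and the `pytest` command as assumptions to verify against the hooks documentation):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "pytest -q || exit 2" }
        ]
      }
    ]
  }
}
```

The non-zero exit code is what surfaces the failing tests back to the agent, forcing it to keep iterating until the suite is green again.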

But overall I am definitely happy with the results.

Again, personal projects. Not really professional code.


Another trick that I use.

I force the code to be almost 100% dependency-injectable.

It greatly simplifies writing tests and getting the coverage, and I see the LLM handling it very, very well.
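A toy illustration of the pattern (all names are made up): collaborators are passed in through the constructor, so a test can inject a recording fake instead of a real client.

```python
# Hypothetical sketch: dependency injection keeps side effects swappable.
class Notifier:
    def __init__(self, send):           # injected: any callable(to, msg)
        self._send = send

    def alert(self, user: str) -> str:
        msg = f"build failed for {user}"
        self._send(user, msg)           # no hard-wired email/SMS client
        return msg

# In tests, inject a recording fake instead of a real transport:
sent = []
n = Notifier(send=lambda to, msg: sent.append((to, msg)))
n.alert("alice")
print(sent)  # [('alice', 'build failed for alice')]
```

Because nothing reaches for a global client, the LLM can generate tests for any class by mechanically faking its constructor arguments, which is likely why it handles this style so well.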


I just made an app that reads GitHub issues. If they have a specific tag, the agent in the background creates a plan.

If they have another tag, the agent in the server creates a PR considering the whole issue conversation as context (with the idea that you used the plan above - but technically you don't have to.)

If you comment on the PR, the agent starts again, loading your comment as context and trying to address it.

Everything is already in Git and GitHub, so it automatically picks up your CI.

It seems simpler, but I am sure I missed something.


I am somewhat close to what MSFT and GitHub are doing here, mostly because I believe it is a great idea, and I am experimenting on it myself.

Especially on the angle of automatic/continuous improvement (https://github.github.io/gh-aw/blog/2026-01-13-meet-the-work...)

Often code is seen as an artifact that is valuable by itself. This was an incomplete view before, and it is now a completely wrong view.

What is valuable is how code encodes the knowledge of the organization building it.

But what is even more valuable is that knowledge itself, embedded in the people of the organization.

Which is why continuous and automatic improvement of a codebase is so important. We all know that code rots with time and feature requests.

But at the same time, abruptly changing the whole codebase architecture destroys the mental model of the people in the organization.

What I believe will work is a slow stream of small improvements, a stream that can be digested by the people in the organization.

In this context I find it more useful to mix and control deterministic execution with a sprinkle of intelligence on top: a deterministic system that figures out what is wrong (with whatever definition of wrong makes sense), and then LLMs to actually fix the problem, when necessary.
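The split can be sketched as follows; `find_problems`, the rule, and the LLM stub are hypothetical stand-ins, not anyone's actual system:

```python
# Hypothetical sketch: a deterministic checker decides WHAT is wrong;
# an LLM (stubbed here) is invoked only on the findings, never to judge.
import re

def find_problems(source: str) -> list[str]:
    """Deterministic rule: flag every line containing a TODO comment."""
    return [line for line in source.splitlines() if re.search(r"\bTODO\b", line)]

def fix_with_llm(problem: str) -> str:
    """Stand-in for an LLM call that proposes a fix for one finding."""
    return f"proposed fix for: {problem.strip()}"

src = "def f():\n    # TODO handle errors\n    return 1\n"
for p in find_problems(src):
    print(fix_with_llm(p))  # proposed fix for: # TODO handle errors
```

Keeping detection deterministic makes the stream of improvements auditable and rate-limitable, which is exactly what lets people digest it.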


We are missing some building blocks IMO. We need a good abstraction for defining the invariants in the structure of a project and communicating them to an agent. Even if we had this, if a project doesn’t already consistently apply those patterns the agent can be confused or misapply something (or maybe it’s mad about “do as I say not as I do”).

I expend a lot of effort preparing instructions in order to steer agents in this way, it’s annoying actually. Think Deep Wiki-style enumeration of how things work, like C4 Diagrams for agents.


Yes, great points.

Agentic workflows can mix algorithmic + agentic steps. There's a design pattern we call "DataOps" which is all about this - algorithmic extraction then an agentic step delivering a safe output.

See https://github.github.com/gh-aw/patterns/dataops/


What's the hard part?


Nuclear build out, wires and transformers.

China has been building 5% extra nuclear capacity every year for the last 30 years, on target for nuclear making up 24% of their energy mix in 2060.
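For a sense of scale, 5% yearly growth compounds fast; a one-line sanity check (illustrative arithmetic, not from the comment):

```python
# 5% capacity growth per year, compounded over 30 years:
growth = 1.05 ** 30
print(round(growth, 2))  # 4.32 -> roughly 4.3x the starting capacity
```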


Everything I’ve read says their nuclear share is actually declining y/y, due to the crazy growth of renewables. I think that target is out of date?


If they build out wind and solar first then yes, the nuclear share will have declined year over year.


Declining because they’re building out everything else so rapidly. I believe they have 30+ reactors being actively constructed right now.


Sure, I’m just pointing out that a 24% share of power being nuclear by 2060 is never going to happen now. Renewables got too cheap, and it’s not “on target”.


If I have zero wives yesterday and one today, by next week I will need a new house for all my new wives.

Like I said in the original post:

>Even the people who understand the scale don't understand the purpose.

>The Chinese grid isn't renewable or non-renewable. It's built to keep the lights on for anything short of a thousand year catastrophe.

Only capitalists are so penny wise and pound foolish to bet their civilization on the lowest bidder while hoping the inevitable doesn't happen in the next quarter.


I agree with you, china is building risk mitigation in a way that no one else is, and it will serve them well. However, in this thread I’m solely replying to your comment on the “24% nuke by 2060” plan. That particular plan is not going to happen any more, nuclear is not competitive enough, even for china.


I disagree. They’re not going to go the battery energy storage route, instead they will just fill in intermittent gaps in renewable electricity production with nuclear as they ramp down coal.


But where is the evidence to back up this 4D chess move? They have been failing to meet their nuclear rollout plans year after year. Why would they magically hit a ridiculously high goal of 24% by 2060?


4D chess? This is not some memery. They’re essentially building out aiming for a 100% redundant capacity. Renewables and coal are much faster to build, nuclear takes longer (7 years for standardized ones, 10 for newer kinds).


Climate change, and having an abundance of energy allows a country to offset some of those challenges.


Weathering the knock-on effects of ecological overshoot, probably. It's going to be interesting.


Demography. They're soon going to run out of "young" workers, which means they have to invent the robotics of the 2100s to ensure the few remaining people will have machines to harvest crops and wage wars.

Also, they're soon going to run out of women, so they need to perfect artificial wombs.

The few remaining party elites will want to live practically forever, so biology will be on the program once fusion and robots have been cracked.

And it does not even seem like China will make USSR-level mistakes.

Our only hope for beating China, at this point, would be to recreate an "opium wars" situation where the whole population becomes dumb and stops caring. (A bit like what TikTok and X are doing to us at the moment, but with much more social control.)


> Our only hope for beating China, at this point, would be to recreate an "opium wars" situation where the whole population becomes dumb and stops caring. (A bit like what TikTok and X are doing to us at the moment, but with much more social control.)

Might be more accurate to say that the PRC has successfully done an opium wars situation to the USA with e.g. fentanyl precursors.


I think that the wider industry is now living through what coding and software engineering went through around a year ago.

Yeah you could ask ChatGPT or Claude to write code, but it wasn't really there.

It takes a while to adopt both the model AND the UI. Software was first because we are both the makers and the users.


In general when you try a new tool or methodology you tend to start with a small class to see the results first.


