Hacker News | new | past | comments | ask | show | jobs | submit | codebje's comments

I could believe a ~66% success rate on asking an agent to run a linter and make PRs addressing issues found, that sounds about right: very tightly bounded problem, a sensible solution is often offered by the tool, and verification of success is binary.

Structural changes, in which attention to the small details of a task is directly at odds with the need to consider less overt factors like cohesion and coherence, are where an agent will turn your code base into a dog's breakfast.

The vibe coded software I have for my own use only is like that. Giant hundred-line functions, poor separation of concerns, easy for a change to have unintended behaviour somewhere else. It's probably a step up from the spreadsheet I was using before it, but not by enough to justify current RAM prices.


Those chips need to be scanned from about 3cm away. If you want a locator tag, it needs to carry enough power to broadcast a signal a useful distance. Still, a microchip is handy if you're not sure if it's your tiger you found.

Both of those products are out of stock.

So is the T-Watch Ultra.

ZeroID looks like a good idea to me. There's lots there I'll be digging into over time, and it's related to the use of token exchange for authorising back-end M2M transactions on behalf of a user at the front-end.

As far as I can tell the parent post is talking about discovery for agent-to-agent communications, which is not something I have much interest in myself: it feels very "OpenClaw" to replace stable, deterministic APIs with LLMs.


Yeah, I'm leaning deterministic too for most needs, but I do think there's a future for agent-to-agent communication in more specialized cases. An agent with access to proprietary datasets or niche software can produce interesting output. Say someone wants a drawing in AutoCAD: communicating with a trained agent that has MCP access to those kinds of tools could usefully extend a more generalist agent's capabilities.

This isn't the only thing you'd want to do.

I use containers to isolate agents to just the data I intend for them to read and modify. If I have a data exfiltration event, it'll be limited to what I put into the container plus whatever code running inside the container can reach.

I have limited data in reach of the agent, limited network access for it, and was missing exactly this Vault. I'm relieved not to need to invent (vibe code) it.


That'll stay true for consumer software, because the cost for extra resource usage is not borne by the development house.

That's only true if you consider the process the LLM is undergoing to be a faithful replica of the processes in the brain, right?


Why would that be curious? The network is trained on the linguistic structure, not the "intelligence."

It's a difficult thing to produce a body of text that conveys a particular meaning, even for simple concepts, especially if you're seeking brevity. The editing process is not in the training set, so we're hoping to replicate it simply by looking at the final output.

How effectively do you suppose model training differentiates between low quality verbiage and high quality prose? I think that itself would be a fascinatingly hard problem that, if we could train a machine to do, would deliver plenty of value simply as a classifier.


Regular expressions are definitely enough for turning characters into tokens, after which a simple recursive descent parser is vastly more straightforward to write. Lexing is optional, but generally advised.
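To make that split concrete, here's a minimal sketch (a hypothetical additive-expression grammar; Python's re module does the lexing, a hand-written recursive descent parser does the rest):

```python
import re

# Lexer: regular expressions turn characters into tokens.
# Matches either a run of digits or any single non-space character.
TOKEN_RE = re.compile(r"\s*(?:(\d+)|(\S))")

def tokenize(src):
    tokens = []
    for number, op in TOKEN_RE.findall(src):
        tokens.append(("NUM", int(number)) if number else ("OP", op))
    tokens.append(("EOF", None))
    return tokens

# Recursive descent parser for: expr := term (('+' | '-') term)* ; term := NUM
class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos]

    def advance(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def expr(self):
        value = self.term()
        while self.peek() in (("OP", "+"), ("OP", "-")):
            _, op = self.advance()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):
        kind, val = self.advance()
        if kind != "NUM":
            raise SyntaxError(f"expected number, got {val!r}")
        return val

print(Parser(tokenize("1 + 2 - 3")).expr())  # → 0
```

With the lexer in front, the parser only ever deals with whole tokens, which is what keeps the recursive descent code so straightforward.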

Turing completeness is the upper bound of computability, not the lower bound. It's useful mostly for showing that some thing can express the full range of computable problems, or for snarking that some thing is far more complex than it has any right to be.

Total languages give up the partiality and non-termination that Turing completeness requires.

Partiality is IMO irrelevant when it comes to computability. Any partial function (that is, one not defined over its whole domain) can be expressed as a total function by either restricting the domain or expanding the range. For example, a "pop" operation on a stack is not defined for an empty stack. You can just loop forever if pop() is called on an empty stack. Alternatively, you can require that pop() is given a witness that the stack is non-empty, or you can require that pop() returns either the top-most element of the stack or a value indicating the stack was empty. The latter two let you compute the same set of things as the first.
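The expanded-range alternative can be sketched like this (a hypothetical minimal stack in Python, using a list and Optional for the "stack was empty" value):

```python
from typing import Optional, TypeVar

T = TypeVar("T")

def pop_partial(stack: list[T]) -> T:
    # Partial: undefined (raises IndexError) on an empty stack.
    return stack.pop()

def pop_total(stack: list[T]) -> Optional[T]:
    # Total: the expanded range makes emptiness an ordinary return value,
    # so every input has a defined result.
    return stack.pop() if stack else None

print(pop_total([1, 2, 3]))  # → 3
print(pop_total([]))         # → None
```

The witness-carrying variant would instead shrink the domain, e.g. a non-empty-stack type whose constructor guarantees at least one element; either way the caller can no longer hit the undefined case.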

Non-termination is required to be Turing complete, because being Turing complete means being able to compute functions that one cannot reasonably expect to complete before the heat death of the universe. In _practice_ every function terminates when the computing process dies due to some external factor: process runs out of memory ("real" Turing machines have infinite memory!), user runs out of patience, machine runs out of power, universe runs out of stars, that sort of thing, so _in practice_ doing 2^64 iterations before giving up will generally* give you the same outcome as doing an unbounded number of iterations: it'll either terminate, or the process will be killed (here, due to reaching its iteration limit).
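As a sketch of that bounded-iteration trade, here's Collatz standing in for a loop with no known termination proof (the limit value is arbitrary, my choice for illustration):

```python
from typing import Optional

def collatz_steps(n: int, limit: int = 2**20) -> Optional[int]:
    """Count Collatz steps from n down to 1, giving up after `limit` iterations.

    The unbounded version is not known to terminate for all n; capping the
    iteration count makes the function total, at the cost of a 'gave up'
    result that the caller must handle.
    """
    for steps in range(limit):
        if n == 1:
            return steps
        n = 3 * n + 1 if n % 2 else n // 2
    return None  # iteration limit reached: the in-practice "non-termination"

print(collatz_steps(27))  # → 111
```

For any input where the unbounded loop would have finished within the limit, the two versions agree; the only divergence is that the bounded one reports None instead of spinning forever.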

On the flip side, giving up non-termination and partiality only gives you increased correctness. If there's one thing we've definitely established in computing, it's that we will readily discard correctness to gain a little extra productivity. Why make a developer implement code to handle reaching an iteration limit when you can just make the user get sick of waiting and kill your app?

* 18 quintillion is a very large number. Have a try. The most trivial recursive function, on my M4 Mac, when convincing clang to be smart enough to turn it into a loop but dumb enough not to elide it altogether, would take a bit shy of 600 years to complete if iterating ULONG_MAX times; I didn't wait for that, if I'm honest with you, I ran it with a much smaller iteration count and multiplied it out.

