Completely unbased, but I don’t want to have to do anything with Bun anymore. It’s just a gut feeling, but I don’t trust them, and I won’t support them.
They forked Zig to make LLM rewrites work and built something the Zig team clearly decided against (non-deterministic compilation).
And now, like a whiny baby, they do an LLM rewrite to Rust. There is a very real chance that Zig's design philosophy, by forcing tough but precise decisions, got them to where they are now, and that the Rust rewrite is the start of the downfall.
It’s purely politics-based, not technical, but it seems like Bun is fully pampered by Claude. So much so that I wouldn’t be surprised if Anthropic’s next marketing piece is: "Claude Mythos rewrote a leading 950k-LOC JS runtime in Rust."
Yeah, I noticed that irony too: they accuse the rewrite of being political rather than technical, while their whole comment is political, not technical.
Ah, fair enough then. You may want to clarify that a bit, as it can be interpreted both ways. And the "whiny baby" part seems a bit uncalled for and distracts from the point you’re trying to make.
Don't give them too much credit; they responded to other comments clearly referring to the developer's comments on Twitter about his technical motivations. He's just backtracking now because of your comment.
I meant the developer's motivation with "whiny baby", and I take the point that this was over the top and I could have found better words.
But I meant that my comment is "politics-based and not technical" because the gut feeling comes more from my reading of soft factors than from an in-depth technical analysis of everything involved.
How can you be so blind? This is all a marketing campaign by Anthropic, no more, no less. The developers doing the rewrite have no voice at all in this game.
I'm team Zig in most cases but I genuinely think they are better off with Rust. They have had a lot of buffer overruns and segfaults as a result of undisciplined Zig code. I think Rust actually is a better technical choice for them.
I don't think that's going to save them. There are big problems and little problems. RAII plus ownership/borrowing solves some memory and file-handle issues. But the big problem, and this happened before the rewrite, is that they have ceded the system level, which locks the project into a local minimum.
It's not a "you're holding it wrong" problem; it's a "you fundamentally have no idea how your own program works past one or two levels of indentation in most places" problem. If the LLM says that something isn't possible, you just have to take it at its word.
> I am so tired of worrying about & spending lots of time fixing memory leaks and crashes and stability issues
There are legitimate reasons to rewrite a program in a better-fitting language, but for a runtime, being "tired of worrying about & spending lots of time fixing memory leaks and crashes and stability" is really borderline to me.
Also, there is far more to it than just compile times and tests: you reset the mental model and will lose contributors. Philosophy, developer skill, and more are attached to a language.
In this case both compile via LLVM the same way, and there is no performance benefit if the code is written exactly the same, so it’s developer preference, where the current lead seems to have prioritized his own DX over everyone else’s.
I'm not sure if the 50% of people defending the whole rewrite live under a rock with regard to the acquisition, have never worked at a US company, or are deliberately naive. Companies give instructions. None of this is accidental or prompted by curiosity.
I agree. From the get-go, Bun's design philosophy was apparent: we do everything you'd ever want (runtime, bundler, test suite, package manager), all in a new breaking patch each week, with each and every one blowing the established competition out of the water: better, faster, and stronger. But it was glaringly obvious that they'd do anything but Keep It Simple, Stupid. It was obvious that the only production environment it would see the light of day in anytime soon would be YC startups burning through one after another at the speed of an accelerant. Now, they're past the point of no return.
I mentioned a similar sentiment 4 days ago in the original discussion about this project, and HN for some reason did not like that I noted Rust is used in production longer and way more than Zig is, including Firefox, CloudFlare's own reverse proxy, Discord, and many other massive effort projects that affect millions if not billions of people.
People are seriously naive about corporate incentives. You think he'll go "Yeah, it being in Zig has put a wrench in our AI usage and that's not a good look now that we're with Anthropic"? No, he'll confirm everyone's biases instead - and it's working as well as expected on this crowd.
I don't have the personal investment that you appear to have with Bun, but why does this matter? Do you scrutinize the rest of your dependencies this way?
Much of working in the JS/npm ecosystem is already pure faith in unvetted dependencies, and this appears no different pre- or post-LLM rewrite. If it satisfies the intended goal and the API contract it originally did, is there any difference? Were you carefully reading the original source code before?
Enough to make judgement calls on them based on the individual Twitter posts of each of their developers? Absolutely not!
If I go beyond the initial vetting, that's a minimum of 30+ projects multiplied by however many contributors each, without even mentioning all of their sub-dependencies. It's a pipe dream to think you can ever have a complete picture of the motivations and political machinations of your entire dependency tree.
I have definitely dropped dependencies from production codebases in the past because "lead developer is widely known to be a clown". You don't need to catch everything but it's generally a good idea to have a picture of, like, the twenty most important dependencies in your codebase and the 90th percentile most notorious clowns in the community.
What is your definition of "known to be a clown"? I'm not sure how one would even begin to evaluate that at scale. Or what practical impact that would actually have for anything but the most critical of dependencies that might be too difficult to swap at will.
Hyperbole, yeah, but top 10% undesirable leads is literally thousands of people?
I couldn't imagine following the communities of even the top ten dependencies of one of our (many) projects very deeply. Every single one of them is having divisive conversations in threads like this all the time that never really lead anywhere or sum up to anything meaningful.
Sure, but you need to consider that in this case we are talking about the language runtime. It isn't just some other library dep; it's basically the base layer of the stack, and it has a huge blast radius. Swapping runtimes is, imo, a nontrivial decision. If problems emerge you can't easily plug in some other runtime; that's a major technical decision and should be treated as such.
In the past at least you could assume the maintainers of the runtime had some kind of mental model of how it worked. In my view, with the way this rewrite has been approached, you can't assume that at all. It's good the test suite passes, but who knows how this will affect the evolution of the codebase? Do we even know if the code is good? How much is just slop? Tests do not test architecture. Is this new rewrite even going to be maintainable? How is the team going to get up to speed on a new codebase in a new language that the main author presumably doesn't even fully understand?
There are many reasons to be concerned. Treating this as no big deal would make me question one's ability to make assessments of technology. There's a world of difference between relying on gen AI heavily in products and leaf nodes of the stack, using it in a purely assistive way, and using it to drive a massive-scale rewrite of a base component in a language the maintaining team has unproven experience with. From a reliability standpoint, the way this project was executed is completely preposterous, and it's very clearly a marketing stunt more than a sound technical decision about how to drive a project. It's not about the use of LLMs; it's about the stupid and blatantly obvious generation of cognitive debt, all to help sell Claude. I'd have far fewer qualms if they had used LLMs to do the rewrite in a way that retained developer understanding (i.e., not driven by one person, and not in such a short timespan that having a robust mental model, even for that person, is highly unlikely).
You're implying that reckless rewrites within the JS ecosystem are a novel event, or more specifically that surprise language changes over a short period of time are. And yet... I can think of at least six times in which exactly this has happened and little fuss was made because the polarizing element of "AI" was not involved. Not just JS to Typescript, but to Dart, Go, C, Rust, Zig, Nim etc.
From any reasonable perspective, this is business as usual in the house of cards we all operate in. Perhaps the sensationalization would be justified if the lang migration wasn't one of less correct -> enforced correctness by default?
To your point in general about maintainers holding a mental model of the runtime: I would challenge that to say that it is very likely that there is no developer who holds a complete mental model of an entire runtime at any given point. As with anything of this scale you understand individual parts in their entirety and have general assertions about the rest until specifically revisited, even if you are the sole developer. In this case specifically, Bun has been largely AI driven for quite a while anyway so it is even more unlikely that the developers ever had a complete picture in the first place. If you trusted them before, then nothing has changed.
It's not lost on me that code logic can be subtly incorrect even as tests are passing either, but there isn't exactly a lot of grey area in this particular context. Does your code compile or not? If it builds as expected, then your own unit tests will highlight the difference.
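To illustrate that caveat with a toy, entirely hypothetical example (no relation to Bun's code): a subtly wrong function can compile cleanly and pass a weak test while still being incorrect.

```typescript
// Hypothetical: intended to sum the first n elements of xs.
// The off-by-one (slice(0, n - 1)) compiles fine and passes a weak test.
function sumFirst(xs: number[], n: number): number {
  return xs.slice(0, n - 1).reduce((acc, x) => acc + x, 0); // BUG: drops the nth element
}

// A weak "unit test" that happens to pass anyway:
// sumFirst([5], 2) === 5, because slice(0, 1) still includes the only element.
```

The build succeeds and the weak test is green; only a test that exercises a longer input exposes the dropped element.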
Anthropic bought it in a somewhat dumb attempt to solve their "performance" issues (not realizing their horrible code was the issue in the first place).
It probably helped them, simply because they brought in some actually competent developers.
But doing so, Bun went from being a public project to more of an internal tool for Anthropic, spoiled for now with AI money and losing quite a bit of focus.
Let's hope that when the bubble pops, some of the Bun effort could at least be salvaged. I don't see Anthropic maintaining it long term, they are simply not in the business of selling support for a runtime nor have the (Google) scale justifying maintaining one on the side.
Yep, the Anthropic acquisition, this petulant Rust rewrite, and bun's increasingly buggy releases (slop) have caused me to migrate my projects (personal and work) to nodejs+pnpm.
The risks of using bun are no longer just those concerns around a newer tech and "drop-in" replacement for nodejs. Now you have to marry Anthropic, Rust, and a founder with conflicting priorities.
How exactly will waiting a year or two make this effort appear “characterized by impatience and grumpy annoyance”, as opposed to the people right now who are loudly bemoaning an engineer trying something out as an experiment?
> It can take a whole day to find 10 good lines to write.
So we've come full circle to code writing speed being a factor again? :)
In all seriousness, this just feels like a never-ending list of attempts to resist any notion that LLMs might accelerate software development, however small the increment. The original article was arguing that organization and collaboration were the bottleneck, and that taking a whole day to think about the code was not.
And sometimes an LLM can find those 10 lines in 10 minutes. Or it can find 100 and you cut them down to 10 in two hours total. Yes, I've seen this in practice. The amount of code an LLM can tirelessly ingest is superhuman.
Twenty minutes ago Claude digested about a dozen pipeline definitions, a sequence of build files and targets, read the scripts they use, found the variable that I could reuse for my purposes, and made the appropriate change in the right (looking) place, so that I'd be able to achieve my goals.
I could not have done this nearly as quickly. On the other hand, I gave it clear, precise instructions.
The best code is no code. The second-best code is the code I delete.
My favorite JIRAs are the ones I prevent from being worked on in the first place because they were unnecessary.
The ideal prompt is the one I don't fire because it would be a waste.
In an application with an LLM component, the ideal amount of inference is zero.
Ultimately this seems to lead to "the ideal amount of computers in the world is none" but for the sake of my continued employment let's let that one go by. :)
You took the words right out of my mouth. Thank you. Great example. I have ground away at AI extensively, 14 hours a day, on my own project for months. I’ve been using AI since GPT-2.
I maxed out the Claude Max $200 subscription, and before that I justified spending $100/day.
And it was worth it, not because it wrote me such good code, but because I learned the lessons of software engineering fast. I had exactly the ride you are describing. My software was incredibly broken.
Now I see all the cracks, lies, and "barking up the wrong tree" issues clearly.
NOW I treat it as an untrustworthy search engine for domains I’m behind in. I also use next-edit prediction and auto-complete, but I don’t let AI make any edits to my codebase anymore.
I deleted 75,000 lines of code from my codebase in the last 2 months, and that was tremendously more useful to my business than the 75,000 the AI had written in the 2 months before...
> I know full well that if you ask Claude Code to build a JSON API endpoint that runs a SQL query and outputs the results as JSON, it’s just going to do it right. It’s not going to mess that up. You have it add automated tests, you have it add documentation, you know it’s going to be good.
I feel like this is just not true. A JSON API endpoint also needs several decisions made:
- How should the endpoint be named?
- What options do I offer?
- How are the properties named?
- How do I verify the response?
- How do I handle errors?
- What parts are common in the codebase and should be re-used?
- How will it potentially be changed in the future?
- How does the query run; is it optimized?
…
If I know the answer to all these questions, wiring it together takes me LESS time than passing it to Claude Code.
If I don’t know the answer the fastest way to find the answer is to start writing the code.
Additionally, while writing it I usually realize additional edge cases, optimizations, better logging, observability, and more.
The author clearly stated the context for this quote is production code.
I don’t see any benefits in passing it to Claude Code. It’s not that I need 1000s of JSON API endpoints.
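As a concrete illustration, here is a minimal sketch of what a few of those decisions look like once made explicit. Everything in it (the `listTodoItems` name, the envelope shape, the limit rules) is a hypothetical example, not something from the thread:

```typescript
// Hypothetical sketch: some of the "decisions" above made explicit for one endpoint.
// Naming, error shape, and validation rules here are illustrative assumptions.

type TodoItem = { id: number; title: string; done: boolean };

// Decision: a stable response envelope so errors and data share one shape.
type ApiResponse<T> =
  | { ok: true; data: T }
  | { ok: false; error: { code: string; message: string } };

// Decision: validate inputs before touching the database.
function parseLimit(raw: string | undefined): number | null {
  if (raw === undefined) return 20; // decision: default page size
  const n = Number(raw);
  return Number.isInteger(n) && n > 0 && n <= 100 ? n : null;
}

// Decision: the handler is a pure function over (params, queryFn),
// so the SQL layer can be swapped or mocked in tests.
async function listTodoItems(
  rawLimit: string | undefined,
  queryFn: (limit: number) => Promise<TodoItem[]>,
): Promise<ApiResponse<TodoItem[]>> {
  const limit = parseLimit(rawLimit);
  if (limit === null) {
    return { ok: false, error: { code: "bad_request", message: "limit must be 1-100" } };
  }
  try {
    return { ok: true, data: await queryFn(limit) };
  } catch {
    // Decision: never leak raw SQL errors to the client.
    return { ok: false, error: { code: "internal", message: "query failed" } };
  }
}
```

Each comment marks one of the decisions from the list; none of them are settled by a one-line prompt.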
> If I know the answer to all these questions, wiring it together takes me LESS time than passing it to Claude Code.
That's just not true, and if it is in your case, then you're not great at writing prompts yet.
> Take the todo_items table in Postgres and build a Micronaut API based around it. The base URL should be /v1/todo_items. You can connect to Postgres with pguser:pgpass@1.2.3.4
That's about all it takes these days. Fewer lines than your average controller.
Every day I do something where the llm writes it ten times faster than I would with twice the test coverage.
And every day I do something else where the LLM output is off enough that I end up spending the same amount of time on it as if I'd done it by hand. It wrote a nice race condition bug in a race I was trying to fix today, but it was pretty easy for me to spot at least.
And once a week or so I ask for something really ambitious that would save days or even weeks, but 90% of the time it's half-baked or goes in weird directions early and would leave the codebase a mess in a way that would make future changes trickier. These generally suggest that I don't understand the problem well enough yet.
But the interesting things are:
1) many of the things it saves 90% of the time on are saving 5+ hours
2) many of the things I have to rework only cost me 2+ hours
3) even the things that I throw away make it way faster to reach the "oh, we don't understand this problem well enough yet to make the right decisions here" conclusion than starting out on that project without assistance would
This. There is definitely a ratio. A year ago, it was 50/50. It felt better because the hard things it did fast while I sipped coffee outweighed in my mind the negatives.
Now that ratio is swinging way over in the LLM's favor.
How do you reconcile that with your example prompt, which demonstrates no skill requirement whatsoever. It’s the first thing any developer would think of.
It’s simple but contains all the necessary info. You can say “build an endpoint to get user data” and it will absolutely do something, but it might be stupid, and when you compound 1000 stupid prompts like that you get spaghetti.
It doesn’t contain any information at all about the structure of the JSON output. Is this a greenfield endpoint where anything will work, or does it need to conform to an existing API? What about response codes for different failure modes? What about logging?
Your comment exemplifies what a lot of people complain about vibe coding: it works great for greenfielding CRUD apps, but it’s a bitch to use in a real code base.
On a real codebase there’s going to be v5, the newest version; v4, which we planned to migrate off of but had to keep around for iOS clients; and v3, which no one except a dozen huge enterprise customers uses. But we need to support all 3 styles. This stuff is sort of documented, but not completely, and there’s a push by some people in the org to use the v5 style for every new feature, but there’s one director pushing back on that. So you need to go talk to a few people and get enough consensus to CYA before deciding what to do.
Some version of that happens in every big company or every long running app. Claude isn’t AGI and that prompt isn’t nearly specific enough for anything outside of greenfield.
I actually believe it just surfaces them; humans will tolerate ambiguity like that and deal with it. AI agents either won't work properly or will just fail to do anything useful.
Every group of humans has shit processes. So if you’re trying to use AI in an actual company, today, that prompt won’t work properly or will just fail to do anything useful.
I've drunk the AI koolaid so I'm not a hater, but to say "you're just not prompting right" is such a cop-out. Prompting right takes a metric fuck ton of effort. I'm actually kinda agreeing with you: if you make it so your dev environment is sufficiently harnessed, then you can give it one-liner magic prompts. But getting there, learning to get there, paying that cost: hot mother of god, it's a lot of effort.
Communicating, in words, is extremely hard. I don't think this should be as controversial as it seems in the prompt era.
VS: someone has mastered one of the myriad openAPI generators, and it's shipped.
it does take a little while to get good at this new skill, yes. Just like, say, learning a new programming language and the ecosystem around it takes some effort. After you get over the hump it's really very straightforward and mostly a matter of knowing the kinds of mistakes the LLM is likely to make ahead of time, and then kindly asking it to do something smarter. If you've successfully mentored junior engineers you already have this skill.
That's well put. But I'd stress that mentoring junior engineers is really a high-effort, high-leverage, high-demand skill. A good teacher is gold, and not common.
I'll go in the other direction and say that if you're spending a lot of your time learning to prompt better then you're wasting it because LLMs are only going to get better at understanding your intent regardless of "prompt engineering". The JSON API example to wire up a database can be one-shot pretty easily by the latest models without much context and without setting up any harness. The more time you spend perfecting your harness, the more time you would have wasted when the next model comes out to make it obsolete.
The hardest thing about software engineering has always been that your intent often has to be decided on the fly once you get into complicated edge cases, weird-or-legacy-business requirements, or things that the spec literally has no answers for.
Letting the tool figure out your assumed intent on those things is a double-edged sword. Better than you never even thinking of them. But potentially either subtle broken contracts that test coverage missed (since nobody has full combinatoric coverage, or the patience to run it) or just further steps into a messy codebase that will cost ever-more tokens to change safely.
I was thinking of this interpretation as I read that:
"I'll go in the other direction and say that if you're spending a lot of your time learning to [program] better then you're wasting it because [computer]s are only going to get better at [computing] regardless of "[software] engineering". The JSON API example to wire up a database can be [run] pretty easily by the latest [computer]s without much [design] and without setting up any [optimizations]. The more time you spend perfecting your [program], the more time you would have wasted when the next [computer] comes out to make it obsolete."
I don't think it does. If I had to guess, the top comment was using an older version of AI or a local model which wouldn't be able to solve the JSON API task. A lot of AI skepticism comes from people who used it once a while back and decided not to keep up with the latest developments. If I only had experience with gpt-3.5 then I'd also assume what the original commenter said.
An experiment I'd love to do, but which isn't actually possible anymore, is run GPT 3.5 or the original 4 API release through a modern "agentic" harness for a task like this.
I think 3.5 would probably need more frequent intervention than a lot of harnesses give. But I bet 4 could do a simple JSON API one-shot with the right harness. Just back then I had to manually be the harness.
I disagree it's a cop-out, but I agree it's hard to get good at writing prompts and takes a lot of effort. But so is programming. We're trading one skill set for another and getting a bigger return on it.
I started as a skeptic and have similarly drank the kool-aid. The reality is AI can read code faster than I can, including following code paths. It can build and keep more context than I can, and do it faster as well. And it can write code faster than I can type. So the effort to learn how to tell it what to do is worthwhile.
Yep, fully agree. I'm taking issue with the flippant "not prompting right", as if they're holding it upside down, vs. it actually being a meaningful skill you have to invest in. So it's fully believable that someone trained in normal code-gen is much more proficient up front.
This seems disingenuous. Even if your premise is true (which I don't think it is), it only really holds for the first few endpoints. Most systems have many, and the models are very good at copying established patterns, to the point that you wouldn't normally have to re-explain every detail for every endpoint. So you might be right for the first (you're not), but you're definitely wrong for the next 50.
Writing code, to me, is not slower than writing text.
When I write code, every character I type has less ambiguity than when I write in human language. I also have the help of LSPs, linters, and auto-completes.
I use AI to look things up, and I try to learn. That part is sped up, but once I know how X works I’m faster doing it myself. My assumption is that most people who see things differently compare their performance when not knowing how X works against Claude, but not against someone who’s really good at X. Which makes a lot of sense, given LLMs are prediction generators. My take is that the best use of AI is to get you to the point where you are really good at X, and then naturally your AI usage will go down.
what my experience says is that, when you get "really good" with X, then you can easily write a prompt that says exactly how it needs to be done and you'll be able to do it much faster than writing it all yourself because you know the important parts and the rest is just glue.
I have a similar sentiment. Who is making the claim that AI writes code fast matters a lot, because some programmers heavily use LSPs, linters, auto-completes, key bindings, snippets, CLI commands, etc. to speed up writing code.
It's not much to go on, but I kinda feel ya. I think one exception I'd perhaps make is doing a large mechanical refactor. I find them incredibly daunting, so I'll just ask AI for that. I mean, it probably takes me a similar time to do, but it feels less daunting.
I've been trying to get into agentic coding, and there are non-refactoring cases where I might reach for it (like any time I need to work on something using Tailwind; I'm dyslexic and, not exaggerating, I'd get actual headaches trying to decipher Tailwind gibberish while juggling their docs before AIs came around).
I use Jetbrains features for that usually, it has great tools for that.
Let's say on that JSON API I want to extract part of the logic into a repository file: I Ctrl+W the function, then I have almost all of my shortcuts on left Alt plus two-character chords. Once it's marked, I do LAlt+E+M for Extract Method, which puts me in an intermediate step to rename the function, and then LAlt+M+V for MoVe, which puts me in an interface to name the function.
Once you're used to it, it's like a gamer doing APM, and it's deterministic and fast. I also have R+N (rename), G+V (generate vitest), Q+C (query console), Q+H (query history) and many more. Really useful. Probably also doable with other editors.
I highly recommend looking into codemods for larger mechanical refactorings. I did things like converting large test suites from one testing library to another by having codex write a codemod to convert it as a first pass.
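To make "codemod as a first pass" concrete, here is a sketch of the idea with an entirely hypothetical two-rule transform from assert-style to expect-style assertions. Real migrations usually use an AST tool (jscodeshift, ts-morph) rather than regexes:

```typescript
// Hypothetical first-pass codemod: rewrite assert-style calls to expect-style.
// A regex pass like this is only safe for simple, mechanical patterns;
// anything with commas or parens inside the first argument needs an AST tool.

const rules: Array<[RegExp, string]> = [
  // assert.equal(a, b)     -> expect(a).toBe(b)
  [/assert\.equal\(([^,]+),\s*([^)]+)\)/g, "expect($1).toBe($2)"],
  // assert.deepEqual(a, b) -> expect(a).toEqual(b)
  [/assert\.deepEqual\(([^,]+),\s*([^)]+)\)/g, "expect($1).toEqual($2)"],
];

function transform(source: string): string {
  // Apply each rewrite rule across the whole source file.
  return rules.reduce((src, [pattern, replacement]) => src.replace(pattern, replacement), source);
}
```

Run over a test suite file by file, this gets the bulk of the mechanical conversion done before a human (or agent) cleans up the stragglers.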
I use voice-to-text, and for me coding is way faster now. You don't need to sit down and type up a perfect spec, lol. I give it terrible prompts with poor grammar and typos from incorrect transcriptions, and it does an amazing job. Definitely not perfect; I iterate with it a ton, but it's still faster than typing it out by hand.
You're still typing? I don't know how fast you can type, but I can speak way faster than I can type. Somewhere in the neighborhood of 300 wpm. Speech-to-text is pretty good now, and prompting an AI means I'm not trying to speak curly brace semicolon new line.
Average speaking speed for English speakers is 100-120 wpm for complex topics. I type 130 wpm peak, and I have the most common coding characters on my home row using the Neo layout.
This may have been a problem a year or two ago but any premium model will be exploring the codebase to check similar routes to answer all these questions, if you don't specify them.
Exactly.
As long as the codebase is consistently following some given patterns, LLMs nowadays stick to it.
Understanding that limiting the number of “design patterns” in a codebase made it better (easier to code and understand) was a good proxy for seniority before LLMs.
Now it’s even better: if all of a sudden “unusual code” is in a PR, either the person opening the PR or the one reviewing it has lost touch with the codebase.
Very important signal, since you don’t want that to happen with code you care about.
This is just bizarre to me. Do people not use Plan mode?
I start by telling the agent what I'm trying to accomplish, and then I throw in some questions like this, concerns I have, edge cases I've thought about, whatever. It goes out and does all the research, both in my code base and beyond, asks me questions where it needs clarification, and then writes me a plan. I review the plan, we go back and forth a bit with adjustments to the plan, and then the plan is ready for implementation. At that point, the implementation is mostly a formality, because all of the difficult parts are already done.
On top of that, most of what you've described as decisions that need to be made are either trivially made by a frontier model without even needing to be told, or stuff I can bake into my skills so I don't need to specify it on every task.
Given the above, I can't fathom an approach where I'd be faster without AI than with it, because the acceleration is the planning / decision-making, not the implementation. Whether the implementation takes the agent two minutes or six hours really doesn't matter, because I'm not involved at that point.
Yeah, I can, and I’ve done it, and for fun projects it’s fun and cool. But it’s like using templates to build your website: you’ll be annoyed, and at some point your project goes into the endless graveyard of abandoned projects.
I think most people are finding the opposite. Claude Code is not only reducing how many projects get abandoned, it's also resurrecting projects from the graveyard.
The number of Show HNs recently that have a days worth of commits and are never touched again disagrees. It's creating a lot of projects that are immediately abandoned
I think it's a direct reflection of the fact that most people really prefer to go-go-go and not spend time up front thinking about what their project even is, why it matters, and whether it's worth dedicating resources to it. The abandonment usually reflects the answer: no, it was not worth it.
There is a difference between a project that is eventually abandoned out of annoyance because you couldn't accomplish what you wanted and a project that gets a day or two of attention and then gets aborted because you figured out it wasn't worth it or got interested in something else. I think the parent comment is talking about the former and I'm responding to that, while you're talking about the latter.
I don’t want every verb implemented, and I also don’t want an IETF standard. I want as little as possible, so I have to worry about as little as possible in the future.
Use cases differ; you described a complete REST API, which can be as much of a problem as too little.
By the time it has explored the codebase, asked me follow-up questions, suggested the code change, and incorporated my fixes, after losing time on the context switch, plus the extra time I need when somebody requests a change in 3 months to rebuild the mental model, I’m way faster just writing it myself (mental model included).
If it's genuinely the case that you can write code faster than you can prompt it into existence then you're not being ambitious enough with your coding agent. Ask it to do more. Tackle bigger problems.
1. It's unclear why creating more code faster is a good thing. Software engineering wisdom for decades has been that code is a cost, not a product. There are great reasons for that, which haven't changed with the appearance of LLMs.
2. There absolutely are cases where modifying code "manually" is unquestionably faster than prompting an LLM. There are trivial examples of this: e.g., only an insane person would ask an LLM to rename a variable rather than using an LSP for that; it would provably and consistently take more keystrokes. There are less trivial examples as well, like having an understanding of your codebase and using good abstractions/libraries within it that let you make large changes to the program's behavior with little boilerplate code.
One can argue that producing a lot of complex changes through an LLM is faster, which I would agree with, but then see point #1. Sustainable software development has up to this point relied on iterative discovery of the right small components that together form a complete, functional, stable system (see "Programming as Theory Building").
There's zero indication so far that LLMs are capable of speeding up the process of creating complete, functional, stable systems. What every org within my career and friend circle is seeing (and research into productivity impacts of LLMs on software development is showing) is the same story - fast prototypes that either turn into abandonware, personal tools, or maintenance nightmares.
1. More code faster is not the goal. More features / value faster is the goal. Obviously to get there you need to write more code, but it's not writing code for code's sake.
2. Yes, true, but the point is to move up the abstraction hierarchy, so instead of asking the LLM to rename a variable you describe the concrete business goal you're trying to achieve.
It is true that coding agents cannot build fully complete stable systems completely unguided yet. That's why we still have jobs. But it's wrong to suggest that they don't deliver value or that they're destined to produce trash every time. It is a matter of oversight and guidance and setting your codebase up for success. That does require work, but it is not impossible, just a different skillset from the ones we've been used to.
I've tried to implement REST to its exact specifications before; it turns out less common verbs like DELETE aren't implemented the same way across platforms and libraries, because the IETF never specified it. This means no standardization regarding variables in the path vs. the body vs. headers, with some libraries, or even OS-level behavior, preventing one option while the server may be looking for it.
This incongruence pushes people back to using just the GET and POST methods in flexible and overloaded ways.
Agentic engineering knows all the best practices and ways to get around these limitations in the most compatible way, and it cranks out full APIs with all the verbs.
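One widely used workaround for that incongruence is tunneling the verb through POST via the conventional (not IETF-standardized) `X-HTTP-Method-Override` header. A minimal server-side sketch, with the allow-list chosen here purely for illustration:

```typescript
// Sketch: tunnel less-portable verbs through POST using the conventional
// X-HTTP-Method-Override header. The allow-list below is an assumption.

const TUNNELABLE = new Set(["DELETE", "PUT", "PATCH"]);

// Resolve the verb the client actually meant.
function effectiveMethod(method: string, headers: Record<string, string>): string {
  const override = headers["x-http-method-override"];
  // Only honor overrides on POST, and only to a known verb; otherwise a
  // proxy-stripped or spoofed header could silently change semantics.
  if (method.toUpperCase() === "POST" && override && TUNNELABLE.has(override.toUpperCase())) {
    return override.toUpperCase();
  }
  return method.toUpperCase();
}
```

Clients that cannot send a DELETE with a body (or a DELETE at all) issue `POST /items/123` with `X-HTTP-Method-Override: DELETE`, and the router dispatches on the resolved verb.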