I feel like this article was circling a point it never actually got to. All the advice in here (except controlling scope creep) is specific to a TUI with an Elm-like architecture.
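For readers who haven't seen the term: an "Elm-like architecture" keeps all state in a single model, feeds events in as messages, and computes the next state with a pure update function. A minimal sketch of the idea, with made-up Model and Msg types (not from the article):

```typescript
// Elm-style loop: state lives in a Model, events arrive as Msgs,
// and a pure update() maps (Msg, Model) -> next Model.
type Model = { count: number };
type Msg = { kind: "increment" } | { kind: "decrement" };

function update(msg: Msg, model: Model): Model {
  switch (msg.kind) {
    case "increment":
      return { count: model.count + 1 };
    case "decrement":
      return { count: model.count - 1 };
  }
}

function view(model: Model): string {
  // A real TUI would render widgets; a string stands in here.
  return `count = ${model.count}`;
}

let model: Model = { count: 0 };
const events: Msg[] = [{ kind: "increment" }, { kind: "increment" }];
for (const msg of events) {
  model = update(msg, model);
}
console.log(view(model)); // → "count = 2"
```

The point of the pattern is that all state transitions funnel through one pure function, which is what makes this style of app easy to give an AI narrow, testable tasks in.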
But here's the thing: you almost never know what the architecture is up front. If you do, you probably aren't the one writing the actual code anymore. Writing the code, with or without an AI, is part of the design process. For most people, it isn't until they've tried several times, fucked it up a bunch, and refactored or rewritten even more that they actually know what the architecture needs to be.
This is such a bad faith argument. How long would it take a dev, or a team of devs, to do this with the same architecture and test suite? A hell of a lot longer than six days.
But what is the purpose? When you rewrite a project in another language, it's so engineers can maintain and further develop the project better on some metric, thanks to advantages of the new language. That doesn't hold when an LLM does the rewrite, since no one understands the code afterwards.
It's a good demonstration of capabilities, sure, but the result itself makes no sense. We'll have to figure out where these capabilities can bring real advantage.
> When you rewrite a project in another language, it's for engineers to be able to maintain and further develop the project better
I don't think that is the case here. Bun is pretty much using AI to write all of its code, with a human reviewing it. Zig exists as a language to provide a nice DX over C and Rust, not to be memory safe. If you are using an LLM to generate the code, the DX benefits are gone, so why would you ever choose Zig over Rust?
I agree that the comment is insightful, but I don’t think AI companies are particularly promoting rewrites, other than that it’s a task LLMs are good at as “the code is the spec”.
The industry as a whole is still realizing that any LLM usage that writes all the code for you creates cognitive debt, and that we're slowly losing the skills of the craft.
I’m trying my best to navigate this myself, but no matter what we do, using LLMs is both a blessing and a curse.
Because no one has written it. You can't ask the guy who wrote it, not because he has left, but because he does not exist. Also, it often reads weirdly.
I disagree with calling this bad faith. For instance:
* I can give you one quarter of amazing profits, if you let me dismantle and sell all the assets of a company.
* I can give you a few years of incredible food production, if you let me strip a rainforest and plant commercial crops.
* I can give you incredibly cheap energy, if you let me mine non-renewable fossil fuels from the earth.
The context of why something is possible matters. In this case, it was possible because a very large and comprehensive test suite was seen as a necessity for a successful, human-managed project. I do not believe an LLM-coded project could ever have produced such a test suite. Here, the LLM is consuming the result of expensive human labor (the test suite) to produce what is ultimately a minor variation on it (a change of implementation language).
> This is such a bad faith argument. How long would it take a dev, or a team of devs, to do this with the same architecture and test suite? A hell of a lot longer than six days.
A pocket calculator can also multiply numbers much faster than an engineer; that doesn't make it an engineer.
People want to use stuff like this as evidence that AI can write entire software systems in a few days. We saw the same shit with the "compiler" they made with a bunch of agents. Literally the only reason it's possible is the hundreds of thousands of man-hours, and God knows how much money, poured into the reference projects before the AI got anywhere near them.
Replicating this kind of thing on a greenfield project would take an absolute ton of spec work and requirements derivation, which would substantially eat into any savings from having AI generate the code.
The accomplishment itself is interesting, and unlocks opportunities to do work no one would have bothered with before, but it doesn't represent what a lot of people desperately want it to.
I am not sure why people sound so astounded, to be honest. This has been my frank experience of the agentic tools both Codex and Claude since about December.
When given the right constraints this kind of thing is entirely conceivable.
However, the important question not being answered here is: does anybody working on it have a full understanding of what has been built?
My experience having built similar projects with these tools is: yes, you could do this in a week or two, but then you'll have a month or two of digging through what it made, understanding what was built, and undoing critical yolo leaps of faith it took that you didn't want.
Not to mention that even attempting something like this from scratch would take hundreds of hours of spec work. I see it all day, every day in the aerospace sector. Software engineers have absolutely no idea what deriving a design document and all its associated artifacts actually looks like, and they're in for a rude surprise if the industry really does shift hard in that direction.
You read an article like this, and despite some flaws, it restores your faith in humanity a little bit. Maybe I'm not the only one looking at the shitshow in horror.
Then you come to the comment section and are immediately reminded why the whole god damn world has lost its mind.
Three of the top four comments don't even directly engage with the point of the article. They're nitpicking elements of it or going off on tangents only somewhat related.
Karmawhore strategy: F5 on /new, scan the article, then race to object to some minor sentence. Upvotes mean you're a smart lil guy. Happens frequently.
"Hey use our thing! It's totally going to replace all humans! It's so awesome it can do your job for you!
...
BTW, it hallucinates... like, all the time. And if you don't fact-check our product, you're responsible, not us! We stole all your data to train it, we're making billions and billions of dollars off that theft, but you own all the liability."