I can't help but think that curl is, by nature, a relatively simple and well-contained tool. Compare it to an operating system, a web browser, a database, or a billion-dollar company's codebase.
It makes some sense that Mythos/ChatGPT 5.5 might be that much better at handling complexities that curl just doesn't have, because it's a basic tool.
Like yeah curl is obviously extremely fully featured as an "anything client" but it's orders of magnitude less complex than other software we rely on.
Curl is a lot more complicated than, I believe, you think. Most people know of it simply as a CLI to hit an HTTP(S) endpoint and write it out. But:
1. It supports basically any file transfer protocol.
2. It is a library that is designed for long running processes.
3. Because it's designed for long running processes, it makes use of every trick it can to pipeline and re-use connections and resources.
4. It has an asynchronous API so it can be integrated into any existing event loop.
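To give a flavor of point 4, here's a minimal sketch of how the multi interface is typically driven (placeholder URL, no error handling; a real integration would hand curl's sockets to the application's own event loop via curl_multi_socket_action rather than this simple loop):

    /* Sketch: one transfer driven through libcurl's non-blocking multi API. */
    #include <curl/curl.h>

    int main(void)
    {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        CURL *easy = curl_easy_init();
        curl_easy_setopt(easy, CURLOPT_URL, "https://example.com/");  /* placeholder */

        CURLM *multi = curl_multi_init();
        curl_multi_add_handle(multi, easy);

        int still_running = 1;
        while (still_running) {
            curl_multi_perform(multi, &still_running);   /* never blocks on the network */
            curl_multi_poll(multi, NULL, 0, 1000, NULL); /* wait up to 1s for socket activity */
        }

        curl_multi_remove_handle(multi, easy);
        curl_easy_cleanup(easy);
        curl_multi_cleanup(multi);
        curl_global_cleanup();
        return 0;
    }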
Is a web browser or database more complicated? Most certainly, they solve really massive problems. But curl is certainly more complicated than probably most application code that uses it.
I agree it's rather basic, but as stated in the article, its code is still longer than War and Peace. There are still plenty of opportunities for security vulnerabilities in something of that size.
"curl is currently 176,000 lines of C code when we exclude blank lines. The source code consists of 660,000 words, which is 12% more words than the entire English edition of the novel War and Peace.
...
curl is installed in over twenty billion instances. It runs on over 110 operating systems and 28 CPU architectures. It runs in every smart phone, tablet, car, TV, game console and server on earth."
curl is dealing with the complexity of HTTP.
Even a simple, basic request to some website is going to exercise a lot of code paths dealing with all sorts of response codes (redirects, etc.), headers, and so on.
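As a rough illustration (placeholder URL, minimal error handling), even this "basic" transfer drags in redirect following, header parsing, TLS, timeouts, and status handling inside libcurl:

    /* Sketch: a "basic" GET that still exercises redirects, headers, TLS, timeouts. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void)
    {
        CURL *h = curl_easy_init();
        if (!h)
            return 1;

        curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");  /* placeholder */
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);           /* follow 3xx redirects */
        curl_easy_setopt(h, CURLOPT_MAXREDIRS, 5L);
        curl_easy_setopt(h, CURLOPT_TIMEOUT, 30L);

        CURLcode rc = curl_easy_perform(h);        /* body goes to stdout by default */
        if (rc != CURLE_OK)
            fprintf(stderr, "transfer failed: %s\n", curl_easy_strerror(rc));

        long status = 0;
        curl_easy_getinfo(h, CURLINFO_RESPONSE_CODE, &status);
        printf("final HTTP status: %ld\n", status);

        curl_easy_cleanup(h);
        return 0;
    }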
It's likely that new Rust code would introduce more bugs, while curl is extremely well tested at this point.
It's worth taking a minute to appreciate the level of long-term thinking required for storing data: planning 300-500 years into the future, withstanding all kinds of innovation, and surviving basic obsolescence.
This is a scam. Bootstrapping k3s is extremely easy. This tool lies to you, giving the impression that it's difficult or that you need a script, and tries to sell you a monthly pro subscription whose features are also completely trivial.
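For reference, the documented k3s quick-start install is a single command:

    curl -sfL https://get.k3s.io | sh -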
The algorithm description was a bit confusing for me.
The SIMD part is just in the last step, where it uses SIMD to search the last 16 elements.
The Quad part is that it checks 3 points to create 4 paths, but also it's searching for the right block, not just the right key.
The details are a bit interesting. The author chooses to use the last element in each block for the quad search. I'm curious how the algorithm would change if you used the first element in each block instead, or even an arbitrary element.
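For what it's worth, here's my reading of the scheme as a plain-C sketch (not the author's code): keys sit in blocks of 16, three probes against the last element of evenly spaced blocks cut the candidate block range four ways, and the final block is then scanned, which is where the real implementation uses SIMD (a plain loop stands in for it here).

    /* Hedged sketch of my reading: pick the block with 3 probes (4-way split),
     * then scan the final 16-element block (SIMD in the real version, a plain
     * loop here). Returns the index of the first key >= target. Assumes keys[]
     * is sorted and n is a multiple of 16. Not the author's actual code. */
    #include <stddef.h>

    #define BLOCK 16

    static size_t quad_block_search(const int *keys, size_t n, int target)
    {
        if (n == 0 || target > keys[n - 1])
            return n;                           /* target is past every key */

        size_t lo = 0, hi = n / BLOCK;          /* range of candidate blocks */

        while (hi - lo > 1) {
            size_t span = hi - lo;
            size_t q1 = lo + span / 4;
            size_t q2 = lo + span / 2;
            size_t q3 = lo + 3 * span / 4;

            /* Probe the LAST element of three blocks -> four possible sub-ranges. */
            if (target <= keys[q1 * BLOCK + BLOCK - 1])      hi = q1 + 1;
            else if (target <= keys[q2 * BLOCK + BLOCK - 1]) { lo = q1 + 1; hi = q2 + 1; }
            else if (target <= keys[q3 * BLOCK + BLOCK - 1]) { lo = q2 + 1; hi = q3 + 1; }
            else                                              lo = q3 + 1;
        }

        /* Last step: search within the one remaining block. */
        size_t base = lo * BLOCK;
        for (size_t i = 0; i < BLOCK; i++)
            if (keys[base + i] >= target)
                return base + i;
        return base + BLOCK;                    /* unreachable given the guard above */
    }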
Is it wise to understand everything that AI does for you?
Let’s say a person has 10 units of learning per week. Is the author actually claiming that that person must not deliver any results beyond their 10 units?
It makes some sense to have, say, 20 units of results and to prioritize which ones to fully comprehend.
I suspect APIs / libraries / languages / platforms will have more churn due to AI: a new platform or new system means more to learn. Once every 5 years might become every year, or even more often. That would be a sort of inflation of knowledge and skills, and it would affect the decision making about how to spend one's 10 units per week.
> Let’s say a person has 10 units of learning per week.
This is… not how humans work? If you have the time and energy to learn ten things, and then spend time babysitting a random number generator to produce evidence of 10 more units of work, you’re paying an opportunity cost compared to someone who spends the time learning an eleventh thing. You can argue who has more short term value to a company… but who is the wiser person after a thirty year career?
> Is the author actually claiming that that person must not deliver any results beyond their 10 units?
No, I'm claiming that if someone or something else produced your 10 units of work, you better be able to verify that those 10 units of work are of at least the same quality as you producing them yourself. This is the bare minimum and not something to shift onto other people reviewing your work.
Beyond that, if that's all you do, you are basically proving you're replaceable. If you're smart, you'll reallocate intellectual capacity that was freed up by A.I. onto something A.I. can't do today.
Cleanups: I want to do a `helm uninstall` and have all the manifests go away at once instead of looking around for N different resources.
Hooks: I want to apply my database migrations and populate the database with static datasets before I deploy my application, without having my CI connect to the database cluster (at places I've worked, the CI cluster and K8s cluster were completely separate).
Regarding cleanups: I'm using Flux CD with kustomize. It tracks the resources that it created. If I delete a manifest from my repository, Flux will delete the resources that were created from it. For me that's pretty much the ideal workflow.
Regarding hooks: I don't know. All applications that I've used implemented migrations internally (it's usually Java with Flyway), so I don't need to think about it. One possible approach could be to use Flux CD with a Job definition. I think Flux will re-create the Job when it changes, so if you change the image tag, it'll re-create the Job and trigger Pod execution. But I didn't try this approach, so I'm not sure if that would work for you.
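Concretely, that tracking and garbage-collection behavior is the Kustomization's prune setting; roughly something like this (names and path are placeholders):

    apiVersion: kustomize.toolkit.fluxcd.io/v1
    kind: Kustomization
    metadata:
      name: apps
      namespace: flux-system
    spec:
      interval: 10m
      path: ./apps            # placeholder path in the Git repository
      prune: true             # remove cluster objects whose manifests were deleted from the repo
      sourceRef:
        kind: GitRepository
        name: flux-system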
> I want to apply my database migrations and populate the database with static datasets before I deploy my application, without having my CI connect to the database cluster
A Job feels like a good fit for this. CI deploys the Job without connecting to the DB, and the Job runs the migrations using the same connectivity as the application.
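Something along these lines (image, command, and secret name are made up); CI just applies the manifest, and the Job talks to the database from inside the cluster the same way the app does:

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: db-migrate                     # placeholder name
    spec:
      backoffLimit: 3
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: migrate
              image: registry.example.com/myapp-migrations:1.2.3   # placeholder image
              command: ["/app/migrate", "up"]                      # placeholder migration command
              envFrom:
                - secretRef:
                    name: db-credentials   # same DB credentials the app itself uses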
> apply my database migrations and populate the database with static datasets before I deploy my application
You could a) have the app acquire a lock in the db and do its own migrations, or b) create a k8s job that runs the migration tool, but make sure the app waits for the schema to be updated or at least won't do anything bad.
There are a multitude of operations that need to be performed before and after specific actions in K8s. It depends on the resource, operator, operational changes, state, bugs, order of operations, and more.
When I play Bitburner, if I want to run it in the background, I have to run the game in Firefox or Chrome. It's a shame, because Safari actually gives the best performance by quite a large margin.
What research? Where is it published?