I wrote tens of thousands of lines of code before Google and SO.
I also enjoy using AI. It makes it easier to get mundane work done quickly. "Junior devs who never get tired" is a great analogy. It's a force multiplier, and for people whose time is eaten up by meetings, people management, planning, etc., it makes it possible to get a lot done in the time that's left. I can relate to more junior people being worried, and to some senior people's concerns about quality, though. I get a task done, review it, get another task done. I won't let it build something large on auto-pilot.
One thing that should be noted is that life was simpler back then. You could know the syntax of C or Pascal. You knew all the DOS calls or the standard libraries. You knew BIOS and the PC architecture. I still used reference manuals to look up some details I didn't have in my head.
Today software stacks tend to be a lot more complicated.
The practice of software engineering is not what they teach in university.
I would say that today's graduates are IMO a bit better than a few decades ago, but there are still many graduating who are just not good at writing computer software and don't really have the aptitude for it (or maybe the interest in getting good). That's what happens when the pipeline is full of people who mainly want to make money and the institution is mostly a degree factory.
One thing worth mentioning is that even before AI, only a small subset of engineers had experienced building systems from scratch, inventing new ways of doing things, root-causing complex problems, or even writing a lot of code. Most software engineering is maintenance, mundane, or not all that productive.
Even in a world where there's a lot of AI-generated code there can still be people who have enough exposure to doing hard things. Certainly at this point in time, when AI can't really do all those hard things anyway, but even later, once it can.
You don't need to build systems from scratch to acquire problem-solving skills. Even routine maintenance problems require you to dig into documentation, look at GitHub issues, and do root-cause analysis. Reliance on AI eliminates those skills, and there is no fallback if you never acquired them in the first place.
Crypto sucks energy and creates no value. It's complete and utter speculative garbage that also destroys the planet.
AI has real value. We can argue about whether the cost is worth the value, whether we're on an exponential improvement curve or not, whether it ends up creating jobs or destroying jobs, but AI is mind-blowing science fiction that nobody would have believed would exist 10 years ago.
> Crypto sucks energy and creates no value. It's complete and utter speculative garbage that also destroys the planet.
All of what you said is false.
Stablecoins are not speculative and have value: you can send money worldwide at a low fee, to wallets on the same day, right now, with far less energy than today's "AI".
> AI has real value.
What do you mean by "AI" specifically? LLMs in data centers?
The value of this mysterious "AI", or even the "AGI" paradise, is not even for you. It is actually used against you.
> We can argue about whether the cost is worth the value, whether we're on an exponential improvement curve or not
You understand that the current iteration of "AI" needs tens of gigawatts of power, hundreds of billions of dollars, and wasteful amounts of water, which causes electricity prices in certain cities to skyrocket?
The way that it is financed appears to be close to fraudulent with vague "commitments" and mountains of debt that would take almost a trillion dollars in revenue to pay off the data centre build out.
> whether it ends up creating jobs or destroying jobs, but AI is mind blowing science fiction that nobody would have believed you will exist 10 years ago.
Assuming the data centers do get built (if they ever are), can you name the new jobs that will be created by "AI"?
How much market cap is in stablecoins vs. proof-of-work crypto? Has Bitcoin gone down and disappeared due to the availability of stablecoins?
Bitcoin uses about 200 TWh per year, probably pretty similar to all AI usage today, give or take. Certainly if we look at the area under the curve, Bitcoin has by far used more energy (~1000 TWh) than AI/LLMs. For what is essentially a scam/pyramid scheme. And this is just Bitcoin. But yes, LLMs are using more and more energy (though also potentially with a larger share of renewable sources).
I mean AI in the colloquial sense: large language models. It's ridiculous to compare the value produced by Bitcoin (negative: crime, money laundering, funding terrorist regimes, tax evasion, etc.) to the value of LLMs.
LLMs enable people who couldn't produce software applications before to do so. That enables new businesses that didn't exist before. Those businesses hire people directly (including, eventually, software engineers) and create indirect jobs. This is no different from the steam engine or the Internet. You're essentially arguing that the Internet took away the jobs of the people working in the post office because letters could now be sent electronically. I don't have a crystal ball, but historical experience teaches us that new jobs do get created and the economy is not a zero-sum game. Maybe this time will be different, and maybe it won't.
Pretty much anything that needs performance and has a lot of relatively light operations is not a candidate for spawning a thread per operation. Context switching and the cost of threads are going to kill performance. A server spawning a thread per request for relatively lightweight requests is going to be extremely slow. But sure, if every REST call results in a 10s database query then that's not your bottleneck. A query to a database can be very fast though (due to caches, indices, etc.), so it's not a given that just because you're talking to a database you can just spin up new threads and it'll be fine.
EDIT: Something else to consider is what happens if your REST call needs to make 5 queries. Do you serialize them? Now your latency can be worse. Do you launch a thread per query? Now you need to (a) synchronize and (b) pay 5x the thread cost. Async patterns or green threads or coroutines enable more efficient overlapping of operations and potentially better concurrency (though a server that handles lots of concurrent requests may already have "enough" concurrency anyway); see the sketch below.
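As a rough illustration of the overlap point, here's a small TypeScript sketch (runQuery and the ~20ms latency are made up, not from any real codebase): issuing the five queries serially pays the sum of their latencies, while awaiting them together pays roughly the slowest one, without spawning five threads per request.

    // Hypothetical stand-in for a database client call; pretend each query
    // spends ~20ms waiting on I/O.
    async function runQuery(sql: string): Promise<unknown> {
      return new Promise((resolve) => setTimeout(() => resolve(sql), 20));
    }

    // Serialized: roughly 5 x 20ms of latency, one query after another.
    async function handleSerial(): Promise<unknown[]> {
      const results: unknown[] = [];
      for (const q of ["q1", "q2", "q3", "q4", "q5"]) {
        results.push(await runQuery(q));
      }
      return results;
    }

    // Overlapped: all five queries are in flight at once, so latency is
    // roughly that of the slowest single query; no extra threads needed.
    async function handleOverlapped(): Promise<unknown[]> {
      return Promise.all(["q1", "q2", "q3", "q4", "q5"].map(runQuery));
    }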
Server applications don’t spawn threads per request, they use thread pools. The extra context switching due to threads waiting for I/O is negligible in practice for most applications. Asynchronous I/O becomes important when the number of simultaneous requests approaches the number of threads you can have on your system. Many applications don’t come close to that in practice.
There’s a benefit in being able to code the handling of a request in synchronous logic. A case has to be made for the particular application that it would cause performance or resource issues, before opting for asynchronous code that adds more complexity.
Thread pools are another variation on the theme. But if your threads block then your pool saturates and you can't process any more requests. So thread pools still need non-blocking operations to be efficient or you need more threads. If you have thread pools you also need a way of communicating with that pool. Maybe that exists in the framework and you don't worry about it as a developer. If you are managing a pool of threads then there's a fair amount of complexity to deal with.
I totally agree there are applications for which this is overkill and adds complexity. It's just a tool in the toolbox. Video games famously are just a single thread/main loop kind of application.
There’s also a really good operational benefit if you have limits like total RAM, database connections, etc. where being able to reason about resource usage is important. I’ve seen multiple async apps struggle with things like that because async makes it harder to reason about when resources are released.
Basically it's the non-linear execution flow creating situations which are harder to reason about. Here's an example I'm trying to help a Node team fix right now: something is blocking the main loop long enough that some of the API calls made in various places are timing out or getting auth errors, because the signature expires between when the request was prepared and when it is actually dispatched, which is sporadically tens of seconds instead of milliseconds. Because it's all async calls, there are hundreds of places which have to be checked, whereas if it were threaded this class of error either wouldn't be possible, would show up clearly in the offending thread, or would surface as a clear failure on an explicit synchronization primitive (e.g. a concurrency limit on the number of simultaneous HTTP requests to a given target). Also, the call stack and other context are unhelpful until you put effort into observability for everything, because you need to know what happened between hitting await and the exception deep in code which doesn't share a call stack.
The execution flows of individual async tasks are still linear, much like individual threads are linear.
Scheduling (tasks by the async runtime vs. threads by the OS), however, results in effectively random execution order either way.
If there is a slow resource, both async tasks and threads will pile up, potentially increasing response times.
Whether async or threads, you can easily put a concurrency limit on resources using e.g. semaphores [1] (see the sketch after this list):
- limit yourself to x connections (either wait or return an error)
- limit the resource to x concurrent usages (either wait until other users leave, or return an error)
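To make that concrete, here's a minimal TypeScript sketch of the semaphore idea; the Semaphore class is hand-rolled for illustration (not any particular library's API), and withDbLimit and the limit of 10 are invented assumptions:

    // Hand-rolled counting semaphore, just to show the idea of capping
    // concurrent users of a resource (e.g. database connections).
    class Semaphore {
      private waiters: Array<() => void> = [];
      constructor(private permits: number) {}

      async acquire(): Promise<void> {
        if (this.permits > 0) {
          this.permits--;
          return;
        }
        // No permit free: wait until someone releases one.
        await new Promise<void>((resolve) => this.waiters.push(resolve));
      }

      release(): void {
        const next = this.waiters.shift();
        if (next) {
          next(); // hand the permit directly to the next waiter
        } else {
          this.permits++;
        }
      }
    }

    // Assumed limit: at most 10 concurrent database operations.
    const dbLimit = new Semaphore(10);

    async function withDbLimit<T>(work: () => Promise<T>): Promise<T> {
      await dbLimit.acquire(); // or fail fast here and return an error instead
      try {
        return await work();
      } finally {
        dbLimit.release();
      }
    }

The same pattern works with OS threads; only the semaphore implementation changes.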
Regarding blocking the main loop: with async and non-blocking operations, how would something block the main loop?
And why would the main loop being blocked cause API calls to time out? Is it single-threaded?
> The execution flows of individual async tasks are still linear, much like individual threads are linear.
Think about what happens:
1. Request one hits an await in foo()
2. Runtime switches to request two in bar() until it awaits
3. Runtime switches to request three in baaz(), which blocks the loop for a while
4. Request one gets a socket timeout or expired API key
That error in #4 does not tell you anything about #2 or #3, and because execution spreads across everything in that process you have to check everything. If it were a thread, you would either not have the problem at all, or it would show up clearly in request three, or you'd have a clear, informative failure on a synchronization primitive saying that #3 held a lock for too long.
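Here's a tiny, contrived TypeScript sketch of that failure mode (the function names and the 1-second "expiry" are invented for illustration): request three never awaits and holds the event loop, so request one's already-prepared work is dispatched far later than intended, and the resulting error points nowhere near the actual offender.

    // Synchronous busy-wait: while this runs, nothing else on the event
    // loop can make progress.
    function blockLoopFor(ms: number): void {
      const end = Date.now() + ms;
      while (Date.now() < end) { /* spin */ }
    }

    async function requestOne(): Promise<void> {
      const preparedAt = Date.now();                // e.g. when a signature is computed
      await new Promise((r) => setTimeout(r, 10));  // yields the loop here
      const delay = Date.now() - preparedAt;
      if (delay > 1000) {
        // The "signature" effectively expired while someone else held the loop,
        // even though nothing in this function is slow.
        console.log(`request one delayed ${delay}ms before dispatch`);
      }
    }

    async function requestThree(): Promise<void> {
      blockLoopFor(5000); // the real offender, possibly far away in the codebase
    }

    // requestOne's failure carries no stack trace pointing at requestThree.
    void Promise.all([requestOne(), requestThree()]);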
That makes it harder to control when memory is allocated or released in garbage-collected languages, too, because you have to be very careful to release references or trigger GC before doing something which can suspend execution for a while, or you'll get odd patterns when a small but non-zero percentage of those async requests take longer than expected (e.g. "load image master, create derivative, send response" needs care to release the output of the first two steps before the last, or you'll have weird behavior when a slow client takes 5 minutes to finish transferring that response).
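As a sketch of that image example in TypeScript (loadMaster, makeDerivative, and send are invented stand-ins, not a real API): drop the reference to the big intermediate buffer before the await that can stall on a slow client, so the garbage collector can reclaim it during the transfer.

    // Stand-ins so the sketch is self-contained; the real versions would do I/O.
    async function loadMaster(): Promise<Uint8Array> {
      return new Uint8Array(50 * 1024 * 1024); // pretend: a 50MB source image
    }
    function makeDerivative(master: Uint8Array): Uint8Array {
      return new Uint8Array(100 * 1024);       // pretend: a small derivative
    }

    async function handleImageRequest(send: (body: Uint8Array) => Promise<void>) {
      let master: Uint8Array | null = await loadMaster();
      const derivative = makeDerivative(master);
      // Drop the reference to the large master before the potentially slow
      // await, so it can be collected while the client is still downloading.
      master = null;
      await send(derivative); // may suspend for minutes on a slow client
    }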
Arguably that’s something you want to do anyway but it dramatically undercuts the simplicity benefits of async code. I’m not saying that we should all give up async but there are definitely some pitfalls which many people stumble into.
No such thing. In a preemptive multitasking OS (that's basically all of them today) you will get context switching regardless of what you do. Most modern OS's don't even give you the tools to mess with the scheduler at all; the scheduler knows best.
That's not accurate. Preemptive multitasking just means your thread will get preempted. Blocking still incurs additional context switching. The core your thread is running on isn't just going to sit idle while your thread blocks.
There's features and there is quality and there is domain.
I worked on a team that built high-precision industrial machinery. The team and the project manager decided to delay shipping because there were still problems. We delayed, fixed the problems, and the machine worked really well and was used for at least a decade. If we'd shipped it too soon, we would have had to try to fix it at a remote site, and it likely would have suffered from problems.
With most products you want to figure out what your MVP (minimal viable product) is and what quality level your customers expect. If you ship something less than that, it's probably not a good tradeoff. If you build too much and ship too late, that's also not a good tradeoff. When shipping increments, they also need to be appropriately sized and at the right quality level.
Ah, but you're talking about something else: hardware is quite different from software. Once your machine is out in the wild, you can't update it remotely. But with software, shipping MVPs and iterating is not only possible, it's almost always the right way to go about it.
I frequently tell my software teams "We aren't putting rockets in space; we're shipping an admin panel. We can revert code or change things if we don't like it."