Because it's even less useful than a washing machine. Unless you trust a frickin' humanoid robot doing your house chores, which is batshit insane as things stand.
The main critique of AI is that it's a dumb hallucinating parrot. It can't do genuine human quality work at all, outside of extremely narrow domains like basic translation and copyediting. Even for Q&A, while it can be useful by quickly accessing a huge storehouse of learned knowledge, the vulnerability to hallucinations means that human expert verification will always be required.
I'll note that there can be multiple main critiques coming from an incoherent set of viewpoints, since this is public opinion we're talking about.
Between "AI doing creative work" (if you believe in that) and "fraud", there's all the low-key filler material that's sub-creative and sub-fraudulent. There's a similarity between the phrase "it was made with AI" and phrases like "I didn't bake your cake myself, it came from a store" or "sorry, it's just a cheap plastic one". So part of AI's image is that it's a flourishing new source of disappointment.
Repetition of basic knowledge is actually a big part of a successful education. Even schoolkids in the earliest grades can learn surprisingly complex subjects by heart simply by blabbing everything back word-for-word. Problem-solving skills can then be built up on these basics.
We used to have these questions about "What are the advantages and disadvantages of X?"
I used to think I was outsmarting "the system" by only learning a few key facts about X and then twisting them around to get advantages and disadvantages, but little did I know that was the whole point of the course — to see the same thing from different perspectives and realize there are both advantages and disadvantages to X.
I am not convinced by that. Kids tend to learn problem-solving (and other) skills if given a chance. I do not think encouraging huge amounts of rote learning is an optimal, or even useful, way of doing that.
My experience (with myself and my kids) has been the opposite.
Making music would suck if I hadn't spent years of (fought against every day) practice/rehearsal. We need to practice learning the tools, not just understanding we have them. So many rote things opened so many doors for me to explore later.
My creativity would be way less if I hadn't spent hours listening to others' music. I think it applies to less fun/interesting things as well.
> This is present even today, I saw a burial in Eastern Europe where the parents put a game of chess and toys in the coffin. While it will do no good to the deceased my theory is that it is a way for the living to deal with the loss.
Spoiler: they do that so that future grave robbers and archaeologists will know all about the dead person's lifestyle. Surely that kind of everlasting glory has to be worth something to the deceased, one would think?
> Mouse interfaces can be incredibly information dense because mice are both incredibly economic from a space and motion standpoint, and also somehow incredibly precise. ...
There's exactly one feature of touch interfaces that can be incredibly input-information dense, easily rivaling the mouse, and that's swiping gestures with 1-to-1 fluid animation for feedback. Usually seen with pie menus and the like. Drag and drop, the mouse equivalent, is extremely clunky - and mouse gestures that don't even involve clicking even more so.
These DOS machines for industrial control could probably be replaced by an Arduino or a far more reliable MCU, whereas running an actual legacy PC as a business-critical component in manufacturing has to be a bit of a nightmare by now. AI could probably do a good enough job of working out how the legacy DOS executables were intended to work.
You might notice that I never once claimed that the replacement I described would be "easy" or, for that matter, even advisable given the broader real-world constraints involved; just technically feasible in the barest sense. I don't think many people would want to use DOS to design a greenfield system of that kind today, and there's a reason for that. Yes, you can buy newly made "DOS PCs" today, but can you really ensure that today's brand new DOS PC will behave in every way that matters like the actual 30-year-old DOS PC that used to control the machinery? That's not a trivial question to answer.
If you design the system from the outset to work with an actual PLC/SCADA or similar (the typical solution for hooking up to big industrial machinery of that sort) that's a bit less likely to come up as an issue, and the hardware will actually be designed for that kind of environment.
Yes, if you ignore everything that was discussed, invent time travel so you can "design the system from the outset" as the prescient you are, and pretend anyone was talking about greenfield, you get to be right. Good for you... some people just need the 'win'.
It's a simple enough implementation that implicitly helps document how SDL is supposed to work (DOS being a well understood platform by now). Plenty of reasons to maintain it based on that alone.
The thing is that AI is still more akin to a glorified autocomplete than something that can really supersede your skills. Proprietary model suppliers are constantly trying to obscure this basic underlying fact, without much success (many of the unpredictable shifts you see in proprietary AI behavior ultimately boil down to this); it becomes crystal clear when using open models that really are a pure commodity.
yeah, I think there's the marketing and then there's the actual true utility. AI isn't a better computer program. It's not going to be able to do everything you want autonomously. But, it's pretty good at some stuff!
You can already run inference on ordinary hardware but if you want workable throughput you're limited to small models, and these have very poor world-knowledge.
$10K should be enough to pay for a 512GB RAM machine which in combination with partial SSD offload for the remaining memory requirements should be able to run SOTA models like DS4-Pro or Kimi 2.6 at workable speed. It depends whether MoE weights have enough locality over time that the SSD offload part is ultimately a minor factor.
(If you are willing to let the machine work mostly overnight/unattended, with only incidental and sporadic human intervention, you could even decrease that memory requirement a bit.)
As a typical example DeepSeek v4-pro has 59B active params at mostly FP4 size, so it needs to "find" around 30GB worth of params in RAM per inferred token. On a 512GB total RAM machine, most of those params will actually be cached in RAM (model size on disk is around 862GB), so assuming for the sake of argument that MoE expert selection is completely random and unpredictable, around 15GB in total have to be fetched from storage per token. If MoE selection is not completely random and there's enough locality, that figure actually improves quite a bit and inference becomes quite workable.
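To make the arithmetic above explicit, here's a short sketch of the worst-case estimate. The model numbers (59B active params at FP4, 862GB on disk, 512GB RAM) come from the comment; the reserved-RAM figure and the SSD bandwidth are my own assumptions, so treat the outputs as rough bounds, not measurements.

```python
def fetch_per_token_gb(active_params=59e9,
                       bytes_per_param=0.5,   # FP4 ~ half a byte per param
                       model_size_gb=862,
                       total_ram_gb=512,
                       reserved_ram_gb=80):   # assumption: OS + KV cache + buffers
    """Worst-case GB fetched from SSD per token, assuming uniformly
    random MoE expert selection (so the hit rate equals the fraction
    of the model that fits in the RAM cache)."""
    active_gb = active_params * bytes_per_param / 1e9
    cache_gb = total_ram_gb - reserved_ram_gb
    miss_fraction = max(0.0, 1 - cache_gb / model_size_gb)
    return active_gb * miss_fraction

fetch = fetch_per_token_gb()
print(f"~{fetch:.1f} GB fetched from SSD per token")

# Implied upper bound on speed if the SSD is the bottleneck.
# 7 GB/s is a typical high-end PCIe 4.0 NVMe sequential figure;
# scattered expert reads will do worse than this.
ssd_gb_per_s = 7.0
print(f"~{ssd_gb_per_s / fetch:.2f} tokens/s upper bound")
```

With these assumptions this lands around 15GB per token and well under one token per second, which is consistent with the "low single-digit tokens per second" reports below; any real locality in expert selection improves on the random-selection worst case.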
I've never seen reports of this kind of setup being able to deliver more than low single-digit tokens per second. That's certainly not usable interactively, and only of limited utility for "leave it to think overnight" tasks. Am I missing something?
Also, I don't know of a general solution to streaming models from disk. Is there an inference engine that has this built-in in a way that is generally applicable for any model? I know (I mean, I've seen people say it, I haven't tried it) you can use swap memory with CPU offloading in llama.cpp, and I can imagine that would probably work...but definitely slowly. I don't know if it automatically handles putting the most important routing layers on the GPU before offloading other stuff to system RAM/swap, though. I know system RAM would, over time, come to hold the hottest selection of layers most of the time as that's how swap works. Some people seem to be manually splitting up the layers and distributing them across GPU and system RAM.
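For what it's worth, llama.cpp memory-maps the model file by default, so a model larger than RAM is streamed from disk and the OS page cache (not swap) is what ends up holding the hottest tensors. The commonly cited pattern for MoE models is to offload everything to the GPU except the expert tensors, which stay in system RAM. A sketch, assuming a recent llama.cpp build (flag names and the tensor-name regex may vary by version and model architecture, so check `llama-server --help` first; the model path is a placeholder):

```shell
# Keep attention/routing layers on the GPU, MoE expert tensors in
# system RAM; experts not resident in RAM are paged in via mmap.
llama-server -m ./model.gguf \
  --n-gpu-layers 99 \
  --override-tensor '\.ffn_.*_exps\.=CPU'
```

This isn't fully automatic placement, but it approximates what you describe: the small, always-used routing/attention weights stay fast, and the page cache converges on the most frequently selected experts over time.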
Have you actually done this? On what hardware? With what inference engine?