LeCompteSftware's comments

I guarantee they didn't read a single word of this book. Look at the introduction to C and tell me that

a) it's a good introduction to C

b) a human read any of it

https://github.com/ebrandi/FDD-book/blob/main/content/chapte...

This book is a dishonest AI scam.


I'm not reading that whole thing right now (that is a lot of text), but I read as far as "What Is a Variable?" and nothing stands out as bad or AI-written. What problems do you see?

> This book is a dishonest AI scam.

Indeed! I read through a couple of paragraphs. Each begins with a bloated introduction in which every sentence repeats the same idea in different words. Lots of bullets repeating the same statement. That's exactly what an LLM scam looks like. The whole book is full of filler; it could be reduced in size by a factor of 5.


It's a free book, he's not selling you anything.

Being LLM slop in particular, this book cost way more of my time than it should have. It really does look superficially competent, until you realize that competence is paragraph-to-paragraph, not section-to-section.

The scam is not stating up front "this was written by an LLM and I haven't read it." The dishonesty is claiming this book will teach you such-and-such when the author actually has no idea. It really is a scam. Even if he's not making anything directly, he's already earned 150 stars in the GitHub pseudoeconomy, plus good word-of-mouth from people who are lazy and thoughtless like he is, people who assumed that 4,500 pages about FreeBSD from a lead FreeBSD maintainer must be worth something and didn't bother to check if it was written by an LLM... even though it's 2026.

This book has negative value. It is actively destructive to FreeBSD, even if in the short term it boosts the author's public profile.


> This book has negative value. It is actively destructive to FreeBSD, even if in the short term it boosts the author's public profile.

I won't be that radical: the book still has value. There are many useful code samples with descriptions, and explanations of concepts I did not know before. But to get to them one has to dig through a forest of useless tokens. Someone should pass it through an LLM and publish a distilled edition. :-)


There is something wrong with it because LLMs are really not capable of writing a useful book, and this book is 100% LLM slop.

Look at this totally useless """introduction""" to C: https://github.com/ebrandi/FDD-book/blob/main/content/chapte...

First of all, this is an entire book; it's 76,000 words. But look at the first nontrivial example of C after "hello world," under "Bonus learning point about C return values":

  static int
  exec_map_first_page(struct image_params *imgp)
  {
          vm_object_t object;
          vm_page_t m;
          int error;

          if (imgp->firstpage != NULL)
                  exec_unmap_first_page(imgp);

          object = imgp->vp->v_object;
          if (object == NULL)
                  return (EACCES);
  #if VM_NRESERVLEVEL > 0
          if ((object->flags & OBJ_COLORED) == 0) {
                  VM_OBJECT_WLOCK(object);
                  vm_object_color(object, 0);
                  VM_OBJECT_WUNLOCK(object);
          }
  #endif
          error = vm_page_grab_valid_unlocked(&m, object, 0,
              VM_ALLOC_COUNT(VM_INITIAL_PAGEIN) |
              VM_ALLOC_NORMAL | VM_ALLOC_NOBUSY | VM_ALLOC_WIRED);

          if (error != VM_PAGER_OK)
                  return (EIO);
          imgp->firstpage = sf_buf_alloc(m, 0);
          imgp->image_header = (char *)sf_buf_kva(imgp->firstpage);

          return (0);
  }
This teaches nobody anything. I am sorry but this project is completely useless and there's no way Brandi read a single word of it. This entire book is a dishonest AI scam. I hate LLMs. It is hard to think of another computer technology that has done so much damage for so little good.

Edit: I mean look at the intro to for loops. This is supposed to be for total beginners. Example 1:

  for (int i = 0; i < 10; i++) {
      printf("%d\n", i);
  }
>> Start at i = 0

>> Repeat while i < 10

>> Increment i each time by 1 (i++)

Example 2:

  for (i = 0; n > 0 && i < IFLIB_MAX_RX_REFRESH; n--, i++) {
      struct netmap_slot *slot = &ring->slot[nm_i];
      uint64_t paddr;
      void *addr = PNMB(na, slot, &paddr);
      /* ... work per buffer ... */
      nm_i = nm_next(nm_i, lim);
      nic_i = nm_next(nic_i, lim);
  }
>> What this loop does

>> * The driver is refilling receive buffers so the NIC can keep receiving packets.

>> * It processes buffers in batches: up to IFLIB_MAX_RX_REFRESH each time.

>> * i counts how many buffers we've handled in this batch. n is the total remaining buffers to refill; it decrements every iteration.

>> * For each buffer, the code grabs its slot, figures out the physical address, readies it for DMA, then advances the ring indices (nm_i, nic_i).

>> * The loop stops when either the batch is full (i hits the max) or there's nothing left to do (n == 0). The batch is then "published" to the NIC by the code right after the loop.

>> In essence, a for loop is the go-to choice when you have a clear limit on how many times something should run. It packages initialisation, condition checking, and iteration updates into a single, compact header, making the flow easy to follow.

Total garbage. This has literally zero educational value. I assume Brandi is just trying to make a quick buck; he truly has not even glanced at the output. He should be ashamed of himself.


> I assume Brandi is just trying to make a quick buck,

By publishing a free book without even a way to donate?


I am unsure which part of the quoted content you disagree with.

It's FREE. No ads, nothing. You are really anti-LLM and that's fine, but don't let yourself be totally blinded.

You yourself chose to spend time on something to your own frustration, even though at that point you already knew it wasn't for you, and then frustrated yourself further trying to find examples to help frustrate others too.

Look at how you are behaving, and then realise you are saying someone else should be ashamed of themselves.

If you disagree with the book, a simple excerpt and note would suffice. If it's 'super clearly bad', it does not need a load of emotion to transmit that message.


That's not what the parent comment meant. They meant checking that the Lean-language definitions actually match the mathematical English ones, and that the Lean theorems match the ones in the paper. If that's true then you don't actually need to check the proofs. But you absolutely need to check the definitions, and you can't really do that without sufficient mathematical maturity.
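As a toy illustration of why the definitions are the weak point (my own sketch, nothing to do with the paper under discussion; assumes Lean 4 with Mathlib):

  import Mathlib

  -- Meant to say "n is prime", but it quietly admits n = 1. Lean's kernel only
  -- checks proofs against what was actually written, so every downstream
  -- theorem ends up being about the wrong object.
  def myPrime (n : Nat) : Prop :=
    ∀ d : Nat, d ∣ n → d = 1 ∨ d = n

  -- This checks, and it is formally true, just of the unintended definition.
  theorem one_is_myPrime : myPrime 1 := by
    intro d hd
    exact Or.inl (Nat.dvd_one.mp hd)

The checker is happy either way; only someone who can compare myPrime against the intended English statement catches the mismatch, and that comparison is exactly what takes mathematical maturity.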

Yes, and the child comment’s point is that formalizing the problem is likely easier than having the LLM verify that each step of a long deduction is correct, which is why Lean might be helpful.

But both of you are ignoring the parent comment! Actually you're ignoring the context of the thread.

Originally someone said "I wish I was math smart to know if [this vibe-mathematics proof] worked or not." They did NOT say "I'd like to check but I am too lazy." Suggesting "ask it to formalize it in Lean" is useless if you're not mathematically mature enough to understand the proof, since that means you're not mathematically mature enough to understand how to formalize the problem.

Then "likely easier" is a moot point. A Lean program you're not knowledgeable enough to sanity-check is precisely as useless as a math proof you're not knowledgeable enough to read.


It’s not useless, because you can, for example, ask multiple frontier models to do the formalization and see if they agree. And if they have surface-level differences in formalization, you can also ask them whether apparently-different definitions are equivalent.

This isn’t perfect of course - perhaps every single model is wrong. But you are too quick to declare that something isn’t useful for arriving at an answer. Reducing the surface area of what needs to be checked is good regardless.


thanks

The point of the article is that sometimes the "old ways" really means "not particularly profitable or necessary in the short term" but the bill comes due in a crisis. The reason US/EU manufacturing was "the old ways" is that people could make easier money with financial engineering, an insight that extended all the way to Raytheon.

COBOL is a bad example, but higher-level languages vs. assembly is not. If you write a lot of C you really don't need to know assembly... until you stumble across a weird gcc bug and have no clue where to look. If you write a lot of C# you don't really need to know anything about C... until your app is unusably slow because you were fuzzy on the whole stack/heap concept. Likewise with high-level SSGs and design frameworks when you don't know HTML/CSS fundamentals.
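To make the stack/heap point concrete, here is a minimal C sketch (mine, purely illustrative, not from the article): both loops do the same work, but the heap version pays a malloc/free round trip on every iteration, exactly the kind of cost that stays invisible if the concept is fuzzy.

  /* Illustrative sketch only: same work, different allocation strategy. */
  #include <stdlib.h>
  #include <string.h>

  static unsigned char sink;
  static void process(const char *buf, size_t n) { sink ^= (unsigned char)buf[n - 1]; }

  static void hot_loop_heap(const char *src, size_t iters) {
      for (size_t i = 0; i < iters; i++) {
          char *buf = malloc(64);        /* allocator call every iteration */
          if (buf == NULL)
              return;
          memcpy(buf, src, 64);
          process(buf, 64);
          free(buf);
      }
  }

  static void hot_loop_stack(const char *src, size_t iters) {
      for (size_t i = 0; i < iters; i++) {
          char buf[64];                  /* stack slot: effectively free */
          memcpy(buf, src, 64);
          process(buf, 64);
      }
  }

  int main(void) {
      char src[64] = "hello";
      hot_loop_heap(src, 1000000);
      hot_loop_stack(src, 1000000);
      return (int)sink;                  /* keep the work from being optimized away */
  }

The C# analogue is hidden allocations and GC pressure rather than malloc, but the underlying model is the same, and you only reach for it if you know it exists.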

As the author says maybe AI is different. But with manufacturing we were absolutely confusing "comfortable development" with "progress." In Ukraine the bill came due, and the EU was not actually able to manufacture weapons on schedule. So people really should have read to the end of "building a C compiler with a team of Claudes":

  The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.
At least with Opus 4.6, a human cannot give up "the old ways" and embrace agentic development. The bill comes due. https://www.anthropic.com/engineering/building-c-compiler

> sometimes the "old ways" really means "not particularly profitable or necessary in the short term" but the bill comes due in a crisis.

Yes, of course. That's why I said: > If you REALLY need something long-forgotten, then you have to lazy-load it back into being at significant cost.

This is all known, because it's always been this way. You can't just hire a blacksmith; you need to first REMAKE the blacksmith if you really need one. It's always been this way, and it will continue. There is a cost to resurrecting old processes. This cost is a fact of life and needs to be planned for.

It cannot be avoided except by maintaining some kind of "strategic reserve" of thousands or millions of people who sit around building things nobody wants on the off chance they might be needed again -- which a democracy will not long have the patience to continue paying for.


Many things we must know to do our jobs are themselves artifacts of historical decisions that no longer make sense outside the time and place in which they were made, but we have to know them all the same.

Claude has allowed me to jettison many useless (IMO) skills I've developed over the years. I'm quite happy to let my bank of CSS and regex trivia expire from the cache, never to be reloaded again. I will never have to write another webpack.config.js as long as I live. So much time in programming is spent looking up SDK operations that I basically know; I just can't remember whether the dang method is called acquire_data() or load_data(), etc.


But these are hard IT problems that a human programmer really struggles with as well. What % of software written is like that? Very, very little. Most software is dull and requires business vagueness to be translated into deterministic logic and interfaces; LLMs are pretty great at that as it is. If humans use their old ways to fix the complex problems and LLMs do the rest, we still only need a handful of those humans. For now.

"For now" is sort of the entire point of the article :)

Even in the Before Times, it was cognitively much cheaper to write code than to read someone else's code closely, or to manage lots of independent code across a team, or to make a serious change to existing code. It's so much easier to just let everyone slap some slop on the pile and check off their user stories. I think it will take years to figure out exactly what the impact of LLMs on software is. But my hunch is that it'll do a lot of damage for incremental benefit.

With the sole exception of "LLMs are good at identifying C footguns," I have yet to see AI solve any real problems I've personally identified with the long-term development and maintenance of software. I only see them making things far worse in exchange for convenience. And I am not even slightly reassured by how often I've seen a GitHub project advertise thousands of test cases, then I read a sample of those test cases and 98% of them are either redundant or useless. Or the studies which suggest software engineers consistently overestimate the productivity benefits of AI, and psychologically are increasingly unable to handle manual programming. Or the chardet maintainer seemingly vibe-benchmarking his vibe-coded 7.0 rewrite when it was in reality a lot slower than the 6.0, and he's still digging through regression bugs. It feels like dozens of alarms are going off.

https://en.wikipedia.org/wiki/The_Mythical_Man-Month


These are good points, and I am not overestimating; we are simply seeing the productivity boost in our company and the rise in profitability. We practice TDD, but only at the integration level, so we have tests up front for the API and frontend and the AI writes until it works. SOTA models are simply good enough not to take

  function add(a, b) = c  // adds two numbers

  test: add(1, 2) == 3

and implement it as

  function add(a, b) return 3

So when you have enough tests (and we do), it will deliver quality. Having AI write the tests is mostly useless. But me writing the code is not necessarily better and certainly not faster for most cases our clients bring us.
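To be concrete about the failure mode, here is a minimal C sketch (mine, with hypothetical names, not our actual suite): the stub passes the single example test, and a couple of extra cases are enough to reject it.

  /* Minimal sketch (not from a real codebase): why one example test is too weak. */
  #include <assert.h>

  static int add_stub(int a, int b) { (void)a; (void)b; return 3; }  /* the lazy "implementation" */
  static int add_real(int a, int b) { return a + b; }                /* the intended one */

  int main(void) {
      /* The lone test from the spec above: the stub slips through it. */
      assert(add_stub(1, 2) == 3);

      /* A handful of extra cases is enough to reject the stub
         (swap add_real for add_stub here and the asserts fire). */
      assert(add_real(0, 0) == 0);
      assert(add_real(-5, 2) == -3);
      assert(add_real(40, 2) == 42);
      return 0;
  }

With enough integration-level tests, the space of lazy implementations that still pass shrinks toward the correct one, which is the bet we are making.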


The more important research is the kind that the economy doesn't especially benefit from, but which needs to happen in order to improve the quality of human life.

I had a job paid by the National Science Foundation, doing genomics research on children with extremely rare (sometimes unique) genetic diseases. We did publish papers, and Big Pharma can glean a little bit about how we handled the biomedical informatics of managing data across different highly specialized labs, maybe a researcher will incrementally improve GWAS across the field. But that research was important because actual human children were suffering and needed help.


I'm confused, does this comment have anything to do with the paper? This paper is about fueling a fire, not starting one.

from the paper: "The consideration of fire ecology data and various factors involved in the complex process of fire ignition, combustion, and behavior, in relation to the GBY paleoenvironment and archaeology, enabled the rejection of recurrent natural fires as the responsible agent for burning (Alperson-Afil, 2012)."

But that's summarizing a paper from 2012.

I think part of it is Visual Studio Code doing most IDE things very well, creating a market niche for terminal tooling that handles the rest.

Certainly part of it is also people of my generation being nostalgic for the TUIs of DOS file managers and editors.


Surely the "universal grammar" is "every country adopting Western Arabic numerals, largely for commercial reasons, but also acknowledging that their indigenous systems kind of sucked in comparison." The fact that there are different languages truly means nothing, Arabic numerals spread much further than the Latin alphabet.

I really don't think this is evidence for "universal grammar" in any sense. It is evidence that we are all using the same very specific grammar for very specific cultural reasons.


"using periodic features with dominant periods at T=2, 5, 10" seems inconsistent with "platonic representation" and more consistent with "specific patterns noticed in commonly-used human symbolic representations of numbers."

Edit: to be clear I think these patterns are real and meaningful, but only loosely connected to a platonic representation of the number concept.
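To spell out my reading (my notation, not the paper's): a periodic feature with period T is just a function of n mod T, and the reported periods line up exactly with parity and decimal notation.

  \phi_T(n) = \bigl(\cos(2\pi n / T),\; \sin(2\pi n / T)\bigr) \quad \text{(a function of } n \bmod T\text{)}

  T = 2:\; n \bmod 2 \text{ (parity)} \qquad T = 5:\; n \bmod 5 \text{ (a factor of the base)} \qquad T = 10:\; n \bmod 10 \text{ (last decimal digit)}

A corpus written in base 7 would presumably push the dominant periods toward 7 instead, which is why this reads to me as notational rather than platonic.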


Is it an actual counterargument?

The "platonic representation" argument is "different models converge on similar representations because they are exposed to the same reality", and "how humans represent things" is a significant part of reality they're exposed to.


You should see my reply to convolvatron below.

I don't think this is a correct formulation of the platonic representation argument:

  different models converge on similar representations because they are exposed to the same reality
because that would be true for any statistical system based on real data. I am sure the platonic representation argument is saying something more interesting than that. I believe they are arguing against people like me, who say that LLMs are entirely surface correlations of human symbolic representation of ideas, and not actually capable of understanding the underlying ideas. In particular humans can speak about things chimpanzees cannot speak about, but that we both understand (chimps understand "2 + 2 = 4" - not the human sentence, but the idea that if you have a pair of pairs on one hand, and a quadruplet on the other, you can uniquely match each item between the collections). Humans and chimps both seem to have some understanding of the underlying "platonic reality," whatever that means.

"Not actually capable of understanding" is worthless unfalsifiable garbage, in my eyes. Philosophy at its absolute worst rather than science.

Trying to drag an operational definition of "actual understanding" out of anyone doing this song and dance might as well be pulling teeth. People have been trying to make the case for decades, and there's still no ActualUnderstandingBench to actually measure things with.


No, it is partially falsifiable. LLMs clearly don't understand the concept of quantity. They fail at tests designed to assess number understanding in dogs and pigeons; in fact they are quite likely to fail these tests, because they are wildly out of distribution.

We don't know how to demonstrate actual understanding, but we sure can demonstrate a lack of it. When it comes to abstract concepts like "three" or even "more," LLMs have a clear lack of understanding. Birds and mammals do not.


Which "tests", exactly? Do tell. Tests where LLMs don't beat a human baseline is genuinely hard to come by nowadays.

You're right; it's just that 'platonic' is an argument that numbers exist in the universe as objects in and of themselves, completely independent of human reality. If we don't assume this, and instead treat numbers as a system that humans created (formalism), then sure, we can be happy that LLMs are picking common representations that map well onto our subjective notions of what numbers are.

FWIW it's objectively false that numbers are a system humans created. That's almost certainly true for symbolic numbers and therefore large numbers (> 20). But pretty much every bird and mammal is capable of quantitative reasoning; a classic experiment is training a rat to press a lever X times when it hears X tones, or training a pigeon to always pick the pile with fewer rocks even if those rocks are much larger (i.e. ruling out simpler geometric heuristics). Even bees seem to understand counting: one experiment set up 5 identical human-made (clearly artificial) landmarks pointing to a big vat of yummy sugar water. When the experimenters moved the landmarks closer together, the bees undershot the vat, and likewise overshot when the landmarks were moved further apart.

And of course similar findings have been reproduced, etc. The important thing to note is how strange and artificial these experiments must seem to the animals involved (maybe not to the bees), so e.g. it seems unlikely that a rat evolved to push a lever X times; it is much more plausible that, in some sense, the rat figured it out. At least in birds and mammals there seems to be a very specific center of the brain responsible for coordinating quantitative sensory information with quantitative motor output, handling the 1-1 mapping fundamental to counting. More broadly, it seems quite plausible that animals which have to raise an indeterminate number of live young would need a robust sense of small-number quantitative reasoning.

It is an interesting question whether this is some cognitive trick that evolved 200m years ago and humans are just utterly beholden to it. But I think it requires jumping through fewer hoops to conclude that the human theory of numbers is pointing to a real law of the universe. It's a consequence of conservation of mass/energy: if you have 5 apples and 5 oranges, you can match each apple to a unique orange and vice versa; if you can't, someone destroyed an apple or added an orange, etc. It is this naive, intuitive sense of numbers that we think of as the "platonic concept," and we share it with animals. It seems to be inconsistent and flaky in SOTA reasoning LLMs. I don't think it's true that LLMs have stumbled into a meaningful platonic representation of numbers. Like an artificial neural network, they've just found a bunch of suggestive and interesting correlations. This research shows the correlations are real! But let's not overinflate them.
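That matching intuition is the textbook notion of "same number": two finite collections have equal counts precisely when there is a one-to-one correspondence between them. A tiny Lean sketch of that fact (mine, assuming Mathlib, just to pin down what the "platonic concept" is here):

  import Mathlib

  -- "5 apples and 5 oranges": a bijection (an Equiv) between two finite types
  -- forces their cardinalities to agree. This is the matching sense of counting
  -- probed by the rat/pigeon/bee experiments above.
  example (Apples Oranges : Type)
      [Fintype Apples] [Fintype Oranges]
      (match_up : Apples ≃ Oranges) :
      Fintype.card Apples = Fintype.card Oranges :=
    Fintype.card_congr match_up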


Regardless of whether the convergence is superficial or not, I am especially interested in what this could mean for future compression of weights. Quantization of models is currently very dumb (per my limited understanding). Could exploitable patterns make it smarter?
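To be clear about what I mean by dumb (my own sketch, not anything from the paper): basic post-training quantization picks one scale per tensor and rounds to the nearest level, ignoring any structure in the weights.

  /* Illustrative sketch of naive per-tensor int8 quantization: one scale,
     round to nearest, clamp. No awareness of any structure in the weights. */
  #include <math.h>
  #include <stdint.h>
  #include <stddef.h>

  static void quantize_int8(const float *w, int8_t *q, float *scale, size_t n) {
      float max_abs = 0.0f;
      for (size_t i = 0; i < n; i++) {
          float a = fabsf(w[i]);
          if (a > max_abs)
              max_abs = a;
      }
      *scale = (max_abs > 0.0f) ? max_abs / 127.0f : 1.0f;
      for (size_t i = 0; i < n; i++) {
          long r = lroundf(w[i] / *scale);
          if (r > 127) r = 127;
          if (r < -127) r = -127;
          q[i] = (int8_t)r;
      }
  }

  int main(void) {
      const float w[4] = { 0.10f, -0.75f, 0.32f, 0.01f };
      int8_t q[4];
      float scale;
      quantize_int8(w, q, &scale, 4);
      /* Reconstruction is just scale * q[i]; the error is plain rounding noise. */
      return 0;
  }

If number-like features really do share structure across models, a quantizer or codebook aware of that structure could in principle spend its bits better than a flat per-tensor scale, which is what I'm wondering about.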

That's more of a "quantization-aware training" thing, really.

I think blaming America's problems on gerontocracy is correlation-causation confusion. The reason we have a gerontocracy is that ordinary rank-and-file voters are too cynical and individualistic to participate in politics.

In many wealthy countries the old are literally outnumbering the young so it wouldn't matter if everyone under the age of 40 turned up to vote.

Nations didn't implement mass immigration because they are woke; it's a last, desperate gamble.


A gamble which they managed so poorly that the planned wins got buried under collateral losses. And I still don't see much talk about solutions, just destructive radicalization.

Funnily enough, a lot of these 'boomer' haters love to pretend the Silent Generation or the Greatest Generation were so much better. I believe a lot of this cynicism and individualism was caused by political decisions made by those generations. Decisions like subsidizing the 30-year mortgage and the accompanying urban design plans made it more difficult to have a 'real community', the kind you would engage in politics for.

The power balance between local politics and national politics also changed with TV and the internet, which would have happened regardless of how good any 'generation' was.

