Hacker News

> Author's note: From here on, the content is AI-generated

Kudos to the author for their honesty in admitting AI use, but this killed my interest in reading this. If you can use AI to generate this list, so can anyone. Why would I want to read AI slop?

HN already discourages AI-generated comments. I hope we can extend that to include a prohibition on all AI-generated content.

> Don't post generated comments or AI-edited comments. HN is for conversation between humans.




If the author had also included a note explaining that he'd *reviewed* what the AI produced and checked it for correctness, I would be willing to trust the list. As it is, how do I know the `netstat` invocation is correct, and not an AI hallucination? I'll have to check it myself, obviating most of the usefulness of the list. The only reason such a list is useful is if you can trust it without checking.

How would you know the invocation is correct when written by a human? Don’t humans make mistakes?

Sure, humans make mistakes... but rarely, vanishingly rarely about commands they use often. Are you going to make a non-typo kind of mistake when typing `ls -l`? AI hallucinations don't happen all the time, but they happen so much more often than "vanishingly rarely".

That's why you can't just vibe-code something and expect it to work 100% correctly with no design flaws, you need to check the AI's output and correct its mistakes. Just yesterday I corrected a Claude-generated PR that my colleague had started, but hadn't had time to finish checking before he went on vacation. He'd caught most of its mistakes, but there was one unit test that showed that Claude had completely misunderstood how a couple of our services are intended to work together. The kind of mistake a human would never have made: a novice wouldn't have understood those services enough to use them in the first place, and an expert would have understood them and how they are supposed to work together.

You always, always, have to double-check the output of LLMs. Their error rate is quite low, thankfully, but on work of any significant size their error rate is pretty much never zero. So if you don't double-check them then you're likely to end up introducing more bugs than you're fixing in any given week, leading to a codebase whose quality is slowly getting worse.


When I come across that kind of content, my first reaction is to close it; it's the kind of low-effort content that's everywhere nowadays.

Unfortunately, at work it isn't as easy, with all the KPIs tied to taking advantage of AI to "improve" our work.


I could've done better with the research, but this post had been collecting dust in my drafts, so I decided to use AI for the first (and last) time to finish the work I started a few months ago.

Why should you learn anything if you can just use AI to look it up? For fun is one reason.


