Hacker News | estebank's comments

Steak doesn't look like a cow either.

There's no reason that kind of client behavior can't be detected server side.

Detect the mouse moving in a "non-human way"? If it were that easy there'd be no hackers. And even if it were, what about wall hacks?

You can detect with high confidence that a player is aiming at something that shouldn't be visible to them. That goes for both aim bots and wall hacks. The longer they play and the more they do it, the higher the confidence. If you don't want to instaban them because you don't trust the detection enough, use it as a preselection of players to manually review.
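The detection described above can be sketched as a simple suspicion score; the `AimSample` shape, the weights, and the threshold here are illustrative assumptions, not any real anti-cheat's design:

```rust
/// One observation from server-side replay of a player's aim.
struct AimSample {
    /// Did the player track a target that server-side visibility
    /// checks say was fully occluded at that moment?
    aimed_at_occluded: bool,
}

/// Accumulate a suspicion score: tracking occluded targets raises it,
/// clean play slowly decays it. Above the threshold, queue the player
/// for manual review rather than instabanning on one event.
fn needs_review(samples: &[AimSample], threshold: f64) -> bool {
    let mut score = 0.0_f64;
    for s in samples {
        if s.aimed_at_occluded {
            score += 1.0; // strong evidence, compounds the longer they play
        } else {
            score = (score - 0.1).max(0.0); // benign play decays the score
        }
    }
    score >= threshold
}
```

The longer a cheater plays, the more occluded-target samples accumulate, which is exactly the "higher confidence over time" effect.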

Most of the time is spent figuring out what the right thing to do is, not writing the implementation. Sometimes the process of writing the implementation surfaces new considerations about what the right thing is, but still, producing text to feed to a compiler is not the bulk of the work of a software engineer. It is to unearth requirements and turn them into repeatable software.

Feels like lately most of the time is spent arguing about or at least worrying about whether or not AI is going to replace all software developers.

Or dealing with the idiotic fallout of somebody who sucks at coding or even has never coded in their life trying to make that happen.

If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive. With an LLM you don’t have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly. None of this pontificating; it’s really not that useful anymore.

> With an LLM you don’t have to try to predict a complex system in advance; experiments are so cheap you can just converge to a solution directly.

We saw a similar philosophy in TDD advocacy many years ago. Search for something like "Sudoku Jeffries" to see how that went. Then search for "Sudoku Norvig" to see what it looks like when you actually understand the problem.

The idea that you can somehow iterate your way to a solution when you have no idea where you're trying to go or even which direction your next step should be in has always seemed absurd to some of us but in the era of LLMs there's no longer any doubt. In the agentic era (can we call a few months an "era"?) I estimate that 90% or more of the writing I've read about how to use agents most effectively came down to making sure there is a clear specification for what they need to implement first and then imposing extensive guard rails to make sure their output does in fact follow that specification. It's all about doing enough design work up front to remove any ambiguity before coding the next part of the implementation and almost everyone claiming any sort of real world success with coding agents seems to have reached a similar conclusion.


This is very naive and reductive thinking. Experiments have a cost, you really have to think carefully about what you are trying to learn. Even when code is cheap, traffic and time are still huge constraints, and you better make sure your hypothesis actually makes sense for your goals, because AI is more than happy to fill in the blanks with a plausible but completely wrong proposal.

More broadly, it's well understood that experiments are not a replacement for design and UX. Google is famously great at the former and terrible at the latter. Sure the AI maxxers will say the machines are coming for all creative endeavours as well, but I'm going to need more evidence. So far, everything good I've seen come from AI still had a human at the wheel, and I don't see that changing any time soon.


Even writing code the good old way, of course we experiment. I remember the old rule "Plan to throw away the first one. You will anyway." But then there's the "second system effect" where the second system is supposedly always overengineered and trying to take every possibility into account.

And then there's the times when the quick sloppy poc you planned to throw away gets forced into production and is still impossible to change ten years down the road.

AI makes all these problems so much less painful.

I worked at a company which had a huge monolithic ERP system (their product, to be clear) with no good separation between the GUI layer and the business logic. The GUI was also dependent on an ancient version of the Borland C++ compiler. They put in a humongous effort to move to a slightly more modern UI library, and a client server architecture.

However, someone had decided that messages in xml or json were too inefficient, they already had performance issues. So they went with a binary message protocol of their own design - with no features for protocol update. Everything communicating with the server had to be on exactly the same version, or it would throw an error. So of course they very, very rarely updated the protocol.
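For contrast, a minimal sketch of the kind of "features for protocol update" that design lacked: a version byte and a length prefix up front, so an older peer can recognize a newer message and reject it cleanly instead of misparsing it. The layout and names here are my own illustration, not the company's format:

```rust
// Highest protocol version this build understands (illustrative).
const MAX_SUPPORTED_VERSION: u8 = 2;

fn encode(version: u8, payload: &[u8]) -> Vec<u8> {
    let mut msg = Vec::with_capacity(1 + 4 + payload.len());
    msg.push(version); // version byte first, so any peer can always read it
    msg.extend_from_slice(&(payload.len() as u32).to_be_bytes());
    msg.extend_from_slice(payload);
    msg
}

fn decode(msg: &[u8]) -> Result<&[u8], String> {
    let (&version, rest) = msg.split_first().ok_or("empty message")?;
    if version > MAX_SUPPORTED_VERSION {
        // An unversioned protocol would misparse here; with a version
        // byte we can fail loudly, or negotiate down to a common version.
        return Err(format!("unsupported protocol version {version}"));
    }
    let len_bytes: [u8; 4] = rest.get(..4).ok_or("truncated header")?.try_into().unwrap();
    let len = u32::from_be_bytes(len_bytes) as usize;
    rest.get(4..4 + len).ok_or_else(|| "truncated payload".to_string())
}
```

Five extra bytes per message is the entire cost of never again requiring every component to be on exactly the same version.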

I think the best help from AI will be to clean up such real life messes of soul-crushing architectural regrets. Will it do it perfectly? Certainly not, but I wouldn't do it perfectly myself either if I were forced to do it, and I'd take a hell of a lot more time.


I think you and 7e are both right. Being able to iterate some N orders of magnitude quicker is a big deal. This doesn’t eliminate design and UX. Rather, it merges it with high iteration speed to produce a form of “play”.

“Play” is what produced at least two (likely more) generations of attentive (and therefore competent) programmers. The hype around LLMs is painful, yes, but attentive human minds will ultimately bust through it.


And before long you have a solution that is made up of a thousand pieces of spaghetti that neither you nor anyone else understands. And when your solution becomes too brittle to use, cannot be maintained, or fails catastrophically, then what? Just hope that's someone else's problem?

Refactoring is cheap too, but you have to read your code and know when to stop and ask the agent to refactor, rewrite, adopt or change libs, fix issues presented by linters and code quality scanners, change abstractions and rethink the architecture.

It's never been easier to replace chunks of code with sane software patterns, but you have to have a feel for those patterns. And also understand what's under the hood.

You folks speak like the only function of the agent is to spit out code and features. Get a grip and treat your deliverables with care; otherwise you only have yourself to blame, not the AI.


Refactoring is not cheap when you take into account the cost of not breaking things.

When we say "X is cheap" it's in comparison to doing things manually, not irresponsibly.

You actually get what you ask for. And you can ask for anything, vaguely or not.

You'll end up with spaghetti if you play a bad manager and only ever allocate time for new features, never for cleanups.

You can go through the code, add REFACTOR comments based on your tastes and thoughts, then get your result and iterate to your heart's content. You just don't need to do the direct code typing.


That's the point. Your prototype doesn't need to be pretty. It just needs to prove that the value is there for it to be made pretty.

Order of operations: Make it work. Make it right. Make it fast.[0]

[0]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Unfortunately, too many developers (especially in the AI era) stop after the first item.

Stop... short of.

> If you’re spending time thinking and not experimenting, then it’s because experimentation is expensive.

No, because no amount of experimentation can solve many of the problems that have been solved by thinking. Even the claim that "experiments are cheap" requires thinking to decide which experiments to run. No one is generating all possible solutions that fit in X megabytes; you have to think to constrain the solution space.
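The "all possible solutions that fit in X megabytes" point can be made concrete with back-of-the-envelope arithmetic:

```rust
/// Number of decimal digits in the count of distinct byte strings of the
/// given size: there are 256^bytes of them, and
/// log10(256^bytes) = bytes * log10(256) ≈ bytes * 2.408.
fn digits_in_search_space(bytes: u64) -> u64 {
    (bytes as f64 * 256f64.log10()).ceil() as u64
}
```

For a single megabyte the count of candidate programs is a number roughly 2.5 million digits long; no amount of cheap experimentation enumerates that space, so thinking does the pruning.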


Well - you converge to a system, but do that by pruning what you don't want.

If you care about maintainability and quality (and I include maintaining using LLM based tools) then you need to understand what it does (in doing so you will find lots of things for it to fix - you'll probably find that the architecture it's chosen is not right for what you want too).


I've noticed that a lot of developers fall into a trap: they start a project or experiment because their mental model says the LLM makes the task effortless, but in reality they end up spending more time, because they commit themselves to projects that bring correspondingly less value, such as throwaway prototypes.

So the infinite monkeys with infinite typewriters approach.

aka "swarms". cool sounding name for.. throw yet more mud at the wall. at unprecedented scales

AI is pretty good at figuring out what the right thing to do is.

AI is pretty good at pulling from the body of existing solutions of what the right thing to do is.

I find that it often pulls a solution that is good enough for this problem today. Sometimes that is great, and other times it's just creating a pile of shit.

AI is decent at solving problems that lots of people already solved a long time ago.

Too many people believe that AI is going to come up with elegant solutions to problems that no one has ever solved before. Maybe someday, but for now it seems to be good at finding a solution that may be hidden away somewhere on Stack Overflow. If it just isn't there, then you are out of luck.

There is almost nothing new in computer programming. 99.999% of any code most of us on this forum write will be repeating patterns that have been written thousands of times before.

Tell a coding agent what your new thing needs to do, give it the absolute constraints, max response times, max failover times, and so on, tell it which technologies it has access to or could use, and then tell it to spend a lot of time going over and over the design, coming up with an initial X number of designs (I use 5), and then it must self criticise each one of them and weigh them up, narrow down to three, before finally presenting those three options to the user.

Now you read the options, understand them, realise that the AI has either converged on something very sensible, or it has missed something, so you tell it what it missed and iterate. Or it nailed something good, you pick the option you prefer, and tell it to come up with a more fleshed out high level design, describing the flow and behaviour deeply (NO CODE REFERENCES!). Then once you're happy, tell it to use that and write a comprehensive coding plan. Tell it specifically what coding patterns you prefer (you should have these in your AGENTS.md file already), what patterns to avoid (single threaded? multi-threaded? Avoid gc? How you typically deal with error conditions, etc etc).

Then have it start iteratively working on the coding plan, and it *MUST* have a strong feedback loop. If there is no feedback loop initially, I tell it to build one. It must be able to write very fluent integration tests (not just unit tests). It must be able to run the app and read the logs.

Do all this and I bet you get a better result than 80% of developers out there. Coding agents are extremely good when used well.


> The UFO nut community is being weaponized for political leverage

Always has been, at least since 1947.


Please don't, at least not mimicking the "edgy 13 year old thinks it comes across as cool" tone.

If the toolchain crashed while compiling, please file a ticket! That shouldn't happen ever and if it does it is indicative of a big problem that needs fixing.

Under that rubric, no language is memory safe.

This attitude doesn't even work in carpentry: depending on the timeframe you look at, tools have changed over time. You can still use a hand saw where a table saw would be just as suitable, or have a SawStop(tm) and reduce the likelihood of losing a finger.

In carpentry, you still do a lot of work with a hammer, which has not changed materially for the last 70 years. Programming tools did change very, very much since 1956, even though some still retain the recognizable shape (e.g. Lisp or Fortran).

I don't think there were titanium hammers 70 years ago. The changes are smaller, but they are there.

Do you recall which libraries? Use of nightly fell off a cliff after 2018. Looking at the bottom of https://lib.rs/stats#rustc-usage, ~8% of all crates.io requests came from a nightly newer than the one corresponding to 1.86. That's an upper bound, as using a nightly compiler doesn't mean that a nightly compiler was needed. The prevalence of nightly is also niche specific: if you're in embedded it is likely you need some nightly-only features that haven't been stabilized, but if you're on an OS chances are that you don't.

> That's an upper bound, as using a nightly compiler doesn't mean that a nightly compiler was needed.

To be fair it's not even a lower bound, as using a stable compiler doesn't imply the absence of nightly-only features (as in Cargo features, the ones you can enable on crates you depend on).


For the purposes of this discussion the question is not whether or not a crate exposes optional features that require a nightly compiler, but whether or not a crate makes use of the nightly compiler mandatory, which has become extremely rare in my experience. Perhaps it's more common in some embedded use cases, but if people want to make that assertion, I would ask that they either mention which libraries they're specifically talking about or which nightly features they're specifically referring to.

I think the divide is apps vs libraries: a library that requires its dependents to set an environment variable opting out of stability guarantees is unlikely to gain adoption, but applications that do so are more common, like Firefox.

> For the purposes of this discussion the question is not whether or not a crate exposes optional features that require a nightly compiler, but whether or not a crate makes use of the nightly compiler mandatory

In my opinion what matters is the functionality. If it's provided by a nightly-only crate or as a nightly-only feature of an otherwise non-nightly-only crate it doesn't really matter.

But I agree that this is becoming more and more rare.
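For illustration, the usual pattern crates use to keep nightly optional looks roughly like this; the feature name and the SIMD example are assumptions, not from a specific crate:

```rust
// Cargo.toml would declare:      [features] nightly = []
// and the crate root would add:  #![cfg_attr(feature = "nightly", feature(portable_simd))]

#[cfg(feature = "nightly")]
fn sum(xs: &[f32]) -> f32 {
    // Unstable std::simd path; this item is compiled out on stable builds.
    use std::simd::prelude::*;
    let (chunks, rest) = xs.as_chunks::<4>();
    chunks.iter().map(|c| f32x4::from_array(*c)).sum::<f32x4>().reduce_sum()
        + rest.iter().sum::<f32>()
}

#[cfg(not(feature = "nightly"))]
fn sum(xs: &[f32]) -> f32 {
    // Portable fallback with the same signature: compiles on stable.
    xs.iter().sum()
}
```

Callers get the same API either way; the nightly requirement only appears when a dependant explicitly enables the feature, which is why the functionality (rather than the crate) is the right thing to count.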


In many countries the state owned the phone company.
