Hacker News | rstuart4133's comments

> But there's the reflog in git which is the ultimate undo tool.

That one sentence outs you as someone who isn't familiar with JJ.

Here is something to ponder. Despite claims to the contrary, there are many git commands that can destroy work, like `git reset --hard` with uncommitted changes in the working tree. The reflog won't save those. However, there is literally no JJ command that can't be undone, so no JJ command will destroy your work irretrievably.


I’ve just tested that exact command, and the reflog is storing the changes. It's different from the log command, which displays the commit tree for the specified branch. The reflog stores information about operations that update branches and other references (rebase, reset, amend, commit, …). So I can revert the reset, or a pull.
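To make the disagreement concrete, here's a minimal sketch in a throwaway repo showing the reflog recording and reversing a `git reset --hard`. Note it only protects commits; uncommitted working-tree changes are not in the reflog.

```shell
# Throwaway demo repo; identity set via env so commits work anywhere.
cd "$(mktemp -d)"
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
git init -q repo && cd repo
git commit -q --allow-empty -m "first"
git commit -q --allow-empty -m "second"

git reset -q --hard HEAD~1      # "second" vanishes from `git log`...
git reflog -n 2                 # ...but the reflog still records both moves
git reset -q --hard 'HEAD@{1}'  # jump HEAD back to its pre-reset position
git log -1 --format=%s          # prints "second"
```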

> Zellij is close to 50 megabytes,

That's a Rust thing. It's what happens when you statically link everything and monomorphise every generic.


And the Rust practice of every project needing at least 300 crates. It's slowly getting to JS levels of insanity.

One of the reasons I went with Zig for now.

It's true that we often split software engineering into two pieces: doing the design and implementing the code. In fact I often prefer to think of it as computer systems engineering, because the design phase often goes well beyond software. You have to think about what networks are used, what the hardware form factor should be, even how many check digits should be put on a barcode.

But then you go on to say this:

> But that is not programming then? Doing voice recognition in the 90s, missile guidance systems, you name it, those are hard things, but it's not the "programming" that's hard. It's the figuring out how to do it. The algorithms, the strategy, etc.

That implies LLMs can't help with design of something that needs voice recognition and missile guidance systems. If that's your claim, it's wrong. In fact they are far better at being sounding boards and search engines than they are at coding. They make the same large number of mistakes in design tasks as they do in software coding, but in design tasks the human is constantly in the loop, examining all of the LLM's output so the errors don't matter so much. In vibe coding the human removes themselves from the loop to a large extent. The errors become shipped bugs.


They can help with those tasks because there are decades of published research on them. I don't think LLMs change anything here. Even before LLMs, it wouldn't have been efficient to ignore readily available lessons from those who solved similar problems before. As you put it, LLMs can be great search engines. It's not that they changed the task.

Whether you're solving a problem that's hard today because there aren't many resources available, or something well discussed that you can research with Google or an LLM, I don't think it changes anything about their argument: once you know what to do, actually turning it into working code is comparatively mundane and unchallenging, and always has been to some degree.


I suspect this horse has bolted. When I see a photo on a website and I want to know where it's taken, I assume it's been stripped already and ask an AI. The accuracy is uncanny.

Since it's a rare website that leaves the EXIF data intact, I guess this is aimed at apps harvesting photos on the device itself. I hope Firefox gets a new site permission that allows you to upload photos with the EXIF intact, because that's often what I want. But that won't happen for a while, and until apps do get their permissions updated it's going to be annoying. It will be a right proper PITA to discover later that the EXIF data is gone from the photos you transferred to your laptop.
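As a rough heuristic you can check whether a transferred JPEG still carries EXIF: the metadata lives in an APP1 segment that begins with the ASCII string "Exif" near the top of the file (`photo.jpg` below is a hypothetical filename):

```shell
# Rough heuristic: look for the "Exif" marker string in the first 64 KB.
# Its absence is a strong hint the metadata has been stripped.
if head -c 65536 photo.jpg | grep -qa Exif; then
    echo "photo.jpg: EXIF present"
else
    echo "photo.jpg: EXIF stripped (or never there)"
fi
```

This is only a hint, not a parse of the JPEG structure; a proper check would walk the segment markers.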


There is pushback here against the figures you are quoting.

Here is something real: South Australia's electricity production averaged 75% from renewables in 2025. Wikipedia (for 2023) put it at 70%: "70 per cent of South Australia's electricity is generated from renewable sources. This is projected to be 85 per cent by 2026, with a target of 100 per cent by 2027." https://en.wikipedia.org/wiki/Energy_in_South_Australia

South Australia has no hydro to speak of. They have some local gas, but no local coal. They do have good wind and solar resources. To me it looks like the transition was driven largely by immediate pragmatic concerns, as renewables are so much cheaper than gas. The politicians make a lot of noise about it of course, but I suspect if they had a local cheap source of coal the outcome would have been different.

Their electricity prices are high by Australian standards - but they have to pay for the gas they import to cover the missing 25%, and gas is by far the most expensive form of generation in Australia. And they are paying for all the new equipment this transition requires.


As someone else said, jj comes into its own when a reviewer insists you split your PR into many commits, because they don't want to review 13k lines in one chunk. In that case it is easier because there is no explicit rebase step. To change a PR commit in the middle of the stack you check out that commit, edit it - and done. The rebase happened automagically.

Notice I didn't say "edit it, commit, and done" because that's another thing you don't do in jj - commit. I know, `git commit` is just a few characters on the cli - but it's one of several git commands you will never have to type again because jj does it without having to be asked.

If the rebase created a merge conflict (it could do so in any PR commit above the one you edited) - it's no biggie because jj happily saves merge commits with conflicts. You just check it out, and edit to remove the conflict.

Jj does grow on you over time. For example, when you start with jj you end up in the same messes you did as a git beginner, when you recovered with 'rm -r repository', followed by 'git clone git@host/repository.git'. Then you discover 'jj op restore', which compared to git's reflog is a breath of fresh air. And while you might at first find yourself chafing at the loss of git staging, you gradually get comfortable with the new way of working - then you discover `jj evolog`, and it's "omg that's far better than staging". Ditto with workspaces vs worktrees, and just about everything else. It might be merely difficult to lose work with a bad git command, but it is actually impossible to lose work with a jj command.

It is a steep learning curve. We are talking months to use it fluently instead of treating it as git with better porcelain. If all you ever do is work with one commit at a time, it's a lot of effort for not a lot of return. But as soon as you start managing stacks of changes, duplicating them, splicing them, it makes you feel like a god.

That said, if you are starting out - I'd suggest starting with jj instead of git. You've got to go through a learning curve anyway. You may as well do it with the kinder, gentler, more powerful tool.


> That said, if you are starting out - I'd suggest starting with jj instead of git

That wouldn't be my advice if you're going to work with other people. You can't know jj without knowing git well enough to fall back on it when needed.


I got a similar response. It looked wrong on several levels. So I asked it whether it knew the current time, and whether it had learnt when I retire.

It claimed it didn't know either.


My own version of this is that I wanted bank transactions in CSV format. However, the transactions were more than a year old, and the bank only provides recent transactions in a downloadable form. They do, however, provide statements in PDF format going back indefinitely. But the objects in the PDF are arranged in a way that made pdftotext output near-indecipherable.

I thought I'd give Gemini a go. When I uploaded the 18-page PDF, it complained the output exceeded some limit. So I used pdftk to break it up into 4-page chunks, which seemed to work - the output looked very good and passed a couple of spot checks. But I don't trust these things as far as I can kick them.

There was a transaction column and a running balance column, so I did a quick check to see if every new balance equalled the previous one plus the transaction. And it almost always did. There were a couple of errors I put down to transcription errors. I was wrong. I eventually twigged that these errors only happened where I had split the PDFs. After tracking where the balance first went wrong, it became evident it had dropped chunks of lines, duplicated others, and misaligned the transaction and balance columns. It was complete rubbish, in other words.

So why did my balance check show so few errors? I put that down to it knowing what a good bank statement looked like. A good bank statement adds up. So it adjusted all the balances so it looked like a real bank statement. I also noticed these errors got more frequent in later pages. I tried splitting the PDF into single pages and loading them into the model one at a time. That didn't help much for the later pages, but the first one was usually good. So then I loaded each page into a fresh context, with a fresh prompt. If that didn't produce something that balanced, the second go always did.

I'm not sure it saved time over doing it manually in the end. It's a tired analogy now, but it's true: at their heart, these things are stochastic parrots. They almost never produce the same output twice when given the same input. Instead, they produce output that has a high probability of following the input tokens supplied. If there is only one correct output but the output is small enough, the odds are decent they will get it right. But once the size grows, the odds of it outputting complete crap become a near certainty.
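For the curious, the balance check itself was simple. Here's a sketch against a made-up four-column CSV (`date,description,amount,balance` - the column layout and `statement.csv` name are invented for illustration), with one deliberately corrupted row:

```shell
# Made-up sample statement with a deliberately broken last row.
cd "$(mktemp -d)"
cat > statement.csv <<'EOF'
date,description,amount,balance
2024-01-01,opening,0.00,100.00
2024-01-02,coffee,-4.50,95.50
2024-01-03,salary,10.00,200.00
EOF

# Flag every row whose balance != previous balance + amount.
awk -F, 'NR > 1 {
    if (seen && sprintf("%.2f", prev + $3) != sprintf("%.2f", $4))
        printf "row %d: expected %.2f, got %s\n", NR, prev + $3, $4
    prev = $4; seen = 1
}' statement.csv    # prints: row 4: expected 105.50, got 200.00
```

Any row the check flags marks the point where the transcription first went off the rails - which is how the dropped and duplicated lines showed up.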


> “good” API design (highly subjective)

A good API does two things. Firstly, it DRYs the code out. Many APIs start life doing only that, as a collection of routines you get tired of writing over and over again. Secondly, the functions are designed in a way that reduces the need to share information. Typically they do that by hiding a whole pile of details in their implementation, so knowledge of those details is all in one place rather than scattered across a code base. APIs that don't do that well are often called "leaky", or "leaky abstractions".

Perhaps a good API has other more subjective attributes but it must have those two. LLMs suck at both. You can see that in the comments here, when people say they write verbose code. It's verbose because the LLM didn't go looking for duplicated functionality - if it needed it, it just put the code where it was focused on at the time.

If they are bad at DRY then I need a better superlative to describe how they fare at respecting the isolation boundaries that underpin good module design. As far as I can tell, they have no idea about the concept or how to implement it. Let loose, they are like a bull in a china shop, breaking one boundary after another.
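To make the two properties concrete, here is a hypothetical shell sketch (`archive_log` and `LOG_ARCHIVE_DIR` are invented names): several scripts that each repeated the same compress-and-file dance get one function that both DRYs them out and hides the details.

```shell
# Before: every script repeated the "date-stamped gzip into the archive dir"
# dance. After: the repetition is extracted (DRY), and the directory layout
# and compression scheme become hidden implementation details.
archive_log() {
    # Callers never learn where archives live or how they are compressed.
    local dest="${LOG_ARCHIVE_DIR:-/var/log/archive}/$(date +%F)-$(basename "$1").gz"
    mkdir -p "$(dirname "$dest")"
    gzip -c "$1" > "$dest"
    echo "$dest"        # the one thing callers need back
}
```

A caller just writes `archive_log app.log`. If the archive layout or compression changes later, only this function changes - that's the information-hiding half, and it's exactly the boundary an LLM tends to bulldoze by inlining the `gzip` line wherever it happens to be working.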


> I suggest reading up on wifi and RF before going further.

I'd suggest neither matters in the face of how the problem is solved in the consumer cards the OP was talking about. They solve it by locking down the firmware that controls the radios.

The reality is most routers do that too. You can replace the firmware in most of them with OpenWRT or something similar. You still can't exceed regulatory limits because of the signed blobs of firmware in the radios.

Nonetheless, here we are getting comments like yours, which imply all firmware in the device must be behind a proprietary wall because a relatively small blob of firmware inside it must be protected. That blob has its own protections. It doesn't need to be protected by the OS or the applications that run on top of it.

Yet it's in those applications where most of the vulnerabilities show up. Making them consumer-replaceable would help solve the problem. Protecting the firmware is not a good reason not to do it.


I was responding to the original post about open standards. My point is that anything with an RF transceiver will never be as open as a standard PC with replaceable components. The radio portion will always be blocked off. That relatively small blob will always limit how much control you can exert over the device.

We don't have to look far. The embedded space with Arduinos, ESP32s and even RPis is a hacker's paradise. Yet the radio stack is restricted in all of them. For instance, it's not possible to take an ESP32 board and turn its single antenna into a MIMO configuration, even if you make a custom PCB with trace antennas.


> My point is that anything with an RF transceiver will never be as open as a standard PC with replaceable components. The radio portion will always be blocked off.

Sure, but again, why would the RF transceiver on my desktop PC or in my laptop be any different from the one in my router?


This topic about how to turn anything into a router is tangentially related: https://news.ycombinator.com/item?id=47574034

