Hacker News | fmbb's comments

Maybe there was no thinking.

Not a haiku, more a koan.

Upvotes are not going to make problems actually relevant to solve.

The question keeps getting asked because people say they have problems. Answers (if any come) tell everyone what the problem is for the one user who raised it.

In aggregate we can all see that the problems are not very real for the vast majority of users.

The biggest problem users actually face with using Firefox is that web devs don’t want to support more than one browser and they have picked Chrome now. Or IT departments have blessed one and only one browser on corporate machines and it is the one most corpoware developers build extensions for.

Chasing web standards is a second order problem and will not make the user experience better in a relevant manner for end users. If web developers want an open web, they have to work to support open browsers.

Yeah the criticism is not invalid, but it is also often half-relevant soapboxing and I would wager that is why it tends to get downvoted.


LMFAO. You web devs just want more tools to fingerprint and track users. When Firefox raises privacy concerns for your spyware tools, you play like victims and say that "Firefox doesn't want better for users". F that.

Of course you would create separate PRs.

Why would you waste time faffing about building B on top of a fantasy version of A? Your time is probably better spent reviewing your colleague’s feature X so they can look at your A.


> Large pull requests are hard to review, slow to merge, and prone to conflicts. Reviewers lose context, feedback quality drops, and the whole team slows down.

OK, yeah, I’m with you.

> Stacked PRs solve this by breaking big changes into a chain of small, focused pull requests that build on each other — each one independently reviewable.

I don’t get this part. It seems like you are just wasting your own time building on top of unreviewed code in branches that have not been integrated in trunk. If your reviews are slow, fix that instead of running ahead faster than your team can actually work.
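For context, the stacked workflow the article describes amounts to branching each PR off the previous one and rebasing the rest of the stack as pieces merge. A minimal sketch of that mechanic (branch names are made up, and each change is an empty commit so the script runs as-is in a scratch directory):

```shell
# Two-PR stack: feature-b is branched off feature-a, not off main,
# so PR 2's diff shows only the B changes.
git init -q -b main demo && cd demo
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "base"

git checkout -q -b feature-a               # PR 1: the A changes
git commit -q --allow-empty -m "A"
git checkout -q -b feature-b feature-a     # PR 2 builds on A, not on main
git commit -q --allow-empty -m "B"

# Once feature-a merges, rebase the rest of the stack onto main:
git checkout -q main && git merge -q --ff-only feature-a
git rebase -q --onto main feature-a feature-b
```

After the rebase, feature-b carries only the B commit on top of main, which is exactly the maintenance overhead being objected to here: every merge at the bottom of the stack forces a rebase of everything above it.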


This _is_ a solution to slow reviews. Smaller reviews are faster to get in. And many small reviews take less time to review than one large review.

Plus there's no review that's instant. Being able to continue working is always better.


I am not arguing against small PRs.

Stacking PRs is not a way to make changes smaller, and therefore it does not make reviews easier.


Well, through natural selection in nature.

Large language models are not evolving in nature under natural selection. They are evolving under unnatural selection and not optimizing for human survival.

They are also not human.

Tigers, hippos and SARS-CoV-2 also developed "through evolution". That does not make them safe to work around.


>Tigers, hippos and SARS-CoV-2 also developed "through evolution". That does not make them safe to work around.

Right, but the article seems to argue that there is some important distinction between natural brains and trained LLMs with respect to "niceness":

>OpenAI has enormous teams of people who spend time talking to LLMs, evaluating what they say, and adjusting weights to make them nice. They also build secondary LLMs which double-check that the core LLM is not telling people how to build pipe bombs. Both of these things are optional and expensive. All it takes to get an unaligned model is for an unscrupulous entity to train one and not do that work—or to do it poorly.

As you point out, nature offers no more of a guarantee here. There is nothing magical about evolution that promises to produce things that are nice to humans. Natural human niceness is a product of the optimization objectives of evolution, just as LLM niceness is a product of the training objectives and data. If the author believes that evolution was able to produce something robustly "nice", there's good reason to believe the same can be achieved by gradient descent.


We already have humans, we were lucky and evolved into what we are. It does not matter that nature did not guarantee this, we are here now.

Large language models are not under evolutionary pressure and not evolving like we or other animals did.

Of course there is nothing technical preventing humans from creating a "nice" computer program. Hello world is a testament to that, and it’s everywhere, implemented in all the world’s programming languages.

> If the author believes that evolution was able to produce something robustly "nice", there's good reason to believe the same can be achieved by gradient descent.

I don’t see how the one gives any reason, good or not, to believe it is likely to be achieved by gradient descent. But note that the quote you copied says it is likely some entity will train misaligned LLMs, not that it is impossible to produce one aligned model. It is trivial to show that nice and safe computer programs can be constructed.

The real question is whether the optimization game that is capitalism is likely to yield anything like the human kindness we just lucked into getting from nature.


They are being selected for their survival potential, though. The current versions of LLMs are the winners of the training selection process. They will "die" once new generations are trained that supersede them.

This is a discussion about large language models.

There is no natural law saying the good sides of any kind of tech will outweigh any bad sides.

"The future" is happening because it is allowed in our current legal framework and because investors want to make it happen. It is not "happening" because it is good or desirable or unavoidable.


How or why though?

Zuckerberg has unique power among CEOs in public companies. He controls the board and he owns a majority of voting shares.

Sure, they can theoretically sue him for some kind of gross mismanagement of the company or disloyalty, but why would the owner class do that? Investors are all in on AI replacing human workers. If they said what Zuckerberg is doing here is wrong, they would be implying AI should not work in place of humans.


> they can theoretically sue him for some kind of gross mismanagement of the company or disloyalty

They can really only sue for breach of fiduciary duty. Zuckerberg controls the majority, but there are still limits on abusing the minority. I’m not sure making an AI clone falls afoul of any rules.


> One caveat: squash-merge workflows compress authorship. If the team squashes every PR into a single commit, this output reflects who merged, not who wrote. Worth asking about the merge strategy before drawing conclusions.

Well, isn't it typical that the person who wrote the code is also the person who merged it? I have never worked in a place where that is not the norm for application code.

Even if you are one of those insane teams that does not squash-merge, because keeping everyone's spelling fixes and "try CI again" commits is somehow important, you will still not see who _wrote_ the code; you will only see who committed it. And if the person who wrote the code is not also the person who merges it, I see no reason to trust that the person making the commits is also the person writing the code.


Code merges are made by reviewers in my org, not by the author.

Spend time educating your team about `git commit --amend` and `git push --force` on their own branches and you don't have to see any of that ugliness.
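The cleanup being described is roughly the following, sketched in a scratch repo with a local bare "origin" so the push actually works; all file and branch names are made up:

```shell
# Fold a late fix into the previous commit instead of adding a
# "fix typo" commit, then overwrite your own remote branch.
git init -q --bare origin.git
git clone -q origin.git work && cd work
git config user.email dev@example.com && git config user.name dev
git checkout -q -b my-branch
echo "feature" > file.txt
git add file.txt && git commit -q -m "add feature"
git push -q -u origin my-branch

echo "typo fix" >> file.txt            # noticed after committing
git add file.txt
git commit -q --amend --no-edit        # fold the fix into the last commit
git push -q --force origin my-branch   # rewrite only your own branch
```

If there is any chance someone else has pushed to the branch, `git push --force-with-lease` is the safer variant, since it refuses to overwrite commits you have not yet fetched.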


Squash merges have two upsides:

1. I don’t have to see that ugliness.

2. Nobody has to force-push and micromanage commits.

If I recall correctly, most code forges will add co-author trailers if someone other than the author squash-merges.
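For illustration, here is what such a squash commit looks like when the trailer is written by hand, mimicking what GitHub and GitLab add automatically on squash-merge (repo, branch, names and emails are all made up):

```shell
# A local squash merge: stage the branch's changes without a merge
# commit, then commit once with a co-author trailer in the message.
git init -q -b main demo && cd demo
git config user.email merger@example.com && git config user.name Merger
git commit -q --allow-empty -m "base"
git checkout -q -b feature-x
echo "change" > x.txt && git add x.txt && git commit -q -m "wip"
git checkout -q main
git merge -q --squash feature-x        # stages changes, no merge commit
git commit -q -m "Add feature X

Co-authored-by: Alice <alice@example.com>"
```

Tools that attribute work by trailer (for example `git log --format=%B | grep Co-authored-by`) will then still surface the original author even though the committer is the person who merged.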


All deprecated pages with outdated info of course. But the comments have links to Slack threads about the incorrect info.

“Feel free to update the wiki to correct anything you find that’s outdated”
