Hacker News | CoastalCoder's comments

> libruls

Mocking regional accents doesn't really help the conversation.


It's a term much like freedumb, mocking their celebration of ignorance, not their accent.

"Freedumb" is not as benign as you think it is either. If you're aspiring to meaningful commentary rather than social media upvotes, I would advise avoiding both.

In my opinion the people who are deliberately destroying science and education should be prosecuted for literal textbook treason, not merely mocked.

These are the "fuck your feelings" people whose feelings you're worrying about.


> who are deliberately destroying science...

But this is subjective. What you call "science" might be pseudoscience to someone else. As an example, a decade or so back, following and trusting peer-reviewed research was "scientific", but even back then I thought it was a stupid, unscientific thing to do. Today the problems with the peer-review process are pretty widely acknowledged, but back then I would have been considered unscientific for not fully trusting peer-reviewed research. People also used to say things like "the science is settled" and "trust the experts", which are among the most unscientific things one could possibly say.

So since a lot of unscientific things are being called "science" these days, I think this is very subjective.


It's the feelings of uninformed people who don't yet know what's going on that I'm worrying about. Saying things like "libruls" and "freedumb" makes it harder to build the coalition which we'll need to prosecute the perpetrators.

Really makes one appreciate that concept of A.C.I.D. database transactions.

I'd usually say Pop_OS!

But my recent upgrade to Pop version 24.04 has been a bit of a step back in terms of desktop experience.

I suspect it's growing pains from (switching to Wayland) + (non-System76 hardware) + (laptop with nVidia dGPU + external monitor).

So with different hardware, and/or some more time to mature, this Pop release will probably be a very solid choice.


Is there any reason to believe that the same carpet-pull won't occur with those brands?

I thought the whole trick was arbitrage on the delayed awareness of reduced quality.


> Is there any reason to believe that the same carpet-pull won't occur with those brands?

No, but nothing's forever. The important piece of information is "is this brand good, right now, when I'm looking to make a purchase."


> The important piece of information is "is this brand good, right now, when I'm looking to make a purchase."

Right, which is the very thing that makes branding less than useful. You have to research everything before every purchase regardless of the brand precisely because the brand is no longer a good indicator of quality. That means that the brand doesn't mean much. Just because a brand signified high-quality goods yesterday doesn't mean it signifies the same today.


> Always include some randomness in test values.

If this isn't a joke, I'd be very interested in the reasoning behind that statement, and whether or not there are some qualifications on when it applies.


humans are very good at overlooking edge cases, off-by-one errors, etc.

so if you generate test data randomly you have a higher chance of "accidentally" running into overlooked edge cases

you could say there is an "adding more randomness -> cost" ladder, like

- no randomness, no cost, nothing gained

- a bit of randomness, very small cost, very rarely beneficial (<- doable in unit tests)

- (limited) prop testing, high cost (test runs multiple times with many random values), decent chance to find incorrect edge cases (<- can be barely doable in unit tests, if limited enough; often feature-gated as too expensive)

- (full) prop testing/fuzzing, very very high cost, very high chance incorrect edge cases are found IFF the domain isn't too large (<- a full test run might need days to complete)
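To make the "a bit of randomness" rung concrete, here's a minimal sketch in Python (the `clamp` subject is hypothetical): a handful of random inputs per run, cheap enough for an ordinary unit-test suite.

```python
import random

def clamp(x, lo, hi):
    """Subject under test: restrict x to the range [lo, hi]."""
    return max(lo, min(x, hi))

def test_clamp_with_a_bit_of_randomness():
    # A few random inputs per run: very small cost,
    # occasionally trips an overlooked edge case.
    for _ in range(10):
        x = random.randint(-1000, 1000)
        lo = random.randint(-100, 0)
        hi = random.randint(1, 100)
        result = clamp(x, lo, hi)
        assert lo <= result <= hi      # result always lands in range
        if lo <= x <= hi:
            assert result == x         # in-range values pass through

test_clamp_with_a_bit_of_randomness()
```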


I've learnt that if a test only fails sometimes, it can take a long time for somebody to actually investigate the cause; in the meantime it's written off as just another flaky test. If there really is a bug, it will probably surface in production sooner than it gets fixed.

Flaky tests are a very strong signal of a bug, somewhere. Problem is it's not always easy to tell if the bug's in the test or in the code under test. The developer who would rather re-run the test to make it pass than investigate probably thinks it's the test which is buggy.

sadly yes

people often take flaky tests way less seriously than they should

I had multiple bigger production issues which had been caught by tests >1 month before they happened in production, but were written off as flaky tests (ironically this was also not related to any random test data but more to load/race-condition-related things, which failed when too many tests which created full separate tenants for isolation happened to run at the same time).

And in some CI environments flaky tests are too painful, so using "actual" random data isn't viable and a fixed seed has to be used on CI (that is, if you can, because too many libs/tools/etc. do not allow that). At least for "merge approval" runs. That many CI systems suck badly the moment your project and team size isn't around the size of a toy project doesn't help either.


> it's written off as just another flaky test

So don't do that. That's bad practice. The test has failed for a reason and that needs to be handled.


Can't one get randomness and determinism at the same time? Randomly generate the data, but do so when building the test, not when running the test. This way something that fails will consistently fail, but you also have better chances of finding the missed edge cases that humans would overlook. Seeded randomness might also be great, as it is far cleaner to generate and expand/update/redo, but still deterministic when it comes time to debug an issue.

Most test frameworks I have seen that support non-determinism in some way print the random seed at the start of the run, and let you specify the seed when you run the tests yourself. It's a good practice for precisely the reasons you wrote.
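A minimal sketch of that pattern (the `TEST_SEED` variable name is my own invention): print the seed on every run, and let an environment variable pin it when replaying a failure.

```python
import os
import random

def make_test_rng():
    # Reuse a seed from the environment when replaying a failure;
    # otherwise pick a fresh one. Always print it so any failing
    # run can be reproduced exactly.
    seed = int(os.environ.get("TEST_SEED", random.randrange(2**32)))
    print(f"TEST_SEED={seed}")
    return random.Random(seed)

rng = make_test_rng()
data = [rng.randint(0, 100) for _ in range(5)]
# With the same TEST_SEED, `data` is identical on every run.
```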

Absolutely for things like (pseudo) random-number streams.

Some tests can be at the mercy of details that are hard to control, e.g. thread scheduling, thermal-based CPU throttling, or memory pressure from other activity on the system.


There's another good reason that hasn't been detailed in the comments so far: expressing intent.

A test should communicate its reason for testing the subject, and when an input is generated randomly, it clearly communicates that this test doesn't care about the specific _value_ of that input; it's focussed on something else.

This has other beneficial effects on test suites, especially as they change over the lifetime of their subjects:

* keeping test data isolated, avoiding coupling across tests

* avoiding magic strings

* as mentioned in this thread, any "flakiness" is probably a signal of an edge case that should be handled deterministically

* it's more fun [1]

[1] https://arxiv.org/pdf/2312.01680


Must be some Mandela effect about some TDD documentation I read a long time ago.

If you test math_add(1,2) and it returns 3, you don't know if the code does `return 3` or `return x+y`.

It seems I might need to revise my view.
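That concern can be made concrete with a deliberately broken sketch (the hard-coded implementation is hypothetical, of course):

```python
import random

def math_add(x, y):
    # Deliberately wrong: hard-coded result instead of x + y.
    return 3

# A fixed test cannot tell `return 3` from `return x + y`:
assert math_add(1, 2) == 3

# A randomized test exposes it (inputs chosen so the sum is never 3):
x, y = random.randint(10, 100), random.randint(10, 100)
assert math_add(x, y) != x + y  # the hard-coded value is caught
```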


I vaguely remember the same advice, it's pretty old. How you use the randomness is test specific, for example in math_add() it'd be something like:

  jitter = random.randint(0, 5)
  assertEqual(3 + jitter, math_add(1, 2 + jitter))
If it was math_multiply(), then adding the jitter would fail; it would have to be multiplied in.

Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.
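For the multiplicative case the same idea looks like this (math_multiply is hypothetical; the jitter is folded in as a factor rather than a term):

```python
import random

def math_multiply(x, y):
    return x * y

jitter = random.randint(1, 5)
# Additive jitter would break these; it has to be multiplied in:
assert 3 * jitter == math_multiply(1, 3 * jitter)
assert 2 * 4 * jitter == math_multiply(2 * jitter, 4)
```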


> it's pretty old.

Damn, must be why only white hair is growing on my head now.

>Nowadays I think this would be done with fuzzing/constraint tests, where you define "this relation must hold true" in a more structured way so the framework can choose random values, test more at once, and give better failure messages.

So the concept of random is still there but expressed differently? (= Am I partially right?)


Yes, the randomness is still there but less manually specified by the developer. Also, I haven't actually used it myself, only seen stuff on it before, so I had the wrong term: it's "property-based testing" you want to look for.

Here's an example with a python library: https://hypothesis.readthedocs.io/en/latest/tutorial/introdu...

The strategy "st.lists(st.integers())" generates a random list of integers that get passed into the test function.

And also this page says by default tests would be run (up to) 100 times: https://hypothesis.readthedocs.io/en/latest/tutorial/setting...

So I'm thinking... (not tested)

  @given(st.integers(), st.integers())
  def test_math_add(a, b):
      assert a + b == math_add(a, b)
...which is of course a little silly, but math_add() is a bit of a silly function anyway.

Randomness is useful if you expect your code to do the correct thing with some probability. You test lots of different samples and if they fail more than you expect then you should review the code. You wouldn't test dynamic random samples of add(x, y) because you wouldn't expect it to always return 3, but in this case it wouldn't hurt.

This sounds like the idea behind mutation testing

I fear it's going to take more than just one other president.

Now we've all seen what one bad POTUS can do to the world, and I don't know if/how/why the world would truly move past that.

It reminds me of the Twilight Zone episode "The Shelter" [0].

[0] https://en.wikipedia.org/wiki/The_Shelter_(The_Twilight_Zone...


My dear friends to the north: I just want to repeat how sorry many of us are for this.

And that some of us are trying to change the situation. My reps have heard from me multiple times.

Ah yes, strongly worded letters. That is the way to fix things. They must be trembling in fear at the weight of your words.

What do you suggest?

* focus locally; get involved in local politics by supporting local candidates with your time and effort - the State Department runs programs to talk to city and state officials concerning foreign policy matters, and cities and local governments can create pressure on federal representatives from those states.

* vote with your wallet; boycotts and divestments are tools ordinary people have to affect conglomerates. Ensure your retirement money is not invested with companies engaging in political ideas you do not agree with.

* protest; attending in person events shows leaders numbers and images that are harder to ignore than their consultants’ polling data.


I've done all of those, and while I think they are important, I believe it's most important to let politicians know; otherwise they rely too much on money.

Pointing out that whatever people think they are doing is not working does not mean we have to propose a solution. I'd suggest revolution, but that won't ever happen in the US.

These really feel hollow. Just like "thoughts and prayers." ACK.

What would you consider more appropriate?

I'm not willing to start another actual civil war over Trump's presidency.

I figured an apology was at least an improvement over not apologizing.


FWIW, I'm a Canadian and I do appreciate it. There's a lot of raw feelings up here, but I know there's only so much any individual can do.

I apologize as well; however, they need to diversify. They can't count on the USA.

I had the same thought.

But can we really rule out it being part of such a strategy?


Yeah, but there was a spoof on that (in Family Guy?). It was a tie in to the movie "I Know what you Did last Summer", IIRC.

Sometimes this happens when the original comment being replied to is subsequently edited.
