Hacker News | SebastianKra's comments

Textbook marketing speak: “Don't you want more relevant ads?”.

It assumes that “ads” = useful information, but that's rare at best. Most ads focus on stealing your attention and creating a fear of missing out. NordVPN isn't educating you. They just manufacture a need and then hope that you won't invest time in researching a better option.

Why would I give them more leverage to do that?


I genuinely would rather see ads for products I might like but don't yet know exist than purely random ads. I don't understand why a person wouldn't.

Is it that rare? Sure, there's no advertising profile for "hates VPN ads", but e.g. an adult male doesn't want ads for women's period pain medication, and similarly an adult woman doesn't want ads for testosterone or other male-coded enhancement products. Then you get into niche interests like fishing or sewing or 3D printing.

You're conflating correctly targeted ads and useful information.

If you sell gambling ads to an addicted gambler, the gambler doesn't get useful information.

Niche interests might get a pass. But then again: if I'm getting an ad for a 3D printing product on a 3D printing review site, it's very likely that the advertised product isn't actually the best and is just being artificially pushed on me.


Have they, by chance, also fixed the issue where macOS's SMB implementation is unusably slow when copying many small files?

A backup of my 2TB MacBook literally takes weeks.


What I have done to maintain the integrity of my Time Machine backups (to UnRAID, via SMB):

For the "sparsebundles break" issue:

* Back up to multiple targets. I use both mbentley's Time Machine Docker image (only one backup per source machine) and UnRAID's built-in Time Machine functionality (multiple backups of same machine allowed).

* Use spaceinvader1's macinabox Docker image to have a local way to `fsck_apfs` the above sparsebundles.

* When one irreparably breaks, delete it and replace it with a copy of a working one from another of the above targets.

For the "backups are incredibly slow" issue:

* One of the above targets is to an SSD.

* Use TheTimeMachineMechanic's "Speed" option after a backup to determine the slow spots. Look at patterns in "Current:" lines. Piping the output to an LLM is very helpful here.


The discussion around async await always focuses on asynchronous use-cases, but I see the biggest benefits when writing synchronous code. In JS, not having await in front of a statement means that nothing will interfere with your computation. This simplifies access to shared state without race conditions.
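A minimal sketch of that point (the names and amounts are invented for illustration): in the synchronous version nothing can run between the check and the write, while in the async version the `await` is an explicit suspension point where other tasks may interleave.

```javascript
// Hypothetical shared state; names are illustrative only.
let balance = 100;

// Synchronous: no `await`, so no other task can run between check and write.
function withdrawSync(amount) {
  if (balance < amount) return false;
  balance -= amount;
  return true;
}

// Pretend audit call; any `await` would do.
async function auditLog(amount) {}

// Asynchronous: the `await` is a visible suspension point where other
// tasks may run and invalidate the check before the write happens.
async function withdrawAsync(amount) {
  if (balance < amount) return false;
  await auditLog(amount); // <- interleaving can happen here
  balance -= amount;      // the check above may be stale by now
  return true;
}

// Two concurrent withdrawals both pass the check before either writes:
Promise.all([withdrawAsync(80), withdrawAsync(80)])
  .then(() => console.log(balance)); // prints -60
```

The useful property is that the hazard is syntactically visible: any function body without an `await` is an atomic critical section for free.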

The other advantage is a rough classification in the type system. Not marking a function as async means that the author believes it can be run in a reasonable amount of time and is safe to run e.g. on a UI main thread. In that sense, the propagation through the call hierarchy is a feature, not a bug.

I can see that maintaining multiple versions of a function is annoying for library authors, but on the other hand, functions like fs.readSync shouldn’t even exist. Other code could be running on this thread, so it's not acceptable to just freeze it arbitrarily.


Maybe I am missing something. But the function coloring problem is basically the tension that async can dominate call hierarchies, and the sync code in between loses its beneficial properties to a degree. It's at least awkward to design a system that smoothly blends sync code that executes fast with async code that actually requires it.

Saying that fs.readSync shouldn't exist is really weird. Not all code written benefits from async nor even requires it. Running single threaded, sync programs is totally valid.


The function coloring problem represents multiple complaints. I disagree that the propagation of async makes the sync case irrelevant. In the frontend, receiving a promise has completely different implications on loading states. In the backend, I usually try to separate side-effects from pure functions, so the pure functions are usually sync.

Because JS is single threaded, fs.readSync will freeze the entire app. The only case where I would find that acceptable is in CLI scripts. But that could also be achieved with Node.js's support for top-level await. There's perhaps a slight overhead from the Promise being created, but JS engines have so many optimizations that I don't even know if that matters. If nothing else is scheduled, awaiting a promise is functionally the same as blocking. Even in the rare cases where you do want to block other scheduled events from running, you could achieve that with an explicit locking mechanism instead.
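For the CLI case, a sketch of what that looks like with top-level await (requires an ESM context, e.g. a `.mjs` file; the scratch file exists only to make the example self-contained):

```javascript
import { writeFile, readFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

// The script reads top-to-bottom like sync code, but never blocks the
// event loop, so fs.readFileSync buys nothing here.
const file = join(tmpdir(), "tla-demo.txt");
await writeFile(file, "hello");

const text = await readFile(file, "utf8");
console.log(text); // prints "hello"
```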

You could argue that filesystem access is fast so blocking everything is fine, but what if the file happens to be on a NAS somewhere?


'readSync' does two different things: it tells the OS we want to read some data, and then waits for the data to be ready.

In a good API design, you should expose functions that each do one thing and can easily be composed together. The 'readSync' function doesn't meet that requirement, so it's arguably not necessary - it would be better to expose two separate functions.

This was not a big issue when computers only had a single processor or if the OS relied on cooperative multi-threading to perform I/O. But these days the OS and disk can both run in parallel to your program so the requirement to block when you read is a design wart we shouldn't have to live with.


> tells the OS we want to read some data and then waits for the data to be ready

No, it tells the OS "schedule the current thread to wake up when the data read task is completed".

Having to implement that with other OS primitives is a) complex and error-prone, and b) not atomic.


The application in question is frozen for that period though, that's the wait they're referring to.

Even websites had this problem with freezing the browser in the early AJAX days, when people would do a synchronous XMLHttpRequest without understanding it.


He was referring to fs.readSync (Node), which also has an async counterpart, fs.read. There is also no parallelism in Node.

I don't see it as very useful or elegant to integrate some form of parallelism or concurrency into every imaginable API. It depends on context, of course. But as a general rule, just no. If a kind of IO takes a microsecond, why bother?


> Not all code written benefits from async nor even requires it. Running single threaded, sync programs is totally valid.

Maybe, but is it useful to have sync options?

You can still write single threaded programs


I mean single threaded + sync.

Sync options are useful. If everything is on the net probably less so. But if you have a couple of 1ms io ops that you want to get done asap, it's better to get them done asap.


> But if you have a couple of 1ms io ops that you want to get done asap, it's better to get them done asap.

and async prevents this how?


my statement was in response to "fs.readSync shouldn't exist". that is how.


if it didn't exist, the async version would still exist, which you could use to get it done asap


stop


> This simplifies access to shared state without race conditions

But in ordinary JS there just can't be a race condition, everything is single threaded.


You can definitely have a race condition in JS. Being single-threaded means you don't have parallelism, but you still have concurrency, and that's enough to have race conditions. For example you might have some code that behaves differently depending on which promise resolves first.
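For example (delays invented for illustration), last-writer-wins on shared state:

```javascript
// Whichever promise resolves last wins; change the delays and the program's
// observable behavior changes. A race condition, on a single thread.
const delay = (ms, value) =>
  new Promise((resolve) => setTimeout(() => resolve(value), ms));

let status = "pending";
delay(10, "cache").then((source) => { status = source; });
delay(20, "network").then((source) => { status = source; });

setTimeout(() => console.log(status), 50); // prints "network" with these delays
```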


And it doesn't actually prevent concurrency.


Sure, but concurrent != parallel. You can't have data races with a single thread of execution - a while loop writing i=0 or i=1 on each iteration is not a data race.

Two async functions doing so is not a data race either.


You should really look up the definition of race condition; it has nothing to do with parallel processing. Parallel processing just makes it harder to deal with.


Data race != Race condition


Data races are a specific race condition - they may be safe or cause tearing.

Serially and completely synchronously overwriting values falls into none of these categories, though.


You're mixing up quite a few somewhat related but different concepts: data races, race conditions, concurrency and parallelism.

Concurrency is needed for race conditions, parallelism is needed for data races. Many single threaded runtimes including JS have concurrency, and hence the potential for race conditions, but don't have parallelism and hence no data races.


Concurrency with a single thread of execution runs with complete mutual exclusion, so no: "pure" single-threaded concurrency is definitely data-race free.

What we may argue over (and it becomes more of a what definition to use): IO/external event loop/signal handlers. These can cause race conditions even in a single threaded program, but one may argue (this is sort of where I am) that these are then not single threaded. The kernel IO operation is most definitely not running on the same thread of execution as JS.

I think I have been fairly consistent in the definition of a data race as a type of race condition, where a specific shared memory is written to while other(s) read it with no synchronization mechanism. This can be safe (most notably OpenJDK's implementation is tear-free, no primitive or reference pointer may ever be observed as a value not explicitly set by a writer), or unsafe (c/c++/rust with unsafe, surprisingly go) where you have tearing and e.g. a pointer data race can cause the pointer to appear as a value that was never set by anyone, causing a segfault or worse.


You can implement your own event loop within a single thread
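A toy version of that idea (all names invented): a queue of callbacks drained one at a time, so every task runs to completion before the next starts.

```javascript
const queue = [];
const schedule = (task) => queue.push(task);

function runLoop() {
  while (queue.length > 0) {
    const task = queue.shift();
    task(); // run-to-completion: nothing preempts a task midway
  }
}

const log = [];
schedule(() => { log.push("a"); schedule(() => log.push("c")); });
schedule(() => log.push("b"));
runLoop();
console.log(log.join("")); // prints "abc"
```

Real event loops add timers and I/O readiness on top, but the single-threaded run-to-completion core is the same.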


Knowing Apple's track record with materials, I guess the seats will look like used iPad Smart Keyboard Folios after two years.


I suspect everyone feels that way except SaaS providers. They could just give you a checkbox to turn the newsletter off, but they don't.


This prompted me to look it up.

Are we seriously talking about a white box with placeholder text, or has there been a development since then?

https://www.phoronix.com/image-viewer.php?id=2026&image=libr...


This composability was also a defining feature of LaunchBar.

I loved it, but eventually found that Raycast's approach of having predefined plugins for each use case is more performant, discoverable, and usable.

Kinda like how the unix philosophy was beaten by integrated full-stack applications.

* Since anything can be composed, everything must be in the same search index. This slows down the index and means you need to sift through more irrelevant results.



Maybe it's related to finger length. On the home row, my index finger is somewhat stretched and my little finger is bent.


I felt so vindicated when Halide finally released Process Zero, years after the iPhone 13.

I still remember that 50-page community thread of people complaining about the ugly camera, and one guy swearing up and down that “it's fine, it's fine you're all wrong”.

> It's your *expectations* that are wrong, not the phone. If you go out and buy a "professional" $6000 DSLR and $6000 lens… you will have many of these same issues.

Then Process Zero comes and solves all of my issues...

I love your work. Keep doing what you do.

https://discussions.apple.com/thread/253181534?sortBy=rank&p...

