> My personal opinion for a while has been that crypto operations should be in the kernel so we can end the madness that is every application shipping it's own crypto and trust system which has only gotten worse since containers were invented.
There’s a valid argument here but I think that’d devolve into the DNSSec trap without both a very well-designed API and a stable way to ship updates for older kernels. If people can’t get good user experience or have to force kernel upgrades to improve security, most applications will avoid it. Things like Chrome shipping their own crypto mean that they can very quickly ship things like PQC without waiting years or having to deal with issues like kernel n+1 having unrelated driver or performance issues which force things into a security vs. functionality fight.
Which does sort of loop around to the issue of Linux not having a stable ABI as a feature I suppose which would be one way to implement it with long term compatibility on kernel modules.
But the Chrome example also highlights the problem: Chrome might ship it, but vanishingly little software is ever going to upgrade and we've got an explosion of statically linked languages now.
Sure, nobody’s saying it’s an inscrutable mystery but if your goal is to inform a wide audience it’s considered good form to expand all but the most common acronyms. It’ll even get you more internet points than petty smugness.
I don't think that's a fair dismissal: you see ads all over media websites because the rates have been plummeting as consumers tune out ads. One main reason everyone tunes out is that ads are so obtrusive and repetitive, and that's exactly what LLMs change: I'm sure we'll see regular ads on AI apps because the companies have trillions of dollars to repay, but advertisers would pay a lot more for openings where they aren't _forcing_ their message as a distraction but are instead able to insert it fairly naturally into a context where the user is engaged.
The entire history of advertising before the web was companies estimating a dollar value on “awareness” when they couldn't measure direct referrals and every business in the world has gotten a lot better at measuring sales since then. It's not going to be transformative but if, say, Toyota got ChatGPT to say their vehicles were a better value than Ford's I suspect they'd be able to tell pretty quickly whether sales were improving relative to the competition and would pay well for that to continue.
Jane Austen died long enough ago that her works are in the public domain, so Winters did not need a license to use them. That does not mean that he gained rights to her work: if he tried to sue someone for using anything which appeared in the original, he would lose in court because it’s easy to show that copies made before he was born had the same text. This is also how courts prevent people from trying to extend copyright by making minor changes to an existing work: the new copyright only covers the additions.
There’s a very accessible summary of the United States rules here:
It was worse than that: they forced _everyone_ into it, whether or not you had any interest in using it.
They did this before having notification control or usable filtering[1], so for most of a year you'd log in to Gmail and see the upper right notification badge be !!!LOOK AT ME!!! red, only to click on it and see it was telling you that some dude who no-showed on a Craigslist sale 10 years ago in a different city had been forced to “join” Google+. Even worse, it took like 6 months for their iOS developers to give you any control over push notifications, so you got all of that as push notifications until you deleted the app.
They also annoyed key communities like Google Reader users: that wasn't their largest popular social network but it was one which people actually liked and it disproportionately skewed towards people like journalists, bloggers, etc. who recommended technology to other people. The conversion to Google+ was really clumsy and they did things like replacing the popular Reader commenting system with a Google+ “integration” which didn't work at all on mobile devices[2], which meant that a ton of influential people had a really negative experience and told everyone they knew about it.
1. The “circles” idea reportedly worked well when it was Google employees using it internally, but it relied on the poster picking an audience for a post, which failed in the real world, where the spammiest people think everyone is interested in their every word.
2. The dialog was sized for a desktop display so the post button was inaccessible off the screen.
This stopped working in the mid-Atlantic when invasive tiger mosquitoes arrived. They need only about a bottle-cap-sized amount of water, so even something like a flower can hold enough water for them to reproduce.
We’re using scented lures which have the right salt + lipid combo to attract mosquitoes. It helps, but I still wish Nathan Myhrvold had seriously developed that “photonic fence” product.
I think the next best thing is an automatic turret that fires salt bullets or something, maybe AI. Hopefully it doesn't take an eye out, but if it took out like 1 million mosquitoes for 1 eye, worth it?
> I always thought this method could be used to provide a/c for neighborhoods, operated as a neighborhood utility. I've not seen it done tho. I've seen neighborhood owned water supplies and sewer systems; it tells me the ownership part seems feasible.
I keep thinking this would be a great municipal code change: any time the roads are being built or ripped up for water/sewer maintenance, put in a ground loop and subsidize household connections for heat pumps, so instead of having to extract heat from 20℉ winter air you'd be working with 50-60℉ ground temperatures.
Expecting scientific rigor is not a bad bias: everyone who has been willing to do actual science agrees that climate change is real and significant. For example, Richard Muller was a climate skeptic who had a great job at one of the most prestigious universities in the world, got funding to establish a team to critically review climate science research … and concluded it was right:
“When we began our study, we felt that skeptics had raised legitimate issues, and we didn’t know what we’d find. Our results turned out to be close to those published by prior groups. We think that means that those groups had truly been very careful in their work, despite their inability to convince some skeptics of that.”
If you haven’t read up on both, it’s hard to appreciate how different climate science is from the beta amyloid theory. The latter has some evidence, but there were always alternate theories from serious researchers because it involved multiple systems which scientists were still working to understand, and basic questions around causation and correlation were under significant debate.
In contrast, climate scientists reached consensus about climate change four decades ago and by now have established many separate lines of evidence which all support what has been the consensus position. More importantly, since the 1970s they have been making predictions which were subsequently upheld by measured data from multiple sources. The ongoing research is in fine-tuning predictions, estimating efficacy of proposed interventions, etc. but nobody is seriously questioning the basic idea.
Almost all of the people you hear dismissing climate change are funded by a handful of companies like Exxon, whose own internal research showing climate change was a significant threat produced a chart in 1982 which has proven accurate:
There’s also a really good operational benefit if you have limits like total RAM, database connections, etc. where being able to reason about resource usage is important. I’ve seen multiple async apps struggle with things like that because async makes it harder to reason about when resources are released.
Basically it’s the non-linear execution flow creating situations which are harder to reason about. Here’s an example I’m trying to help a Node team fix right now: something is blocking the main loop long enough that some of the API calls made in various places are timing out or getting auth errors, because the signature expires between when the request was prepared and when it is actually dispatched, and that gap is sporadically tens of seconds instead of milliseconds. Because it’s all async calls, there are hundreds of places which have to be checked, whereas if it was threaded this class of error either wouldn’t be possible or would be limited to the same thread or to an explicit synchronization primitive such as a concurrency limit on the number of simultaneous HTTP requests to a given target. Also, the call stack and other context is unhelpful until you put effort into observability for everything, because you need to know what happened between hitting await and the exception deep in code which doesn’t share a call stack.
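A minimal sketch of the failure mode, assuming a request whose signature has a validity window (all names here are illustrative, not the team's actual code): the request is "signed" in one place, and unrelated synchronous work elsewhere stalls the loop before it is dispatched.

```javascript
// Hypothetical sketch: a signed request is prepared (timestamped), then
// unrelated synchronous work blocks the event loop before dispatch runs.
function prepareRequest() {
  return { signedAt: Date.now() }; // signature validity starts now
}

function blockEventLoopFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) {} // synchronous CPU work stalls every pending task
}

async function dispatch(req, maxSkewMs) {
  await Promise.resolve(); // yield: some other task may block the loop here
  const skew = Date.now() - req.signedAt;
  if (skew > maxSkewMs) throw new Error(`signature too old: ${skew}ms`);
  return 'ok';
}

async function main() {
  const req = prepareRequest();
  const pending = dispatch(req, 50); // would normally complete within a tick
  blockEventLoopFor(200);            // unrelated synchronous work elsewhere
  return pending.catch(e => e.message);
}
```

The point is that the error surfaces in `dispatch`, which did nothing wrong; the cause is whatever ran during the 200ms, and nothing in the exception points at it.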
The execution flows of individual async tasks are still linear, much like individual threads are linear.
Scheduling (tasks by the async runtime vs. threads by the OS), however, results in random execution order either way.
If there is a slow resource, both async tasks and threads will pile up, potentially increasing response times.
Whether async or threads, you can easily put a concurrency limit on resources using e.g. semaphores [1]:
- limit yourself to x connections (either wait or return an error)
- limit the resource to x concurrent usages (either wait until other users leave, or return an error)
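A sketch of the first variant (wait for a slot) as a hand-rolled counting semaphore, since JavaScript has no built-in one; the `Semaphore` class and the slow-resource stand-in are illustrative:

```javascript
// Minimal counting semaphore sketch (not any specific library's API).
class Semaphore {
  constructor(max) { this.max = max; this.active = 0; this.queue = []; }
  async acquire() {
    if (this.active < this.max) { this.active++; return; }
    await new Promise(resolve => this.queue.push(resolve)); // wait for a slot
    this.active++;
  }
  release() {
    this.active--;
    const next = this.queue.shift();
    if (next) next(); // hand the freed slot to the next waiter
  }
}

// Usage: cap simultaneous calls to a slow resource at 2.
const sem = new Semaphore(2);
let peak = 0, current = 0;
async function limitedCall() {
  await sem.acquire();
  try {
    current++; peak = Math.max(peak, current);
    await new Promise(r => setTimeout(r, 10)); // stand-in for the slow resource
    current--;
  } finally {
    sem.release();
  }
}
```

The error-returning variant is the same shape, except `acquire` rejects immediately when `active === max` instead of queuing.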
Regarding blocking the main loop: with async and non-blocking operations, how would something block the main loop?
And why would the main loop being blocked cause API calls to time out? Is it single threaded?
> The execution flows of individual async tasks are still linear, much like individual threads are linear.
Think about what happens:
1. Request one hits an await in foo()
2. Runtime switches to request two in bar() until it awaits
3. Runtime switches to request three in baaz(), which blocks the loop for a while
4. Request one gets a socket timeout or expired API key
That error in #4 does not tell you anything about #2 or #3, and because execution spreads across everything in that process you have to check everything. If it was a thread, you would either not have the problem at all, it would show up clearly in request three, or you’d have a clear informative failure on a synchronization primitive saying that #3 held a lock for too long.
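The four steps above can be simulated directly; `foo`/`bar`/`baaz` are the placeholder names from the steps, and the 200ms busy-loop stands in for whatever work blocks the loop:

```javascript
// Simulation of steps 1-4: baaz() blocks the loop, foo() sees the failure.
const DEADLINE_MS = 50;

async function foo() {
  const started = Date.now();
  await new Promise(r => setTimeout(r, 0));     // step 1: request one awaits
  const elapsed = Date.now() - started;
  if (elapsed > DEADLINE_MS) {
    throw new Error(`foo timed out after ${elapsed}ms`); // step 4
  }
  return 'foo ok';
}

async function bar() {
  await new Promise(r => setTimeout(r, 0));     // step 2: request two awaits
  return 'bar ok';
}

async function baaz() {
  const end = Date.now() + 200;                 // step 3: synchronous work
  while (Date.now() < end) {}                   // blocks every other task
  return 'baaz ok';
}

async function run() {
  const results = await Promise.allSettled([foo(), bar(), baaz()]);
  return results.map(r => r.status === 'fulfilled' ? r.value : r.reason.message);
}
```

Note that `baaz` itself completes successfully; the only visible error comes out of `foo`, which is exactly why the failure doesn't point at the culprit.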
That makes it harder to control when memory is allocated or released in garbage-collected languages, too, because you have to be very careful to release references (so gc can reclaim them) before doing something which can suspend execution for a while, or you’ll get odd patterns when a small but non-zero percentage of those async requests take longer than expected (e.g. a load-image-master, create-derivative, send-response pipeline needs care to release the first two steps’ buffers before the last, or you’ll see weird memory behavior when a slow client takes 5 minutes to finish transferring that response).
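The image-pipeline pattern can be sketched like this, with stand-in functions for the actual image work (all names are illustrative):

```javascript
// Sketch: drop the reference to each large buffer as soon as the next step
// has what it needs, so a slow client holding the response open during the
// final await doesn't pin the earlier buffers in memory.
async function handleRequest(sendToClient) {
  let master = loadMaster();                // e.g. a large decoded image
  let derivative = makeDerivative(master);  // smaller processed version
  master = null;                            // eligible for gc before the slow part
  const response = encode(derivative);
  derivative = null;                        // likewise
  await sendToClient(response);             // may take minutes for a slow client
  return 'done';
}

// Stand-ins so the sketch runs; a real handler would do actual image work.
function loadMaster() { return new Uint8Array(1024 * 1024); }
function makeDerivative(m) { return m.slice(0, 1024); }
function encode(d) { return d; }
```

Without the two `= null` assignments, the closure keeps both buffers alive for the entire transfer, which is invisible until enough slow clients overlap.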
Arguably that’s something you want to do anyway but it dramatically undercuts the simplicity benefits of async code. I’m not saying that we should all give up async but there are definitely some pitfalls which many people stumble into.