This has always been the case, and is really the main practical advantage of open source. Contributing code back is an act of community service which people do not always have time for. The main issue is that over time, other people will contribute back their own acts of community service, which may be bug fixes or features that you want to take advantage of. As upstream and your own fork diverge, it will take more and more work to update your patches. So if you intend to follow upstream, it benefits you to send your patches back.
As apologetics, what he's saying is complete nonsense. The jizyah has been interpreted by every Islamic society as a tax on non-Muslims, not a fine for those who break the law. You could argue that the passage doesn't actually say that the purpose of the jizyah is to humiliate people (humbling is different), or that Islamic societies in practice didn't (typically) use it as a means of ridicule, but saying that it was actually just a fine is utter make-believe.
You don't need to keep shrinking features. Brute forcing is highly parallel; to break a key within a given time frame, all you need is a large enough number of chips. While it's in the realm of science fiction today, in a few centuries we might have nanorobots that can tile the entire surface of Mars with processors. That would buy you enough extra orders of magnitude of compute to break a 128-bit key. 256-bit would probably still be out of reach, though.
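A toy back-of-envelope version of that claim (every number here is a made-up assumption for illustration: Mars' ~1.45e14 m² surface tiled with hypothetical 1 cm² chips, each testing 10^12 keys per second):

```python
# Back-of-envelope sketch: classical brute force with Mars tiled in chips.
# All hardware figures below are hypothetical assumptions, not real numbers.

KEYSPACE_128 = 2 ** 128
KEYSPACE_256 = 2 ** 256

MARS_SURFACE_CM2 = 1.45e14 * 1e4   # Mars' surface area (~1.45e14 m^2) in cm^2
KEYS_PER_SEC_PER_CHIP = 1e12       # hypothetical far-future 1 cm^2 chip
SECONDS_PER_YEAR = 3.15e7

total_rate = MARS_SURFACE_CM2 * KEYS_PER_SEC_PER_CHIP  # keys tried per second

years_128 = KEYSPACE_128 / total_rate / SECONDS_PER_YEAR
years_256 = KEYSPACE_256 / total_rate / SECONDS_PER_YEAR

print(f"128-bit: ~{years_128:.0f} years")   # single-digit years: feasible
print(f"256-bit: ~{years_256:.1e} years")   # ~1e39 years: hopeless
```

Under those (generous) assumptions, 128-bit comes out to a handful of years, while 256-bit is a factor of 2^128 worse, which is the gap the parent comment is pointing at.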
Classical brute force is embarrassingly parallel, but Grover's algorithm (the quantum version) isn't. To the extent you parallelize it, you lose the quantum advantage, which means that to speed it up by a factor of N, you need N^2 processors.
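A toy sketch of that quadratic penalty (just the scaling argument, not real cost figures): splitting a classical search across M machines is M times faster, while M parallel Grover instances only cut the ~2^(k/2) iteration count by sqrt(M).

```python
def classical_machines_for_speedup(n):
    # Brute force is embarrassingly parallel: split the keyspace
    # n ways and each machine finishes n times sooner.
    return n

def grover_machines_for_speedup(n):
    # Running Grover on m machines in parallel only cuts the
    # ~2^(k/2) iteration count by sqrt(m), so an n-fold speedup
    # requires n**2 machines.
    return n ** 2

# Going 1000x faster: 1000 classical machines vs a million quantum ones.
print(classical_machines_for_speedup(1000))  # 1000
print(grover_machines_for_speedup(1000))     # 1000000
```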
The article discusses this in detail, and calculates that "This means we’ll need 140 trillion quantum circuits of 724 logical qubits each operating in parallel for 10 years to break AES-128 with Grover’s."
Is it? I've generally understood that most symmetric cryptography, like AES, is safe. QC only gives exponential speedups on certain specific problems. At most, you might naively want to double your key size to get the same protection, something the article points out is unnecessary, because that naive approach assumes QC is like classical computing with extra magic, as opposed to having its own tradeoffs.
I've heard a lot about Shor's algorithm breaking RSA, but this article on Hacker News is the first I've heard anyone discuss quantum attacks on AES.
Then again, I am in quantum computing, not cryptography; maybe different circles have different discussions.
The power and heat are the issues for that, though. Think about how much energy and heat are used/generated in the chips we have now. If we tiled out those chips to be 20 orders of magnitude larger… where is the heat going to go, and where is the energy coming from?
In my example I had imagined that your nanobots would also build solar panels and radiators for the chips tiling the surface of Mars. That's why it needs to be done on the surface rather than underground somewhere.
The comment I was responding to was suggesting n x 6. And there are also aspects beyond brightness and contrast, like font styles and sizes, line height and margins, justification and hyperlink style, and so on. The things you can or want to configure in an e-book reader.
I feel like we could go beyond that, especially for more app-like experiences. Maybe we want themes that do things like add specific trim to make editable fields more identifiable, or "high contrast" versions of the themes for low-quality screens or low-vision users.
There's no reason a webpage shouldn't be as themeable as, say, a GTK or Qt based desktop application.
We should be trying to snatch styling power back from the designers and put it back on the user-agent's side. Let the page look brutalist until the user has chosen a theme appropriate to their needs, rather than railroading them into whatever someone in Marketing decided looked good.
I don't know what such a demo would prove in the first place. LLMs are good at things that they have been trained on, or are analogues of things they have been trained on. SVG generation isn't really an analogue to any task that we usually call on LLMs to do. Early models were bad at it because their training only had poor examples of it. At a certain point model companies decided it would be good PR to be halfway decent at generating SVGs, added a bunch of examples to the finetuning, and voila. They still aren't good enough to be useful for anything, and such improvements don't lead them to be good at anything else - likely the opposite - but it makes for cute demos.
I guess initially it would have been a silly way to demonstrate the effect of model size. But the size of the largest models stopped increasing a while ago; recent improvements are driven principally by optimizing for specific tasks. If you had some secret task that you knew they weren't training for, then you could use that as a benchmark for how much the models are improving versus overfitting to their training set, but this is not that.
Sovereignty doesn't mean autarky. Gas requires continuous resupply, which depends on maintaining relations with foreign countries. Solar requires you to acquire the equipment up front, but doesn't require an ongoing relationship beyond that. Having invested heavily in solar doesn't give China a veto over your political affairs thereafter, except to the degree it would have one otherwise.
This is what government should be doing: figure out how to do something safely, make that a regulation, then shield companies from liability as long as they follow it. In practice you won't extract trillions of dollars from most companies anyway, because they'll go bankrupt long before they manage to pay it all back.
To me jj is an OK porcelain for git, but I find it worse than magit. Sure, it has some tricks up its sleeve for merging, but I just don't run into weird merges and have never needed more advanced commands like rerere.
What I'd expect of the next VCS is to go beyond versioning the files to versioning the environment, so that "works on my machine"™ and configuring your git-hooks and CI become things of the past.
Do we need an LSP-like abstraction for environments and build systems instead of yet another definitive build system? IDK, my solution so far is sticking to nix, x86_64, and ignoring Windows and Mac, which is obviously not good enough for like 90%+ of devs.