Well, it would be 2^(7*8) = 72057594037927936 possible intros. Someone (or something) has to generate, run, and evaluate them all. In theory it's the halting problem all over again: how long do you wait for the "final output" of each intro? The 16-byte effect, for example, takes a while to reach its final form. So even if we somehow managed to evaluate 1000 intros per second, we'd be looking at about 2 million years to really test ALL possible 7-byte intros.
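The arithmetic checks out (a quick sketch; the 1000-intros-per-second rate is the same hypothetical as above):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds

# Every possible 7-byte program: 2 options per bit, 7 * 8 bits.
total_intros = 2 ** (7 * 8)
print(total_intros)  # 72057594037927936

# At a hypothetical 1000 evaluations per second:
years = total_intros / 1000 / SECONDS_PER_YEAR
print(round(years / 1e6, 2))  # roughly 2.28 million years
```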
Probably because JS has a larger runtime; in JS you don't have to write most of the low-level code yourself. So it's easier to squeeze code into JS than into ASM or machine code.
> It’s a single point of failure for the internet. Every Cloudflare outage ends up in the news.
I hear this argument all the time, but I think it's more complicated.
Firstly, if people used more diverse / smaller services the distribution of outages would change.
There would likely be more frequent but "smaller", unsynchronized outages. And since many platforms break when even one of their dependencies breaks, you might end up facing outages more often overall, just not all at the same time.
Secondly, we can't be sure these smaller services match the reliability of Cloudflare and the other big players.
Thirdly, not all Cloudflare infrastructure is fully centralized. There is some degree of distribution and independence between different Cloudflare services, and some Cloudflare outages are not global (they're limited to a region, or to customers using a certain feature set, etc.).
Using a single provider is a single point of failure. It may be that this provider has lots of internal failure modes, but you're still one credit card problem, one fake legal request, or one mistake away from experiencing the primary failure.
If you actually care for the resiliency necessary to survive a provider outage you should have more than one provider.
Which means you should be running your own origin and using the simplest CDN features you possibly can to make your use case work.
Again, there is no simple answer; it depends on the situation and resources. Some systems rely on multiple services. If those services are independent, failures happen more often, and the system can still suffer a serious outage when only one of them fails. For such systems a single provider might actually be preferable: one provider can coordinate a fix more efficiently, while with multiple providers you might wait for each one to fix its own problem. For example, say system A depends on system B. If both A and B depend on Cloudflare and Cloudflare has a 1-hour outage, both A and B are down for 1 hour. But if A and B depend on different providers, the situation is similar for B yet worse for A: for A, an hour of outage at either provider means an hour of outage. In such cases each additional provider is an additional weak link.
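The tradeoff in the example above can be sketched numerically, assuming hypothetical, independent providers that are each up 99.9% of the time:

```python
# Hypothetical per-provider availability (fraction of time up).
p = 0.999

# Shared provider: A is down exactly when that one provider is down.
downtime_shared = 1 - p

# Two independent providers: A is down when either one is down,
# so A is up only when both are up.
downtime_two = 1 - p * p

print(downtime_shared)        # 0.001
print(round(downtime_two, 6)) # ~0.002, roughly double the downtime
```

The outages are smaller and unsynchronized, but from A's point of view the total downtime roughly doubles.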
> If you actually care for the resiliency necessary to survive a provider outage you should have more than one provider.
Well, that probably means duplication, which might be too expensive in certain situations. Also, occasional outages might not be a big concern for some, such as most bloggers.
I'm not downplaying the downsides of centralization. Certain things should be decentralized if it's reasonable. But it's not always that way.
Sometimes apps lack features of the web versions. For example, I wanted to translate a document on Android. When I tried to open the Google Translate website, the system redirected me to the app. Unfortunately, I couldn't find the document-translation feature in the app. I could still open the website in incognito mode. This is really maddening.
Strava is an example where to enjoy all the features of the platform you have to use the app for some and the browser for others. Neither has all of them.
IMHO, browsers might prioritize execution speed somewhat more than memory. There's a Pareto-style tradeoff: you usually can't optimize every parameter at once; improving one tends to sacrifice others. Also, higher memory consumption (unlike higher CPU usage) doesn't hurt power efficiency much, so more memory might even help there by reducing CPU usage through caching.
You just read my comment very literally or carelessly. I mean cases where it increases performance. It's not always true that more memory = less CPU usage, but in many situations there is a tendency. For example, on older Windows, such as 98 or XP, applications drew directly on the screen and had to redraw the exposed parts of the UI when windows were dragged (BTW, this is why many people, including myself, remember that famous artifact effect when applications were unresponsive on older Windows versions). When memory became cheaper, Vista switched the rendering model to compositing, where each application renders into a private off-screen buffer. That is why moving windows became smoother, even though memory use went up. So there is some memory/performance tradeoff, just not always.
> on older Windows, such as 98 or XP, applications had to redraw the parts of the exposed UI when windows were dragged (BTW, this is why many people, including me, remember that famous cascading effect when applications were unresponsive on older Windows versions)
i remember this and had no idea that's why it would be doing that. thanks, i learned something today.
You won't have cache misses if the reason the application is using a lot of memory is that garbage collection runs less frequently than it could: the dead objects just sit there untouched.
That is the case with every mainstream JS engine out there and is one of the many tradeoffs of this kind.
From my understanding this is an official statement, not a benchmark result.
> The change isn't about the core operating system becoming resource-hungry. Instead, it reflects the way people use computers today—multiple browser tabs, web apps, and multitasking workflows, all of which demand additional memory.
So it is more about third-party software than the OS or desktop environment. Actually, nowadays 8+ GB of RAM is recommended regardless of OS.
I just checked the memory usage on Ubuntu 24.04 LTS after closing all the browser tabs. It's about 2GB of 16GB total RAM. 26.04 LTS might have higher RAM usage but it seems unlikely that it will get anywhere close to 6GB.
4GB of RAM? What? I guess if your minimum is "able to start Windows and eventually reach the desktop", sure? I wouldn't even use Windows 11 with 8GB even though it would theoretically be okay.
> 4GB of RAM? What? I guess if your minimum is "able to start Windows and eventually reach the desktop", sure? I wouldn't even use Windows 11 with 8GB even though it would theoretically be okay.
Not okay as soon as you throw on the first security tool, lol.
I work in an enterprise environment with Win 11 where 16 GB is maxed out instantly as soon as you open the first browser tab, thanks to the background security scans and patch updates. This is even with compressed memory paging turned on.
I was about to rush to the defence of Windows 11, thinking it couldn't possibly be that bad, and just checked mine. I booted a couple of hours ago and have done nothing apart from running Chrome and putty, and whatever runs on startup.
Apparently 13.6GB is in use (out of 64GB), and of that 4.7GB is Chrome. Yeah, I'm glad I'm not running this on an 8GB machine!
Yea, Windows requirements are a meme. Maybe it could barely work with IoT LTSC for non-interactive tasks, but definitely not with regular versions. Even Windows 10 would barely hold up. Same with HDD space.
It's not just the applications; the installer doesn't even start up with 1 GiB of memory. With 2 GiB it does start. You could (well, I would :) ) blame it on the GNOME desktop, but it is very different from what I would have expected.
I just tested this with 25.10 desktop, default gnome. With 24.04 LTS it doesn't even start up with 2GiB.
So you mean that with 2 GiB of RAM the installer started up on 25.10 but not on 24.04? And what about actually installing and then booting the installed Ubuntu?
No, because as far as we know 26.04 won't enable zswap or zram, whereas Windows and macOS both have some form of memory compression. So Ubuntu will use significantly more memory for most tasks when facing memory pressure.
Apparently it's still under discussion, but it's April now, so it seems unlikely.
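For what it's worth, zram can be enabled by hand today. A minimal sketch of a config for the systemd-zram-generator tool (assuming that package is installed; the values here are just example choices, not Ubuntu defaults):

```ini
; /etc/systemd/zram-generator.conf
; Creates a compressed swap device in RAM at boot.
[zram0]
zram-size = ram / 2
compression-algorithm = zstd
```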
Kind of weird how controversial it is considering DOS had QEMM386 way back in 1987.
CPUs really weren't up to the job in the pre-Pentium/PowerPC world. Back then, zip files used to take an appreciable number of seconds to decompress, and there was a market for JPEG viewers written in hand-optimised assembly.
That's why SoftRAM gained infamy - they discovered during testing that swapping was so much faster than compression that the released version simply doubled the Windows swap file size and didn't actually compress RAM at all, despite their claims (and they ended up being sued into oblivion as a result...)
Over on the Mac, RAMDoubler really did do compression but it a) ran like treacle on the 030, b) needed to do a bunch of kernel hacks, so had compatibility issues with the sort of "clever" software that actually required most RAM, and c) PowerMac users tended to have enough RAM anyway.
Disk compression programs were a bit more successful: DiskDoubler, Stacker, DoubleSpace et al. ISTR that Microsoft managed to infringe Stacker's patents (or maybe even the copyright?) in MS-DOS 6.2 and had to hastily release DOS 6.22 with a rewritten version, free of charge, as a result. Part of why disk compression fared better is that it coincided with a general reduction in HDD latency going on at roughly the same time.
A lot of it is optimizing applications for higher-memory devices. RAM is completely worthless if it's not used, so ideally your software should run close to the maximum RAM usage your device allows. Of course, the developer doesn't necessarily know what device you'll be using, or how much other software will be running, so they aim for averages.
For example, Java applications will claim much more memory than they need for the heap. Most of that memory will be unused, but it's necessary to have a faster running application. If you've ever run a Java app at consistently 90% heap usage, you know it grinds to an absolute halt with constant collection.
The same is true for caching techniques. Reading from storage is slow, so it often makes sense to put stuff in RAM even if you're not using it very often.
I also believe this memory usage could be decreased significantly, but I don't know by how much (or how much of it would be worth the effort). Some RAM usage is useful, such as caching or graphics-related buffers. Some is cumulative bloat in applications, caused by not caring much or by duplicated libraries.
But I remember that in 2016 Fedora GNOME consumed about 1.6 GB of RAM on my PC with 2 GB of RAM. Considering that a decade later the standard Ubuntu GNOME consumes only about 400 MB more, and that my new laptop has 16 GB of RAM (the system may use more RAM when more RAM is installed), I think the increase over a decade is not that bad. I thought it would be much worse.
But why that much? The first computer I bought had 192MB of RAM and I ran a 1600x1200 desktop with 24-bit color. When Windows 2000 came out, all of the transparency effects ran great. Office worked fine, Visual Studio, 1024x768 gaming (I know that's quite a step down from 1080p).
What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?
> What has changed? Why do I need 10x the RAM to open a handful of terminals and a text editor?
It’s not a factor of ten, but a 4K monitor has about four times as many pixels. Cached font bitmaps scale with that, photos take more memory, etc.
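The pixel arithmetic is easy to check (a sketch assuming 4 bytes per pixel, i.e. 32-bit color):

```python
# Pixel counts for the two setups mentioned above.
old = 1600 * 1200           # 1,920,000 pixels
uhd = 3840 * 2160           # 8,294,400 pixels
print(round(uhd / old, 1))  # ~4.3x as many pixels

# A single full-screen 32-bit framebuffer at each size, in MiB:
print(old * 4 / 2**20)  # ~7.3 MiB
print(uhd * 4 / 2**20)  # ~31.6 MiB
```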
> When Windows 2000 came out
In those times, when part of a window became uncovered, the OS would ask the application to redraw that part. Nowadays, the OS knows what's there because it keeps the pixels around, so it can just blit the pixels back in.
Again, not a factor of ten, but it contributes.
The number of background processes likely also increased, and chances are you used to run fewer applications at the same time. Your handful of terminals may be a bit fuller now than it was back then.
Neither of those really explain why you need gigabytes of RAM nowadays, though, but they didn’t explain why Windows 2000 needed whatever it needed at its time, either.
The main real reason is “because we can afford to”.
Partly because we have more layers of abstraction. As an extreme example, when you open a tiny (<1 KB) HTML file in any modern browser, the tab's memory consumption will still be on the order of tens, if not hundreds, of megabytes. That's because the browser has to load and initialize its huge runtime environment (JS, DOM, CSS, graphics, etc.) even though that tiny HTML file uses only a tiny fraction of the browser's features.
Partly because increased RAM usage can sometimes improve execution speed / smoothness or security (caching, browser tab isolation).
Partly because developers have less pressure to optimize software performance, so they optimize other things, such as development time.
Two programmers sat at a table, one a youngster and the other an older guy with a large beard. The old guy was asked: "You. Yeah, you. Why the heck did you need 64K of RAM?". The old man replied, "To land on the moon!". Then the youngster was asked: "And you, why oh why did you need 4 gig?". The youngster replied: "To run MS Word!"
Well, if you have a 512x512 icon uncompressed at 4 bytes per pixel, it's exactly one megabyte, so that makes the calculations fairly easy.
But raw imagery is one of the few cases where you can legitimately require large amounts of RAM, because area scales quadratically. You only need that raw form in the limited situations where you're actually manipulating pixel data, though. If you're dealing with images without descending to the pixel level, there's pretty much no reason to keep it all floating around uncompressed. You generally don't have more than a hundred icons on screen, and once you start fetching data from the slowest RAM in your machine, you get pretty decent speed gains from decompressing on the fly rather than moving the uncompressed form around.
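Spelling out the icon arithmetic (assuming 4 bytes per pixel, i.e. 32-bit RGBA):

```python
BYTES_PER_PIXEL = 4  # 32-bit RGBA

# A single 512x512 icon, fully decompressed:
icon_bytes = 512 * 512 * BYTES_PER_PIXEL
print(icon_bytes == 2**20)  # True: exactly one mebibyte

# The "hundred icons on screen" upper bound from above:
print(100 * icon_bytes / 2**20)  # 100.0 MiB
```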
Aren't they usually all preloaded to prevent pop-in (or using some sort of heuristic)?
anyways, I bet there are like a million little buffers all over the graphics stack. It would be neat to go through all of them and see how slim you could get it, even if it broke a bunch of stuff.
I remember running Xubuntu (XFCE) and Lubuntu (LXDE, before LXQt) on a laptop with 4 GB of RAM and it was a pretty pleasant experience! My guess is that the desktop environment is the culprit for most modern distros!
well, to start, you likely have two screen-size buffers for the current and next frame. The primary code portion is drivers, since the modern expectation is that you can plug in pretty much anything and have it work automatically.
It's not just closer. Someone wrote an x86 emulator in CSS (it uses JS only for the clock, to make it more reliable): https://lyra.horse/x86css/ . So CSS is officially Turing complete (which is a bit scary, IMHO).