>Problem is, most of the slow pages out there are slow because the teams just can't keep up with all of these shifts
Not really. Most pages are slow not because they're using outdated techniques, or because they're still serving everything from a large Windows Server running an ancient version of .NET. Those old React pages aren't slow either because they "didn't keep up"; they still run fine.
Pages are slow because of the obscene amount of tracking and "user conversion" techniques: banners, notices, trackers to record everything you do on the page, trackers to record errors, third-party trackers, ads everywhere because a simple static one isn't good enough, the list goes on.
Guess what those old pages that "stayed fast" don't have? You guessed it: all those BS trackers. They weren't readily available back then.
I didn't mean that old React pages are slow. I meant that sites which started on old React and kept trying to keep up with all of the newest best practices over the years are slow, typically because they do too much in the webapp and don't treat it as a GUI only. The last web app I looked at at the old job was literally as much code as the entire backing service for it. I don't know how to justify that.
And yeah, for public sites, tracking almost certainly dominates how slow things are. I was thinking of internal tooling sites. I remember how fast many of the tools were when I started at the last job. I also remember how slow all of them were getting as I was leaving said job. It is more than a touch insane.
I can't say I've come across much internal tooling where the experience is anywhere near as bad as public sites. I find that even the cheap hardware offerings from businesses tend to handle bloated internal tooling pages fine, even if your first load might take a good minute. Ignoring tooling that was specifically crafted for IE6 and hasn't been touched since; may that rot in hell.
There was this really horrific one I did experience: come to find out, the entire company was relying on this Angular app that connects to an ancient Mac mini that IT is responsible for but mostly forgot about. It wasn't a large company, but they weren't stalling in growth!
Certainly many sites with tons of tracking are worse than most internal sites. You will not get a disagreement from me on that.
I can't really cite examples, as I'm no longer at the old job. But silly things: our "news" homepage had gotten to the point where it lazy loaded all of the news, HR things would load in parts, and issue tracking seemed to be a constant mess.
The tool my team controlled had bloated to an excessive point. Some of that was almost certainly my fault; I had designed a somewhat granular backend. That said, I remember things were certainly faster before we had a ton of Redux-based code doing things.
Sorta? The backend specifically exists to be where state is persisted. And the validation of the data has to happen at the backend, even if you also repeat it at the front end.
You are correct that there can be more interactions on the frontend, but frontends were both faster and easier to deal with when they did not do all of this. For many internal tools, bouncing back the validation from the backend is also easier to understand than the validation that the website is providing.
I suspect it is a bit of a curve. Zero interaction on the front is not pleasant. All of the state replicated on the front is also not useful and likely to be more problematic.
That's how server-side interaction was in the early 2000s with ASP.NET WebForms. This type of interaction is one of the things being proposed as an alternative to client-side. And yeah, it definitely was a (very painful) thing. Server-driven Blazor has very similar (poor) behavior today, and I know there are other frameworks that do similar things. Which really sucks if you have any latency or bandwidth issues.
Not that I like the idea of several MBs of JS being sent over the wire either... I know that is also a thing, not to mention poor state management, which is also a regular occurrence (most Angular apps have many state bugs).
Personally, I'm not too bothered by client-rendered/driven apps... They can often be split accordingly, and it's reasonable to stay under a 500 KB JS payload for a moderately sized web application. Not that everyone has as much awareness. I think of the browser as a thick-ish client toolkit as much as a rendering platform. That doesn't mean SPA is the right way for everything; I wouldn't do it for a mostly text content site (news, blog, etc). But most of my work is web-based applications, not text-driven content sites.