
Putting consumer grade (aka "commodity") hardware in a datacenter and running your infra on it is a bit of a meme, in the sense that it's not the only way of doing things. It was probably pioneered/popularized by Google but that's because writing great software was their "hammer", ie they framed every computing problem as a software problem. It was probably easier for them (= Jeff Dean) to take mediocre hardware and write a robust distributed system on top instead of the other way around.

There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple and easy to maintain single process backend program, run it on a mainframe and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without booting the OS. Credit card transactions and banking software run on this model for example (just think about how insanely reliable credit card transactions are).

IBM has a monopoly in that second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.




What I think today people do:

1. They run complicated infrastructure software, written by third-party developers.

2. And they run their own simple programs on top of them.

So for example you can rent a Kubernetes cluster from AWS and run a simple HTTP server on it. If your server crashes, Kubernetes will restart it, so it's resilient. There will be records in some metrics that will light up alerts, and eventually people will find out about it and fix it.
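As a sketch of how that "Kubernetes restarts it" behaviour is expressed, you just declare the desired state: a Deployment with a replica count and a liveness probe, and the cluster does the restarting for you. All names, the image, and the probe path below are illustrative, not any particular setup:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-http              # illustrative name
spec:
  replicas: 2                    # Kubernetes keeps two copies running at all times
  selector:
    matchLabels: {app: simple-http}
  template:
    metadata:
      labels: {app: simple-http}
    spec:
      containers:
      - name: server
        image: example.com/simple-http:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:           # repeated probe failures trigger a container restart
          httpGet: {path: /healthz, port: 8080}
          periodSeconds: 10
```

The simple program itself knows nothing about any of this; the resilience lives entirely in the declaration.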

Another example: your simple program makes some REST GET request, and the request fails for some reason. But the request was intercepted by a middleware proxy, and that proxy sees that the HTTP response was a 5xx, so it can retry it. It retries a few times with properly calibrated backoff, eventually gets a response, and propagates it back to the simple program. The simple program had no idea about all the machinery working on its behalf; it just sent an HTTP request and got a response.
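A minimal sketch of what such a retrying layer does, assuming "retry 5xx with exponential backoff and jitter" as the policy (real sidecar proxies like Envoy do this outside the application process; the function and the fake backend here are purely illustrative):

```python
import random
import time

def retry_5xx(do_request, max_attempts=4, base_delay=0.5):
    """Call do_request() until it returns a non-5xx status or attempts run out.

    5xx responses are treated as retryable, everything else as final.
    Delays grow exponentially with a little jitter so that many clients
    retrying at once don't synchronize into a thundering herd.
    """
    for attempt in range(max_attempts):
        status, body = do_request()
        if status < 500:               # success or a client error: don't retry
            return status, body
        if attempt < max_attempts - 1:
            delay = base_delay * (2 ** attempt) * (1 + random.random() * 0.1)
            time.sleep(delay)
    return status, body                # out of attempts: surface the last 5xx

# Usage: a fake backend that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    return (503, "busy") if calls["n"] < 3 else (200, "ok")

print(retry_5xx(flaky, base_delay=0.01))  # (200, 'ok') after two retries
```

From the caller's point of view there was only ever one request and one response, which is exactly the illusion the comment above describes.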

There's a lot of complicated machinery to enable simple programs to be part of resilient architecture. That's a goal, anyway.


Worth noting, of course, that Kubernetes traces its lineage to Google. It is the virtual mainframe that Google built on top of commodity clusters.

> There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software.

You actually need both, the point of the extremely resilient hardware is that it can act as the single source of truth when you need it - including perhaps hosting some web-based transactions that directly affect your single source of truth. (Calling this a "model" for web-based infrastructure in general would be misleading though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can be ephemeral open systems, which is orders-of-magnitude cheaper.


> Credit card transactions and banking software run on this model for example

TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.


Current generation of banking software is expanding on the mainframe:

IBM Z mainframes play a pivotal role in facilitating 87% of global credit card transactions, nearly $8 trillion in annual payments, and 29 billion ATM transactions each year, amounting to nearly $5 billion per day. Rosamilia highlighted the continuous growth in demand for capacity over the past decade, which has seen inventory expand by 3.5 times.

https://thesiliconreview.com/2024/04/ibm-new-mainframe-web-t...


That's true, but the proportion is shrinking.

Source? Interested in learning more about this

Red Hat OpenShift (IBM) is what a lot of banks have settled on. Red Hat went all in maybe 5+ years ago in capturing those institutions.

Ah, that explains why IBM bought RedHat. Or at least one reason for doing so.

I'd imagine close to 95% in the US: if they're running important workloads on prem on Linux, it's on RHEL. A staggering number of VMs and bare metal.

(Clarification: I'm not saying 95% of all US company Linux workloads are RHEL, not even close.

I'm saying a huge percentage of high-criticality workloads (risk of loss of life / high financial risk) are, simply because of the support and the name.)


Exactly. The exact opposite of the people flogging internet widgets running on a bunch of AWS instances running Arch/Ubuntu/Cheap distro of the week. Unfortunately that contingent is massively over-represented here on HN.

Is that in addition to mainframes or for completely replacing them?

Both

Some stayed on prem, some pushed code to mainframe VMs in the cloud, some went to OpenShift (mostly on prem from what I've seen, probably 80-85%).


Probably both, to respond to the risk tolerances of any given org.


That post fails to mention Capital One's move from IBM mainframes to AWS was one of the reasons they suffered one of the largest data breaches in history.

And what was the financial cost of this?

At least $270,000,000 in direct costs [0].

[0] https://www.security.org/identity-theft/breach/capital-one/


I work in banking. We provide modern solutions for small local banks in the US. That's how our core runs. It's just Java apps (Spring Boot, Jakarta EE) running in the cloud.

> Credit card transactions and banking software run on this model for example

Eh, they can but even a couple of decades ago there was a shift to open platforms. 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of “throw a ton of cheaper Unix at it” was winning.

Banks’ central systems maybe, I have less experience there. IBM did also try for a while to ride the Linux virtualisation wave as well, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.


x86 servers weren't that common in the 90s and early 00s; that was all Sun or the other commercial Unix vendors' gear.

In the 90s, perhaps not massively, but gaining ground very early in the 00s. I started my career in 2000 and most of the credit-card related stuff I built until ‘05 was targeted at Windows, Linux and Solaris, with a variety of other Unix platforms depending on the client/project.

But the x86 I was referring to in my comment above, Stratus, was (maybe still is?) an exotic attempt to enter the mainframe-reliability space with windows. IIRC it effectively ran two redundant x86 machines in lockstep, keeping them in sync somehow, so that if hardware on one died the other could continue. I have no idea how big their market was, but I know of at least one acquirer/issuer credit card system that ran on that hardware around 2002-3.


Stratus VOS ran on a bunch of non-x86 hardware, i860, PA-RISC, 68000. It wasn't Windows (UNIX admin with a modicum of Stratus VOS experience in production, back in the day).

It seems I encountered the “ftServer” line, which on closer inspection launched in 2001, and was indeed intel/windows 2k, based around Pentium III Xeon Chips.

They still list old product sheets here, the oldest being the ftServer 5200 AFAICT - https://www.stratus.com/solutions/previous-generation-produc...

https://www.stratus.com/assets/5200hw.pdf


Sun was dying in 2000. I was busy deploying BSD and a bit later Linux for all our x86 gear.

Meanwhile in 2000 we only considered Linux good enough to host our MP3 file server and quake for the late nights.

All our production stuff was being deployed on Aix, HP-UX, Solaris and Windows NT/2000 Server.

Likewise, most of my university degree used DG/UX and Solaris. Red Hat Linux was first deployed in the labs only after the DG/UX server died, and by then I was already in the fourth year of a five-year degree.


Well we were a small startup, and the idea of using AIX was a non-starter. Solaris was lovely, but our E250 was only for mail, and in hindsight we should have stood up a FreeBSD server with dovecot or something instead of a system that we migrated off of a year later.

We did use NT/2K internally but that was because we had some who insisted on using SMB via Windows.

Such fun times. Unix and Unix-like OSes were spreading like wildfire. I never would have thought I'd end up wrangling them for the majority of my career.


Java was exploding and Sun machines were the server platform at the time. Yes, the dot-com bubble burst and their stock was in freefall, but all the things deployed to Sun that survived the bubble didn't just disappear or move to x86 overnight.

Well you can say the same about COBOL...

Just because things hung around didn't mean that Sun/Solaris/Java were long for this world. Linux/x86 was just too cheap compared to SPARC gear. Even if it wasn't as robust as the Sun gear, it just made too much sense especially if you didn't have any legacy baggage.


IIRC the Stratus/Model 88 was Moto 68K chips, not x86? I worked on them for years on wall st. - really nice machines! :-D

The ones I encountered (and I never worked on them directly) were tandem-x86 systems and ran windows.

According to Wikipedia they launched in 2002, so I guess they were quite new when I saw them in 03.


How well do commodity systems protect your financial transactions from cosmic ray-induced bit errors?

How often do you hear about them? Now divide that by the millions (billions?) of daily transactions to get an approximate error rate. That's about how well they are protected.
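For what it's worth, the commodity answer is mostly not to trust the hardware at all but to add end-to-end integrity checks in software: attach a checksum when a record is produced, verify it when consumed, so a bit flip anywhere in between surfaces as a detectable mismatch rather than a silently corrupted balance. A minimal sketch using a CRC (illustrative only, not any particular bank's scheme; ECC RAM and TCP checksums play the same role at other layers):

```python
import zlib

def seal(record: bytes) -> tuple[bytes, int]:
    """Attach a CRC-32 to a record at write/send time."""
    return record, zlib.crc32(record)

def verify(record: bytes, crc: int) -> bool:
    """Recompute the CRC at read/receive time; a mismatch means corruption."""
    return zlib.crc32(record) == crc

record, crc = seal(b"debit account 42: $100.00")
assert verify(record, crc)                        # intact record passes

flipped = bytes([record[0] ^ 0x01]) + record[1:]  # simulate a single bit flip
assert not verify(flipped, crc)                   # corruption is detected
```

CRC-32 detects any single-bit error, so the flip is caught and the transaction can be rejected or replayed instead of committed wrong.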


