from Hacker News

Ask HN: What is the point of IBM mainframes?

by julius on 11/28/22, 5:23 PM with 33 comments

From another HN post I learned that quite a few companies use IBM mainframes. And they are very expensive.

What can you accomplish with an IBM mainframe that you could not do on x86 Linux servers? How is the premium in price justified?

  • by aq9 on 11/28/22, 5:36 PM

    I think this is the wrong way to think about it.

    For the type of organizations that run workloads on IBM mainframes, there are three drivers:

    * Legacy: The application was written for the mainframe, cannot run on anything else, and is too expensive in terms of dev and test time to re-platform.

    * Business value: This is the big one; these workloads make their companies 100s of millions to billions of dollars per year. The price premium for running this on a mainframe is a rounding error.

    * Reliability: With the cloud, I hold the opinion that the average x86 application is less available/reliable than a well-run pre-cloud application (which already included HA, etc.). Mainframe apps and hardware blow all this out of the water.

    FWIW: I programmed mainframes briefly early in my career, so I am quite familiar with the ecosystem.

  • by LinuxBender on 11/28/22, 6:22 PM

    What can you accomplish with an IBM mainframe that you could not do on x86 Linux servers?

    I would probably word that as: what can a mainframe do that commodity servers cannot? They are more reliable and allow for spinning up a vast number of workloads with incredibly fast virtual networking. If one needed a "cloud" of Linux nodes, a Z16 would give insanely fast deployment and teardown of workloads on demand, and those nodes could talk to each other with very little latency and high throughput. Some companies and organizations need a small private cloud (think of a pod or a blast-radius-contained mini region) that they can spin up on demand. Even in a catastrophic failure, the fault blast radius is limited to the group of racks comprising the mainframe.
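
    (To make the on-demand angle concrete, here is a minimal sketch of the pattern in Python. FakeHypervisor is an invented stand-in, not a real IBM API; a real one would front z/VM or KVM on Z. Only the spin-up/teardown shape is the point:)

        from contextlib import contextmanager

        # Hypothetical provisioning layer, invented for illustration.
        class FakeHypervisor:
            def __init__(self):
                self.count = 0
            def create_guest(self, image):
                self.count += 1
                return f"{image}-guest-{self.count}"
            def destroy_guest(self, guest):
                print(f"tearing down {guest}")

        @contextmanager
        def burst_of_guests(hv, n, image="linux-s390x"):
            """Spin up n Linux guests, yield them, always tear them down."""
            guests = [hv.create_guest(image) for _ in range(n)]
            try:
                yield guests   # guests talk over machine-internal virtual networking
            finally:
                for g in guests:
                    hv.destroy_guest(g)

        with burst_of_guests(FakeHypervisor(), 3) as nodes:
            print("running workload on", nodes)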

    The cost is not just hardware. Anything IBM is going to be IBM-supported, and your business would have factored that into the ROI/TCO. The contracts are very expensive, but you have the highest-level engineers a phone call away, and if you require it they can also remotely diagnose problems and have replacement parts in {n} hours based on the contract. Not everyone needs this, of course, which understandably leads many to question the cost.

  • by mikewarot on 11/28/22, 11:46 PM

    Capability-based security[2] is what IBM and other mainframe systems offer. When you set up a job, you specify exactly which resources and runtimes are to be made available, which virtual disks, etc. The code running in that constrained environment can't affect anything else. This is, effectively, capability-based security. It's the thing that didn't get built into Multics in time, and that Unix made fun of and then ignored.
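
    (A rough illustration of the idea in plain Python; nothing mainframe-specific, just the shape of capability-passing versus ambient authority. The file name is made up:)

        # Ambient authority: the function can reach ANY file the process can.
        def summarize_ambient(path):
            with open(path) as f:       # nothing stops path="/etc/passwd"
                return len(f.readlines())

        # Capability style: the caller grants an already-opened handle,
        # like a job setup granting specific resources. The code cannot
        # name any other resource into existence.
        def summarize_capability(readable):
            return len(readable.readlines())

        with open("job_input.txt") as f:   # "job_input.txt" is a made-up input
            print(summarize_capability(f))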

    It's the same thing that Linus thought wasn't important at all, ignoring the advice of his teacher Andrew Tanenbaum, who was trying to teach him about microkernels and why they were better[3].

    In the PC world, we don't even have ECC memory in most desktops. That's why Rowhammer[1] and all the other attacks like it work: it's substandard RAM that we've grown to accept in the name of cost savings.
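
    (For the curious, the single-bit correction that ECC RAM does in hardware looks roughly like this classic Hamming(7,4) toy in Python; real DIMMs use a wider code over 64-bit words, but the mechanics are the same:)

        # Toy Hamming(7,4): encode 4 data bits with 3 parity bits, then
        # detect and repair one flipped bit, which non-ECC RAM cannot do.
        def encode(d1, d2, d3, d4):
            p1 = d1 ^ d2 ^ d4
            p2 = d1 ^ d3 ^ d4
            p3 = d2 ^ d3 ^ d4
            return [p1, p2, d1, p3, d2, d3, d4]    # codeword positions 1..7

        def correct(c):
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            bad = s1 + 2 * s2 + 4 * s3             # syndrome = error position
            if bad:
                c[bad - 1] ^= 1                    # flip the bad bit back
            return [c[2], c[4], c[5], c[6]]        # recovered data bits

        word = encode(1, 0, 1, 1)
        word[4] ^= 1                               # simulate a Rowhammer-style flip
        assert correct(word) == [1, 0, 1, 1]       # repaired transparently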

    We keep finding ersatz versions of capabilities, first in virtualization such as VMware, then in Docker and containers, and now in WASM/WASI. We'll eventually learn the lesson, likely 5-10 years from now. If we can keep WASM from getting corrupted, it might make it.

    Sure, mainframes aren't cost-competitive, but that hardly matters: the cost of computing is nowhere near as important as the business it makes possible.

    [1] https://en.wikipedia.org/wiki/Row_hammer

    [2] https://en.wikipedia.org/wiki/Capability-based_security

    [3] https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...

  • by closeparen on 11/29/22, 1:07 AM

    My understanding is: Silicon Valley hires tens of thousands of the most brilliant, expensive engineers to write always-on distributed systems and to try to keep them alive with the data sort-of consistent. Enterprise does not do this. Your bank writes simple, naive, non-distributed programs. The operator of one super-reliable computer invokes each program and makes sure it runs to completion every night. When it's time for a deployment or a database migration on an internet-connected service, they just declare a maintenance window and go down for a few hours. There's one big ole SQL database in normal form, with joins and transactions.
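
    (A minimal sketch of that style in Python, with the built-in sqlite3 standing in for the big SQL database; the schema and values are invented for illustration:)

        import sqlite3

        # One reliable machine, one transactional database: the whole
        # transfer either commits or rolls back.
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
        conn.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500), (2, 0)])

        with conn:  # commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - 100 WHERE id = 1")
            conn.execute("UPDATE accounts SET balance = balance + 100 WHERE id = 2")
            # No consensus protocol, no sagas, no retry queues: consistency
            # is the database's problem, and there is exactly one database.

        print(conn.execute("SELECT * FROM accounts").fetchall())  # [(1, 400), (2, 100)]
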
  • by jareds on 11/28/22, 5:38 PM

    You can continue to run internal software that has been under active development for over 50 years. In 1970, Unix and C did not exist, but COBOL did. Do you really want to rewrite all your credit card processing code from scratch when it has 50 years of business rules and bug fixes?

  • by hindsightbias on 11/28/22, 7:39 PM

    Google news stories about malware, ransomware, DC/cloud outages, millions lost, and see how many of them reference an IBM mainframe.

    You get what you pay for. All your newfangled software and hardware just reinvents the wheel, is buggy, is layered on an onion of a stack with a lifespan of maybe 3 years, and takes a lot more humans and money to run, diagnose, and maintain. And in three years, a new CTO will come in from Wharton, tell you you're a moron for not using microservices 2.0, and spend $100M rearchitecting it.

    In the meantime, Z/Power just keep going and 4 or 5 guys run the whole show. They sleep at night and have a life.

  • by tacostakohashi on 11/28/22, 11:45 PM

    I currently work at an organization that has a lot of very flaky applications spread across a handful of servers (5, 10, 20) using ZooKeeper, Ignite, and a few other frameworks that attempt to provide "clustering".

    It all adds a lot of overhead and isn't particularly reliable. Frankly, I think it would make more sense to run it on a mainframe and keep the application code simple, with the fault tolerance handled at the hardware/OS layer.

    If you are a big tech company, the massive scale means the cost advantage of x86 is worth dealing with node failures, and it's not like any single mainframe could handle it anyway. But for a lot of internal applications that are too big for a single x86 machine yet don't require thousands of nodes, I can totally see where it makes sense.

  • by markus_zhang on 11/28/22, 6:02 PM

    I actually want to ask the reverse question: why isn't it popular for mainframe manufacturers to rent out their cores, RAM, and storage to non-banking/insurance businesses? Is it because of the cost?

  • by simne on 11/30/22, 3:29 AM

    I can tell a story about a typical mainframe client from decades ago.

    It was a successful, big, distributed corporation that got its start when mainframes were the only viable technology for it, and all of its business lived inside the machine.

    It grew for decades, but then a problem hit: the mainframe reached its limits, and the vendor was not agile enough, saying "we will transfer all your infrastructure to new hardware, but it will take a few months" (which they understood as stopping the business for a long time).

    I don't know all the exact details; all I know is that another company made them an offer, claiming they would move everything to the cloud without stopping work.

  • by simne on 11/30/22, 2:53 AM

    I think this is because IBM is a software/consulting company, and their software costs so much that the hardware cost doesn't matter.

    And sure, a big share of their business software makes intensive use of very specific features of their mainframes.

    And as a last trick, such companies usually don't disclose real prices, but when you buy hardware + software + consulting as one package, they offer huge discounts.

    So sure, you could pay their consultants to write for Linux, but it would cost more.

  • by CRConrad on 11/29/22, 11:28 AM

    As I've understood it[1], in a word: Throughput.

    ___

    [1]: Edit -- Removed superfluous "haven't read TFA" disclaimer.