The TCO analysis is the easy part. Here is what the business case leaves out.
Gartner just published something that surprised even the analyst who wrote it.
Alessandro Galimberti, Gartner VP Analyst, told The Register that VMware users considering alternatives to Broadcom’s new licensing are running the numbers – and some of them are landing on IBM mainframe. He was surprised. So were the people reading it.
But the business case is real. For organizations running 500–700 Linux VMs on VMware, the TCO comparison with IBM’s z/VM hypervisor can favor mainframe. High availability, data synchronization, and disaster recovery are built into the platform. On cloud or VMware, you build them yourself.
So the decision makes sense. The question nobody is asking is: what happens next?
The TCO analysis is straightforward. Broadcom’s VMware Cloud Foundation bundle makes you buy the full stack whether you need every component or not. IBM’s platform has built-in HA, DR, and data synchronization. Run the numbers for your workload, your team, your five-year horizon.
Galimberti has seen multiple business cases where the move makes sense. The math doesn’t lie.
But the math only covers the hardware and software costs. It doesn’t cover the operational costs. And on mainframe, the operational costs are almost entirely human.
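To make that concrete, here is a deliberately oversimplified sketch of the five-year math in Python. Every figure in it is a placeholder, not a quote from any vendor; the point is only that staffing and training belong in the model as first-class terms.

```python
# Deliberately simplified five-year TCO sketch. Every number below is a
# placeholder, not real pricing -- substitute your own vendor quotes,
# salaries, and training costs.

def five_year_tco(hw_and_sw_per_year, ops_staff, salary_per_admin,
                  training_per_admin=0.0, years=5):
    """Total cost over `years`: platform costs plus the human costs
    (staffing every year, training once up front)."""
    platform = hw_and_sw_per_year * years
    people = ops_staff * salary_per_admin * years
    training = ops_staff * training_per_admin  # one-time ramp-up cost
    return platform + people + training

# Hypothetical comparison: the platform line can favor mainframe while
# the training line narrows the gap.
vmware = five_year_tco(1_200_000, ops_staff=4, salary_per_admin=150_000)
mainframe = five_year_tco(900_000, ops_staff=4, salary_per_admin=150_000,
                          training_per_admin=40_000)
print(f"VMware:    ${vmware:,.0f}")
print(f"Mainframe: ${mainframe:,.0f}")
```

Notice that the training term is zero on the platform your team already knows. That single term is what the hardware-and-software comparison leaves out.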
A VMware administrator who moves to IBM z/VM arrives on a platform that has been running continuously since before they started their career.
The hardware is extraordinary. The software is mature. The platform is, as Galimberti said, genuinely not stale.
But the interface is ISPF. The job scheduler is JCL. The security model is RACF. The diagnostic tool is SDSF. The primary development interface is a 3270 terminal – 24 rows, 80 columns, keyboard-driven, no mouse.
None of this is bad. All of it is different.
A VMware administrator who has never seen ISPF will spend the first month feeling like they’ve been handed a cockpit with no manual. The systems are running. The instruments are working. They have no idea what any of them mean.
This is not a hypothetical. This is what happens every time an organization adopts mainframe without a structured transition plan.
The mainframe skills gap has been discussed for twenty years. The average mainframe professional is over 50. The pipeline of new talent is thin. Gartner’s own report notes that skills remain an issue – kids don’t contemplate mainframe careers.
This is a known problem for organizations that already run mainframe.
It is a completely new problem for VMware refugees who just made a TCO decision and now need to operate a platform they’ve never touched.
The difference matters. Existing mainframe shops have institutional knowledge – decades of tribal wisdom, undocumented procedures, experienced operators who know what the system is telling them at 3 AM. VMware refugees have none of this.
They arrive with strong virtualization skills, strong Linux skills, strong automation skills. These transfer. They arrive without RACF knowledge, JCL knowledge, ISPF muscle memory, or any understanding of why the change management process has three approval stages and asks for a business owner.
The platform is sound. The knowledge transfer problem is the crisis.
If your organization is running the Broadcom TCO comparison and mainframe is coming out ahead, here is what the business case needs to include before you sign anything.
A training course gives your team knowledge. A training plan gives them operational capability. The difference is hands-on practice in a realistic environment before they touch production.
JCL, ISPF, RACF, SDSF, and DB2 are not concepts you learn from slides. They are muscle memory you build from repetition. Your team needs a lab environment where they can make mistakes safely – run jobs, read abend codes, diagnose failures, fix them, and run again.
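For a flavor of what that repetition looks like, here is a toy drill built on a handful of standard z/OS system completion codes. The codes and their meanings are real; the drill itself is just an illustrative sketch, not an IBM tool.

```python
# A minimal flash-card drill for common z/OS abend codes. The codes and
# meanings are standard; the lookup tool itself is purely illustrative.

ABEND_CODES = {
    "S0C4": "Protection exception -- the program addressed storage it does not own",
    "S0C7": "Data exception -- arithmetic on invalid packed-decimal data",
    "S222": "Job cancelled by the operator",
    "S322": "CPU time limit exceeded",
    "S806": "Load module not found in the search order",
    "SB37": "Output dataset ran out of space",
}

def explain(code):
    """Return the meaning of an abend code, case-insensitively."""
    return ABEND_CODES.get(code.upper(), "Unknown -- look it up, then add it here")

print(explain("s0c7"))
```

The codes above are the ones a new operator meets first. The habit the lab builds is the lookup itself: abend, meaning, fix, rerun.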
RACF is one of the most granular security models in enterprise computing. It is also among the easiest to misconfigure if you don’t know what you’re doing. Default RACF settings are not production-ready settings. An organization that migrates VMware workloads to mainframe without a RACF audit is trading one risk for another.
Get the security baseline established before the workloads arrive. Not after.
Things will go wrong. On mainframe, the diagnostic information is extraordinarily rich – SMF records, CICS journals, job logs, abend dumps. The platform tells you exactly what happened.
But only if you know how to read what it’s telling you. Build the runbook before the first production incident. Not at 3 AM when payroll is abending and hundreds of employees are waiting.
Gartner predicts that by 2030, 75% of vendors offering mainframe exit services will change direction or disappear. The mainframe exit market is collapsing.
The mainframe entry market is just getting started.
VMware refugees who make the move successfully will need tools, training, and support built for people who are new to the platform. Not documentation written for IBM engineers. Not courses designed for people who already know JES2. Accessible, hands-on, realistic training that gets a VMware administrator operational on mainframe in weeks, not years.
The platform is ready. The talent pipeline isn’t. That gap is the next wave of mainframe opportunity – not for the organizations that have been running mainframe for forty years, but for the ones arriving for the first time.
Some VMware refugees won’t stop at z/VM. They’ll ask the next question: what about containers?
IBM’s answer is Red Hat OpenShift on IBM Z. Kubernetes, containerized workloads, cloud-native development patterns – running on mainframe hardware. The same platform that runs your COBOL batch jobs can run your containerized microservices alongside them.
This is not theoretical. z/OS Container Extensions (zCX) allow Linux containers to run directly on z/OS. OpenShift on Z brings the full Kubernetes ecosystem to mainframe hardware.
For VMware refugees, this is actually a compelling story. They know containers. They know Kubernetes. The learning curve for OpenShift on Z is much shorter than the learning curve for z/OS itself.
But here is the nuance: running containers on mainframe hardware is not the same as running z/OS workloads. The two worlds coexist on the same hardware but operate differently. Your containerized application doesn’t automatically get access to the COBOL batch jobs running next to it. Integration requires deliberate design.
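One concrete bridge between the two worlds is the z/OSMF REST jobs interface, which lets a containerized application submit JCL over HTTPS. A minimal sketch, assuming a z/OSMF endpoint with basic authentication; the host name, credentials, and job are all placeholders.

```python
# Sketch: submitting a batch job to z/OS from a containerized app via the
# z/OSMF jobs REST interface (PUT /zosmf/restjobs/jobs). The host,
# credentials, and JCL are placeholders -- adapt to your environment.
import base64
import urllib.request

def build_submit_request(host, user, password, jcl):
    """Build (but do not send) the HTTP request that submits `jcl`."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(
        url=f"https://{host}/zosmf/restjobs/jobs",
        data=jcl.encode(),
        method="PUT",
        headers={
            "Authorization": f"Basic {token}",
            "Content-Type": "text/plain",
            # z/OSMF requires this CSRF header on REST requests
            "X-CSRF-ZOSMF-HEADER": "true",
        },
    )

# IEFBR14 is the standard do-nothing program -- a safe first job.
jcl = "//HELLO    JOB (ACCT),'SKETCH',CLASS=A\n//STEP1    EXEC PGM=IEFBR14\n"
req = build_submit_request("zos.example.com", "ibmuser", "secret", jcl)
print(req.method, req.full_url)
# In production you would send the request and parse the JSON reply for
# the jobid:  with urllib.request.urlopen(req) as resp: ...
```

The design point stands either way: the container and the batch job share hardware, but they talk to each other through an interface you have to choose and build, not through proximity.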
The opportunity is real. The complexity is also real – you now have two operational models on one platform, each requiring different skills. The VMware administrator who moves to z/VM and learns to operate z/OS has solved one problem. If they also want to run OpenShift on Z, they’ve signed up for a second education.
Which brings us back to where we started: the decision to move is step one. The operational capability to run what you’ve moved is the work that follows.
The Gartner report is right. The mainframe is not stale. The business case can work. The skills gap is real and it just got a new dimension.
The decision to move is the easy part.
The hard part is everything that happens on the other side of it. Plan for that part first.
Source: Gartner “State of the IBM Mainframe in 2026,” as reported by The Register, May 4, 2026. Quotes attributed to Alessandro Galimberti, Gartner VP Analyst.
Also worth reading: Is Mainframe a Good Career Choice in 2026? · Institutional Knowledge and AI on the Mainframe · AI Pilot Purgatory on the Mainframe