
Why Mainframe is Different – The Execution Graph Problem

Most developers understand applications as programs that run on infrastructure. On mainframe, the infrastructure IS the application.

When developers from modern software backgrounds first encounter mainframe, they often assume the learning curve is about syntax – learning COBOL instead of Java, JCL instead of shell scripts, ISPF instead of a modern IDE.

The syntax is the easy part. The hard part is understanding that mainframe architecture is fundamentally different from the application model most developers carry in their heads.

This difference matters enormously when applying AI to mainframe. Tools built on modern application assumptions will fail in ways that are hard to detect and expensive to fix.

The modern application model

In a modern web application, the architecture is relatively clear. A program runs. It calls other programs or services. It reads from and writes to databases. The source code describes most of what happens. If you read the code carefully enough, you can understand the system.

This model – code as the primary source of truth – is what most developer tools, including AI tools, are built around. It is why RAG on source files works reasonably well for a Node.js microservice. The code and its execution are closely coupled.

The mainframe model – three layers, one system

On mainframe, the system operates across three layers that are tightly interdependent but physically separate.

The batch layer – JCL and COBOL. Batch jobs are the backbone of mainframe processing. A batch job is described in JCL – Job Control Language – which defines what programs run, in what order, with what datasets as input and output. The COBOL programs contain the business logic. But the JCL defines the execution context: which datasets the program receives, what it is expected to produce, what happens if it fails.

"A COBOL program without its JCL is like a function without its call stack. It tells you what the code does. It does not tell you when it runs, what data it processes, what depends on its output."
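To make that concrete, here is a minimal sketch of the execution context a single JCL step supplies that the COBOL source never mentions. All job, program, and dataset names are invented for illustration:

```python
# Hypothetical batch step, represented as data. The JCL binds the program
# to its datasets and its failure behaviour; none of this is in the COBOL.
step = {
    "job": "PAYJOB1",
    "step": "STEP010",
    "program": "PAYRPT01",            # the COBOL load module being run
    "inputs": ["PROD.PAY.MASTER"],    # resolved from DD statements
    "outputs": ["PROD.PAY.REPORT"],
    "on_failure": "skip later steps if RC > 4",  # condition handling lives in JCL
}

# Reading only PAYRPT01's source reveals none of this context.
for ds in step["inputs"]:
    print(f"{step['program']} reads {ds}")
for ds in step["outputs"]:
    print(f"{step['program']} writes {ds}")
```

The point of the sketch: the program name is one field among several, and the fields around it are exactly the "call stack" the quote describes.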

The online layer – CICS. CICS (Customer Information Control System) manages online transaction processing. When a bank teller accesses a customer record, or an ATM processes a withdrawal, CICS is the middleware that receives the transaction, routes it to the appropriate COBOL program, manages the conversation state, and handles the response.

CICS routing tables define which transaction codes map to which programs. CICS commarea definitions define the data passed between the terminal and the program. CICS error handling definitions determine what happens when a program abends. None of this is in the COBOL source code. It is in CICS system definitions that exist in a completely separate part of the system.
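As a sketch, the routing layer amounts to a lookup that lives entirely outside the COBOL source. The transaction codes and program names below are hypothetical; only the shape of the mapping reflects how CICS resolves a transaction:

```python
# Hypothetical routing table: four-character transaction codes mapped
# to the programs CICS dispatches them to.
routing = {
    "PY01": "PAYMNT01",   # payment entry
    "CU02": "CUSTINQ1",   # customer inquiry
}

def route(txn_code: str) -> str:
    """Resolve a transaction code the way CICS consults its definitions."""
    try:
        return routing[txn_code]
    except KeyError:
        raise LookupError(f"no program defined for transaction {txn_code}")

print(route("PY01"))
```

A tool that never ingests these definitions cannot tell you which program a given transaction actually reaches.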

The data layer – DB2, VSAM, IMS. Data on mainframe lives in multiple systems. DB2 is the relational database. VSAM is the file system for indexed and sequential data. IMS is a hierarchical database still used extensively in banking and insurance. Each has its own access methods, its own locking behaviour, its own interaction with the programs that use it.

A program that reads a VSAM KSDS file depends on the file being allocated, the record format being correct, and in a batch context, the previous job in the stream having written the right data to it.
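That last dependency is implicit in the job stream, but it can be made explicit. A minimal check, with invented job and dataset names, that a reader of a VSAM file runs after the job that writes it:

```python
# Invented nightly job stream, in execution order.
schedule = ["EXTRACT1", "SORTJOB1", "LOADKSDS", "RPTJOB01"]

# Which jobs write and read which datasets (as derived from JCL DD statements).
writes = {"LOADKSDS": {"PROD.CUST.KSDS"}}
reads = {"RPTJOB01": {"PROD.CUST.KSDS"}}

def produced_before(reader: str, dataset: str) -> bool:
    """True if some job earlier in the stream writes the dataset."""
    earlier = schedule[: schedule.index(reader)]
    return any(dataset in writes.get(job, set()) for job in earlier)

print(produced_before("RPTJOB01", "PROD.CUST.KSDS"))  # the dependency holds
```

Nothing in the reader's COBOL source expresses this ordering; it exists only in the job stream and the scheduler.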

A real example – tracing a bank transaction

To make this concrete, trace what happens when a bank customer initiates a payment:

Payment transaction trace
Step 1 – CICS entry

The transaction enters via CICS. The routing table maps its transaction code to PAYMNT01, the payment processing program.

Step 2 – DB2 + VSAM write

PAYMNT01 reads the customer record from DB2 table CUST_ACCT, validates the account balance, and writes a pending transaction record to VSAM file PEND.TRANS.

Step 3 – Batch job at 11:00 PM

A batch job reads PEND.TRANS, sorts it by account number (JCL SORT step), and passes the sorted file to BATCHPAY, the batch payment processing program.

Step 4 – Validation chain

BATCHPAY calls VALACCT to validate each account. VALACCT calls CHKLIMIT to check credit limits. CHKLIMIT reads a DB2 table last updated by a separate regulatory compliance job that runs at month end.

Step 5 – Error handling

If a payment fails validation, BATCHPAY writes an error record to VSAM file ERR.TRANS. A CICS transaction running the next morning reads ERR.TRANS and presents failed payments to operators.
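The five steps above can be sketched as a directed execution graph. Program and dataset names come from the example; the graph encoding itself is illustrative, not how any particular tool stores it:

```python
# Edges mean "feeds or triggers the next node".
edges = {
    "CICS entry":  ["PAYMNT01"],             # routing table dispatch
    "PAYMNT01":    ["PEND.TRANS"],           # writes pending payments (VSAM)
    "PEND.TRANS":  ["SORT step"],            # 11:00 PM batch job picks it up
    "SORT step":   ["BATCHPAY"],
    "BATCHPAY":    ["VALACCT", "ERR.TRANS"], # validation chain and error file
    "VALACCT":     ["CHKLIMIT"],             # credit-limit check
    "ERR.TRANS":   ["morning CICS txn"],     # operators see failures next day
}

def downstream(node, seen=None):
    """Collect everything reachable from `node` in the execution graph."""
    seen = set() if seen is None else seen
    for nxt in edges.get(node, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, seen)
    return seen

# "What happens when a payment fails validation?" A source-only view stops
# at VALACCT and CHKLIMIT; the graph also surfaces ERR.TRANS and the
# operator-facing morning transaction.
print(sorted(downstream("BATCHPAY")))
```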

Now ask an AI tool: "What happens when a payment fails validation?" A tool that vectorised only COBOL source files will find the validation logic in VALACCT and CHKLIMIT. It will completely miss the CICS routing, the VSAM file, the batch job, the DB2 table updated by a separate job, and the CICS transaction that presents errors the following morning.

The answer it gives is not wrong. It is incomplete in ways that matter when you are making decisions about this system.

Why this matters for AI

The execution graph – the full picture of what runs, when, with what data, in what sequence – is the real unit of analysis on mainframe.

This graph is not written down in any single place. It is distributed across:

  • JCL job streams and their dataset definitions
  • CICS system definitions and routing tables
  • Job scheduler entries and dependency chains
  • VSAM and DB2 catalog entries
  • Started task definitions and their interactions
  • Operations runbooks that describe what to do when things go wrong
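A tool that models the execution graph has to merge these sources into one structure. A toy sketch of that merge, with every identifier invented, showing how each artifact class contributes its own slice of edges:

```python
# Each source contributes (source_node, relation, target_node) triples.
from_jcl = [
    ("PAYJOB1.STEP010", "executes", "PAYRPT01"),
    ("PAYJOB1.STEP010", "writes", "PROD.PAY.REPORT"),
]
from_cics = [("TXN PY01", "routes-to", "PAYMNT01")]
from_scheduler = [("PAYJOB1", "must-precede", "RPTJOB01")]
from_catalog = [("PROD.PAY.REPORT", "is-a", "sequential dataset")]

# The execution graph is the union; no single source contains it.
graph = from_jcl + from_cics + from_scheduler + from_catalog
for src, rel, dst in graph:
    print(f"{src} --[{rel}]--> {dst}")
```

Real parsers for JCL, CICS definitions, and scheduler exports are each substantial pieces of work; the sketch only shows why the union, not any one slice, is the unit of analysis.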

An AI tool that analyses only source code is reading one layer of a three-layer system. The answers it produces are grounded in a partial picture.

Building AI tools that work on mainframe requires understanding this architecture first. The execution graph must be modelled before any meaningful AI analysis can begin. This is not a minor technical detail – it is the foundational requirement that distinguishes mainframe-specific AI tools from generic ones.

What this means for teams evaluating AI tools

Before evaluating any AI tool for mainframe analysis, ask:

  • Does it ingest JCL as well as COBOL? Does it understand dataset flows between job steps?
  • Does it model CICS transaction definitions and routing? Does it understand commarea structures?
  • Does it understand job scheduler dependencies – not just program-level calls?
  • Does it model dataset lineage – which jobs produce each dataset, and which programs consume it?
  • Does it distinguish between the three layers – batch, online, and data – and their interactions?

A tool that cannot answer yes to these questions is analysing fragments, not systems. On a payroll or payments system processing millions of transactions daily, that distinction is not academic.

Also in this series: Why Generic AI Tools Fail on Mainframe · The Hidden Risk in Every COBOL Migration Project · Runtime Evidence as the Right Starting Point

IMUAI – AI diagnostics built for mainframe architecture
IMUAI models the execution graph – JCL, CICS, DB2, VSAM – not just COBOL source code.
Working on Linux and mainframe? IM3270 is a modern 3270 terminal emulator for Linux – free 60-day trial, no credit card required.