Zero Trust for AI is a hot topic in 2026. On mainframe, Zero Trust has been the default since the 1970s.
Make the Mainframe Great Again.
Not a political statement. A technical one.
While the industry obsesses over AI in the cloud, the most sensitive data in the world is still sitting on z/OS – bank transactions, insurance records, government systems processing $10 trillion in transactions every day. And now AI wants access to it.
This is where it gets interesting. Because the mainframe's greatest strength – its security architecture – is exactly what makes AI adoption on it different from everywhere else. And different in a way that the rest of the industry is only now beginning to understand.
When organisations deploy AI tools on modern cloud platforms, the security model is improvised. A tool gets API credentials. Access is scoped as tightly as the team has time to configure. Audit logs exist if someone remembered to enable them. The principle of least privilege is applied in theory and compromised in practice because it is easier to give broader access than to debug permission errors.
The industry's response has been to retrofit Zero Trust principles onto platforms that were never designed for them. The frameworks are complex. The implementation is slow. The results are incomplete.
Mainframe organisations have a different problem. They are trying to explain to security teams that the platform they want to apply Zero Trust principles to already has them – and has had them for fifty years.
RACF (Resource Access Control Facility) was introduced in 1976. Its design principles are indistinguishable from what the industry now calls Zero Trust:
Deny by default. Every resource on z/OS – every dataset, every program, every transaction, every operator command – is protected by a RACF profile. Access is denied unless explicitly permitted. There is no concept of implicit trust based on network location or previous authentication.
Verify continuously. Every access attempt – by a human, a batch job, a CICS transaction, or a started task – is verified against the RACF database at the time of access. Not at login. Not at session establishment. At every individual resource access.
Least privilege, enforced. RACF enforces access at the individual resource level. A user ID can be granted read access to a specific dataset without any broader access to the system. A program can be authorised to run a specific CICS transaction without any other privileges.
Audit everything. Every security event – successful access, failed access, privilege escalation, security definition change – is written to SMF as a structured record. Available for real-time analysis and historical audit with no additional configuration required.
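As a concrete sketch of the deny-by-default posture – dataset, transaction, and user names here are invented for illustration – these are ordinary RACF commands: a profile defined with UACC(NONE) denies everyone, and PERMIT then grants exactly one ID exactly one access level:

```
/* Protect a dataset: universal access NONE, then grant one ID read-only */
ADDSD  'PROD.PAYROLL.**' UACC(NONE)
PERMIT 'PROD.PAYROLL.**' ID(JSMITH) ACCESS(READ)

/* Authorise one specific CICS transaction, nothing broader */
RDEFINE TCICSTRN PAY1 UACC(NONE)
PERMIT  PAY1 CLASS(TCICSTRN) ID(JSMITH) ACCESS(READ)
SETROPTS RACLIST(TCICSTRN) REFRESH
```

Nothing about these commands is AI-specific – which is precisely the point the rest of this piece makes.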
When an AI tool needs to access mainframe data or systems, it operates within the same security framework as everything else. There is no special AI security model. There is the z/OS security model, which applies to everything.
This is a strength, not a constraint.
The wrong approach. AI tools that require APF authorisation, elevated privileges, or special security bypasses to access mainframe data. These tools are asking organisations to create exceptions to a security model that has been refined over fifty years. Any security team worth its certification will reject them.
The right approach. AI tools that run under the user's own RACF ID, with no elevated privileges, reading only the data the user is already authorised to access. The AI has exactly the access that the human operator has – no more. Every access is logged in SMF the same way it would be for any other access.
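The shape of that architecture can be sketched in a few lines – purely a conceptual model in Python, not the real SAF/RACROUTE interface, with all IDs and resource names invented. It shows the two properties that matter: access is denied unless a profile explicitly permits it, and every attempt is logged whether it succeeds or fails:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Conceptual model only: on z/OS the real check goes through RACF via the
# SAF RACROUTE interface. Names here are illustrative, not a z/OS API.

@dataclass
class AccessModel:
    # resource profile -> {userid: set of permitted access levels};
    # anything absent from the profiles is denied by default
    profiles: dict
    audit_log: list = field(default_factory=list)

    def check(self, userid: str, resource: str, access: str) -> bool:
        allowed = access in self.profiles.get(resource, {}).get(userid, set())
        # every attempt is recorded, pass or fail -- mirroring SMF behaviour
        self.audit_log.append(
            (datetime.now(timezone.utc), userid, resource, access, allowed))
        return allowed

acf = AccessModel(profiles={"PROD.PAYROLL.DATA": {"JSMITH": {"READ"}}})

# The AI tool runs under the human operator's own ID -- no special identity
assert acf.check("JSMITH", "PROD.PAYROLL.DATA", "READ")        # permitted
assert not acf.check("JSMITH", "PROD.PAYROLL.DATA", "UPDATE")  # denied by default
assert not acf.check("AITOOL", "PROD.PAYROLL.DATA", "READ")    # unknown ID: denied
assert len(acf.audit_log) == 3                                 # all attempts logged
```

The design choice the sketch encodes is the one the quote describes: the AI tool brings no identity and no privileges of its own, so there is nothing new for a security team to trust.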
This is the architecture that enterprise security teams can approve. Not because they trust the AI tool – but because they trust the security model the tool is running inside. As one mainframe AI developer put it: "We didn't ask customers to trust us. We asked them to trust decades of proven mainframe security controls." This is the correct posture for AI on mainframe.
One of the persistent concerns about AI in enterprise environments is auditability. When an AI tool accesses sensitive data, makes a recommendation, or takes an automated action – who is responsible? How do you prove what happened? How do you satisfy a regulatory audit?
On mainframe, this problem is largely solved by infrastructure that already exists.
SMF records every access to every resource. If an AI tool running under a specific RACF ID accesses a DB2 table, reads a dataset, or queries CICS data, that access is recorded in SMF with the user ID, the resource, the time, and the outcome.
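To make "structured record" concrete, here is a hedged sketch that decodes the common SMF header – the eighteen bytes every record type shares, carrying length, type, timestamp, and system ID – from a synthetic record. The offsets follow the documented common header layout; real type 80 records carry RACF event sections beyond this header, which the sketch does not attempt:

```python
import struct
from datetime import date, timedelta

def parse_smf_header(rec: bytes) -> dict:
    """Decode the common header shared by all SMF record types (simplified)."""
    length, _segment = struct.unpack(">HH", rec[0:4])  # RDW: length + segment descriptor
    rectype = rec[5]                                   # e.g. 80 = RACF processing record
    hundredths = struct.unpack(">I", rec[6:10])[0]     # time since midnight, 0.01s units
    # Date is packed decimal 0cyydddF: c = centuries past 1900, ddd = Julian day
    century = rec[10] & 0x0F
    yy = (rec[11] >> 4) * 10 + (rec[11] & 0x0F)
    ddd = (rec[12] >> 4) * 100 + (rec[12] & 0x0F) * 10 + (rec[13] >> 4)
    when = date(1900 + 100 * century + yy, 1, 1) + timedelta(days=ddd - 1)
    sid = rec[14:18].decode("cp500")                   # system ID is EBCDIC
    return {"length": length, "type": rectype,
            "time": timedelta(milliseconds=hundredths * 10),
            "date": when, "sid": sid}

# Synthetic type 80 header: 10:15:30 on day 045 of 2026, system SYS1
rec = (struct.pack(">HH", 18, 0) + bytes([0x00, 80])
       + struct.pack(">I", 3_693_000)                  # 36930s since midnight
       + bytes([0x01, 0x26, 0x04, 0x5F])               # packed date: 2026.045
       + "SYS1".encode("cp500"))
hdr = parse_smf_header(rec)
assert hdr["type"] == 80 and hdr["sid"] == "SYS1"
assert hdr["date"] == date(2026, 2, 14)
```

The point of the exercise: the record is self-describing and machine-readable, which is why the same stream serves both real-time analysis and regulatory audit without extra tooling.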
The audit trail for AI actions on mainframe is not something that needs to be built. It is something that needs to be understood and leveraged. The same SMF infrastructure that has satisfied banking regulators for decades is the audit infrastructure for AI actions. For organisations subject to financial regulation, data sovereignty requirements, or security certification regimes, this is a significant advantage.
Data sovereignty – the requirement that data be processed under specific legal jurisdictions – is an increasingly important constraint on AI adoption. Regulations in multiple jurisdictions are tightening requirements around where sensitive data can be processed and by whom.
AI tools that run on the mainframe – reading SMF records, analysing CICS data, processing DB2 queries – process data in the same jurisdiction as the mainframe itself. There is no data leaving the environment. There is no cloud hop to a region with different regulatory treatment.
AI tools that require exporting data to cloud platforms for processing create data sovereignty complications that mainframe organisations – custodians of the most regulated data there is – are in no position to accept.
The mainframe-native AI architecture – processing data where it lives, under the security model that already governs it – is not just technically preferable. For many regulated organisations, it is the only compliant option.
For organisations evaluating AI tools for mainframe environments, the security evaluation comes down to a few direct questions. Does the tool run under the user's own RACF ID, or does it require a privileged identity of its own? Does it need APF authorisation, elevated privileges, or any bypass of existing security controls? Is every access it makes recorded in SMF, the same way any other access would be? Does the data stay on the platform, or is it exported elsewhere for processing? And can every action be traced back to an authorised, accountable operator?
A tool that can answer these questions satisfactorily is a tool that has been designed with mainframe security architecture in mind. A tool that cannot is applying a cloud security model to a platform it does not fully understand.
The mainframe has a security architecture that the rest of the industry is trying to replicate. RACF, SMF, pervasive encryption, hardware-level security features, fifty years of regulatory hardening – this is not legacy infrastructure with legacy security. It is mature infrastructure with mature security.
AI tools that are built to work within this architecture – not around it, not despite it, but inside it – inherit a security model that no cloud platform can currently match for regulated workloads.
The mainframe does not need to become great again. It never stopped being great. It just needs AI tools that are built to match it.
Also in this series: Why Generic AI Tools Fail on Mainframe · Runtime Evidence as the Right Starting Point · The Future – Mainframe in an AI-First World