Mainframe Modernisation using Cloud

Transforming Legacy Core Systems in Banks

Satyajeet Singh
8 min read · Mar 1, 2021

Core Systems in Banks

Large financial institutions and banks that have been in existence for a few decades have built their core systems on mainframe technology. These mainframe systems have given organisations massive compute capability with robust resilience. While mainframe platforms have delivered system stability, performance and availability, the rate of innovation on the mainframe has not kept pace with open platforms. This has led to challenges in keeping up with business innovation and meeting customer needs.

Typical core systems using mainframe technologies in a bank are those dealing with critical functionality, e.g. customer data, product and accounting systems, and payment and lending systems. These are generally known as Systems of Record (SOR), with databases, data files and compute logic on the mainframe. These systems have online programs supporting massive volumes of transactions, as well as batch programs implementing business processes and policies that need bulk data processing.

Legacy Architecture of Core Systems

For the purposes of understanding the nature of transformation involved, let us look at a generalised version of mainframe-based legacy core system architecture.

Legacy Core System architecture using mainframes

Core systems based on mainframe have four primary layers.

  • Messaging Layer

Responsible for communication with surrounding systems; typically consists of message queues. Generally, this layer is used for online transactions.

  • Application layer

Has two main components: a Transaction Manager, realised through CICS, IMS TM, etc., for managing online real-time transactions, and a Batch system, realised through JCL, for bulk data processing.

  • Data Layer

Holds all the data used by the core systems; it can be data files (VSAM, ISAM, etc.) or databases (DB2, IMS DB, UDS) for online transactions, or even an in-memory data cache.

  • System Layer

Provides a host of system functions like scheduling, security, monitoring, code versioning etc.

Transforming Legacy Core Systems

Challenges

Over the years, organisations have attempted to modernise these core systems in a variety of ways: introducing modern build and deployment practices, re-platforming the systems, and completely re-factoring them onto more modern platforms. However, these attempts have invariably met with limited success due to three main challenges.

  • Lack of viable alternative

For years, mainframes have been the robust, performant choice for mission-critical workloads. While the concept of running smaller applications on commodity hardware has gained ground, commodity platforms are still not seen as fit for large workloads handling millions of online transactions.

  • Long Project Completion Cycle

Due to the nature of these core systems, large transformation projects have long lifecycles, and the ROI tends to be delivered only towards the end. For organisational reasons such as new leadership, a change in business direction, or a lack of funding, many such programmes never see the end of the tunnel.

  • Skills and Complexity of Systems

These complex core systems are decades old, and few subject matter experts fully understand them. The skills involved, being legacy, are difficult to find. As a result, organisations often embark on the modernisation journey but are unable to complete it.

Transformation Options

Cloud presents a unique opportunity for banks to modernise their legacy core systems. While cloud offers the computing scale and cost-effectiveness needed to handle core legacy systems, other developments, such as the evolution of reverse/forward engineering tools, DevOps and automation practices, and the availability of mainframe emulators for commodity Linux servers, are making it possible for banks to seriously undertake the long-awaited modernisation of core systems.

In the following sub-sections, we are going to look at three options for core system modernisation leveraging cloud and associated technology levers.

  • Option 1: Reuse — Lift, Shift and Tweak using Mainframe Emulators
  • Option 2: Rewrite — Complete Re-factoring of Core Systems into cloud-native
  • Option 3: Reimagine — Core Systems capabilities delivered on cloud incrementally

Option 1: Reuse

Lift, Shift and Tweak using Mainframe Emulators

This option relies on reusing the existing codebase (not entirely, but to a great extent) to run on mainframe emulators (for x86 platforms) hosted on cloud instances, e.g. Micro Focus Enterprise Server, IBM ZD&T, TmaxSoft OpenFrame, etc.

A typical architecture of an emulator-based core system on cloud is shown below.

Core System transformation with Reuse Option

In this option, an API layer acts as an abstraction for all core system online transaction capability. This API layer communicates with Google Compute Engine (GCE) instances hosted as a managed instance group behind a load balancer.

The application layer retains most of the existing code, recompiled to run on the emulators hosted on Google Compute Engine. Code in unsupported languages is converted.

For the data layer, careful consideration is needed when deciding whether to retain or replace each data store. Generally, the data files are left as-is and the database is replaced with a cloud-native database (e.g. Cloud SQL). For in-memory data, GCP's Memorystore can be used.
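To make the API abstraction layer concrete, here is a minimal, hypothetical sketch in Python. The transaction codes, field layouts and mapping are illustrative stand-ins, not from any real core system: the point is only that the facade translates REST-style calls into the fixed-format, transaction-coded messages a rehosted legacy program expects.

```python
# Hypothetical mapping from API operations to legacy transaction codes.
# "BAL1" and "TRF1" are invented examples of CICS-style transaction names.
LEGACY_TXN_MAP = {
    ("GET", "balance"): "BAL1",    # balance enquiry
    ("POST", "transfer"): "TRF1",  # funds transfer
}

def build_legacy_request(method: str, resource: str, payload: dict) -> dict:
    """Translate an API call into the message shape the emulator-hosted
    legacy program expects. Raises ValueError for unmapped operations."""
    try:
        txn_code = LEGACY_TXN_MAP[(method, resource)]
    except KeyError:
        raise ValueError(f"No legacy transaction mapped for {method} /{resource}")
    # Legacy programs typically expect upper-case, string-typed fields.
    return {
        "txn": txn_code,
        "fields": {k.upper(): str(v) for k, v in payload.items()},
    }
```

In a real deployment this translation would sit behind the load balancer and forward the built request to the emulator instances; the sketch keeps only the mapping logic so the abstraction is visible.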

The transition from legacy core system to a rehosted core system on cloud will look like below.

Legacy to Rehosted Core System

Option 2: Rewrite

Complete Re-factoring of Core Systems into cloud-native

This option goes for re-factoring the core systems to a cloud-native application. The key in this option is to use automated re-factoring to the maximum extent by leveraging solutions for reverse engineering as well as forward engineering. One of the key decisions is about breaking up the monolith core system and implementing distinct business capabilities as microservices.

There is a wide variety of choices available for re-factoring the systems. A typical target architecture using GCP services is shown below.

Core System Transformation with Rewrite option

Each microservice can use its own application stack, with choices ranging from IaaS (Google Compute Engine) to PaaS (App Engine, Google Kubernetes Engine) or even serverless (Cloud Run).

On the data layer, each microservice will have its own database, with choices ranging from relational database services to NoSQL databases.

For batch computations, the solution can use high-performance compute clusters built on pre-emptible VMs to optimise cost. Alternatives such as Google Dataflow or Cloud Functions can also be used.
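One practical consequence of running batch work on pre-emptible VMs is that a job can be interrupted at any time, so the batch logic needs to checkpoint its progress and resume. The sketch below illustrates that idea in plain Python; the record source and checkpoint store are stand-ins (in practice the checkpoint would be persisted to durable storage such as Cloud Storage or a database).

```python
def run_batch(records, process, checkpoint, chunk_size=1000):
    """Process records in chunks, recording a resume offset after each chunk
    so a job killed by VM pre-emption can pick up where it left off.

    `checkpoint` is a dict standing in for a durable checkpoint store.
    """
    start = checkpoint.get("offset", 0)
    for i in range(start, len(records), chunk_size):
        for record in records[i:i + chunk_size]:
            process(record)
        # Persisting this after each chunk is what makes pre-emption safe.
        checkpoint["offset"] = i + chunk_size
    checkpoint["offset"] = len(records)
    return checkpoint
```

On restart, the same call with the saved checkpoint skips the chunks already processed, so pre-emption costs at most one chunk of rework.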

The transition from legacy core system to a complete re-factored core system leveraging cloud-native services will look like below.

Legacy to Rewritten Core System

Option 3: Reimagine

Core Systems capabilities delivered on cloud incrementally

In this option, the focus is on completely reimagining the way core system capabilities are delivered: gradually building the individual capabilities on a modern cloud stack and phasing out the old ones. The capability demarcation has to be vertical, and deployment atomic in nature, so that each capability starts paying off the investment in isolation rather than depending on all capabilities being delivered.

Based on various patterns available, one can follow the 3-phase approach described below.

Phase 1: Create a digital engagement layer and offload online read transactions

This phase creates a cloud-native system of engagement that provides enquiry services to all consumer channels and applications through an Enquiry Microservice exposed via an API layer. The database for this service will be a selection of relational and NoSQL databases, depending on the type of data, and will be synced with core system data in near real-time through an asynchronous, event-based mechanism using Google Pub/Sub. Creating this layer diverts all enquiry traffic away from the mainframe-based core system, while update transactions and compute-heavy batch functions continue to be serviced by the legacy mainframe system.
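The event-based sync is the subtle part of this phase: Pub/Sub delivery is at-least-once, so the handler that applies core-system change events to the engagement layer's read store must tolerate duplicates and out-of-order delivery. A minimal sketch, assuming a hypothetical event shape with an `account_id`, `balance` and monotonically increasing `version`:

```python
def apply_event(read_store: dict, event: dict) -> None:
    """Idempotently upsert an account record from a core-system change event.

    `read_store` stands in for the engagement layer's database; in production
    this function would run in a Pub/Sub subscriber. The event fields here
    are assumptions for illustration.
    """
    key = event["account_id"]
    current = read_store.get(key)
    # Drop stale or duplicate events (Pub/Sub delivery is at-least-once).
    if current is not None and current["version"] >= event["version"]:
        return
    read_store[key] = {"balance": event["balance"], "version": event["version"]}
```

The version check is what makes redelivery harmless: replaying an old event leaves the read store unchanged, so the enquiry service never regresses to stale data.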

Add digital engagement layer to cloud based services

Phase 2: Re-factor read-only batch jobs (extracts, reports) into cloud-native stack

This phase removes further workloads from the core systems: the batch jobs that aggregate data to produce extracts and reports. A cloud-native batch processing layer will use Dataflow pipelines to do all the data processing. As its data source, this layer will use the digital engagement layer, which removes the dependency on the mainframe system. The processed data can be offloaded to either Cloud Storage (extracts) or Cloud SQL for reporting using the chosen reporting tools.
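The kind of computation these report-producing jobs perform is typically a group-and-aggregate. As a plain-Python stand-in for what a Dataflow (Apache Beam) pipeline would express with GroupByKey and a combiner, here is the shape of such an aggregation; the field names are assumptions for illustration:

```python
from collections import defaultdict

def daily_totals(transactions):
    """Aggregate transaction amounts per account for a daily report.

    This mirrors the group-by-key-and-sum stage a Dataflow pipeline would
    run at scale; here it is sequential for clarity.
    """
    totals = defaultdict(float)
    for txn in transactions:
        totals[txn["account_id"]] += txn["amount"]
    return dict(totals)
```

In the real pipeline the input would be read from the engagement layer's datastore and the output written to Cloud Storage or Cloud SQL, but the aggregation logic is the same.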

Offload batch components using cloud native services

Phase 3: Create capability-wise microservices and deploy atomically

This phase creates cloud-native microservices for each of the identified vertical capabilities of the core system and deploys them atomically, following the Strangler pattern. Reverse engineering solutions can be used to bridge the knowledge gap, and automation (DevOps) is a foundational practice for accelerating the delivery cycle. Each microservice can have its own associated datastore. Data migration can be tricky, so a data-access microservice using the Y (write-and-read) approach to data replication should be considered. Careful consideration should be given to minimising backward dependencies on the legacy core systems, though some will be unavoidable. Facade patterns should be used to switch from the legacy system to the new vertical microservices. This approach allows a gradual movement of all legacy core system capabilities to a modern stack on cloud.
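The Y (write-and-read) idea can be sketched as a small data-access component: during migration, every write goes to both the legacy store and the new store, while reads are served from whichever side the facade has switched to. The store interfaces below are illustrative stand-ins, assuming simple key-value access:

```python
class AccountDataAccess:
    """Hypothetical data-access microservice implementing the Y
    (write-and-read) replication approach behind a strangler facade."""

    def __init__(self, legacy_store: dict, new_store: dict,
                 read_from_new: bool = False):
        self.legacy = legacy_store
        self.new = new_store
        self.read_from_new = read_from_new  # flipped once the capability cuts over

    def write(self, account_id, record):
        # Dual write keeps both stores consistent throughout the migration.
        self.legacy[account_id] = record
        self.new[account_id] = record

    def read(self, account_id):
        store = self.new if self.read_from_new else self.legacy
        return store.get(account_id)
```

Because both stores receive every write, flipping `read_from_new` is a low-risk switch, and it can be flipped back if the new capability misbehaves: that reversibility is what makes the atomic, capability-by-capability cutover safe.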

Individual core capability created as cloud based micro services

Modernisation Options — Conclusion

When it comes to modernising legacy core systems, there is no single answer for all organisations. The decision must be taken keeping in mind business objectives, the availability of legacy skills, and the usage and future strategy of the capabilities present in the legacy core systems, among other factors.

Irrespective of the option chosen, banks have a great opportunity to free the core capabilities trapped in legacy mainframe systems and build a truly digital core on the foundations of microservices and cloud services. A modernised core built on cloud not only helps banks keep up with ever-changing customer needs, but also enables them to offer innovative services to their customers.


Written by Satyajeet Singh

IT Consultant and Technology enthusiast navigating the Cloud World in Financial Services Domain
