Fake News, Mainframes and the myths of rehosting – Part 1


Clients are saying they want to move their mainframe workloads to a public cloud platform (Editor: Let's not pull any punches with that first sentence).  Clients cite "the mainframe is too expensive", or "the mainframe is not modern", or "we are running out of mainframe skills".  Or they may simply have a new executive who is not familiar with the value of IBM z, or is otherwise predisposed to move everything to the cloud. This two-part blog is intended to address this topic and debunk some of the 'fake news' put out by organizations who would have you believe certain things along the way.

Clients see a shared, multi-tenant public cloud services offering such as Amazon Web Services or Microsoft Azure as the promised land, some nirvana where angels tread, and they have decided to adopt a cloud-first strategy for their entire data center.  Then when faced with the challenge of how to integrate new tools, services and capabilities with their System of Record running on IBM z, they mistakenly believe that their only recourse is to offload the workloads from the mainframe and remove it entirely.

But the rarely spoken bottom line is this:  Moving workload off the mainframe, whether via rehosting, migration, or re-engineering, is typically a large and complex project. More often than not it does not succeed, it almost always ends up costing far more than anticipated, and it ultimately does not save money, to say nothing of the career impact on the senior IT executive who sponsored it.

Let’s examine what offloading workloads from the mainframe really means.

There are three methods for moving workload off the mainframe:

  • Rehosting  =>  Take the applications off the mainframe and run them on another platform.
  • Migrating Components  =>  Move selected applications or components off the mainframe while the rest remain.
  • Re-engineering  =>  Basically rewrite the applications for a new platform.

Typically, Rehosting maps to a customer complaint of escalating cost.  However, moving workloads to another platform (typically x86) has serious drawbacks.  Micro Focus COBOL, the most common alternative COBOL, relies on a runtime emulation layer rather than compiling to native instructions the way IBM Enterprise COBOL does.  It's slower than IBM Enterprise COBOL, and incomplete in terms of functionality. So why not stay on the best COBOL platform?  Native z/OS COBOL is constantly being improved, highly mature, and deeply integrated into the transaction and database environment.  IBM knows of numerous cases where companies have spent years of labour and cost, yet their rehosting efforts have failed completely.

Migrating Components typically maps to a complaint of lack of agility.  However, IBM z is easily cloud-enabled: it's open, standards-based, and API-enabled, so mainframe-based workloads can readily connect through modern interfaces to cloud-based components.  The work IBM is doing around Application Discovery and z/OS Connect, which opens and exposes mainframe assets for connectivity, goes a long way toward dispelling the myth of a lack of agility.
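To make the agility point concrete: once a CICS or IMS transaction is exposed through z/OS Connect as a REST/JSON API, a cloud-side component consumes it like any other web service. The sketch below is illustrative only; the hostname, URI path, and JSON field names are hypothetical assumptions, not a real installation. In a real deployment, z/OS Connect handles the mapping between the JSON payload and the underlying COBOL copybook fields.

```python
# Hypothetical sketch: a cloud component consuming a mainframe transaction
# exposed as a REST API via z/OS Connect. Host, path, and field names are
# assumptions for illustration, not a real endpoint.
import json
import urllib.request

ZCONNECT_BASE = "https://zconnect.example.com:9443"  # hypothetical host


def account_balance_request(account_id: str) -> urllib.request.Request:
    """Build the GET request a cloud component would send; the URI shape
    is whatever the z/OS Connect API mapping defines (assumed here)."""
    url = f"{ZCONNECT_BASE}/accounts/{account_id}/balance"
    return urllib.request.Request(url, headers={"Accept": "application/json"})


def parse_balance(body: str) -> float:
    """Parse the JSON the API would return; field names are assumed."""
    return float(json.loads(body)["balance"])


if __name__ == "__main__":
    req = account_balance_request("12345")
    print(req.full_url)
    # A real caller would now do: urllib.request.urlopen(req)
    sample = '{"accountId": "12345", "balance": "1044.17"}'
    print(parse_balance(sample))
```

A real caller would also configure TLS and authentication (typically basic auth or a token validated against the z/OS security manager), but the point stands: nothing mainframe-specific leaks into the consuming code.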

Lastly, Re-engineering typically maps to a complaint around skills shortage.  IBM has seen many examples where re-engineering efforts become huge and unwieldy, and they rarely succeed.  To dispel the myth of a mainframe skills shortage, take a look at Linux, Java, REST/JSON, and all the other modern languages, interfaces, open source software, and tools that run on the mainframe.  Increasingly, clients are realizing why minimizing transformation of the core Systems of Record is so critical.

Ultimately, clients have invested very significant amounts of time and energy in the intellectual capital that is their mainframe-based workloads and applications. Mainframes work exceptionally well at running the core computing of most businesses, and IBM continues to drive real innovation onto this platform.  In fact, the IBM z platform has consistently shown the ability to adapt and innovate in the face of changing market forces, whether PCs, Java, Linux, or now cloud.  So if your strategy includes a shift towards public cloud integration, rest assured that IBM can help you.

The on-premises solution enables clients to maintain control and governance over sensitive and confidential data and invaluable and irreplaceable business assets.  Keeping data and applications on the z-based System of Record and minimizing transformations has been shown time and again to reduce cost, improve throughput and reduce latency.  With IBM z at the heart of the data centre, clients have a modern, immensely powerful and scalable platform capable of being the cornerstone for analytics, AI, machine learning and cognitive computing of the future.

If, however, you have already made up your mind, and you are bound and determined to move off the mainframe and onto the cloud… you still have options!  In this case, take a look at IBM zCloud (formerly known as Cloud Managed Services on z Systems)!

IBM zCloud is a pay-per-use public cloud for z/OS and Linux on z workloads, and it offers the typical cloud advantages: you no longer host the infrastructure or pay the associated costs.  Moving workloads to IBM zCloud also ensures that you do NOT dive into the lengthy, expensive and often unsuccessful endeavour of an offload.  Workloads are not rehosted; they continue to run on z/OS, IBM COBOL and other IBM middleware, and they continue to leverage the rich ecosystem of ISV tools.  There is no component migration and no re-engineering either; workloads run as is and continue to reap all the benefits of the z/OS platform.  Yet the platform is now a cloud platform, with all the integration benefits thereof.  The client goes forward with a successful cloud story, typically at significant savings over on-premises operations.

Stay tuned for Part 2, where we will provide more real facts as opposed to the fake news you hear from those who propose getting off the mainframe….

For information about IBM zCloud, visit the website

Editor's Note – While I would love to take credit for this well-written and articulate blog post, it was largely authored by Emily Farmer from IBM's Competitive Project Office (CPO), who is an expert at working with clients on the true costs of replatforming, rehosting and workload placement decisions.  The CPO has done over 600 studies evaluating where to place workloads, and in the vast majority of those cases the mainframe wins hands down, despite what the 'fake news' will tell you…

3 thoughts on "Fake News, Mainframes and the myths of rehosting – Part 1"

  1. It's not bad to do some re-engineering in and out of the mainframe. I mean, you have to use the adequate tool for the job. You wouldn't buy a whole farm to plant just one seed, but you have to plan: is it a giant sequoia seed, or a dwarf Japanese maple seed? Do I need a pot or a big plot of land?
    Would you develop a CRM or an ERP in Assembler? I don't think so.
    I've seen core business products moved off the mainframe, but when it came to the database, the servers weren't able to process and handle the amount of data, so one part is on the mainframe and the other is on pSeries.

