Docker and Z Systems – Now I’ve heard it all!
If you were paying attention to the launch of the new z13 processor, you may have noticed that IBM announced Docker containers are now running on z Systems.
If you read https://www.business-cloud.com/articles/news/ibm-puts-mainframe-centre-stage-z13-launch, you’ll see some of the key opportunities Docker unlocks for existing mainframe customers and for new candidates like service providers – and a few challenges as well, such as pricing. What is notable now is that the deployment model most application developers are gravitating toward will work on the mainframe too, so new applications can be deployed there in the same way they are deployed into x86 public clouds. It takes fit-for-purpose to a whole new level.
First, a little background on Docker and how it is being approached in the mainframe context. Docker containers are a logical extension of Linux containers (LXC) and rely on standard Linux kernel mechanisms – namespaces for isolation and cgroups for resource control – to keep containers separated from one another. What Docker adds is portable deployment across Docker-enabled Linux hosts, image versioning, and a public registry, http://index.docker.io/, where one can find thousands of Docker-enabled applications.
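As a minimal sketch of that portability, consider a Dockerfile – the base image and application details below are purely illustrative. Once built, the resulting image runs unchanged on any Docker-enabled Linux host of the same architecture, and it can be pushed to a registry for others to pull:

```dockerfile
# Illustrative Dockerfile; the base image and app are hypothetical examples.
FROM debian:stable-slim

# Each instruction produces a filesystem layer; layers are cached and
# shared between images that have a common history.
RUN apt-get update && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*

# Copy in the (hypothetical) application and define its start command.
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

A typical flow would be `docker build -t myorg/myapp .` on one host, `docker push myorg/myapp` to a registry, and `docker pull` plus `docker run myorg/myapp` on any other Docker-enabled host (image and organization names here are placeholders).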
Compared to virtual machines, Docker containers are much more lightweight. Unlike VMs, the Docker architecture allows containers to share portions of the file system – such as the read-only layers where the operating system lives – which makes more efficient use of the host’s resources. In short, more containers fit on a host than VMs.
For the mainframe, job #1 was to ensure consistency with the Docker model and practices. If application developers need to pull in special libraries or execute special steps to manage their Docker environment, the whole process breaks down. As we started working with Docker and talking with our clients, we focused on the aspects – efficiency and scalability, for example – that enterprise customers expect. At the same time, we are contributing the changes necessary to run Docker on the mainframe back into the community, so that we become and remain compatible with the evolution going on there. Security is another important consideration, and an obvious area for the mainframe to focus on.
The other dimension, from a cloud perspective, is integration with other open technologies such as OpenStack Heat for orchestration; Docker also works with build and configuration tools like Chef. One priority is understanding the use cases for deploying Docker containers across platforms – especially starting out in a public cloud and then fairly seamlessly re-deploying those applications to an on-premise private cloud. Hosting them on z Systems to ensure security and availability for mission-critical apps is a primary motivation.
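To make the Heat integration concrete, here is a sketch of a Heat Orchestration Template (HOT) that launches a container. It assumes Heat’s contributed Docker plugin, which provides the DockerInc::Docker::Container resource type; the image name and endpoint are placeholders, and property names should be checked against the plugin version actually installed:

```yaml
# Illustrative HOT template: assumes Heat's contrib Docker plugin is enabled.
heat_template_version: 2013-05-23

description: Sketch of launching a single Docker container via Heat

resources:
  app_container:
    type: DockerInc::Docker::Container
    properties:
      image: myorg/myapp                                    # hypothetical image
      docker_endpoint: tcp://docker-host.example.com:2375   # assumed daemon endpoint
```

Because the container is just another Heat resource, it can be stacked alongside networks, volumes, and VMs in one template, which is what makes the public-to-private redeployment scenario above plausible.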
We’ve made the Docker binaries available to our existing z Systems clients to kick the tires – if you’re interested in learning more and possibly requesting a copy, please contact firstname.lastname@example.org. Bringing new applications and capabilities to the mainframe keeps getting less costly and less complex.
The mainframe is ushering in a new era of cloud computing with the z13, and Docker is part of that story. Much more to come in the Linux on z space – stay tuned.
Blog written by Mike Baskey, IBM Distinguished Engineer, who can be followed on Twitter at @MBaskey.