Docker provides ease of use, portability, and a quick path to getting up and running, letting you develop anywhere. Docker is an open source tool that runs isolated applications/software in a single Linux instance, in what are called “containers”. It provides an engine/runtime on top of the OS, along with the virtual containers into which software is deployed. The beauty of this is that it gives the application/software better portability, it’s lightweight, and it removes some of the complexities of managing a hypervisor (e.g. z/VM, VMware).
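As a quick illustration of the idea (a minimal sketch; the image name is just an example), one CLI call starts an isolated container:

    # Start an isolated shell inside a container; the process gets its own
    # filesystem and process tree, separate from the host
    docker run -it --rm ubuntu /bin/bash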

To net this out:

  • a way of packaging apps together to deploy them more efficiently and get better density
  • develop the app package on any platform and, provided the binaries exist for that platform, deploy it wherever you want, e.g. your laptop, your data center, a public cloud
  • fits the trend toward modernizing the Linux environments in data centers
  • no VM resource overhead, since a hypervisor is not required
  • full support of the DevOps model through a simple process for building, versioning, and deploying containers (see the sketch after this list)
  • simple scaling of stateless solution components (e.g. node.js instances).
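As a minimal sketch of that build/version/deploy flow, here is a hypothetical node.js component (the file names, tags, and base image are illustrative):

    # Dockerfile: package a stateless node.js component with its runtime
    # (the base image must exist for your platform)
    FROM node
    COPY server.js /app/server.js
    CMD ["node", "/app/server.js"]

    # Build the image, tag it with a version, and start identical instances
    docker build -t myapp:1.0 .
    docker run -d --name myapp1 myapp:1.0
    docker run -d --name myapp2 myapp:1.0

The image tag gives you versioning essentially for free, and scaling a stateless component is just one more docker run.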

However, if you want security, you need to package the containers as second-level guests to get isolation between the apps/data. This is how I presume most z clients will use Docker, so the advantage of running without a hypervisor is probably not pertinent.

Here are four use case scenarios that illustrate the advantages of using Docker containers:

Use case 1: A developer can develop apps in Java on an Intel platform, then want to deploy on Linux on z (LoZ). Since Java doesn’t require a recompile on different platforms, app portability is a given. Without containers, the developer didn’t know whether the right libraries, Java VM level, and middleware requirements were in place. Now, with containers, any software the app requires can be packaged into a container, as long as it’s Java. The container does have to be rebuilt on the platform image you are using, but once you are in a container model this becomes simpler (see the sketch below).
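Such a container definition might look like this (a hypothetical sketch; the base image and jar name are illustrative, and on LoZ you would rebuild from a base image built for that architecture):

    # Dockerfile: the same platform-neutral jar on a platform-specific base
    # image; on LoZ, swap in a JRE base image built for s390x
    FROM openjdk:8-jre
    COPY app.jar /opt/app.jar
    CMD ["java", "-jar", "/opt/app.jar"]

Rebuilding on the target platform is just another docker build against the local base image.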

Use case 2: Docker allows for greater density than VMs and enables more apps on one system. Before containers, a user could run only, say, 10 WebSphere instances on an Intel system; with containers and their memory efficiency, the user can run hundreds of instances of the same image, since you don’t have to set up separate VMs under a hypervisor. As a platform, z has some advantages over x86 with memory overcommit, and the CPU virtualization overhead on z is relatively small. Thus, one will see the bigger density advantage from containers in the distributed environment, but what is interesting on z is the combination of containers and the security isolation advantages that come with this memory overcommit and lower VM overhead.
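To make the density point concrete, here is a sketch (the image name stands in for whatever middleware image you actually use):

    # Start many instances of the same image on one host; shared image layers
    # and the absence of a per-instance guest OS keep the memory cost low
    for i in $(seq 1 100); do
      docker run -d --name inst$i myapp:1.0
    done
    docker ps -q | wc -l   # count the running containers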

In a Docker environment that leverages density, the user loses the security isolation between these apps, since no hypervisor is present. If that doesn’t matter to you, running on bare metal gives good density at fast response times. If you do care, like a bank, then you will want the isolation, and as stated above, the overhead on z is minimal.

For mission-critical enterprise workloads, the advice is: leverage density/bare metal in a dev/test environment, rapidly putting everything on one LPAR on z to eliminate the provisioning associated with hypervisors; then, in production, go back to second-level guests to get the isolation, and live with the hypervisor overhead, since the production environment is not changing rapidly. Use this VM isolation at tenant granularity to get per-tenant isolation, with a sufficient “mass” of applications to gain from the efficiency opportunities, since VMs don’t carry as much overhead on z as on an x86 platform.

So, on z Systems, you can shape your environment with system virtualization and container elements according to your landscape and your requirements, with hardly any performance constraints: you can define IT structures according to your needs, not your system constraints.

Use case 3: Write the app once, send it to other people, and they know how to deploy it. You no longer have to build up an organization that knows how to install the app and which other packages are required. Containers give the user automated packaging, with automated scripts inside the container. Apps thus become easier, more efficient, and faster to deploy and run. Specifically with open distributions, e.g. Fedora, ISVs can reduce their efforts (a sketch of the hand-off follows).
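The hand-off might look like this, assuming a registry both sides can reach (all names here are hypothetical):

    # Publisher: tag the image and push it to a registry
    docker tag myapp:1.0 registry.example.com/team/myapp:1.0
    docker push registry.example.com/team/myapp:1.0

    # Recipient: one pull and one run; no install guide, no dependency hunt
    docker pull registry.example.com/team/myapp:1.0
    docker run -d registry.example.com/team/myapp:1.0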

Use case 4: You have an app with lots of parts, a multi-tiered application requiring a workflow component, a WebSphere component, a database component, and a math library component. Containers allow you to put them in four different containers, so if you are not interested in, say, the workflow component, you deploy only the other three. This makes things more flexible.

This allows the user to break apps down into different parts and use only the pieces needed. Today, even our own IBM Software products and ISVs ship the entire product, where one installs 25 pieces to get possibly the single component needed.

If your app requires all the components, put them in one container. If it doesn’t, keep them in separate containers so the pieces stay optional; it’s like choosing Lego blocks to build what you want and how you want it to look (see the sketch below).
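Deploying only three of the four tiers might look like this (a sketch; the container and image names are made up, and --link is one way of wiring containers together):

    # Start only the pieces you need; the workflow container is simply omitted
    docker run -d --name db      mydb-image
    docker run -d --name mathlib mymathlib-image
    docker run -d --name appserver --link db:db --link mathlib:mathlib \
        mywebsphere-image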

Editor’s Note: Dale Hoffman is the most reluctant blogger yet to post on Mainframe Debate, but given the interest in Docker since we announced it on the platform, he felt compelled to put pen to paper! Utz Bacher’s insight and feedback on this post cannot be overstated.