Docker is often mentioned in one breath with portability. However, there are two dimensions of portability: first, running a container on any system of the same platform. Here, Docker really simplifies deploying applications on any system of the same instruction set architecture. From a Linux on z Systems perspective, the second dimension is more interesting: portability across platforms. Often enough, getting a container to run on z is not a big deal, but sometimes it is. Let’s decompose this:

Start with the Docker interface: whether that refers to the CLI or REST API, it is identical on z. Same syntax, same semantics, no differences.
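
As a quick illustration, here is what a session looks like; these commands, with exactly these options, behave the same on x86 and on z (the image and container names are just placeholders, and which images are actually available for your architecture is a separate question, covered below):

    docker build -t myimage .
    docker run -d --name web -p 8080:8080 myimage
    docker ps
    docker logs web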

Docker images, as self-contained sets of binaries, are by nature specific to a CPU architecture and ABI. Most Docker solutions are microservice architectures, consisting of many, but individually simple, components. This facilitates scaling out, and it also simplifies porting the individual components.
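
You can see this directly: the target architecture is recorded in the image metadata, so it is easy to check what an image was built for (“fedora” here is just an example image):

    docker inspect --format '{{.Architecture}}' fedora
    # prints "amd64" for an x86 build, "s390x" for a build on z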

An image is typically based on a distribution base. This is not strictly necessary: you could put a binary and all required shared libraries into a tarball and import that as a very slim container image. Typically, though, a Dockerfile is used to build the image in an automated and controlled fashion. The first line (“FROM”) specifies the base image to be used; after that, additional distribution packages are often installed. If the same distribution is available on s390, e.g. a Fedora image, porting is typically simple: yum is yum, and the package names are identical. A lot of images are based on Ubuntu on x86, however. Here, the closest match on s390 is Debian, which at least provides the same packaging tool chain (“apt-get”) and mostly the same package names. That makes it much easier to get through the package installation steps. If the base image is something that does not exist on s390 (e.g. “golang”, a Go environment based on Google’s go compiler), the fun starts. Then the art is to come up with an environment that is compatible with all succeeding steps.
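
To make the Ubuntu-to-Debian case concrete, here is a sketch of what such a port can look like; the s390x/debian image name is an assumption (s390 builds of common base images are published under a separate namespace on Docker Hub), so substitute whatever base image your distribution provides:

    # On x86, the original Dockerfile starts like this:
    FROM ubuntu
    RUN apt-get update && apt-get install -y curl ca-certificates

    # On z, only the base image changes; apt-get and the
    # package names stay the same:
    FROM s390x/debian
    RUN apt-get update && apt-get install -y curl ca-certificates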

In Dockerfiles, additional components are often installed natively: source code archives are downloaded, extracted and built. In most cases this is platform independent, so it should not add any impediment to building the container on z. There are some exceptions: platform-specific behavior (e.g. an application that parses CPU information from /proc, which looks quite different on s390) or non-portable code (endianness issues in the source, or parts written in assembler) will require additional care. For most packages, though, that will not be necessary.
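
Such a build-from-source step typically looks like the following sketch (the project name and URL are made up for illustration); since the compiler produces code for whatever architecture it runs on, lines like these usually work unchanged on z:

    # Hypothetical example: fetch, build and install a component from source.
    RUN curl -fsSL https://example.org/libfoo-1.0.tar.gz | tar xz \
        && cd libfoo-1.0 \
        && ./configure \
        && make \
        && make install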

The last area of consideration is the main application itself. Should that be a binary blob, it will need to be substituted by an s390 build of the application. Otherwise, the same thoughts on code portability as just mentioned apply; none of this is specific to the use via Docker.
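
In Dockerfile terms, the difference often boils down to the following (file and directory names are hypothetical): the first variant bakes an x86-only blob into the image, while the second produces a native binary wherever the image is built:

    # Not portable: ships a prebuilt x86 binary
    COPY myapp-x86_64 /usr/local/bin/myapp

    # Portable: build the application from source inside the image
    COPY myapp-src/ /src/myapp/
    RUN cd /src/myapp && make && make install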

Finally, work can be avoided in the future if any necessary changes are submitted back to the originator (in a way that is acceptable for other platforms). Once such changes get integrated “upstream”, things will work out of the box next time.


Editor’s Note: This post was originally published at http://containerz.blogspot.com and was authored by Utz Bacher, who works as an architect for Linux on z Systems in the IBM Research and Development Lab in Boeblingen, Germany. Watch this space for more from this true innovator…