Just a small recap of the best practices you played around with during this Hands-on Lab.
Do One Thing and One Thing well¶
Containers are not virtual machines, so it is wrong to think that you should deploy your application into existing running containers. That can be handy during the development phase, but in the end your application should be part of the image.
Remember: Images are immutable and portable.
Containers are perfect for running a single process (http daemon, application server, database). If you find yourself creating a Dockerfile that starts a static web app, a database and an application server, you should rethink your strategy: create one image for the application server, one for the static web app and one for the database.
One container, one single process!
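As a minimal sketch of the "one process per image" idea, here is a hypothetical Dockerfile whose only job is to serve a static site (image tag and paths are illustrative):

```dockerfile
# Hypothetical example: an image whose only responsibility is to serve
# a static web app. The database and the application server would each
# get their own Dockerfile and image.
FROM nginx:alpine
COPY ./site /usr/share/nginx/html
# Exactly one foreground process: the nginx daemon.
CMD ["nginx", "-g", "daemon off;"]
```

Because the image runs a single process, it can be scaled, replaced and monitored independently of the database and application server images.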
Smaller is better¶
A large image will be harder to distribute. So it is important to optimize your image as much as possible. Clean up your environment at the right moments in the Dockerfile and make sure you do not use more files and libraries than needed to run your application.
Do not install unnecessary packages or run “updates” that download many files into a new image layer.
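A common way to keep layers small is to clean up in the same `RUN` instruction that downloads files, so the temporary files never become part of a layer. A sketch, assuming a Debian-based base image:

```dockerfile
FROM debian:bookworm-slim
# Update the package index, install only what is needed, and remove the
# downloaded index files in the SAME layer, so they never get committed.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl \
 && rm -rf /var/lib/apt/lists/*
```

Had the `rm -rf` been a separate `RUN` instruction, the index files would still be stored in the earlier layer and the image would not get any smaller.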
Order your layers¶
To make effective use of the layered filesystem and the build cache, it is important to think about the ordering of your layers. A Dockerfile is built from top to bottom, and layers are cached where possible to make future builds faster. However, if a layer has changed, all subsequent layers will be rebuilt as well. So try to order the commands in such a way that the layers least likely to change are at the top of the Dockerfile and the ones most likely to change are at the bottom.
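The ordering advice above can be sketched with a hypothetical Node.js application (image tag and file names are illustrative): the dependency manifest changes rarely, the source code changes often, so dependencies are installed first and stay cached across builds.

```dockerfile
FROM node:20-alpine
WORKDIR /app
# Least likely to change: the dependency manifest. As long as these
# files are unchanged, this layer and the npm install below stay cached.
COPY package.json package-lock.json ./
RUN npm ci
# Most likely to change: the application source, placed last so that
# editing it only invalidates this final layer.
COPY src/ ./src/
CMD ["node", "src/index.js"]
```

Reversing the order (copying all sources before installing dependencies) would invalidate the dependency layer on every source change and force a full reinstall.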
Not from running containers!¶
In other words: never use docker commit to create an image. Such images do not come from a reproducible build and should therefore be avoided completely.
Using a Dockerfile in combination with a VCS (e.g. git) also makes it possible to track changes to your image definition.
Don’t use only the latest tag as a version¶
latest is just the latest “un-versioned” pushed image. It is not necessarily the latest released version.
Released versions should always have a specific version attached to it so that you can also go back to former versions.
In developer terms you can think of the “latest” tag as the “SNAPSHOT” version of Maven projects.
So when creating your own images, the FROM instruction should always reference a versioned image; otherwise your build is not repeatable, as the latest version can change at any build.
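For example (the specific Payara tag below is illustrative, not an endorsement of a particular release):

```dockerfile
# Not repeatable: "latest" may point to a different image tomorrow.
# FROM payara/server-full:latest

# Repeatable: pin a specific released version.
FROM payara/server-full:6.2024.1
```

The same rule applies to the tags you push: publish `myapp:1.0`, `myapp:1.1`, etc., so that consumers can roll back to a former version.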
Just say no!¶
Images are immutable and should therefore not contain hard-coded credentials. It is much better to use environment variables, so that the actual values can be supplied and maintained at container creation (run) time.
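A sketch of this pattern for a hypothetical Java application (image tag, jar path and variable name are assumptions):

```dockerfile
FROM eclipse-temurin:17-jre
COPY target/app.jar /opt/app.jar
# Declare the variable with an empty default; NO real credential is
# baked into the image. The value is provided when the container is
# created, e.g.:  docker run -e DB_PASSWORD=... myapp:1.0
ENV DB_PASSWORD=""
CMD ["java", "-jar", "/opt/app.jar"]
```

Since the image itself contains no secret, it can be pushed to a registry and reused across environments, each supplying its own credentials at run time.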
Don’t be a root¶
“By default docker containers run as root. (…) As docker matures, more secure default options may become available. For now, requiring root is dangerous for others and may not be available in all environments. Your image should use the USER instruction to specify a non-root user for containers to run as”. (From Guidance for Docker Image Authors)
This best practice has not been specifically covered in this hands-on lab, as it is a more advanced topic, but the Payara example shows how to run under a different user than root and how to set the admin console credentials through environment variables.
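A minimal sketch of the USER instruction, assuming an Alpine base image (user name and binary path are illustrative):

```dockerfile
FROM alpine:3.19
# Create an unprivileged system user and group (Alpine syntax).
RUN addgroup -S app && adduser -S -G app app
COPY --chown=app:app ./bin/server /usr/local/bin/server
# From here on, the container process runs as "app", not as root.
USER app
CMD ["/usr/local/bin/server"]
```

Any instruction after `USER app`, and the container's main process itself, now run without root privileges.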
No data in containers¶
A container can be stopped, destroyed, or replaced. Version 1.0 of an application running in a container should be easily replaceable by version 1.1 without any impact or loss of data. For that reason, if you need to store data, store it in a volume. Also take care when two containers write to the same volume, as that could cause corruption: make sure your applications are designed to write safely to a shared data store.
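As a sketch of keeping data out of the container, a hypothetical database image can declare its data directory as a volume (image tag and path are illustrative):

```dockerfile
FROM postgres:16-alpine
# Declare the data directory as a mount point; the data then lives in a
# volume instead of the container's writable layer. At run time, pair it
# with a named volume, e.g.:
#   docker run -v pgdata:/var/lib/postgresql/data mydb:1.0
VOLUME /var/lib/postgresql/data
```

Replacing the container (say, upgrading from `mydb:1.0` to `mydb:1.1`) leaves the named volume, and therefore the data, untouched.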