
Top 20 Dockerfile Best Practices

Learn how to prevent security issues and optimize containerized applications by applying a quick set of Dockerfile best practices in your image builds.
If you are familiar with containerized applications and microservices, you might have realized that while your services might be micro, detecting vulnerabilities, investigating security issues, and reporting and fixing them after deployment make your management overhead macro. Much of this overhead can be prevented by shifting security left: tackling potential problems as early as possible in your development workflow.

A well-crafted Dockerfile avoids the need for privileged containers, unnecessary exposed ports, unused packages, leaked credentials, or anything else that can be used for an attack. Getting rid of known risks in advance reduces your security management and operational overhead. Following the best practices, patterns, and recommendations for the tools you use will help you avoid common errors and pitfalls.

This article dives into a curated list of Docker security best practices focused on writing Dockerfiles and container security, but it also covers related topics, like image optimization. We have grouped our selected set of Dockerfile best practices by topic. Please remember that Dockerfile best practices are just one piece of the whole development process; we include a closing section pointing to related container image security and shift-left security resources to apply before and after the image build.

These tips follow the principle of least privilege, so your service or application only has access to the resources and information necessary to perform its purpose. Our recent report highlighted that 58% of images run the container entrypoint as root (UID 0). However, it is a Dockerfile best practice to avoid doing that.
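As a minimal sketch of running a container as non-root (the base image, user name, and directory here are illustrative assumptions, not from the original article), a Dockerfile might look like:

```dockerfile
FROM alpine:3.18

# Create an unprivileged user and group; names are illustrative.
RUN addgroup -S app && adduser -S -G app app

# Grant the app user write access only where it actually needs it.
RUN mkdir /data && chown app:app /data

# Switch the default effective UID away from root (UID 0).
USER app

CMD ["sh", "-c", "id"]
```

Everything after the USER instruction (including the CMD process at runtime) executes as the unprivileged user rather than root.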
There are very few use cases where the container needs to execute as root, so don't forget to include the USER instruction to change the default effective UID to a non-root user. Furthermore, your execution environment might block containers running as root by default (i.e., Openshift requires additional SecurityContextConstraints). Running as non-root might require a couple of additional steps in your Dockerfile, as now you will need to:

- Make sure the user specified in the USER instruction exists inside the container.
- Provide appropriate file system permissions in the locations where the process will be reading or writing.

You might see containers that start as root and then use gosu or su-exec to drop to a standard user. Also, if a container needs to run a very specific command as root, it may rely on sudo. While these two alternatives are better than running as root, they might not work in restricted environments like Openshift.

Run the container as a non-root user, but don't make that user's UID a requirement. Why? Openshift, by default, will use random UIDs when running containers. Forcing a specific UID (i.e., the first standard user with UID 1000) requires adjusting the permissions of any bind mount, like a host folder for data persistence. Alternatively, if you run the container with the host UID (the -u option in docker), it might break the service when trying to read or write from folders within the container.

A container that writes to a hardcoded path only writable by myuser will have trouble running with a UID different from myuser, as the application won't be able to write to the /myapp-tmp-dir folder. Don't use a hardcoded path only writable by myuser. Instead, write temporary data to /tmp (where any user can write, thanks to the sticky bit permissions), make resources world-readable (i.e., 0644 instead of 0640), and ensure that everything works if the UID is changed. In this example, our application will use the path in the APP_TMP_DATA environment variable.
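The example referenced above did not survive extraction; a hedged reconstruction (the base image, application file, and user name are assumptions) could be:

```dockerfile
FROM node:18-alpine

# Default to /tmp, which any UID can write to thanks to the sticky bit.
# Keeping the path configurable makes it easy to mount a volume there later.
ENV APP_TMP_DATA=/tmp

COPY app.js /app/app.js

# Run as a non-root user without requiring any specific UID.
USER node

CMD ["sh", "-c", "node /app/app.js $APP_TMP_DATA"]
```

Because the application reads APP_TMP_DATA instead of a hardcoded path, it still works when the container runs with a random or overridden UID.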
The default value /tmp will allow the application to execute as any UID and still write temporary data to /tmp. Having the path as a configurable environment variable is not always necessary, but it will make things easier when setting up and mounting volumes for persistence.

It is a Dockerfile best practice for every executable in a container to be owned by the root user, even if it is executed by a non-root user, and it should not be world-writable. This blocks the executing user from modifying existing binaries or scripts, which could enable different attacks. By following this best practice, you're effectively enforcing container immutability. Immutable containers do not update their code automatically at runtime, and in this way, you can prevent your running application from being accidentally or maliciously modified. To follow this best practice, avoid giving the application user ownership of the files it executes. Most of the time, you can just drop the --chown app:app option (or RUN chown ... commands). The app user only needs execution permissions on the file, not ownership.

It is a Dockerfile best practice to keep images minimal. Avoid including unnecessary packages or exposing unnecessary ports to reduce the attack surface. The more components you include inside a container, the more exposed your system will be and the harder it is to maintain, especially for components not under your control.

Make use of multistage builds to have reproducible builds inside containers. In a multistage build, you create an intermediate container, or stage, with all the required tools to compile or produce your final artifacts (i.e., the final executable). Then, you copy only the resulting artifacts to the final image, without additional development dependencies, temporary build files, etc. A well-crafted multistage build includes only the minimal required binaries and dependencies in the final image, not build tools or intermediate files. This reduces the attack surface, decreasing vulnerabilities.
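For instance (a sketch; the base image, script name, and user name are assumptions), keep root ownership and grant the runtime user only read and execute permissions:

```dockerfile
FROM alpine:3.18
RUN adduser -S app

# Owned by root (the build-time default) and not world-writable:
# mode 0755 lets the app user execute the script but not modify it.
# (--chmod on COPY requires BuildKit.)
COPY --chmod=0755 entrypoint.sh /entrypoint.sh
# Avoid: COPY --chown=app:app entrypoint.sh /entrypoint.sh

USER app
ENTRYPOINT ["/entrypoint.sh"]
```

Since the file stays root-owned, a compromised app user cannot overwrite the entrypoint with malicious code, which is the immutability property described above.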
It is safer, and it also reduces image size. For a Go application, an example of a multistage build would look like this: with those Dockerfile instructions, we create a builder stage using the golang:1.15 container, which includes all of the Go toolchain. We copy the source code in there and build. Then, we define another stage based on a Debian distroless image (see the next tip).
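The multistage Dockerfile itself is missing from the extracted text; a reconstruction consistent with the description above (the binary name and paths are assumptions) might be:

```dockerfile
# Builder stage: full Go toolchain, used only to compile the binary.
FROM golang:1.15 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -o /out/app .

# Final stage: a minimal Debian-based distroless image with no shell or
# package manager; only the compiled binary is copied from the builder.
FROM gcr.io/distroless/static-debian10
COPY --from=builder /out/app /app
ENTRYPOINT ["/app"]
```

Only the last stage becomes the shipped image, so the Go toolchain, source code, and intermediate build files never reach production.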
