Speed Up Your Docker Builds with These Easy Tweaks
Introduction
If you’ve been working with Docker for any length of time, you’ve probably run into a familiar pain: slow builds. Whether you’re building microservices, running CI/CD pipelines, or maintaining large enterprise images, waiting around for Docker to churn through layers can feel like watching paint dry.
But here’s the good news: you don’t have to accept slow builds as a fact of life. With some practical tweaks and strategic thinking, you can dramatically cut down build times—making your development workflow faster, leaner, and less frustrating.
In this article, we’ll take a deep dive into why Docker builds are slow in the first place, and then explore practical techniques you can apply right away to speed them up. We’ll cover everything from caching strategies, multi-stage builds, and build context hygiene, to advanced optimizations like BuildKit and distributed caching.
By the end, you’ll have a playbook of best practices that can shave minutes—or even hours—off your Docker workflows.
Why Docker Builds Are Slow in the First Place
Before optimizing, let’s understand what’s happening under the hood. A Docker build consists of executing each instruction in your Dockerfile to create new layers.
Some common bottlenecks include:
Large base images: Pulling down a heavy image like ubuntu:20.04 can take time compared to a slimmer alternative.
Unoptimized layer caching: Rebuilding unchanged dependencies repeatedly wastes time.
Bloated build context: Sending unnecessary files to the Docker daemon increases transfer time.
Heavy RUN instructions: Installing dependencies or compiling software without caching strategies slows builds.
Network bottlenecks: Downloading packages or artifacts from external repositories can drag performance down.
Once you know these pain points, you can target them systematically.
1. Start with the Right Base Image
Choosing the right base image is one of the easiest and most impactful optimizations.
Example: Slim vs Full
# Heavy
FROM python:3.11
# Lightweight
FROM python:3.11-slim
The python:3.11 image weighs in at several hundred MBs, while python:3.11-slim is much lighter.
Alpine images (alpine:3.20) are even smaller, though you may need extra work to install dependencies.
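You can verify the difference locally by pulling both tags and comparing the SIZE column; the full python:3.11 image is typically around 1 GB uncompressed, while the slim variant is closer to 150 MB (exact figures vary by platform):
docker pull python:3.11
docker pull python:3.11-slim
# Compare the SIZE column for both tags
docker images python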
Tip: Balance size with compatibility. Alpine’s musl C library can sometimes cause issues with software expecting glibc. Slim images often strike a good balance.
2. Leverage Layer Caching Effectively
Docker caches intermediate layers, which means if a layer hasn’t changed, it won’t be rebuilt. This is where ordering your instructions correctly becomes critical.
Bad Example
FROM node:20
COPY . /app
RUN npm install
Here, any small change in your source code invalidates the cache for npm install, forcing it to run again.
Good Example
FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
Now, npm install only re-runs if package.json or package-lock.json changes—saving massive time when editing application code.
Rule of Thumb: Place less frequently changing instructions (like dependencies) above frequently changing ones (like source code).
3. Clean Up After Yourself
Leaving temporary files or caches bloats your image and slows down subsequent builds.
Example: Apt Install
RUN apt-get update && apt-get install -y \
curl \
git \
&& rm -rf /var/lib/apt/lists/*
Because the cleanup runs in the same RUN instruction as the install, the package index files never land in any layer of the final image. Deleting them in a separate RUN would not help, since the files would already be baked into the earlier layer.
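On Alpine-based images, apk offers a --no-cache flag that achieves the same effect without a separate cleanup step:
FROM alpine:3.20
# --no-cache skips storing the package index locally
RUN apk add --no-cache \
    curl \
    git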
Benefit: Smaller images mean faster builds and quicker pulls in CI/CD environments.
4. Use Multi-Stage Builds
Multi-stage builds let you separate build dependencies from runtime dependencies, producing slimmer and faster images.
Example: Go Application
# Stage 1: Builder
FROM golang:1.22 as builder
WORKDIR /src
COPY . .
# CGO_ENABLED=0 produces a statically linked binary that runs on Alpine's musl libc
RUN CGO_ENABLED=0 go build -o app .
# Stage 2: Runtime
FROM alpine:3.20
WORKDIR /app
COPY --from=builder /src/app .
CMD ["./app"]
The builder stage contains heavy compilers and tools.
The runtime stage is slim, containing only the final binary.
Result: Faster runtime image builds and deployments.
5. Optimize the Build Context
The build context is everything sent from your local machine to the Docker daemon. If it’s too large, builds slow down unnecessarily.
Fix It with .dockerignore
Create a .dockerignore file:
.git
node_modules
*.log
tests/
Outcome: Smaller build context = faster builds and reduced image size.
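To confirm the context actually shrank, check the context-transfer step in BuildKit's plain-text progress output:
docker build --progress=plain . 2>&1 | grep "transferring context"
# BuildKit reports a line such as "transferring context: 1.2MB done"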
6. Parallelize with BuildKit
Docker BuildKit is a newer build engine that brings powerful performance improvements.
Enable it with:
export DOCKER_BUILDKIT=1
docker build .
Benefits:
Parallel execution of independent layers.
Automatic garbage collection of the build cache, keeping disk usage in check.
Advanced caching (e.g., --mount=type=cache).
Example with Build Cache
# syntax=docker/dockerfile:1.4
# The syntax directive above must be the very first line of the Dockerfile
FROM node:20
WORKDIR /app
# Cache npm's download cache (/root/.npm) across builds
RUN --mount=type=cache,target=/root/.npm \
npm install -g typescript
This keeps npm’s download cache warm across builds, so packages aren’t re-fetched from the network—slashing dependency installation times.
7. Use Layer Squashing (When Appropriate)
Sometimes, many RUN commands leave unnecessary intermediate layers. Squashing merges them into a single layer, reducing image size and speeding up pushes and pulls.
docker build --squash -t myimage .
Note: Squashing throws away intermediate layers, which defeats layer caching, and it requires the daemon’s experimental mode. Use it sparingly for production images, not during active development.
8. Pre-Build Dependencies in CI/CD
If you know dependencies rarely change, you can pre-build them and push a cached image to your registry.
Example Workflow
Build a base image with dependencies installed (npm install, pip install, etc.).
Push it to your registry.
Use it as the base for app builds.
This means app builds only re-run lightweight layers, dramatically improving CI/CD times.
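Here is a minimal sketch of the pattern for a Node.js app (file and image names are hypothetical):
# Dockerfile.deps: rebuilt only when dependencies change
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci
Build and push it once, then reference it from the app Dockerfile:
docker build -f Dockerfile.deps -t ghcr.io/myorg/myapp-deps:latest .
docker push ghcr.io/myorg/myapp-deps:latest
# Dockerfile: the fast, frequently rebuilt part
FROM ghcr.io/myorg/myapp-deps:latest
COPY . .
RUN npm run build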
9. Use Remote Caching with BuildKit
With BuildKit and modern registries (like GitHub Container Registry, AWS ECR, or GCP Artifact Registry), you can use distributed caching across builds.
docker build \
--build-arg BUILDKIT_INLINE_CACHE=1 \
--cache-from=ghcr.io/myorg/myimage:cache \
-t ghcr.io/myorg/myimage:latest .
Now, different machines (like CI runners) can reuse cached layers instead of rebuilding them from scratch.
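Note that --cache-from only helps if the cache image actually exists in the registry, so after a successful build, push the freshly built image under the cache tag for the next run to consume:
docker tag ghcr.io/myorg/myimage:latest ghcr.io/myorg/myimage:cache
docker push ghcr.io/myorg/myimage:cache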
10. Split Monolithic Images
Sometimes images are slow simply because they’re doing too much.
Example: a single image handling frontend, backend, and database. Instead, break it into smaller, purpose-built images.
Benefits:
Each image builds faster.
Easier caching.
Simpler updates when only one component changes.
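To make the split concrete, each component gets its own Dockerfile and build (paths and tags here are hypothetical):
# Build each component as its own small, independently cached image
docker build -t myorg/frontend ./frontend
docker build -t myorg/backend ./backend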
11. Optimize Package Managers
For Python: Use pip install --no-cache-dir.
For Node.js: Use npm ci instead of npm install for reproducible, faster installs.
For Java: Use Gradle or Maven with build caching enabled.
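As an illustration, here is how the Python flag fits into a Dockerfile (assuming a requirements.txt in the build context):
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
# --no-cache-dir prevents pip from storing downloaded wheels in the layer
RUN pip install --no-cache-dir -r requirements.txt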
12. Avoid COPYing Everything
Be surgical with COPY. Only copy what’s needed for that stage.
Example
# Instead of this
COPY . .
# Do this
COPY src/ ./src/
COPY package*.json ./
This avoids invalidating caches unnecessarily and reduces context size.
13. Consider Alternative Build Tools
Kaniko: A container-native build tool by Google, optimized for Kubernetes.
Buildah: Lightweight, rootless image building tool.
Bazel: Highly optimized build system with advanced caching.
These tools can outperform traditional docker build in certain environments.
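As a taste, Kaniko runs as a container and builds from a mounted context; a minimal local invocation (using --no-push to skip the registry upload) looks roughly like this:
docker run --rm \
  -v "$(pwd)":/workspace \
  gcr.io/kaniko-project/executor:latest \
  --context=/workspace \
  --dockerfile=/workspace/Dockerfile \
  --no-push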
14. Profile and Measure Build Performance
Optimizations only matter if you can measure them. Useful tools include:
docker build --progress=plain to see step timings.
dive to analyze image layers.
CI/CD timing metrics.
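For example, the first two look like this in practice (image name hypothetical):
# Print per-step timings instead of the interactive TTY view
DOCKER_BUILDKIT=1 docker build --progress=plain -t myimage .
# Explore layers, their sizes, and wasted space
dive myimage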
Track improvements over time to validate which tweaks provide the most value.
Putting It All Together: A Practical Example
Imagine you’re building a Node.js microservice. Here’s an optimized Dockerfile:
# syntax=docker/dockerfile:1.4
FROM node:20-slim as base
WORKDIR /app
# Install dependencies first
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm \
npm ci
# Copy source code
COPY . .
# Build stage
FROM base as build
RUN npm run build
# Production image
FROM node:20-slim as production
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/index.js"]
Optimizations here:
Lightweight base image (slim).
Dependency caching with BuildKit.
Multi-stage build for clean separation.
Smaller production image without dev dependencies.
Result: builds that are faster, leaner, and easier to maintain.
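To build the production stage of this Dockerfile explicitly (tag name hypothetical):
DOCKER_BUILDKIT=1 docker build --target production -t my-node-service .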
Conclusion
Slow Docker builds aren’t just an inconvenience—they sap developer productivity, increase CI/CD costs, and delay feature delivery.
By adopting the tweaks we’ve discussed—choosing smaller base images, reordering layers for better caching, cleaning up dependencies, using multi-stage builds, leveraging BuildKit, and employing distributed caching—you can transform your Docker workflow into something fast, efficient, and resilient.
Remember: optimization isn’t about chasing perfection in a single build. It’s about adopting habits and practices that consistently save time across thousands of builds over the life of your project.
If you start with just a few of these tweaks today, you’ll thank yourself tomorrow when your Docker builds are minutes faster, your CI/CD pipelines are smoother, and your deployments are snappier.

