Introduction
Have you ever heard a developer say "it works on my machine" when their code breaks in production? Docker solves this problem once and for all.
Docker is a tool that packages your application and everything it needs to run into a single, lightweight container. This container bundles your app code, its dependencies, configuration files, and the exact runtime environment.
The magic? These containers run exactly the same way everywhere — on your laptop, your teammate's computer, or a production server in the cloud.
In this guide, we'll transform a regular Node.js application into a production-ready Docker container using modern best practices.
By following this guide, you'll create a Docker setup that:
- ⚡ Builds fast: Smart caching means rebuilds take just seconds
- 📦 Runs small: Final image weighs only ~160MB (vs 1GB+ with naive approaches)
- 🔒 Stays secure: Follows container security best practices
- 🤝 Works everywhere: From local development to production at scale
You can follow along with the complete example project on GitHub.
Ready to containerize like a pro? Let's dive in! 🚀
🤏 The Simple Way (But Not the Best)
When developers first learn Docker, they often write a Dockerfile like this:
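A minimal example of such a Dockerfile (assuming a TypeScript app that compiles to `dist/` and starts via `npm start`):

```dockerfile
# The naive approach: it works, but it's slow, heavy, and insecure
FROM node:22

WORKDIR /app

# Copy the whole project, including files production never needs
COPY . .

# Install ALL dependencies, dev ones included
RUN npm install

# Compile the TypeScript source
RUN npm run build

# Start through npm
CMD ["npm", "start"]
```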
Does it work? Yes! Your app runs in Docker successfully.
Is it good? Not really. Let's see why.
This simple Dockerfile creates several problems that will hurt you in real projects:
- 🐢 Slow builds – Every code change rebuilds everything from scratch, taking 18.4 seconds in our example even for small changes. Developers waste time waiting instead of coding.
- 🧱 Heavy image – In our example, the final image weighs 1.16 GB because it includes Node.js, npm, build tools, source code, and everything else. This means slow deployments and expensive storage costs.
- 🔓 Not secure – The container runs as root user and includes development tools like npm and build scripts. If someone breaks into your container, they have full control and access to tools they shouldn't have.
- 🧹 Messy – It copies your entire project folder, including `.git` directories, `README.md`, test files, and other things your production app doesn't need. This makes images larger and potentially leaks sensitive information.
It works — but we can do much better.
Now, let's see how we can improve this step by step. Each change will speed up your builds, reduce image sizes, enhance security, and simplify maintenance. We'll explain what each improvement means, why it matters, and how it impacts your Docker workflow.
🏋️ Using a Smaller Base Image
Our initial Dockerfile uses the default `node:22` image, which is huge and packed with system tools your app will never use.
A quick win is switching to the Alpine variant. Alpine Linux is a tiny, security-focused operating system designed specifically for containers. Here's the updated version:
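The same Dockerfile, with only the base image swapped for its Alpine variant:

```dockerfile
FROM node:22-alpine

WORKDIR /app

COPY . .

RUN npm install

RUN npm run build

CMD ["npm", "start"]
```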
The difference is impressive:
- Image size: 1.16GB → 160.6MB (over 80% smaller!)
- Build time: 18.4 seconds → under 10 seconds
- Download speed: Much faster for your team and deployments
This single line change gives you massive improvements with zero code changes. Alpine includes everything Node.js needs to run but removes bloated system packages, documentation, and tools that only matter for development machines.
The smaller size means faster deployments, lower storage costs, and quicker container startup times. Your team will notice the difference immediately when pulling images or deploying to production.
That's already a massive improvement with minimal effort. Let's keep optimizing and see what else we can squeeze out of this setup.
🧹 Cleaning Up Dependencies for Production
If we look inside our current image, we'll find it includes both development and production dependencies. That means TypeScript compilers, testing frameworks, linting tools, and other packages your production app never uses.
This creates unnecessary bloat, slower builds, and a larger attack surface for potential security issues.
To fix this, we're going to replace the separate `RUN npm install` and `RUN npm run build` steps with a single optimized command:
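A sketch of the combined step (script names assume the typical TypeScript setup used throughout):

```dockerfile
# Clean install, build, then strip dev dependencies, all in one layer
RUN npm ci && npm run build && npm prune --production
```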
Here's what each command does and why it matters:
- `npm ci` – Installs dependencies from `package-lock.json` in a clean, deterministic way. This is faster and more reliable than `npm install` for production builds.
- `npm run build` – We still need development dependencies (like TypeScript) to compile our code, so we build the project while they're still available.
- `npm prune --production` – After building, this removes all development dependencies, keeping only what's needed to run the app in production.
- Chaining with `&&` – Combining commands in a single `RUN` step creates fewer Docker layers, making builds faster.
Here's the updated Dockerfile:
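With the combined command in place, the Dockerfile looks like this:

```dockerfile
FROM node:22-alpine

WORKDIR /app

COPY . .

# Install, compile, then remove dev dependencies in a single layer
RUN npm ci && npm run build && npm prune --production

CMD ["npm", "start"]
```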
💡 With this change, the final image contains only what's strictly necessary to run your app in production — smaller, safer, and more efficient.
🧳 Copy Only What You Really Need
In our previous Dockerfile, we used this simple but problematic line:
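That line, from the earlier Dockerfile:

```dockerfile
COPY . .
```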
This copies your entire project directory into the Docker image, including files your production app never needs. That means documentation files like `README.md`, config files, CI/CD folders like `.github/`, local `.env` files, test folders, and temporary editor files all end up in your container.
All of this bloats your image and slows down Docker builds because Docker has to process and transfer more files than necessary.
To fix this, we can be more selective about what we copy. Here's the improved approach:
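A sketch of the selective copy (the `src/` and `tsconfig.json` names assume a typical TypeScript layout):

```dockerfile
# Copy the dependency manifests first so this layer stays cached
# until package.json or package-lock.json actually change
COPY package*.json ./
RUN npm ci

# Then copy only the files needed to build the app
COPY tsconfig.json ./
COPY src/ ./src/

RUN npm run build && npm prune --production
```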
Notice how we copy `package*.json` files first and run `npm ci` immediately after? This is a powerful Docker caching trick.

When you change your source code but don't modify dependencies, Docker can reuse the cached `npm ci` layer instead of reinstalling everything. This makes builds much faster, especially during development when you're making frequent code changes.
We can make this even better with a `.dockerignore` file that excludes unnecessary files from the build context entirely:
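A minimal `.dockerignore` along these lines (file names match the TypeScript layout assumed earlier):

```
# Ignore everything by default...
*

# ...then allow back only what the build actually needs
!package.json
!package-lock.json
!tsconfig.json
!src/
```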
This tells Docker to ignore everything (`*`) except the specific files we actually need. The `!` prefix means "don't ignore this file." This approach ensures that only essential files are even considered when building the image, making the build context smaller and faster to process.
The result is faster builds, smaller images, and better security since you're not accidentally including sensitive files in your containers.
🚦 Fixing Node.js Signal Handling Problems
A common way to start a Node.js app in Docker is using:
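Typically written in the Dockerfile as:

```dockerfile
CMD ["npm", "start"]
```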
This works for development, but it creates serious problems in production environments, especially around process signals and proper shutdown behavior.
When you run a Node app through `npm`, it doesn't run your Node.js process directly. Instead, npm becomes the main process and spawns your Node app as a child process. This breaks how Docker communicates with your application.
Here's what goes wrong:
- Broken shutdown signals – When Docker tries to stop your container (sending `SIGINT` or `SIGTERM`), these signals get stuck at npm and never reach your Node.js app. Your app can't clean up properly before shutting down.
- Zombie processes – Child processes might get orphaned and continue running in the background, consuming memory and resources even after your main app "stops."
- Wrong process ID – Your Node.js app doesn't run as PID 1 (the main process), which breaks Docker's built-in signal handling and process management.
To fix this, we are going to use a tool called dumb-init. It's a tiny process supervisor that runs as PID 1 and handles signals correctly, making sure they reach your Node.js app.
Here's the updated Dockerfile:
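A sketch, assuming the compiled entry point is `dist/main.js`:

```dockerfile
FROM node:22-alpine

# dumb-init is available in Alpine's package repository
RUN apk add --no-cache dumb-init

WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY tsconfig.json ./
COPY src/ ./src/

RUN npm run build && npm prune --production

# dumb-init runs as PID 1 and forwards signals to the node process
CMD ["dumb-init", "node", "dist/main.js"]
```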
Notice we're now calling `node dist/main.js` directly instead of going through npm. This means Docker can communicate properly with your Node.js process, graceful shutdowns work correctly, and your containers behave predictably in production.
This small change greatly improves stability and makes your containerized Node.js apps much more reliable ✨.
🏗️ Multi-Stage Builds
When building Docker images for Node.js applications, most developers put everything in a single image - build tools, development dependencies, source code, and the final compiled app. This creates bloated production images filled with tools your running app never uses.
Multi-stage builds solve this problem by splitting your Dockerfile into separate stages. Think of it like having two different kitchens: one messy kitchen where you prepare and cook everything, and one clean kitchen where you only serve the final meal.
We can have one stage that builds the app with all development dependencies and build tools, then a separate production stage that copies only the final compiled code and runtime dependencies. The build tools and source code get left behind.
Here's how we refactor our Dockerfile to use multi-stage builds:
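A sketch of the two-stage layout (stage names and file paths are illustrative):

```dockerfile
# ---- Build stage: everything needed to compile the app ----
FROM node:22-alpine AS build

RUN apk add --no-cache dumb-init

WORKDIR /app

COPY package*.json ./
RUN npm ci

COPY tsconfig.json ./
COPY src/ ./src/

RUN npm run build && npm prune --production

# ---- Production stage: only what's needed to run the app ----
FROM node:22-alpine AS production

WORKDIR /app

# Copy only the runtime essentials from the build stage
COPY --from=build /usr/bin/dumb-init /usr/bin/dumb-init
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/package.json ./package.json

CMD ["dumb-init", "node", "dist/main.js"]
```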
Here's what happens: the `build` stage installs everything, compiles TypeScript, and cleans up dependencies. The `production` stage starts fresh and copies only the essential files from the build stage: the compiled JavaScript, production dependencies, and dumb-init.
The result is a clean production image that contains zero build tools, zero source code, and zero development dependencies. It's smaller, more secure, and starts faster because there's less bloat to load.
By adopting multi-stage builds, we create optimized, production-ready Docker images without sacrificing development flexibility or build capabilities.
🔒 Don’t Run Your Containers as Root
By default, Docker containers run as the root user. While this might seem harmless inside an isolated container, it's actually a major security vulnerability.
If an attacker manages to break into your running container (through a code vulnerability, misconfigured network, or compromised dependency), running as root gives them complete control. They could delete critical files, access sensitive data, or even escape the container to attack your host system.
Here's a simple example of the damage they could do:
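A hypothetical demo of that damage, sketched in a scratch directory rather than a real container. It recreates the app layout, then shows how a single command run as root wipes the compiled application:

```shell
#!/bin/sh
# Recreate the container's /app layout in a temp dir (safe sandbox)
APP=$(mktemp -d)
mkdir -p "$APP/dist"
echo "console.log('hello')" > "$APP/dist/main.js"

# The attacker's single destructive command:
rm -rf "$APP/dist"

ls "$APP"   # prints nothing: dist/ is gone, the app can no longer start
```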
Since most Node.js apps depend on the `dist/` folder to run, this single command could take down your entire service. In worse scenarios, depending on mounted volumes and permissions, attackers could affect your host machine or other containers.
The good news? The official Node.js Docker images include a pre-created `node` user that's designed exactly for this purpose. This user has limited permissions and can run your app safely without root access.
Here's our updated Dockerfile with the security fix:
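A sketch of the production stage with the fix (the build stage is unchanged):

```dockerfile
FROM node:22-alpine AS production

WORKDIR /app

# --chown makes the copied files owned by the unprivileged user
COPY --from=build /usr/bin/dumb-init /usr/bin/dumb-init
COPY --from=build --chown=node:node /app/dist ./dist
COPY --from=build --chown=node:node /app/node_modules ./node_modules
COPY --from=build --chown=node:node /app/package.json ./package.json

# Drop root privileges before the app starts
USER node

CMD ["dumb-init", "node", "dist/main.js"]
```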
The `USER node` line switches from root to the non-root user before starting your application. Now if someone breaks into your container, they have limited permissions and can't cause system-wide damage.
🛡️ Running as a non-root user is a fundamental security practice for production containers. It's simple to implement and adds a critical layer of protection against container breakouts and privilege escalation attacks.
🌱 Set the Right Environment
Adding this single line to your production stage might look insignificant:
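The line in question:

```dockerfile
ENV NODE_ENV=production
```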
But this small change can dramatically improve your app's performance and security.
When `NODE_ENV` is set to `production`, Node.js libraries and frameworks automatically switch to their optimized production mode. Express disables verbose error messages that could leak sensitive information, React removes development warnings and debugging code, NestJS skips expensive validation checks, and many other libraries reduce memory usage and CPU overhead.
Without this setting, your production app might still run in development mode, which means slower performance, higher memory usage, detailed error messages that reveal internal code structure, and extra debugging features that attackers could potentially exploit.
Here's the final Dockerfile with this essential environment variable:
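The production stage with the new environment variable (the build stage is unchanged):

```dockerfile
FROM node:22-alpine AS production

ENV NODE_ENV=production

WORKDIR /app

COPY --from=build /usr/bin/dumb-init /usr/bin/dumb-init
COPY --from=build --chown=node:node /app/dist ./dist
COPY --from=build --chown=node:node /app/node_modules ./node_modules
COPY --from=build --chown=node:node /app/package.json ./package.json

USER node

CMD ["dumb-init", "node", "dist/main.js"]
```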
This simple environment variable ensures your app runs lean, fast, and secure in production by telling all Node.js libraries to use their most optimized, production-ready behavior.
🧠 Wrapping It All Up
After applying all these improvements, here's our final production-ready Dockerfile:
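A complete sketch combining every improvement (stage names, file paths, and npm scripts assume the TypeScript setup used throughout):

```dockerfile
# ---- Build stage ----
FROM node:22-alpine AS build

RUN apk add --no-cache dumb-init

WORKDIR /app

# Copy manifests first so dependency installation stays cached
COPY package*.json ./
RUN npm ci

# Copy only the sources needed to compile
COPY tsconfig.json ./
COPY src/ ./src/

# Compile, then strip dev dependencies
RUN npm run build && npm prune --production

# ---- Production stage ----
FROM node:22-alpine AS production

ENV NODE_ENV=production

WORKDIR /app

# Bring over only the runtime essentials
COPY --from=build /usr/bin/dumb-init /usr/bin/dumb-init
COPY --from=build --chown=node:node /app/dist ./dist
COPY --from=build --chown=node:node /app/node_modules ./node_modules
COPY --from=build --chown=node:node /app/package.json ./package.json

# Run as the unprivileged node user
USER node

# dumb-init runs as PID 1 and forwards signals correctly
CMD ["dumb-init", "node", "dist/main.js"]
```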
The transformation is remarkable: we've turned a basic, unoptimized setup into a professional Docker image that builds faster, runs smaller, and operates more securely. The final image is roughly 160 MB instead of 1.16 GB, cached rebuilds take seconds, and the app runs as a non-root user with only its production dependencies. Your deployments will be quicker, your storage costs lower, and your containers more reliable in production.
The best part? These techniques work for any Node.js application, whether you're building APIs, web servers, microservices, or background workers.
I hope this guide was helpful and gave you some useful takeaways to apply in your own projects.
Thanks for reading! 🙌