Building Microservices with Node.js, Docker, and Kubernetes
Description: In this guide, we’ll walk through developing a microservices architecture using Node.js for the backend, Docker for containerization, and Kubernetes for orchestration. You’ll see how to design, code, containerize, and deploy a simple multi-service application in a production-ready way.
Introduction
Microservices have moved from being a buzzword to becoming the dominant architectural style for modern applications. Instead of building one giant monolith, microservices break systems down into small, independent, loosely coupled services that can be developed, deployed, and scaled individually.
For DevOps engineers, microservices open the door to automation, scalability, and agility—but they also introduce complexity. Services need to communicate with each other reliably, deployments must be repeatable, and orchestration tools become essential.
In this article, we’ll show how to build and deploy a microservices-based application step by step using three powerful tools:
Node.js → Fast, lightweight JavaScript runtime for building backend services.
Docker → Ensures consistent, portable environments through containerization.
Kubernetes → Provides orchestration, scaling, and resilience for containerized services.
By the end, you’ll have a working microservices system running on Kubernetes, with all the moving parts in place.
Why Node.js, Docker, and Kubernetes?
Before diving in, let’s justify the stack.
Node.js:
Non-blocking I/O, perfect for microservices handling concurrent requests.
Rich ecosystem of NPM libraries.
Simple and fast to prototype services in JavaScript/TypeScript.
Docker:
Encapsulates services in portable containers.
Eliminates “works on my machine” issues.
Integrates seamlessly with CI/CD pipelines.
Kubernetes:
Manages deployment, scaling, and communication between microservices.
Provides service discovery, self-healing, and rolling updates.
The de facto standard for container orchestration in production.
Together, they form a powerful trio for building resilient, scalable, and developer-friendly systems.
Designing the Microservices System
We’ll keep our demo application simple but realistic. Imagine we’re building a Bookstore API with two microservices:
Books Service → Manages book data (title, author, price).
Orders Service → Handles customer orders referencing books.
Each service will:
Be implemented with Node.js + Express.
Have its own Docker image.
Be deployed independently on Kubernetes.
We’ll also add an API Gateway (optional but common in production) to route requests to the correct service.
Architecture Overview
Client → API Gateway → [Books Service, Orders Service] → MongoDB (per service DB)
Each service has its own database (a microservices best practice). For simplicity, the demo services below keep their data in memory; Step 6 covers adding real databases.
Services communicate via REST (could also be gRPC or messaging in real-world systems).
Kubernetes will handle discovery, load balancing, and scaling.
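One possible repository layout looks like this; the directory names match the paths used in the rest of this guide, while the bookstore/ root is arbitrary:

bookstore/
├── books/
│   ├── index.js
│   ├── package.json
│   └── Dockerfile
├── orders/
│   ├── index.js
│   ├── package.json
│   └── Dockerfile
└── k8s/
    ├── books-deployment.yaml
    ├── orders-deployment.yaml
    └── ingress.yaml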
Step 1: Creating the Node.js Microservices
Let’s write two simple Node.js services using Express.
Books Service (books/index.js)
const express = require('express');

const app = express();
app.use(express.json()); // built-in body parser (Express 4.16+), replaces the body-parser package

// In-memory store for demo purposes; see Step 6 for real databases
let books = [
  { id: 1, title: "Clean Code", author: "Robert C. Martin", price: 25 },
  { id: 2, title: "The Pragmatic Programmer", author: "Andy Hunt", price: 30 }
];

// Get all books
app.get('/books', (req, res) => {
  res.json(books);
});

// Get a single book
app.get('/books/:id', (req, res) => {
  const book = books.find(b => b.id === parseInt(req.params.id, 10));
  if (!book) return res.status(404).send("Book not found");
  res.json(book);
});

// Add a new book
app.post('/books', (req, res) => {
  const book = {
    id: books.length + 1,
    title: req.body.title,
    author: req.body.author,
    price: req.body.price
  };
  books.push(book);
  res.status(201).json(book);
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => console.log(`Books service running on port ${PORT}`));
Orders Service (orders/index.js)
const express = require('express');
const axios = require('axios');

const app = express();
app.use(express.json()); // built-in body parser (Express 4.16+), replaces the body-parser package

// In-memory store for demo purposes; see Step 6 for real databases
let orders = [];

// Get all orders
app.get('/orders', (req, res) => {
  res.json(orders);
});

// Place a new order
app.post('/orders', async (req, res) => {
  const { bookId, quantity } = req.body;
  try {
    // Call the Books service to validate the book; in Kubernetes,
    // "books-service" resolves via cluster DNS
    const response = await axios.get(`http://books-service:3000/books/${bookId}`);
    const book = response.data;

    const order = {
      id: orders.length + 1,
      bookId,
      quantity,
      total: book.price * quantity
    };
    orders.push(order);
    res.status(201).json(order);
  } catch (error) {
    // A 404 from the Books service means the book doesn't exist;
    // anything else (e.g., the service being unreachable) is an upstream failure
    if (error.response && error.response.status === 404) {
      return res.status(400).send("Invalid book ID");
    }
    res.status(502).send("Books service unavailable");
  }
});

const PORT = process.env.PORT || 3001;
app.listen(PORT, () => console.log(`Orders service running on port ${PORT}`));
Notice that the Orders Service calls the Books Service over HTTP at http://books-service:3000. Later, in Kubernetes, the service name books-service will resolve automatically via cluster DNS.
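Each service also needs a package.json so that npm install in the Docker build below can resolve dependencies. A minimal sketch for the Orders Service follows (the version ranges are assumptions; the Books Service needs only express):

{
  "name": "orders-service",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.18.0",
    "axios": "^1.6.0"
  }
}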
Step 2: Dockerizing the Services
Each service needs a Dockerfile so it can run in a container.
Books Service Dockerfile (books/Dockerfile)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
Orders Service Dockerfile (orders/Dockerfile)
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3001
CMD ["node", "index.js"]
To build and run locally:
# Build images
docker build -t books-service ./books
docker build -t orders-service ./orders
# Run containers
docker run -p 3000:3000 books-service
docker run -p 3001:3001 orders-service
At this point, you can hit http://localhost:3000/books and http://localhost:3001/orders.
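One caveat when running outside Kubernetes: the hostname books-service hard-coded in the Orders Service won't resolve from a standalone container. If you want to exercise order placement locally, one option is a user-defined Docker network, which gives containers DNS by container name:

docker network create bookstore
docker run -d --network bookstore --name books-service -p 3000:3000 books-service
docker run -d --network bookstore --name orders-service -p 3001:3001 orders-service

Because the container is named books-service, the Orders Service's URL now resolves on the shared network, just as it will later in the cluster.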
Step 3: Kubernetes Deployment
Now comes the orchestration. Kubernetes will manage pods, services, and networking.
Books Service Deployment (k8s/books-deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: books-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: books
  template:
    metadata:
      labels:
        app: books
    spec:
      containers:
        - name: books
          image: books-service:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: books-service
spec:
  selector:
    app: books
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
Orders Service Deployment (k8s/orders-deployment.yaml)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: orders-service:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3001
---
apiVersion: v1
kind: Service
metadata:
  name: orders-service
spec:
  selector:
    app: orders
  ports:
    - protocol: TCP
      port: 3001
      targetPort: 3001
Testing in Kubernetes
Apply the manifests:
kubectl apply -f k8s/books-deployment.yaml
kubectl apply -f k8s/orders-deployment.yaml
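Note that the manifests reference locally built images with imagePullPolicy: IfNotPresent, so the cluster's nodes must already have those images. If pods report ErrImagePull or ImagePullBackOff on a local cluster, load the images first:

# minikube
minikube image load books-service:latest
minikube image load orders-service:latest

# kind
kind load docker-image books-service:latest orders-service:latest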
Verify:
kubectl get pods
kubectl get services
You should see books-service and orders-service exposed inside the cluster.
To test locally:
kubectl port-forward service/books-service 3000:3000
kubectl port-forward service/orders-service 3001:3001
Then open:
http://localhost:3000/books
http://localhost:3001/orders
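With both port-forwards running, you can exercise the full flow, including the Orders → Books call inside the cluster:

curl http://localhost:3000/books
curl -X POST http://localhost:3001/orders \
  -H "Content-Type: application/json" \
  -d '{"bookId": 1, "quantity": 2}'

The second request should return an order with total: 50, confirming that the Orders Service reached the Books Service through Kubernetes DNS.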
Step 4: Adding an API Gateway (Optional but Recommended)
In real-world deployments, you’d usually have an API Gateway (e.g., NGINX Ingress, Kong, or Istio). Let’s use a simple Kubernetes Ingress.
Ingress Definition (k8s/ingress.yaml)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bookstore-ingress
spec:
  rules:
    - http:
        paths:
          - path: /books
            pathType: Prefix
            backend:
              service:
                name: books-service
                port:
                  number: 3000
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 3001
Apply it:
kubectl apply -f k8s/ingress.yaml
Now you can route all traffic through a single endpoint.
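Keep in mind that an Ingress resource does nothing by itself; the cluster needs an Ingress controller to act on it. On minikube, for example, you can enable the bundled NGINX controller:

minikube addons enable ingress

Depending on your setup, you may also need to name the class under spec in ingress.yaml:

ingressClassName: nginx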
Step 5: Scaling and Resilience
With Kubernetes, scaling services is as simple as:
kubectl scale deployment books-deployment --replicas=5
Kubernetes will handle load balancing automatically across pods.
If a pod crashes, Kubernetes restarts it. If a node goes down, pods are rescheduled. This is the real power of orchestration.
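Manual scaling is fine for experiments, but you can also let Kubernetes scale on load with a HorizontalPodAutoscaler. A minimal sketch (it assumes the metrics-server add-on is installed and that the containers declare CPU resource requests, which our demo manifests omit):

kubectl autoscale deployment books-deployment --cpu-percent=50 --min=2 --max=10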
Step 6: Production Considerations
Our demo works, but production requires more:
Databases → Each service should have its own persistent database (PostgreSQL, MongoDB, etc.), deployed via StatefulSets.
CI/CD → Automate builds and deployments with GitHub Actions, Jenkins, or GitLab CI.
Secrets Management → Use Kubernetes Secrets for API keys and credentials (see the sketch after this list).
Monitoring → Integrate Prometheus + Grafana for metrics.
Logging → Centralized logging with ELK/EFK stack.
Service Mesh → For advanced traffic routing and observability, consider Istio or Linkerd.
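As a taste of the secrets item above, here is a minimal sketch; the Secret name, key, and placeholder value are hypothetical. First create the Secret:

kubectl create secret generic books-db-credentials \
  --from-literal=MONGO_URI='<your-connection-string>'

Then inject it into the container spec of a Deployment as an environment variable:

env:
  - name: MONGO_URI
    valueFrom:
      secretKeyRef:
        name: books-db-credentials
        key: MONGO_URI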
Benefits of This Approach
Scalability → Independent scaling per microservice.
Resilience → Services recover automatically with Kubernetes.
Portability → Docker ensures the same image runs anywhere.
Flexibility → Each service can use a different tech stack if needed.
Challenges to Watch For
Service-to-service communication can get complex (consider async messaging).
Data consistency across services needs patterns like Saga.
Overhead → Kubernetes adds a learning curve and operational complexity.
But with proper DevOps practices, these challenges are manageable.
Conclusion
We’ve built a Bookstore microservices app using Node.js, Docker, and Kubernetes. Along the way, you saw:
How to design services around business domains.
How to write simple Express APIs.
How to containerize with Docker.
How to orchestrate with Kubernetes.
How to add an Ingress for unified access.
This stack represents a modern DevOps-ready workflow for microservices in production.
The next step? Expand the system with databases, add CI/CD pipelines, and integrate monitoring. Once you’ve got that in place, you’ll be running a truly production-grade microservices architecture.

