Node.js Docker Setup: From Development Environment Patterns to CI Implementation
Update
I previously introduced patterns for developing on Docker, but I’ve come to feel that the best approach is to not develop on Docker at all. Node.js itself has few environment-specific differences (unlike the "works on Mac but not on Windows" issue), and developing on Docker is slow. It might be better to treat Docker strictly for container deployment.
I'm sharing the know-how from a Dockerfile I've been refining for two years. The sample code uses NestJS, but it should be helpful for other frameworks like Express too (I used Express before NestJS). Also, this is my first time using Prisma, so my usage might be a bit off; please point out anything that looks wrong.
Sample Code
This article covers:
- Developing locally using Docker for Desktop.
- Running tests using Docker on a CI server (GitHub Actions).
This article is intended for those who have used Docker before. I won't be explaining Docker or Docker for Desktop itself.
Patterns for Creating a Development Environment Using Docker for Desktop
There are several patterns for creating a development environment using Docker with Node.js.
Please make sure you understand the difference between bind mounts and volume mounts beforehand.
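As a quick refresher (a minimal sketch, not tied to this project's paths): a bind mount maps a host directory directly into the container, while a volume mount attaches a Docker-managed named volume declared under the top-level `volumes:` key.

```yaml
services:
  app:
    volumes:
      # Bind mount: host path on the left; changes propagate both ways
      - ./src:/app/src
      # Volume mount: named volume managed by Docker; survives container removal
      - deps:/app/node_modules
volumes:
  deps:
```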
Bind mounting all files, including node_modules, between the host machine and the container.
version: '3.7'
services:
  # For development
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '3000:3000'
    init: true
    volumes:
      # Bind mount
      - .:/home/node/nestjs_docker_sample
Bind mounting all files except node_modules between the host and container, while volume mounting node_modules.
version: '3.7'
services:
  # For development
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '3000:3000'
    init: true
    volumes:
      # Bind mount: specifying all files except node_modules
      - ./src:/home/node/nestjs_docker_sample/src
      - ./test:/home/node/nestjs_docker_sample/test
      # ...omitted
      # Volume mount
      - node_modules:/home/node/nestjs_docker_sample/node_modules
volumes:
  node_modules:
Bind mounting all files including node_modules, but arranging the directory layout so the container-side node_modules can be volume mounted and isn't overwritten by the host.
# Container side
- /home/node/nestjs_docker_sample
  - package.json
  - package-lock.json
  - node_modules
  - app
    - src
      - index.js
    - ...
# Host side
- nestjs_docker_sample
  - Dockerfile
  - package.json
  - package-lock.json
  - node_modules
  - src
    - index.js
  - ...
version: '3.7'
services:
  # For development
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '3000:3000'
    init: true
    volumes:
      # Bind mount
      - ./:/home/node/nestjs_docker_sample/app/
      # Volume mount
      # The volume below prevents the host machine's node_modules from being mounted into the container
      - node_modules:/home/node/nestjs_docker_sample/app/node_modules/
    command: bash -c "rm -rf /home/node/nestjs_docker_sample/app/node_modules/* && nodemon index.js"
volumes:
  node_modules:
Volume mounting only node_modules and developing inside the container using VSCode Remote Containers.
version: '3.7'
services:
  # For development
  app:
    build:
      context: .
      dockerfile: ./Dockerfile
    ports:
      - '3000:3000'
    init: true
    volumes:
      # Volume mount
      - 'node_modules:/home/node/nestjs_docker_sample/node_modules'
volumes:
  node_modules:
The second and third patterns are essentially doing the same thing.
The reason there are so many patterns is that development use cases differ.
Development Environment Use Cases
When you want to develop exclusively within the container
First, volume mount only node_modules. Next, use VSCode Remote Containers or develop using vim inside the container.
VSCode Remote Containers is revolutionary, but the downsides are that you need to set up editor extensions again and it can feel slightly laggy.
Commands and npm install are all executed inside the container. When using VSCode Remote Containers, I wonder how people handle things like Git user settings.
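Regarding Git user settings: VS Code's Remote Containers documentation states that it automatically copies the host's `~/.gitconfig` into the container. If that doesn't cover your setup, one option (a sketch; the target path assumes the container user is `node`) is a bind mount in `devcontainer.json`:

```jsonc
{
  "mounts": [
    // Assumption: container user is `node`; adjust the target to your user's home
    "source=${localEnv:HOME}/.gitconfig,target=/home/node/.gitconfig,type=bind"
  ]
}
```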
When the environment is in the container, but you want linters and formatters to work on the host editor.
If the node_modules files are not on the host machine, linters and formatters sometimes fail to work properly. Also, type errors might not be displayed, making development on the host editor difficult. Therefore, you keep a copy of the node_modules files on the host machine.
In this case, there are two patterns.
The problem is that when you bind mount, files on the container are forcibly overwritten by files from the host machine. This is problematic during the initial mount.
1. If `npm ci` is written in the Dockerfile, the container is created with libraries already in `node_modules`.
2. You set `node_modules` as a target for bind mounting and start `docker-compose`.
3. During the first mount, `node_modules` on the host machine is empty.
4. The host machine's `node_modules` overwrites the container's `node_modules`.
5. The container's `node_modules` becomes empty.
This is the flow. This can be resolved by manually entering the container after the bind mount and running npm ci. This puts the libraries into the host machine's node_modules, so it's fine even if it gets overwritten during subsequent bind mounts. However, it feels redundant to run npm ci again after the bind mount when you've already done it in the Dockerfile.
There are two solutions for this. One is to use Dockerfile multi-stage builds to exclude npm ci from the development image, and the other is to structure the directories so that the node_modules folders on the host and container do not collide.
Also, there are reports that bind mounting node_modules with the host machine can be quite slow. This article goes into detail on that. If anyone is knowledgeable about the compatibility between Docker and Mac, please let me know.
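One historical mitigation on Docker Desktop for Mac was the consistency flags (`:cached`, `:delegated`) on bind mounts; note that newer Docker Desktop releases still accept them but treat them as no-ops. A minimal sketch:

```yaml
services:
  app:
    volumes:
      # :delegated relaxes host/container write consistency to speed up I/O on macOS
      # (accepted but ignored on newer Docker Desktop versions)
      - .:/home/node/nestjs_docker_sample:delegated
```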
Pattern 1: Accept forced overwriting by the host machine
This is a very simple setup. You just bind mount all files, including node_modules. Operations like npm install will be slower, but I prefer it because it's simpler and easier to use. The issue of overwriting node_modules installed during the Dockerfile process can be avoided through how you write the Dockerfile.
Commands like npm install are executed inside the container, while Git is executed outside the container.
Pattern 2: Cannot accept forced overwriting by the host machine
This is a fairly complex setup. By changing the directory structure between the host and container, you ensure that node_modules does not collide during bind mounting. Then you run npm ci on both the host and the container. Additionally, you volume mount the container-side node_modules to prevent I/O performance degradation. It gets quite complicated, so I don't recommend it. The method is described in this article:
With this method, since you run npm ci on the host machine as well, you should be able to execute commands on the host too.
When the environment is in the container, but you don't need linters or formatters on the host editor.
You bind mount all files except node_modules and volume mount only node_modules.
However, there's a possibility that linter and formatter extensions on the host editor will stop working.
So, which one is best?
My recommendation is to either go container-only using VSCode's Remote Containers feature or to bind mount all files including node_modules.
In this article, I will explain Pattern 1 for the use case: "The environment is in the container, but I want to develop on the host editor and use linters and formatters."
Creating the Dockerfile and docker-compose
We will use multi-stage builds. I am not very familiar with how to use BuildKit. If anyone knows an effective way to use a Dockerfile with BuildKit, I would love to read an article about it. I have named the application nestjs_docker_sample.
Key Points:
- The Node-based Docker image includes a home directory and a user named `node`, so we will use those.
- Environment variables need to be configured for global installations.
- Installing CLI tools globally during the development stage allows us to use those commands within the container.
- Do not run `npm ci` in the Dockerfile for the development environment. New developers should run `npm ci` after building the image and running `docker-compose up`.
- Create dedicated test environments for running on CI in both the Dockerfile and docker-compose.
- In the dedicated test environment, run `npm ci` in the Dockerfile and volume mount node_modules in docker-compose.
Here is the Dockerfile. This is my first time using Prisma for this sample, so the Prod stage might not function correctly.
###############
# base #
###############
# Production base. Only essential OS libraries are installed here.
FROM node:14.19-alpine3.15 as base
ENV LANG=ja_JP.UTF-8
ENV HOME=/home/node
ENV APP_HOME="$HOME/nestjs_docker_sample"
WORKDIR $APP_HOME
# Port number. Note that EXPOSE doesn't actually publish the port; it's purely informational documentation for the image.
# https://shinkufencer.hateblo.jp/entry/2019/01/31/233000
EXPOSE 3000
# Global install. curl was added for easy API checks locally.
# git is required for jest's watch mode.
# postgresql-client is needed if using postgres for the DB.
RUN apk upgrade --no-cache && \
apk add --update --no-cache \
postgresql-client curl git
# https://github.com/nodejs/docker-node/blob/main/docs/BestPractices.md#global-npm-dependencies
# npm global settings
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
# Copy package files
COPY package*.json ./
# Copy .npmrc here if it exists
# COPY .npmrc ./
# Make all files owned by the node user
RUN chown -R node:node .
USER node
# WORKDIR is a Dockerfile instruction, not a shell variable, so echo APP_HOME instead
RUN echo "APP_HOME is $APP_HOME . HOME is $HOME . LANG is $LANG ." && npm config list
###############
# dev #
###############
# Assumption: code sharing via docker-compose
# We don't run npm ci at this stage because it would be overwritten by the host-side node_modules during mounting.
FROM base as dev
ENV NODE_ENV=development
# Install global packages here
RUN npm i -g @nestjs/cli
RUN npm i -g prisma
###############
# test #
###############
# Assumption: sharing code other than node_modules via docker-compose.
# Corresponds to prebuild-test in docker-compose.
FROM dev as test
ENV NODE_ENV=test
RUN npm ci
###############
# build #
###############
# Build the source code.
# Tests are performed before the build because test files are excluded during the build.
FROM test as build
COPY --chown=node . .
RUN npm run build
###############
# prod #
###############
# Executed by default if no target is specified
# Only dependencies are installed
FROM base as prod
ENV NODE_ENV=production
# Configuration files. Copy files required for execution.
# Prisma-related source code might be needed here. Not verified for production.
COPY --from=build $APP_HOME/dist $APP_HOME/.dockerignore ./
RUN npm ci --only=production \
&& npm cache clean --force
# App execution command
CMD ["node", "src/main.js"]
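To try the stages individually, each one can be built with `--target` (the image tags below are just examples; the Dockerfile path matches the docker-compose in this article):

```
# Development image (no npm ci baked in)
docker build --target dev -t nestjs_docker_sample:dev -f infra/node/Dockerfile .
# CI test image (npm ci baked in)
docker build --target test -t nestjs_docker_sample:test -f infra/node/Dockerfile .
# Production image (no target means the last stage, prod)
docker build -t nestjs_docker_sample:prod -f infra/node/Dockerfile .
```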
Here is the docker-compose.
version: '3.7'
services:
  # For development
  app:
    build:
      context: .
      dockerfile: ./infra/node/Dockerfile
      # Specify the multi-stage build target
      target: dev
    ports:
      - '3000:3000'
      # For debugger
      - '9229:9229'
      # prisma studio
      - '5555:5555'
    # Address PID 1 issue
    init: true
    volumes:
      # In development, bind mount everything including node_modules. npm ci is not executed during image build. Run npm ci in the container at the start of development.
      - '.:/home/node/nestjs_docker_sample'
    env_file:
      - .env.local
    command: npm run start:dev
  # Intended for use in CI tests.
  prebuild-test:
    build:
      context: .
      dockerfile: ./infra/node/Dockerfile
      target: test
    ports:
      - '3000:3000'
    init: true
    volumes:
      - ./:/home/node/nestjs_docker_sample
      - ./coverage:/home/node/nestjs_docker_sample/coverage
      # https://stackoverflow.com/questions/30043872/docker-compose-node-modules-not-present-in-a-volume-after-npm-install-succeeds
      # Do not bind mount node_modules in the test environment. If the host-side node_modules is bind mounted, the npm ci result from the image build is wiped out, requiring re-installation on CI. Use a volume mount instead.
      - node_modules:/home/node/nestjs_docker_sample/node_modules
    env_file:
      - .env.local
    command: npm run ci:test
  postgres:
    build: ./infra/postgres
    volumes:
      - pg-data:/var/lib/postgresql/data
      - ./infra/postgres/initdb:/docker-entrypoint-initdb.d
    ports:
      - '5432:5432'
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    # For viewing SQL logs
    # command: ['postgres', '-c', 'log_statement=all']
volumes:
  pg-data:
    driver: 'local'
  node_modules:
Setting Up GitHub Actions
name: Docker Image CI
on: push
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: echo docker and compose version
        run: docker -v && docker-compose -v
      - name: build stateful server and migrate
        run: docker-compose up -d --build postgres
      - name: create coverage dir
        run: mkdir -p coverage && chmod 777 coverage
      - name: chown prisma schema
        run: sudo chown -R $USER:$(id -gn $USER) prisma
      - name: run migrate & test
        run: docker-compose run prebuild-test
      # Save coverage report in Codecov
      # - name: Upload coverage to Codecov
      #   uses: codecov/codecov-action@v2
      #   env:
      #     CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}
Organizing docker-compose Commands with a Makefile
.PHONY: init
init:
	make clean
	docker-compose build
	docker-compose run --rm app npm ci
	docker-compose run --rm app prisma migrate dev

.PHONY: clean
clean:
	docker-compose down --volumes

.PHONY: dev
dev:
	docker-compose stop app
	docker-compose up app

.PHONY: unit
unit:
	docker-compose run --rm app npm run test

.PHONY: e2e
e2e:
	docker-compose run --rm app npm run test:e2e

.PHONY: infra
infra:
	docker-compose stop postgres
	docker-compose up postgres

.PHONY: bash
bash:
	docker-compose run --rm --service-ports app sh
How to Develop
Initial Setup
$ make init
During Development
$ make dev
Kill with Ctrl-c when finished.
E2E Testing
$ make infra
In another tab:
$ make e2e
Unit Testing
$ make unit
Executing Commands
When you want to run commands like npm install, prisma-cli, or nest-cli:
$ make bash
Enter the container and execute. Type exit when finished.
Discussion