This is sort of a follow-up to my previous post on setting up Docker with two different environments. I wanted to share another trick we use to efficiently run a lot of non-critical applications.
Problem Statement
We have about nine different applications that are not really in production: some are staging environments, some are low-priority applications, and some are totally miscellaneous. Regardless, they all need to be generally live and accessible, but we don't want to run nine separate EC2 instances, or even nine separate VMs for that matter, to keep them all up.
As shown before, we can set up a Docker Compose project for each one. So let's say we have nine different compose projects, some of which have many services. In total, just to run these apps, we may have around 20-30 services!
How can we route traffic to each of these services within Docker and make many of them publicly accessible?
Solution + Implementation
This problem has been solved for a long time with what is called a reverse proxy. In this case, we will use Nginx Proxy Manager running in a Docker container, which is simple and elegant.
Setting up NGINX
docker-compose.yml
The docker-compose file to run nginx-proxy-manager is generally trivial. It just uses the prebuilt image from jc21 and publishes ports 80 (HTTP), 81 (the admin UI), and 443 (HTTPS). It also mounts the ./data and ./letsencrypt volumes so that the proxy configuration and SSL certs persist on the host.
The important piece is the network. For this paradigm to work, all of the docker-compose services must be on the same Docker network so that nginx can route traffic to all of them, but we do not want to expose the containers on the host's local network for security reasons. In the nginx-proxy-manager compose file, we declare this network, called "proxy", and attach the service to it.
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - proxy
networks:
  proxy:
    driver: bridge
Example Project Docker Compose
Here's an example of a docker-compose file that works within this setup. It launches two services, web and db, both attached to the network created above. Note that Compose prefixes the "proxy" network name with its project name, "nginx-reverse-proxy", so other projects reference it as "nginx-reverse-proxy_proxy" and mark it external, since it is created by the proxy project.
version: '3.8'
services:
  web:
    image: nginx:latest
    container_name: nginx_web
    ports:
      - "8080:80"
    networks:
      - nginx-reverse-proxy_proxy
    volumes:
      - ./html:/usr/share/nginx/html
  db:
    image: postgres:13
    container_name: postgres_db
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: exampledb
    ports:
      - "5432:5432"
    networks:
      - nginx-reverse-proxy_proxy
networks:
  # Created by the nginx-reverse-proxy project, so it is declared external here.
  # (An external network cannot also specify a driver, so that key is omitted.)
  nginx-reverse-proxy_proxy:
    external: true
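One caveat: because the network is declared external here, this project will fail to start if the network does not exist yet. If the example project ever needs to come up before the proxy project, the network can be created by hand first (the name assumes the proxy project is run with -p nginx-reverse-proxy, as shown in the next section):

docker network create nginx-reverse-proxy_proxy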
Running the projects
First, the nginx-reverse-proxy docker-compose can be started, which creates the proxy network and runs the container under the specified project name:
docker compose -p nginx-reverse-proxy up -d
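As a quick sanity check, the network should now exist under the prefixed name:

docker network ls --filter name=proxy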
Next, the example docker-compose can be run:
docker compose -p example-project up -d
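Since both projects now share the proxy network, one way to confirm everything is attached is to inspect the network and list its containers (the --format template is just a convenience; plain docker network inspect works too):

docker network inspect nginx-reverse-proxy_proxy --format '{{range .Containers}}{{.Name}} {{end}}'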
At this point, three containers should be running:
- db
- web
- nginx-reverse-proxy
The docker ps output should look something like this:
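(The container IDs, timestamps, and the compose-generated NPM container name below are illustrative; exact values will differ on your machine.)

CONTAINER ID   IMAGE                             COMMAND                  CREATED         STATUS         PORTS                                                          NAMES
1a2b3c4d5e6f   jc21/nginx-proxy-manager:latest   "/init"                  2 minutes ago   Up 2 minutes   0.0.0.0:80->80/tcp, 0.0.0.0:81->81/tcp, 0.0.0.0:443->443/tcp   nginx-reverse-proxy-app-1
2b3c4d5e6f7a   nginx:latest                      "/docker-entrypoint.…"   1 minute ago    Up 1 minute    0.0.0.0:8080->80/tcp                                           nginx_web
3c4d5e6f7a8b   postgres:13                       "docker-entrypoint.s…"   1 minute ago    Up 1 minute    0.0.0.0:5432->5432/tcp                                         postgres_db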
Configuring NGINX Proxy Manager (NPM)
- First, hit localhost (http://localhost) in a browser to confirm NPM is running; you should see its default landing page.
- Now hit the same service on port 81 (http://localhost:81) and log in with the default credentials:
- admin@example.com / changeme
- Finally, go to the Proxy Hosts tab and configure NGINX to route each incoming domain to a hostname it can reach within the Docker network (the container name, e.g. nginx_web) and the port that container listens on (e.g. 80).
Once this is set up, the domains just need A records pointed at the server, and NPM will handle routing each domain to the appropriate Docker container and the port that container is running on!
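Before pointing real DNS at the server, the routing can be smoke-tested with curl by setting the Host header manually (app.example.com here is a placeholder for whatever domain was configured in NPM, and <server-ip> is the host's address):

curl -H "Host: app.example.com" http://<server-ip>/

If NPM is wired up correctly, this returns the response from the matching container rather than NPM's default page.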