Docker has redefined how developers deploy applications. Instead of wrestling with dependency hell or watching your app break across different environments, containers package everything — code, runtime, libraries — into a single, portable unit. This guide walks you through a production-ready Docker setup on Vultr, from a fresh Ubuntu instance to a fully containerized web application running behind Nginx.
Why Run Docker on Vultr?
Vultr's compute instances are built on NVMe SSD storage with high-frequency CPUs — ideal for containerized workloads that demand fast I/O and quick startup times. Combined with Vultr's global network of 32 data centers, you can deploy containers close to your users with sub-10ms latency across major markets.
Compared to traditional VPS setups, Docker on Vultr gives you:
- Consistent environments — same container image runs identically on your laptop, staging server, and production Vultr instance
- Resource isolation — each service runs in its own container with guaranteed resources
- Fast deployment cycles — push an updated image, pull it on Vultr, restart the container. No SSH into the server to run git pull and pray
- Easy scaling — Docker Compose and Swarm make horizontal scaling straightforward
Prerequisites
You'll need:
- A Vultr account (sign up here)
- A Vultr Cloud Compute instance running Ubuntu 22.04 LTS or 24.04 LTS
- SSH access to your instance
- A domain name pointed to your Vultr server's IP
Step 1: Deploy Your Vultr Instance
From your Vultr dashboard, create a new Cloud Compute instance:
- Location: Choose a region nearest to your users
- OS: Ubuntu 22.04 LTS or 24.04 LTS
- Size: For a Docker host with a few containers, the $6/month plan (1 vCPU, 1GB RAM, 32GB SSD) works for light workloads. For production, start with $24/month (2 vCPU, 4GB RAM, 80GB SSD)
- Firewall: Enable both IPv4 and IPv6
```shell
# Verify Ubuntu version after SSHing in
grep "PRETTY_NAME" /etc/os-release
```
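If you provision instances with a script, it can help to fail fast on an unexpected base image. A minimal sketch of such a guard (it assumes a Debian-family image where dpkg is available; on anything else it just prints a warning):

```shell
# Guard: confirm we're on Ubuntu 22.04 or newer before provisioning.
if [ -r /etc/os-release ]; then
  . /etc/os-release
fi
if [ "${ID:-}" = "ubuntu" ] && dpkg --compare-versions "${VERSION_ID:-0}" ge "22.04" 2>/dev/null; then
  os_check="ok"
  echo "OK: Ubuntu ${VERSION_ID}"
else
  os_check="warn"
  echo "WARNING: expected Ubuntu 22.04 or newer, found ${ID:-unknown} ${VERSION_ID:-?}" >&2
fi
```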
Step 2: Install Docker on Ubuntu
Vultr's Ubuntu images come with minimal packages, so you'll install Docker from the official repository to get the latest stable version.
```shell
# Update package index and install prerequisites
sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release

# Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
  sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add Docker repository
echo "deb [arch=$(dpkg --print-architecture) \
  signed-by=/etc/apt/keyrings/docker.gpg] \
  https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# Install Docker Engine
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Enable and start Docker
sudo systemctl enable docker
sudo systemctl start docker

# Verify installation
sudo docker run --rm hello-world
```
If you see the "Hello from Docker!" message, Docker is running correctly.
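Before deploying real workloads, it's also worth capping container log growth: the default json-file driver keeps logs unbounded, which can quietly fill a small Vultr disk. A minimal sketch of a log-rotation config for /etc/docker/daemon.json (the file is written and validated locally first; the copy and restart steps are shown as comments since they need sudo and affect only newly created containers):

```shell
# Write a log-rotation config locally, then validate it before installing.
cat > daemon.json << 'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
EOF

# Validate the JSON before touching the live daemon config
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json is valid JSON"

# Install and apply (affects newly created containers only):
# sudo cp daemon.json /etc/docker/daemon.json
# sudo systemctl restart docker
```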
Run Docker Without Sudo
By default, only root can run Docker commands. For a personal server, add your user to the docker group:
```shell
sudo usermod -aG docker $USER

# newgrp applies the group in the current shell only;
# log out and back in to make it take effect everywhere
newgrp docker
```
Security note: Users in the docker group can effectively gain root privileges. Only add trusted users to this group.
Step 3: Install Docker Compose
Docker Compose (v2) is included with the docker-compose-plugin package you installed above. Verify it:
```shell
docker compose version
# Docker Compose version v2.24.0 or higher
```
Step 4: Your First Container — Nginx Web Server
Let's deploy a real web application. We'll run Nginx inside a container and serve a simple static site.
```shell
# Create project directory
mkdir -p ~/docker-website && cd ~/docker-website

# Create a simple HTML page
cat > index.html << 'EOF'
<!DOCTYPE html>
<html>
<head><title>Docker on Vultr</title></head>
<body>
  <h1>Running on Docker + Vultr! 🚀</h1>
  <p>Containerized web server, deployed successfully.</p>
</body>
</html>
EOF

# Create docker-compose.yml
cat > docker-compose.yml << 'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    volumes:
      - ./index.html:/usr/share/nginx/html/index.html:ro
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 10s
      retries: 3
EOF

# Start the container
docker compose up -d

# Check status
docker compose ps
```
Visit http://YOUR_VULTR_IP:8080 — you should see your containerized website running. The :alpine tag means the Nginx image is based on Alpine Linux, keeping it lean at ~40MB instead of 130MB+ for the full image.
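The resource isolation mentioned earlier can also be made explicit. With Docker Compose v2, deploy-level limits are honored even outside Swarm mode, so you can cap a service's CPU and memory per container. A hedged fragment (the numbers are illustrative; tune them to your Vultr plan):

```yaml
services:
  web:
    image: nginx:alpine
    # Hard caps enforced via cgroups; the container is throttled/OOM-killed
    # beyond these, rather than starving its neighbors
    deploy:
      resources:
        limits:
          cpus: "0.50"
          memory: 128M
```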
Step 5: Nginx Reverse Proxy with Docker
Running containers on different ports is fine for testing, but production deployments need a reverse proxy. We'll use the popular nginx-proxy container with automatic container discovery.
```shell
# Create a dedicated proxy network
docker network create proxy

# Create proxy directory
mkdir -p ~/proxy && cd ~/proxy

# Start nginx-proxy with Docker socket mount
cat > docker-compose.yml << 'EOF'
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs:ro
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
    networks:
      - proxy
    restart: unless-stopped

networks:
  proxy:
    external: true
EOF

docker compose up -d
```
```shell
# Update your website compose to use the proxy network
cd ~/docker-website
cat > docker-compose.yml << 'EOF'
services:
  web:
    image: nginx:alpine
    expose:
      - "80"
    volumes:
      - ./index.html:/usr/share/nginx/html/index.html:ro
    environment:
      - VIRTUAL_HOST=yourdomain.com
      - VIRTUAL_PORT=80
    networks:
      - proxy
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 10s
      retries: 3

networks:
  proxy:
    external: true
EOF

docker compose up -d
```
Now containers can register themselves with nginx-proxy just by setting the VIRTUAL_HOST environment variable. Add a new service? Update its compose file and docker compose up -d — nginx-proxy detects it automatically.
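For instance, adding a second app behind the proxy is just another Compose file attached to the same proxy network; nothing in the nginx-proxy setup itself changes. A sketch (the image name and subdomain are placeholders):

```yaml
# ~/docker-api/docker-compose.yml -- a hypothetical second service
services:
  api:
    image: your-registry/your-api:latest   # placeholder image
    expose:
      - "3000"
    environment:
      - VIRTUAL_HOST=api.yourdomain.com
      - VIRTUAL_PORT=3000
    networks:
      - proxy
    restart: unless-stopped

networks:
  proxy:
    external: true
```

Point an A record for api.yourdomain.com at the same Vultr IP, run docker compose up -d in that directory, and nginx-proxy routes the new hostname automatically.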
Step 6: SSL with Let's Encrypt
Pair nginx-proxy with nginx-proxy-acme for automatic Let's Encrypt certificates:
```shell
# Add acme-companion to the proxy compose
cd ~/proxy
cat > docker-compose.yml << 'EOF'
services:
  nginx-proxy:
    image: nginxproxy/nginx-proxy:alpine
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - ./certs:/etc/nginx/certs
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
    networks:
      - proxy
    restart: unless-stopped

  acme:
    image: nginxproxy/acme-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./certs:/etc/nginx/certs
      - ./acme:/etc/acme.sh    # acme-companion's persistent state directory
      - ./vhost.d:/etc/nginx/vhost.d
      - ./html:/usr/share/nginx/html
    environment:
      - DEFAULT_EMAIL=your@email.com
    networks:
      - proxy
    restart: unless-stopped

networks:
  proxy:
    external: true
EOF
```
```shell
# Add SSL environment variables to your app
cd ~/docker-website
cat > docker-compose.yml << 'EOF'
services:
  web:
    image: nginx:alpine
    expose:
      - "80"
    volumes:
      - ./index.html:/usr/share/nginx/html/index.html:ro
    environment:
      - VIRTUAL_HOST=yourdomain.com
      - VIRTUAL_PORT=80
      - LETSENCRYPT_HOST=yourdomain.com
      - LETSENCRYPT_EMAIL=your@email.com
    networks:
      - proxy
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost/"]
      interval: 30s
      timeout: 10s
      retries: 3

networks:
  proxy:
    external: true
EOF

docker compose -f ~/proxy/docker-compose.yml up -d
docker compose up -d
```
Certificates are automatically requested and renewed before expiration. No manual certbot commands needed.
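nginx-proxy also picks up per-domain configuration from the vhost.d directory mounted above: a file named after the domain is included inside that domain's server block. As a sketch, you could add an HSTS header for your site like this (the max-age value is a common choice, not a requirement; only enable HSTS once HTTPS is working):

```nginx
# ~/proxy/vhost.d/yourdomain.com -- extra nginx directives for this vhost
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```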
Step 7: Monitoring & Health Checks
The health checks defined in the Compose files above tell Docker whether a container is alive, and docker compose ps surfaces that status. For actual metrics and dashboards, here's how to set up lightweight monitoring with Prometheus + Grafana via Docker:
```shell
# Create monitoring compose file
mkdir -p ~/monitoring && cd ~/monitoring
cat > docker-compose.yml << 'EOF'
services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=15d'
    networks:
      - proxy
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:9090/-/healthy"]
      interval: 30s
      timeout: 10s
      retries: 3

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=changeme
      - VIRTUAL_HOST=grafana.yourdomain.com
      - LETSENCRYPT_HOST=grafana.yourdomain.com
    volumes:
      - grafana_data:/var/lib/grafana
    networks:
      - proxy
    restart: unless-stopped
    depends_on:
      - prometheus

volumes:
  prometheus_data:
  grafana_data:

networks:
  proxy:
    external: true
EOF

cat > prometheus.yml << 'EOF'
global:
  scrape_interval: 15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'docker'
    static_configs:
      - targets: ['host.docker.internal:9323']
EOF

docker compose up -d
```
On Linux hosts, host.docker.internal does not resolve inside containers by default. Add an extra_hosts entry of host.docker.internal:host-gateway to the prometheus service, and expose the Docker daemon's metrics endpoint by setting metrics-addr in /etc/docker/daemon.json (bound to an address containers can reach, and firewalled appropriately), then restart Docker. This lets Prometheus scrape the host's Docker metrics endpoint.
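Concretely, the two pieces look roughly like this (addresses are illustrative; if you bind the metrics endpoint to 0.0.0.0, restrict port 9323 with a firewall rule):

```yaml
# /etc/docker/daemon.json (JSON, shown for reference):
#   { "metrics-addr": "0.0.0.0:9323" }
#
# ~/monitoring/docker-compose.yml -- prometheus service, relevant lines only
services:
  prometheus:
    image: prom/prometheus:latest
    extra_hosts:
      # Maps host.docker.internal to the host's gateway IP on Linux
      - "host.docker.internal:host-gateway"
```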
Vultr Pricing for Docker Workloads
Here's how Vultr's Cloud Compute plans map to Docker use cases in 2026:
| Plan | Specs | Best For | $/mo |
|------|-------|----------|------|
| Starter | 1 vCPU, 1GB RAM, 32GB NVMe | Dev/test environments, personal projects | $6 |
| Standard | 2 vCPU, 4GB RAM, 80GB NVMe | Small production apps, 3-5 containers | $24 |
| Performance | 4 vCPU, 8GB RAM, 160GB NVMe | Multi-service stacks, databases + web | $48 |
| Enterprise | 8 vCPU, 16GB RAM, 320GB NVMe | High-traffic sites, CI/CD runners | $96 |
Vultr's billing is hourly, so you only pay for what you use. Spin up an instance for testing, take a snapshot, destroy it — you won't be charged for idle time. For comparison with competitors, check out our VPS cost comparison 2026.
If you're planning to run multiple containers that need GPU acceleration — say, for AI inference or video transcoding — Vultr's GPU instances start at $100/month with an NVIDIA A100 or H100, and Docker is fully supported there too.
Verdict
Docker + Vultr: A Production-Ready Combo
Vultr's NVMe-backed compute plus Docker's containerization is a pairing that scales from a $6/month hobby project to a $100+/month production deployment. The setup process takes under 30 minutes, and once you're running, deploying updates is a two-command affair (docker compose build && docker compose up -d).
The reverse proxy stack with automatic SSL from Let's Encrypt is particularly elegant — it turns a single Vultr instance into a full-featured hosting platform capable of running dozens of independent services, each with their own domain, all sharing port 80/443.
If you want to explore further, Kubernetes on Vultr is the natural next step when a single server isn't enough. And if you're after more insights on sports analytics infrastructure — one surprisingly popular use case for Vultr's compute — check out our guide on leveraging sports betting APIs.
Ready to deploy your first container?
Get started with Vultr's high-performance VPS — $6/month to try.
Deploy on Vultr →