Containers changed everything. Spin up a PostgreSQL database in 10 seconds. Deploy a Node.js API without worrying about Node versions. Run five different microservices on the same machine without them stepping on each other. Docker makes it possible — and with Vultr's NVMe-backed VPS, you get the performance to run it properly.
This guide walks you through a complete Vultr Docker setup: installing Docker, running your first containers, and deploying a real production-ready web application. By the end, you'll have a fully containerized stack running on a $6/mo Vultr VPS.
The biggest reason this combo works is cost. Compared to Docker's own cloud offering or managed Kubernetes services, you're looking at roughly an 80% cost reduction for equivalent compute. The tradeoff: you manage the server yourself. For most indie projects and small teams, that's a non-issue.
You'll need a Vultr account and a VPS running Ubuntu, with root or sudo access. For this guide, we're using the $6/mo plan: 1 vCPU, 1GB RAM, 32GB NVMe. It's tight for Docker in production but fine for learning and light workloads. For running multiple containers or anything serious, jump to the $20/mo plan (3 vCPU, 4GB RAM, 80GB NVMe).
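Once you're logged in, it's worth checking what you actually have to work with. These are standard Linux tools, so they'll work on any Vultr Ubuntu image:

```shell
# What Docker will have to work with on this VPS
nproc        # vCPU count
free -h      # RAM, total and available
df -h /      # disk space on the root filesystem
```

If the numbers don't match the plan you picked, you deployed the wrong instance — easier to fix now than after you've containerized everything.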
The official Docker repository gives you the latest stable version. A handful of commands handles everything:
# Update packages
sudo apt update && sudo apt upgrade -y
# Install prerequisites
sudo apt install -y ca-certificates curl gnupg lsb-release
# Add Docker's official GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
# Add Docker repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
# Verify installation
sudo docker run hello-world
If you see a "Hello from Docker!" message, Docker is installed and running. The last line pulls a test image and runs it in a container to confirm everything works.
By default, you'll need sudo before every Docker command. To run Docker as your regular user, add yourself to the docker group:
sudo usermod -aG docker $USER
# Log out and back in for the group change to take effect
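After logging back in, you can confirm the group change took effect with standard tools (nothing here touches the Docker daemon itself):

```shell
# Verify the current user is in the docker group
if id -nG | grep -qw docker; then
  echo "docker group active"
else
  echo "log out and back in first"
fi
```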
You don't want Docker stopping when your server reboots. Enable the service:
sudo systemctl enable docker
sudo systemctl start docker
sudo systemctl status docker
The status command should show Docker as "active (running)." Now your containers will survive server reboots automatically.
For multi-container applications, Docker Compose is essential. It lets you define your entire stack in a YAML file and spin it up with one command. The docker-compose-plugin package installed above already provides it, so there's nothing extra to install.
Verify it works:
docker compose version
You should see something like Docker Compose version v2.x.x.
Let's deploy something practical. We'll run a static website served by Nginx inside a Docker container, accessible from the internet.
First, create the project structure:
mkdir -p ~/docker-projects/static-site/html
cd ~/docker-projects/static-site
Create the HTML file:
cat > ~/docker-projects/static-site/html/index.html << 'EOF'
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>My Docker Site on Vultr</title>
  <style>
    body { font-family: sans-serif; display: flex; justify-content: center; align-items: center; height: 100vh; margin: 0; background: #1a73e8; color: white; }
    h1 { font-size: 48px; }
  </style>
</head>
<body>
  <h1>🐳 Running on Vultr with Docker!</h1>
</body>
</html>
EOF
Create the Docker Compose file:
cat > ~/docker-projects/static-site/docker-compose.yml << 'EOF'
version: '3.8'
services:
  web:
    image: nginx:latest
    ports:
      - "80:80"
    volumes:
      - ./html:/usr/share/nginx/html:ro
    restart: always
EOF
Start the stack:
cd ~/docker-projects/static-site
docker compose up -d
Open your browser and visit http://YOUR_VULTR_IP. You should see the "Running on Vultr with Docker!" page. The container is running in detached mode, so it won't block your terminal.
Check container status:
docker compose ps
docker ps
A static site is nice, but let's do something real. Here's a WordPress-capable stack with a database:
mkdir -p ~/docker-projects/wordpress
cd ~/docker-projects/wordpress
cat > ~/docker-projects/wordpress/docker-compose.yml << 'EOF'
version: '3.8'
services:
  db:
    image: mariadb:latest
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: ChangeThisPassword!
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wp_user
      MYSQL_PASSWORD: AnotherStrongPassword!
  wordpress:
    image: wordpress:latest
    volumes:
      - wp_data:/var/www/html
    ports:
      - "80:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: wp_user
      WORDPRESS_DB_PASSWORD: AnotherStrongPassword!
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data:
  wp_data:
EOF
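Hard-coded passwords like the ones above have a way of ending up in version control. Compose can substitute values from an `.env` file sitting next to docker-compose.yml instead — a sketch, with illustrative variable names:

```yaml
# docker-compose.yml excerpt -- ${...} values come from an .env file in
# the same directory (lines like DB_PASSWORD=...); keep .env out of git.
services:
  db:
    environment:
      MYSQL_ROOT_PASSWORD: ${DB_ROOT_PASSWORD}
      MYSQL_PASSWORD: ${DB_PASSWORD}
```

Run `docker compose config` to print the file with substitutions applied and confirm the values resolved.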
Start it:
cd ~/docker-projects/wordpress
docker compose up -d
Wait about 2 minutes for WordPress to initialize, then visit http://YOUR_VULTR_IP. You'll see the WordPress setup wizard. That's a fully functional CMS running in isolated containers — database and web server in separate containers, data persisted in Docker volumes.
Check on the containers:
docker compose ps
docker stats --no-stream
The stats command shows real-time CPU and memory usage for each container. Useful for sizing your plan correctly.
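If stats shows a container creeping toward your 1GB ceiling, you can cap it. Compose v2 accepts per-service limits — a sketch, with illustrative numbers:

```yaml
# docker-compose.yml excerpt: hard caps for a memory-hungry service
services:
  wordpress:
    mem_limit: 512m   # container is OOM-killed if it exceeds this
    cpus: 0.5         # at most half a vCPU
```

Better for one runaway container to die and restart than for the whole VPS to start swapping.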
Docker defaults are not production defaults. Lock it down:
Prevent pulling tampered images by enabling Docker Content Trust (add the line to ~/.bashrc to persist it across shell sessions):
export DOCKER_CONTENT_TRUST=1
Create a daemon configuration file to set resource limits:
sudo nano /etc/docker/daemon.json
Add this content:
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "storage-driver": "overlay2",
  "live-restore": true
}
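A syntax error in daemon.json will stop the Docker daemon from starting at all, so validate the JSON before restarting. Python's stdlib json.tool does the job — shown here against a scratch copy; point it at /etc/docker/daemon.json on your server:

```shell
# Write a scratch copy, then validate it before touching the real daemon
cat > /tmp/daemon.json << 'EOF'
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "storage-driver": "overlay2",
  "live-restore": true
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```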
Restart Docker:
sudo systemctl restart docker
Since Docker writes its own iptables rules, published container ports can bypass UFW (a known quirk). Keep Docker's iptables management explicitly enabled rather than fighting it. Open the daemon config again:
sudo nano /etc/docker/daemon.json
Extend the existing JSON so it reads:
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "10m", "max-file": "3" },
  "storage-driver": "overlay2",
  "live-restore": true,
  "iptables": true
}
Then restart Docker:
sudo systemctl restart docker
Add restart: always to your docker-compose.yml services (as shown above). This tells Docker to bring those containers back online automatically when the server reboots.
Check each running container's restart policy:
docker inspect -f '{{.Name}}: {{.HostConfig.RestartPolicy.Name}}' $(docker ps -q)
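Restart policies only cover containers that exit. A healthcheck lets Docker flag a container that's running but unresponsive — it shows up as "unhealthy" in docker ps. A sketch, assuming the image ships curl:

```yaml
# docker-compose.yml excerpt: probe the service every 30 seconds
services:
  web:
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s
      timeout: 5s
      retries: 3
```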
Three commands you should know:
# View logs for a service
docker compose logs -f wordpress
# Inspect a running container (Compose v2 names them <project>-<service>-<n>)
docker inspect wordpress-wordpress-1
# Check resource usage
docker stats
For production-grade monitoring, consider a dedicated monitoring stack. For now, these commands cover 90% of what you need for a single-server Docker setup.
Here's how a real deployment looks. A FastAPI Python backend with Redis caching:
mkdir -p ~/docker-projects/api
cd ~/docker-projects/api
cat > ~/docker-projects/api/docker-compose.yml << 'EOF'
version: '3.8'
services:
  api:
    build: .
    ports:
      - "8000:8000"
    environment:
      - REDIS_HOST=redis
      - PYTHONUNBUFFERED=1
    restart: always
    depends_on:
      - redis
  redis:
    image: redis:latest
    restart: always
EOF
And the Dockerfile:
cat > ~/docker-projects/api/Dockerfile << 'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
cat > ~/docker-projects/api/requirements.txt << 'EOF'
fastapi
uvicorn[standard]
redis
EOF
cat > ~/docker-projects/api/main.py << 'EOF'
from fastapi import FastAPI
import redis, os

app = FastAPI()
r = redis.Redis(host=os.getenv("REDIS_HOST", "localhost"))

@app.get("/")
def read_root():
    return {"message": "Docker API on Vultr!", "cache_test": r.ping()}

@app.get("/health")
def health():
    return {"status": "ok"}
EOF
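The endpoint above only pings Redis to prove connectivity. Actual caching usually follows the cache-aside pattern — a sketch, where `expensive_lookup` is a placeholder for whatever work you want to avoid repeating:

```python
import json


def expensive_lookup(key):
    # Placeholder for real work: a database query, an upstream API call, ...
    return {"key": key, "value": key.upper()}


def get_with_cache(r, key, ttl=60):
    """Return the cached value for key, computing and caching it on a miss."""
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)        # cache hit: skip the expensive work
    value = expensive_lookup(key)
    r.setex(key, ttl, json.dumps(value))  # store with a TTL so stale data expires
    return value
```

The TTL matters on a 1GB VPS: without expiry, a busy cache slowly eats the RAM your other containers need.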
Build and run:
cd ~/docker-projects/api
docker compose up -d --build
Visit http://YOUR_VULTR_IP:8000. Your containerized API is live with Redis caching. Note that this stack defines no volume for Redis: cached data survives a plain container restart, but not docker compose down or a rebuild. Add a named volume to the redis service if you need persistence.
Docker isn't running. Fix: sudo systemctl start docker. If that fails, the daemon.json syntax might be broken — check it with sudo cat /etc/docker/daemon.json.
Check the logs: docker logs CONTAINER_NAME. Usually a missing environment variable, bad volume mount path, or port already in use.
Docker images are big. Prune unused data: docker system prune -a. The -a flag removes all stopped containers, unused networks, and every image not used by a running container (not just dangling ones). Expect to reclaim 2-5GB.
Something else is using port 80 (often an Nginx installed directly on the host). Stop it: sudo systemctl stop nginx, then restart your Docker stack.
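Before stopping anything, confirm which process actually holds the port. ss ships with iproute2 on Ubuntu; run it with sudo to see process names owned by other users:

```shell
# Show any listener on TCP port 80
ss -ltnp 'sport = :80'
```

Empty output means nothing is bound to port 80 and the conflict is elsewhere (a firewall rule, or the container itself failed to start).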
Once you're running more than 5-10 containers or need zero-downtime deployments, Docker Swarm is a natural next step. It's built into Docker and handles service orchestration without extra infrastructure.
For truly massive scale, Vultr's managed Kubernetes offering removes the operational overhead entirely. But for most projects, a well-configured Docker Compose setup on a single VPS will get you further than you think.
Want to compare Vultr's bare-metal performance against AWS and GCP? Check out our Cloudbet VPS performance comparison for benchmarks.
Ready to containerize your app?