Docker Tutorial

Vultr Docker Setup: Deploy Containerized Apps in 10 Minutes (2026 Guide)

Published April 15, 2026 · 8 min read · by Vultr Guide Team

Containers have fundamentally changed how we deploy software. Docker lets you package an application with all its dependencies into a single, portable unit that runs consistently everywhere — from your laptop to a production VPS. And when it comes to affordable, high-performance infrastructure for Docker, Vultr is hard to beat.

This guide walks you through a complete Vultr Docker setup on Ubuntu 22.04. By the end, you'll have Docker installed, your first container running, and a practical deployment workflow you can replicate for any project.

Why Run Docker on Vultr?

Before we dive in, let's address the obvious question: why Vultr for Docker specifically?

  • NVMe storage means container image pulls are blazing fast — no waiting 30 seconds for a 500MB image to download
  • $2.50/month entry point is perfect for development, staging, or low-traffic production workloads
  • Global locations let you deploy containers close to your users
  • Snapshots & backups integrate seamlessly with containerized workflows
  • No Docker-specific surcharges or "enterprise" licensing — just clean compute

Use case: A React frontend, Node.js API, and PostgreSQL database — each running in its own container — cost as little as $10/month on Vultr (2 vCPU, 4GB RAM). On AWS ECS or GCP Cloud Run, the same architecture with egress costs can easily run $40-60/month.

Prerequisites

You'll need:

  • A Vultr instance running Ubuntu 22.04
  • SSH access as root or a user with sudo privileges
  • About 10 minutes

Don't have a Vultr instance yet? Our step-by-step Vultr setup guide covers launching your first server in under 5 minutes.

Step 1: Install Docker on Vultr Ubuntu

SSH into your Vultr instance and follow these steps:

1. Update system packages

Always start fresh. Run the following to ensure your system is up to date:

sudo apt update && sudo apt upgrade -y

2. Install Docker dependencies

Docker requires a few packages that aren't included by default:

sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

3. Add Docker's official GPG key and repository

Download Docker's signing key and add the stable repository:

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

4. Install Docker Engine

Now install Docker itself:

sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

5. Verify the installation

Check that Docker is running correctly:

sudo docker run hello-world

If you see the "Hello from Docker!" message, you're all set.

6. Run Docker without sudo (optional but recommended)

Add your user to the Docker group so you don't need sudo for every command:

sudo usermod -aG docker $USER
newgrp docker

The newgrp command applies the change to your current shell; log out and back in for it to apply to all future sessions.

Step 2: Deploy Your First Container

With Docker installed, let's deploy something real. We'll run a lightweight Nginx web server as our first container:

# Pull the official Nginx image
docker pull nginx:latest

# Run it, mapping port 80 on the host to port 80 in the container
docker run -d -p 80:80 --name my-nginx nginx:latest

# Check it's running
docker ps

Visit your server's IP in a browser — you should see the default Nginx welcome page. That's it: a single docker run command gave you a fully functional web server.

Pro tip: Use docker logs -f my-nginx to stream container logs in real time. This is invaluable when debugging issues in production.

Step 3: Use Docker Compose for Multi-Container Apps

Real applications rarely run as a single container. Most need a web app, a database, a cache layer, and potentially a reverse proxy. This is where Docker Compose shines — it lets you define and run multi-container applications with a single YAML file.

Let's deploy a practical example: a Node.js API with PostgreSQL.

docker-compose.yml

A Node.js API + PostgreSQL Stack

version: '3.8'
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://user:password@db:5432/myapp
    depends_on:
      - db
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=myapp
    volumes:
      - pgdata:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  pgdata:

To bring up the entire stack:

docker compose up -d

To stop it:

docker compose down

The depends_on directive controls start order: the database container starts before the API. Note that it does not wait for PostgreSQL to actually accept connections, so the API should retry its initial database connection. The restart: unless-stopped policy automatically restarts containers after crashes and server reboots, which is critical for production reliability.
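If the API must not start until PostgreSQL is actually accepting connections, Compose can gate depends_on on a healthcheck. A sketch of the relevant fragment, reusing the service and credential names from the example above:

```yaml
services:
  api:
    build: ./api
    depends_on:
      db:
        condition: service_healthy   # wait for the db healthcheck to pass

  db:
    image: postgres:16-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d myapp"]
      interval: 5s
      timeout: 3s
      retries: 5
```

With this in place, docker compose up holds the api container back until pg_isready succeeds.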

Step 4: Persist Data with Docker Volumes

One of the most common Docker mistakes is losing data when a container is removed. Containers are ephemeral — when they're gone, their filesystem changes are gone too. Volumes solve this by providing persistent storage that survives container restarts and removals.

# Create a named volume
docker volume create my-data

# Use it in a container
docker run -d -v my-data:/data/db postgres:16-alpine

# Inspect volumes
docker volume ls
docker volume inspect my-data

Critical for production: Never rely solely on Docker volumes for important data. Implement regular backups to external storage. Vultr's block storage and snapshots work well alongside Docker volumes as part of a complete backup strategy.
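A common pattern for backing up a named volume is to mount it read-only into a throwaway Alpine container and tar it up. A minimal sketch (the volume name my-data is illustrative; set DOCKER=echo to dry-run without a Docker daemon):

```shell
# backup_volume: archive a named Docker volume to a dated tarball in the
# current directory. Assumes Docker is installed; DOCKER=echo dry-runs.
backup_volume() {
  volume="${1:-my-data}"
  docker_cmd="${DOCKER:-docker}"
  archive="${volume}-$(date +%Y%m%d).tar.gz"
  # Mount the volume read-only at /source and the working dir at /backup
  "$docker_cmd" run --rm \
    -v "${volume}:/source:ro" \
    -v "$(pwd):/backup" \
    alpine tar czf "/backup/${archive}" -C /source .
}
```

Run it from wherever you want the archive (e.g. backup_volume pgdata), then ship the tarball to Vultr block storage or object storage on a cron schedule.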

Step 5: Secure Your Docker Setup

A few essential security hardening steps for production Docker deployments:

Configure the host firewall

Never expose the Docker daemon's TCP port (2375) to the internet; the plain socket is unauthenticated and grants root-level access to the host. Open only the ports your services need:

# Allow SSH and web traffic only
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw reload

Note that ports published with -p bypass ufw entirely, because Docker inserts its own iptables rules, so avoid publishing ports you don't intend to expose.

Scan images for vulnerabilities

# Pull Trivy (a lightweight vulnerability scanner) from its official image
docker pull aquasec/trivy:latest
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy image nginx:latest

Limit container resources

# Limit a container to 512MB RAM and 0.5 CPU
docker run -d -p 80:80 \
  --memory="512m" \
  --cpus="0.5" \
  --name my-nginx nginx:latest
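The same limits can be declared in Compose so they survive re-deploys. A sketch using the Compose spec's service-level keys (service name is illustrative):

```yaml
services:
  web:
    image: nginx:latest
    mem_limit: 512m   # hard memory cap
    cpus: 0.5         # fraction of a CPU core
```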

Step 6: Set Up a Reverse Proxy (Optional but Recommended)

When running multiple services on one host, you'll want a reverse proxy to route traffic based on domain or path. nginx-proxy, paired with its acme-companion container for Let's Encrypt certificates, makes this almost effortless:

docker network create web

docker run -d -p 80:80 -p 443:443 \
  --name nginx-proxy \
  --network=web \
  -v certs:/etc/nginx/certs \
  -v vhost:/etc/nginx/vhost.d \
  -v html:/usr/share/nginx/html \
  -v /var/run/docker.sock:/tmp/docker.sock:ro \
  nginxproxy/nginx-proxy:latest

docker run -d \
  --name nginx-proxy-acme \
  --network=web \
  --volumes-from nginx-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v acme:/etc/acme.sh \
  nginxproxy/acme-companion:latest

Now any container with VIRTUAL_HOST and LETSENCRYPT_HOST environment variables automatically gets SSL and routing:

docker run -d \
  --network=web \
  -e VIRTUAL_HOST=api.example.com \
  -e LETSENCRYPT_HOST=api.example.com \
  -e LETSENCRYPT_EMAIL=you@example.com \
  --name my-api my-api:latest

Real Example: Deploying a Python FastAPI App

Let's put it all together with a practical example. Here's how to deploy a Python FastAPI application on Vultr with Docker:

1. Create the project structure:

mkdir fastapi-app && cd fastapi-app
mkdir api && cd api

2. Create api/main.py:

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"message": "Hello from FastAPI on Vultr Docker!"}

@app.get("/health")
def health_check():
    return {"status": "healthy"}

3. Create api/requirements.txt:

fastapi
uvicorn[standard]

4. Create api/Dockerfile:

FROM python:3.11-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 3000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "3000"]

5. Create the root docker-compose.yml:

version: '3.8'
services:
  api:
    build: ./api
    ports:
      - "3000:3000"
    restart: unless-stopped
    labels:
      - "com.centurylinklabs.watchtower.enable=true"

6. Deploy:

docker compose up -d --build
docker compose logs -f

Your FastAPI app is now running at http://YOUR_SERVER_IP:3000. The /health endpoint makes it easy to set up uptime monitoring.
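One refinement: the Dockerfile's COPY . . copies everything in api/ into the image. An api/.dockerignore keeps local cruft out of the build context and the final image; the entries below are a common baseline, adjust to taste:

```
__pycache__/
*.pyc
.venv/
.git
.env
```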

Automate Updates with Watchtower

Keeping containers updated manually is tedious. Watchtower automates this: it periodically checks the registry for newer images, pulls them, and recreates the running containers from the new image:

docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower

By default, Watchtower updates every running container. To limit it to specific ones, start Watchtower with the --label-enable flag and add the label com.centurylinklabs.watchtower.enable=true to each container you want it to manage.
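Watchtower can also live in your Compose file so it comes back after reboots. A sketch (--label-enable restricts updates to labeled containers; --interval is the check period in seconds):

```yaml
services:
  watchtower:
    image: containrrr/watchtower:latest
    command: --label-enable --interval 86400
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped
```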

Next Steps

You've now got a production-ready Docker setup on Vultr. From here, you can:

  • Explore Vultr's Kubernetes offerings if you need container orchestration at scale
  • Configure Vultr's built-in monitoring and alerts for your containers
  • Use Vultr's snapshot feature to backup your entire Docker environment

Docker on Vultr gives you enterprise-grade container infrastructure at a fraction of the cost of managed alternatives. Combine it with a simple deployment script and you're deploying to production in minutes, not hours.

Ready to Deploy Your First Docker Container?

Get started with $100 free credit and high-performance NVMe VPS from just $2.50/month.

Deploy Your Vultr Server Now →