Self-Host Dokploy: Deploy Without Vercel or Netlify
TL;DR
Dokploy is a free, open-source PaaS that uses Docker Swarm and Traefik to deploy web applications, databases, and full Docker Compose stacks without paying Vercel or Netlify. It launched in 2024 and has grown past 26,000 GitHub stars. Installation takes about two minutes with a single curl command. Its UI is frequently praised as cleaner and more intuitive than older alternatives like CapRover, and its built-in monitoring (CPU, memory, network) is stronger out of the box than Coolify's.
Key Takeaways
- Dokploy: 26K+ GitHub stars, Apache 2.0, launched 2024 — fastest-growing self-hosted PaaS
- Single-command install: curl -sSL https://dokploy.com/install.sh | sh
- Built on Docker Swarm: multi-node support without Kubernetes complexity
- Traefik integration: automatic HTTPS, routing, load balancing
- Deploy anything: Node.js, Next.js, Go, PHP, Python, static sites, Docker Compose stacks
- Built-in monitoring: real-time CPU, memory, storage, network per service
- OpenAPI/Swagger documented: REST API with JWT authentication
- Requirements: Ubuntu 20.04+, 2GB RAM minimum (4GB recommended), 1 vCPU
Dokploy vs. Coolify vs. CapRover
| Feature | Dokploy | Coolify v4 | CapRover |
|---|---|---|---|
| GitHub Stars | 26K+ | 50K+ | 14K+ |
| Year launched | 2024 | 2021 (v4: 2024) | 2018 |
| UI quality | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐ |
| Container tech | Docker Swarm | Docker | Docker |
| Traefik routing | ✅ | ❌ (Caddy) | ❌ (nginx) |
| Multi-node | ✅ Swarm | ✅ Multi-server | ✅ |
| Nixpacks | ❌ | ✅ | ❌ |
| One-click templates | 100+ | 280+ | 100+ |
| Built-in monitoring | ✅ Best | ⚠️ Basic | ⚠️ Via Netdata |
| API documentation | ✅ Swagger | Limited | ✅ |
| RAM idle | ~300MB | ~400MB | ~200MB |
| Buildpacks | ❌ | ✅ | ✅ |
| Preview deployments | ✅ | ✅ | ❌ |
Dokploy's main advantage over Coolify: cleaner UI and better built-in monitoring. Coolify's advantage: 280+ one-click templates and Nixpacks support. CapRover is the most resource-efficient if RAM is tight.
Installation
One-Line Install
# Run on Ubuntu 20.04+ / Debian 12
# Requires root (sudo)
curl -sSL https://dokploy.com/install.sh | sh
The script:
- Installs Docker if not present
- Configures Docker Swarm mode (single-node to start)
- Deploys Dokploy as a Docker stack
- Opens the web UI on port 3000
Access http://your-server-ip:3000 to complete initial setup.
Manual Install via Docker
# If you prefer manual control:
# 1. Initialize Docker Swarm
docker swarm init --advertise-addr $(hostname -I | awk '{print $1}')
# 2. Create the Dokploy overlay network
docker network create --driver overlay dokploy-network
# 3. Deploy the Dokploy stack (process substitution requires bash)
docker stack deploy -c <(curl -fsSL https://dokploy.com/docker-compose.yml) dokploy
# 4. Check deployment status
docker service ls
# Should show: dokploy_traefik, dokploy_dokploy running
Firewall Configuration
# Open required ports
ufw allow 22/tcp    # SSH (keep this!)
ufw allow 80/tcp    # HTTP
ufw allow 443/tcp   # HTTPS
ufw allow 3000/tcp  # Dokploy dashboard (restrict after domain setup)
ufw allow 2377/tcp  # Docker Swarm manager (if adding worker nodes)
ufw allow 7946      # Docker Swarm node communication (TCP and UDP)
ufw allow 4789/udp  # Docker Swarm overlay network (VXLAN)
ufw enable
Deploying Applications
Deploying a Node.js/Next.js App
1. Dashboard → Projects → New Project
2. New Service → Application
3. Select: GitHub / GitLab / Bitbucket
4. Choose your repository and branch
5. Configure:
Build Command: npm run build
Start Command: npm run start
Port: 3000
6. Click Deploy
Dokploy detects common frameworks automatically. For Next.js apps, the default settings usually work without modification.
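If a deployment fails at the build step, reproducing the configured commands locally usually isolates the problem faster than re-triggering deploys. A minimal sketch, assuming a standard npm-based project:

```shell
# Run the same commands Dokploy was configured with (sketch)
npm ci                    # clean install from the lockfile
npm run build             # the configured Build Command
PORT=3000 npm run start   # the configured Start Command; app should listen on 3000
```

If this succeeds locally but fails in Dokploy, the difference is usually a missing environment variable or a Node version mismatch in the build image.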
Deploying with Docker Compose
For multi-container apps, paste your docker-compose.yml directly:
# Example: Next.js + PostgreSQL + Redis
version: "3.8"
services:
  app:
    build: .
    environment:
      DATABASE_URL: postgresql://user:pass@db:5432/myapp
      REDIS_URL: redis://redis:6379
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
  db:
    image: postgres:15
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: myapp
    volumes:
      - pg_data:/var/lib/postgresql/data
  redis:
    image: redis:7-alpine
volumes:
  pg_data:
Dokploy converts this to a Docker Swarm stack automatically — volumes, networks, and rolling deploys are handled for you.
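Because the stack runs on Swarm, `deploy` options in the Compose file are honored after conversion. A hedged sketch of settings you might add to the `app` service (the field names are standard Compose-spec options, not Dokploy-specific):

```yaml
services:
  app:
    deploy:
      replicas: 2              # two instances load-balanced by Traefik
      update_config:
        order: start-first     # start the new task before stopping the old one
      restart_policy:
        condition: on-failure  # restart only on non-zero exit
```

The `start-first` update order is what gives you zero-downtime rolling deploys on a single node.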
Environment Variables
# Via UI: Application → Environment → Add Variable
# Or via API:
curl -X POST "https://your-dokploy.domain/api/application.env.create" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "applicationId": "your-app-id",
    "env": "DATABASE_URL=postgresql://...\nREDIS_URL=redis://..."
  }'
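The `env` field in that payload is a single string with literal `\n` separators between variables. A small helper (our own convenience function, not part of Dokploy) can build it from individual KEY=VALUE arguments:

```shell
#!/bin/sh
# Join KEY=VALUE arguments with literal \n separators for the "env" field.
join_env() {
  printf '%s\\n' "$@" | sed 's/\\n$//'   # append \n per arg, drop the trailing one
}

JSON_ENV=$(join_env "DATABASE_URL=postgresql://user:pass@db:5432/myapp" \
                    "REDIS_URL=redis://redis:6379")
printf '%s\n' "$JSON_ENV"
```

The resulting string can be interpolated into the `-d` payload of the curl call above.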
Custom Domain + Automatic HTTPS
Dokploy uses Traefik to handle routing and Let's Encrypt certificates:
1. DNS: Point your domain A record to your server IP
app.yourdomain.com → YOUR_SERVER_IP
2. In Dokploy: Application → Domains → Add Domain
Domain: app.yourdomain.com
HTTPS: Enable (auto Let's Encrypt)
3. Traefik requests the certificate automatically
- Certificate issued in ~30 seconds
- Auto-renewed before expiry
Wildcard Certificates
# For *.yourdomain.com wildcard cert (requires DNS challenge):
# Configure in Dokploy: Settings → Traefik → Certificates
# With Cloudflare DNS:
CLOUDFLARE_DNS_API_TOKEN=your-token
# Add to Traefik configuration in Dokploy settings:
# certificatesResolvers.cloudflare.acme.dnsChallenge.provider=cloudflare
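In Traefik's own static-configuration syntax, that resolver corresponds to roughly the following. This is a sketch: the email, storage path, and resolver name are placeholders to adapt to whatever Dokploy's Traefik settings expose.

```yaml
# Traefik static config sketch for a Cloudflare DNS challenge
certificatesResolvers:
  cloudflare:
    acme:
      email: you@yourdomain.com          # placeholder contact email
      storage: /etc/traefik/acme.json    # placeholder certificate store
      dnsChallenge:
        provider: cloudflare             # reads CLOUDFLARE_DNS_API_TOKEN from env
```

The DNS challenge works even when port 80 is unreachable, which is why it is required for wildcard certificates.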
Managed Databases
Dokploy provisions databases as Swarm services with persistent volumes:
Available databases:
PostgreSQL (12, 14, 15, 16)
MySQL (5.7, 8.0)
MariaDB
MongoDB (5, 6, 7)
Redis
Redis Sentinel (HA)
Provisioning:
Dashboard → New Service → Database → Select type
→ Generates connection string
→ Mounts persistent volume automatically
→ Backups to S3 (configure in settings)
# Connect to a provisioned PostgreSQL instance:
# Dokploy shows the internal connection string for your services:
# postgresql://user:generated-pass@postgres.yourdomain.internal:5432/db
# External access (if needed):
# Dashboard → Database → Enable External Port
# Warning: restrict by IP if exposing externally
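To sanity-check connectivity from a shell, assuming the `psql` client is installed (the hostnames and ports are placeholders; copy the real values from the dashboard):

```shell
# Internal (from another service attached to the dokploy-network):
psql "postgresql://user:generated-pass@<internal-host>:5432/db" -c "SELECT 1;"

# External (only after enabling an external port and restricting source IPs):
psql "postgresql://user:generated-pass@YOUR_SERVER_IP:<external-port>/db" -c "SELECT 1;"
```

A successful `SELECT 1` confirms routing, credentials, and the database itself in one step.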
Adding Worker Nodes (Multi-Server)
Docker Swarm makes horizontal scaling straightforward:
# On your primary Dokploy server, get the join token:
docker swarm join-token worker
# Output:
# docker swarm join --token SWMTKN-1-xxxxx YOUR_MANAGER_IP:2377
# On each new worker server:
# 1. Install Docker
curl -fsSL https://get.docker.com | sh
# 2. Join the swarm (paste the command from above)
docker swarm join --token SWMTKN-1-xxxxx MANAGER_IP:2377
# 3. In Dokploy dashboard → Servers → the new node appears
# 4. Assign services to specific nodes or let Swarm schedule automatically
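To pin workloads to particular machines instead of letting Swarm schedule freely, standard Docker node labels and constraints apply. The label and service names here are illustrative:

```shell
docker node ls                                         # list node IDs and roles
docker node update --label-add tier=worker <NODE_ID>   # tag the new node

# Constrain an existing service to labeled nodes:
docker service update --constraint-add 'node.labels.tier == worker' <SERVICE_NAME>
```

This is useful for keeping databases on a node with fast local disks while stateless apps float across the rest of the swarm.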
Monitoring
Dokploy's built-in monitoring shows real-time metrics per service:
Dashboard → Application → Monitoring tab:
CPU usage (%) — line chart, 1-hour history
Memory usage (MB) — with container limit shown
Disk I/O (MB/s) — read/write
Network I/O (MB/s) — inbound/outbound
For full observability stack, add Grafana + Prometheus via template:
Dashboard → Templates → Monitoring → Grafana + Prometheus
→ Deploys both services, auto-connects to Dokploy metrics
Backup Configuration
# Configure S3-compatible backups for databases
# Dokploy → Settings → S3 Backup
# Compatible with:
# AWS S3
# Cloudflare R2 (zero egress fees)
# MinIO (self-hosted)
# Backblaze B2
# Example with Cloudflare R2:
BUCKET_NAME=dokploy-backups
ENDPOINT=https://ACCOUNT_ID.r2.cloudflarestorage.com
ACCESS_KEY_ID=your-r2-access-key
SECRET_ACCESS_KEY=your-r2-secret-key
REGION=auto
# Set backup schedule: daily at 2am
# Retention: 7 days (or custom)
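Once backups are scheduled, it's worth confirming objects are actually landing in the bucket. A sketch using the AWS CLI pointed at the R2 endpoint above (assumes the CLI is configured with the same access key pair):

```shell
# List backup objects in the R2 bucket; the endpoint flag is required for R2
aws s3 ls "s3://dokploy-backups/" \
  --endpoint-url "https://ACCOUNT_ID.r2.cloudflarestorage.com"
```

If the listing is empty after the first scheduled run, check the Dokploy logs for credential or endpoint errors before assuming backups exist.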
Cost Comparison
Monthly costs for a typical startup (5-10 apps, 1-3 databases):
Vercel Pro + Netlify (hybrid): $40-150/month + overages
Railway: $20-80/month
Render: $25-100/month
Dokploy on Hetzner CPX21: €3.79/month (~$4)
Dokploy on Hetzner CPX31: €7.49/month (~$8)
Dokploy on Hetzner CCX23: €15.59/month (~$17, dedicated CPU)
Annual savings vs. the Vercel/Netlify hybrid: roughly $430-1,750/year
Why Self-Host Dokploy?
The case for self-hosting Dokploy comes down to three practical factors: data ownership, cost at scale, and operational control.
Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting Dokploy means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.
Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.
Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.
The honest tradeoff: you're responsible for updates, backups, and availability. For teams already running production workloads, this is familiar territory. For individuals, the learning curve is real, but the tooling (Docker, Traefik, automated backups) is well-documented and widely supported.
Server Requirements and Sizing
Before deploying Dokploy, assess your server capacity against expected workload.
Minimum viable setup: A 1 vCPU, 2GB RAM VPS with 20GB SSD meets Dokploy's stated minimum and is sufficient for personal use or small teams. Most consumer VPS providers (Hetzner, DigitalOcean, Linode, Vultr) offer machines in this range for $5-10/month; Hetzner offers excellent price-to-performance in European and US regions.
Recommended production setup: 2 vCPUs with 4GB RAM and 40GB SSD handles most medium deployments without resource contention. This gives Dokploy headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.
Storage planning: The Docker volumes in this docker-compose.yml store all persistent Dokploy data. Estimate your storage growth rate early — for data-intensive tools, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.
Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.
Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.
Backup and Disaster Recovery
Running Dokploy without a tested backup strategy is an unacceptable availability risk. Docker volumes are not automatically backed up — if you delete a volume or the host fails, data is gone with no recovery path.
What to back up: The named Docker volumes containing Dokploy's data (database files, user uploads, application state), your docker-compose.yml and any customized configuration files, and .env files containing secrets.
Backup approach: For simple setups, stop the container, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or database dump commands (PostgreSQL pg_dump, SQLite .backup, MySQL mysqldump) that produce consistent backups without downtime.
For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.
Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.
Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your Dokploy backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.
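The monthly drill can be scripted in a few lines. A hedged sketch for a PostgreSQL dump (the container name, port, and dump filename are placeholders):

```shell
# Spin up a throwaway Postgres on a non-production port
docker run -d --name restore-test -p 5433:5432 \
  -e POSTGRES_PASSWORD=test postgres:15
sleep 10                                             # wait for initialization

docker exec -i restore-test psql -U postgres < latest-backup.sql
docker exec restore-test psql -U postgres -c '\dt'   # expected tables should appear

docker rm -f restore-test                            # clean up the drill container
```

If the restore errors out or tables are missing, you have found a backup problem on your schedule rather than during an outage.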
Security Hardening
Self-hosting means you are responsible for Dokploy's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.
Always use a reverse proxy: Never leave Dokploy's dashboard exposed on port 3000 longer than necessary. Dokploy ships with Traefik for HTTPS termination: assign a domain to the dashboard in Dokploy's settings, then close port 3000 in the firewall. Direct HTTP access transmits credentials in plaintext, and the proxy also centralizes TLS management, rate limiting, and access logging.
Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.
Firewall configuration:
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.
Network isolation: Docker Compose named networks keep Dokploy's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.
VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.
Update discipline: Subscribe to Dokploy's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.
Troubleshooting Common Issues
Container exits immediately or won't start
Check logs first — they almost always explain the failure:
docker compose logs -f dokploy
Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change Dokploy's port mapping in docker-compose.yml.
Cannot reach the web interface
Work through this checklist:
- Confirm the container is running: docker compose ps
- Test locally on the server: curl -I http://localhost:PORT
- If local access works but external doesn't, check your firewall: ufw status
- If using a reverse proxy, verify it's running and the config is valid: caddy validate --config /etc/caddy/Caddyfile
Permission errors on volume mounts
Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the tool's documentation for its expected UID, and apply correct ownership:
chown -R 1000:1000 /var/lib/docker/volumes/your_volume/_data
High resource usage over time
Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbound cache, or accumulated data needing pruning. Check current usage with docker stats dokploy. Add resource limits in docker-compose.yml to prevent one container from starving others. For ongoing visibility into resource trends, deploy Prometheus + Grafana or Netdata.
Data disappears after container restart
Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.
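To see which paths inside a container are actually backed by volumes, `docker inspect` lists the mounts (the container name is a placeholder):

```shell
# Print host-path -> container-path for every mount on a container
docker inspect <container> \
  --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}'
```

Compare the right-hand destinations against the directory the application's documentation says it writes to; any mismatch means writes are landing in the ephemeral container layer.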
Keeping Dokploy Updated
Dokploy follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:
docker compose pull # Download updated images
docker compose up -d # Restart with new images
docker image prune -f # Remove old image layers (optional)
Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.
Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
Post-update verification: After updating, confirm Dokploy is functioning correctly. Most services expose a /health endpoint that returns HTTP 200 — curl it from the server or monitor it with your uptime tool.
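A quick smoke test after `docker compose up -d`, assuming the service exposes a /health route (the domain and path are placeholders):

```shell
# Expect "200"; -f makes curl exit non-zero on HTTP errors, so this works in scripts
curl -fsS -o /dev/null -w '%{http_code}\n' "https://app.yourdomain.com/health"
```

Wiring this into a cron job or uptime monitor catches failed updates within minutes instead of waiting for user reports.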
Compare all self-hosted deployment options at OSSAlt.
Related: Coolify vs Dokploy vs CapRover 2026 · Self-Host Coolify
See open source alternatives to Vercel on OSSAlt.