Self-Host Prometheus + Grafana
TL;DR
Prometheus (Apache 2.0, ~55K GitHub stars, Go) scrapes and stores time-series metrics. Grafana (AGPL 3.0, ~63K stars, TypeScript) visualizes them in dashboards. Together they're the industry-standard open source observability stack — used at Netflix, Cloudflare, and thousands of companies. This guide covers the full stack: Prometheus + Grafana + Alertmanager + node_exporter + cAdvisor for complete server and container monitoring.
Key Takeaways
- Prometheus: Apache 2.0, ~55K stars — pull-based metric scraper, PromQL query language
- Grafana: AGPL 3.0, ~63K stars — dashboard visualization, 50+ data sources
- Alertmanager: Routes alerts to Slack, PagerDuty, email based on labels
- node_exporter: Exposes Linux host metrics (CPU, memory, disk, network) for Prometheus
- cAdvisor: Exposes Docker container metrics for Prometheus
- vs Netdata: More customizable, more setup; Netdata is turnkey with less flexibility
Part 1: Full Stack Docker Compose
# docker-compose.yml
services:
  prometheus:
    image: prom/prometheus:latest  # pin to a specific tag in production (see "Keeping Updated")
    container_name: prometheus
    restart: unless-stopped
    ports:
      - "127.0.0.1:9090:9090"  # localhost only — Caddy terminates TLS
    extra_hosts:
      - "host.docker.internal:host-gateway"  # lets Prometheus scrape host-network services like node_exporter
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./prometheus/rules:/etc/prometheus/rules:ro
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--storage.tsdb.retention.time=90d'  # keep 90 days of data
      - '--web.enable-lifecycle'             # allows config reload via HTTP POST

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
    environment:
      GF_SECURITY_ADMIN_USER: admin
      GF_SECURITY_ADMIN_PASSWORD: "${GRAFANA_PASSWORD}"
      GF_USERS_ALLOW_SIGN_UP: "false"
      GF_SERVER_ROOT_URL: "https://grafana.yourdomain.com"
      GF_SERVER_DOMAIN: "grafana.yourdomain.com"
      GF_SMTP_ENABLED: "true"
      GF_SMTP_HOST: "smtp.yourdomain.com:587"
      GF_SMTP_USER: "${SMTP_USER}"
      GF_SMTP_PASSWORD: "${SMTP_PASS}"
      GF_SMTP_FROM_ADDRESS: "grafana@yourdomain.com"

  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    restart: unless-stopped
    ports:
      - "127.0.0.1:9093:9093"
    volumes:
      - ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
      - alertmanager_data:/alertmanager

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node_exporter
    restart: unless-stopped
    pid: host
    network_mode: host  # listens on host port 9100; NOT reachable by compose service name
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: unless-stopped
    privileged: true
    devices:
      - /dev/kmsg:/dev/kmsg
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /cgroup:/cgroup:ro  # not present on all modern distros; remove this line if it errors

volumes:
  prometheus_data:
  grafana_data:
  alertmanager_data:
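Once the config files from Parts 2, 4, and 5 are in place, bring the stack up and confirm every scrape target reports healthy — the /-/healthy and /api/v1/targets endpoints are built into Prometheus:
docker compose up -d
docker compose ps    # all five containers should show "running"
# Liveness check, then per-target health:
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'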
Part 2: Prometheus Config
# prometheus/prometheus.yml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - alertmanager:9093

rule_files:
  - "rules/*.yml"

scrape_configs:
  # Prometheus self-monitoring:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Host metrics — node_exporter runs with host networking, so scrape it via
  # the host gateway (see extra_hosts in docker-compose.yml), not by service name:
  - job_name: 'node'
    static_configs:
      - targets: ['host.docker.internal:9100']
        labels:
          instance: 'server-1'

  # Docker container metrics:
  - job_name: 'cadvisor'
    static_configs:
      - targets: ['cadvisor:8080']

  # Additional servers (add your other hosts here):
  - job_name: 'remote-nodes'
    static_configs:
      - targets:
          - '192.168.1.10:9100'  # server-2 with node_exporter
          - '192.168.1.11:9100'  # server-3 with node_exporter
        labels:
          env: production

  # PostgreSQL (if running postgres_exporter):
  - job_name: 'postgresql'
    static_configs:
      - targets: ['postgres-exporter:9187']

  # Redis (if running redis_exporter):
  - job_name: 'redis'
    static_configs:
      - targets: ['redis-exporter:9121']
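The official Prometheus image bundles promtool, so you can validate this file before loading it — a sketch, run from the directory containing docker-compose.yml:
docker run --rm -v "$(pwd)/prometheus":/etc/prometheus \
  --entrypoint promtool prom/prometheus:latest \
  check config /etc/prometheus/prometheus.yml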
Part 3: HTTPS with Caddy
grafana.yourdomain.com {
    reverse_proxy localhost:3000
}

prometheus.yourdomain.com {
    # Restrict Prometheus to internal access only:
    @external not remote_ip 192.168.0.0/16 10.0.0.0/8
    respond @external 403
    reverse_proxy localhost:9090
}
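If you need Prometheus reachable from outside those IP ranges, one alternative is HTTP basic auth — a sketch, assuming Caddy v2; generate the bcrypt hash with caddy hash-password and substitute the placeholder:
prometheus.yourdomain.com {
    basicauth {
        admin <bcrypt-hash-from-caddy-hash-password>
    }
    reverse_proxy localhost:9090
}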
Part 4: Alert Rules
# prometheus/rules/host.yml
groups:
  - name: host_alerts
    rules:
      - alert: HighCPUUsage
        expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 85
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "High CPU usage on {{ $labels.instance }}"
          description: "CPU usage is {{ $value | humanize }}%"

      - alert: DiskSpaceLow
        # Exclude tmpfs so in-memory mounts don't trigger false alarms:
        expr: (1 - node_filesystem_avail_bytes{fstype!="tmpfs"} / node_filesystem_size_bytes) * 100 > 85
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Disk space low on {{ $labels.instance }}"
          description: "{{ $labels.mountpoint }} is {{ $value | humanize }}% full"

      - alert: HighMemoryUsage
        expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "High memory usage on {{ $labels.instance }}"

      - alert: ServiceDown
        expr: up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Service {{ $labels.job }} is down on {{ $labels.instance }}"

  - name: container_alerts
    rules:
      - alert: ContainerHighCPU
        expr: sum(rate(container_cpu_usage_seconds_total{name!=""}[5m])) by (name) * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Container {{ $labels.name }} high CPU usage"
Part 5: Alertmanager Config
# alertmanager/alertmanager.yml
global:
  resolve_timeout: 5m
  # NOTE: unlike Docker Compose, Alertmanager does NOT expand environment
  # variables in its config file. Paste the real webhook URL here, or mount
  # it as a file and use slack_api_url_file instead.
  slack_api_url: "https://hooks.slack.com/services/XXX/YYY/ZZZ"

route:
  group_by: ['alertname', 'instance']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
  receiver: 'slack-default'
  routes:
    # 'matchers' replaces the deprecated 'match' syntax:
    - matchers:
        - severity = "critical"
      receiver: 'slack-critical'
      repeat_interval: 1h

receivers:
  - name: 'slack-default'
    slack_configs:
      - channel: '#alerts'
        title: '{{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'
        text: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'

  - name: 'slack-critical'
    slack_configs:
      - channel: '#alerts-critical'
        title: '🔴 CRITICAL: {{ range .Alerts }}{{ .Annotations.summary }}{{ end }}'
    pagerduty_configs:
      - routing_key: "your-pagerduty-integration-key"  # no env expansion here either; use routing_key_file for secrets

inhibit_rules:
  - source_matchers:
      - severity = "critical"
    target_matchers:
      - severity = "warning"
    equal: ['alertname', 'instance']
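Before trusting the routing tree with a real incident, validate the file with amtool (bundled in the Alertmanager image) and push a synthetic alert through the v2 API. The alert below is a made-up example; with no endsAt set, it auto-resolves after resolve_timeout:
docker compose exec alertmanager amtool check-config /etc/alertmanager/alertmanager.yml
curl -XPOST http://localhost:9093/api/v2/alerts \
  -H 'Content-Type: application/json' \
  -d '[{"labels": {"alertname": "TestAlert", "instance": "server-1", "severity": "warning"},
        "annotations": {"summary": "Synthetic test alert — safe to ignore"}}]'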
Part 6: Grafana Dashboards
Import community dashboards
- In Grafana, go to Dashboards → New → Import
- Enter a dashboard ID from grafana.com/dashboards:
  - 1860 — Node Exporter Full (host metrics)
  - 893 — Docker and system monitoring
  - 9628 — PostgreSQL Database
  - 11835 — Redis Exporter Dashboard
  - 7362 — cAdvisor Docker metrics
Auto-provision dashboards
# grafana/provisioning/datasources/prometheus.yml
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    isDefault: true
    access: proxy

# grafana/provisioning/dashboards/default.yml
apiVersion: 1
providers:
  - name: 'default'
    folder: ''
    type: file
    options:
      path: /etc/grafana/provisioning/dashboards

# Place dashboard JSON files in:
grafana/provisioning/dashboards/
├── node-exporter.json   ← auto-loaded on startup
└── docker.json
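If you'd rather provision community dashboards than click through the import dialog, the JSON can be pulled from grafana.com's download API. A sketch for dashboard 1860 — exported dashboards commonly reference a ${DS_PROMETHEUS} datasource input, which this substitutes with the provisioned datasource name; inspect the JSON if the import looks broken:
curl -sL https://grafana.com/api/dashboards/1860/revisions/latest/download \
  | sed 's/${DS_PROMETHEUS}/Prometheus/g' \
  > grafana/provisioning/dashboards/node-exporter.json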
Part 7: PromQL Quick Reference
# CPU usage by instance (last 5min average):
100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
# Memory usage %:
(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100
# Disk usage by mount:
(1 - node_filesystem_avail_bytes{fstype!="tmpfs"} / node_filesystem_size_bytes) * 100
# HTTP request rate (if you expose an app with /metrics):
rate(http_requests_total[5m])
# 95th percentile request duration:
histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))
# Container memory usage:
container_memory_usage_bytes{name!=""}
# Container CPU rate:
sum(rate(container_cpu_usage_seconds_total{name!=""}[5m])) by (name)
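Expressions you query often — or alert on — can be precomputed as recording rules so dashboards and alerts share one definition. A minimal sketch using the CPU expression above; the rules/*.yml glob in prometheus.yml picks it up automatically:
# prometheus/rules/recording.yml
groups:
  - name: recording
    rules:
      - record: instance:cpu_usage:percent
        expr: 100 - (avg by(instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)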
Maintenance
# Update stack:
docker compose pull
docker compose up -d

# Reload Prometheus config without a restart (requires --web.enable-lifecycle):
curl -X POST http://localhost:9090/-/reload

# Backup Prometheus data. Volume names are prefixed with the compose
# project (directory) name — assumed here to be "prometheus"; adjust to yours:
tar -czf prometheus-backup-$(date +%Y%m%d).tar.gz \
  $(docker volume inspect prometheus_prometheus_data --format '{{.Mountpoint}}')

# Backup Grafana (dashboards + settings):
tar -czf grafana-backup-$(date +%Y%m%d).tar.gz \
  $(docker volume inspect prometheus_grafana_data --format '{{.Mountpoint}}')

# Check active alerts:
curl -s http://localhost:9090/api/v1/alerts | jq '.data.alerts[]'
Why Self-Host Prometheus + Grafana?
The case for self-hosting Prometheus + Grafana comes down to three practical factors: data ownership, cost at scale, and operational control.
Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting Prometheus + Grafana means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.
Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.
Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.
The honest tradeoff: you're responsible for updates, backups, and availability. For teams running any production workloads, this is familiar territory. For individuals, the learning curve is real but the tooling (Docker, Caddy, automated backups) is well-documented and widely supported.
Server Requirements and Sizing
Before deploying Prometheus + Grafana, assess your server capacity against expected workload.
Minimum viable setup: A 1 vCPU, 1GB RAM VPS with 20GB SSD is sufficient for personal use or small teams. Most consumer VPS providers — Hetzner, DigitalOcean, Linode, Vultr — offer machines in this range for $5-10/month. Hetzner offers excellent price-to-performance for European and US regions.
Recommended production setup: 2 vCPUs with 4GB RAM and 40GB SSD handles most medium deployments without resource contention. This gives Prometheus + Grafana headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.
Storage planning: The Docker volumes in this docker-compose.yml store all persistent Prometheus + Grafana data. Estimate your storage growth rate early — for data-intensive tools, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.
Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.
Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.
Backup and Disaster Recovery
Running Prometheus + Grafana without a tested backup strategy is an unacceptable availability risk. Docker volumes are not automatically backed up — if you delete a volume or the host fails, data is gone with no recovery path.
What to back up: The named Docker volumes holding persistent state (the Prometheus TSDB in prometheus_data, Grafana's SQLite database and plugins in grafana_data, Alertmanager's silences in alertmanager_data), your docker-compose.yml and any customized configuration files, and .env files containing secrets.
Backup approach: For simple setups, stop the container, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or database dump commands (PostgreSQL pg_dump, SQLite .backup, MySQL mysqldump) that produce consistent backups without downtime.
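A minimal offline-backup sketch for the Grafana volume — it assumes the compose project is named prometheus, so the volume is prometheus_grafana_data; adjust to your project name:
docker compose stop grafana
docker run --rm \
  -v prometheus_grafana_data:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/grafana-$(date +%Y%m%d).tar.gz -C /data .
docker compose start grafana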
For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.
Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.
Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your Prometheus + Grafana backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.
Security Hardening
Self-hosting means you are responsible for Prometheus + Grafana's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.
Always use a reverse proxy: Never expose Grafana's or Prometheus's ports directly to the internet. The docker-compose.yml above binds them to 127.0.0.1; Caddy or Nginx provides HTTPS termination. Direct HTTP access transmits credentials in plaintext. A reverse proxy also centralizes TLS management, rate limiting, and access logging.
Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.
Firewall configuration:
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.
Network isolation: Docker Compose named networks keep Prometheus + Grafana's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.
VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.
Update discipline: Subscribe to Prometheus + Grafana's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.
Troubleshooting Common Issues
Container exits immediately or won't start
Check logs first — they almost always explain the failure:
docker compose logs -f prometheus
Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change the port mapping in docker-compose.yml.
Cannot reach the web interface
Work through this checklist:
- Confirm the container is running: docker compose ps
- Test locally on the server: curl -I http://localhost:PORT
- If local access works but external doesn't, check your firewall: ufw status
- If using a reverse proxy, verify it's running and the config is valid: caddy validate --config /etc/caddy/Caddyfile
Permission errors on volume mounts
Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the image's expected UID (the Grafana image runs as UID 472; the Prometheus image runs as nobody, UID 65534), and apply correct ownership, for example:
chown -R 472:472 /var/lib/docker/volumes/prometheus_grafana_data/_data
High resource usage over time
Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbound cache, or accumulated data needing pruning. Check current usage with docker stats prometheus. Add resource limits in docker-compose.yml to prevent one container from starving others, as shown in the sketch below. For ongoing visibility into resource trends, the cAdvisor metrics this stack already collects cover per-container usage.
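A sketch of per-service caps in docker-compose.yml — mem_limit and cpus are honored by Docker Compose v2 without Swarm:
services:
  prometheus:
    mem_limit: 2g   # hard cap; the kernel OOM-kills the container above this
    cpus: 1.5       # at most one and a half CPU cores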
Data disappears after container restart
Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.
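To confirm where a container's data actually lands, inspect its mounts and compare against the path the application documents:
docker inspect -f '{{ json .Mounts }}' grafana | jq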
Keeping Prometheus + Grafana Updated
Prometheus + Grafana follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:
docker compose pull # Download updated images
docker compose up -d # Restart with new images
docker image prune -f # Remove old image layers (optional)
Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.
Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
Post-update verification: After updating, confirm both services are healthy. Prometheus exposes /-/healthy and /-/ready; Grafana exposes /api/health — curl them from the server or monitor them with your uptime tool.
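curl -f http://localhost:9090/-/healthy    # Prometheus liveness
curl -f http://localhost:3000/api/health   # Grafana (JSON with version and database status)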
Frequently Asked Questions
How much does it cost to self-host Prometheus + Grafana?
The primary cost is your server. A Hetzner CAX11 (2 vCPU ARM, 4GB RAM) runs about $5/month — enough for Prometheus + Grafana plus a few companion services. Add a domain ($12/year) and you're under $75/year for a complete self-hosted deployment. Compare that to SaaS pricing that typically starts at $5-15/user/month.
Can I run Prometheus + Grafana on a VPS with other services?
Yes. The docker-compose.yml above isolates Prometheus + Grafana on its own named Docker network. As long as your server has sufficient RAM and disk — 4GB RAM and 20GB disk handles most combinations — running multiple self-hosted services on one VPS is both practical and common. Tools like Dozzle and Portainer make monitoring multi-container setups manageable.
How do I migrate data if I switch servers?
Stop the Prometheus + Grafana container, export the Docker volumes (using docker run --rm -v VOLUME:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz /data), transfer to the new server, restore the volumes, and update your DNS. Most migrations complete in under an hour. Test the restoration on the new server before decommissioning the old one.
What happens if Prometheus + Grafana releases a breaking update?
Pin your docker-compose.yml images to specific tags (e.g., image: grafana/grafana:11.1.0 and image: prom/prometheus:v2.53.0 instead of latest). Subscribe to each project's GitHub releases page for advance notice. When you're ready to upgrade, read the release notes, back up first, test on a staging instance, then update production.
Is Prometheus + Grafana suitable for production use?
Yes, with the hardening described above: reverse proxy for HTTPS, firewall rules, regular backups, and a pinned image tag. Many teams run Prometheus + Grafana in production successfully. The main requirement is treating your self-hosted instance with the same operational discipline you'd apply to any business-critical service.
See all open source monitoring tools at OSSAlt.com/categories/devops.