How to Self-Host Healthchecks: Cron Job Monitoring 2026
TL;DR
Healthchecks (BSD 3-Clause, ~8K GitHub stars, Python/Django) monitors scheduled tasks by expecting periodic pings. When your nightly backup script or weekly cleanup job fails to check in, Healthchecks sends an alert. Unlike uptime monitoring (which watches if a URL is up), Healthchecks watches for what didn't happen — the silent failures that would otherwise go unnoticed for weeks. Healthchecks.io's hosted service limits you to 20 checks on the free plan. Self-hosted is unlimited.
Key Takeaways
- Healthchecks: BSD 3-Clause, ~8K stars, Python — "dead man's switch" for cron jobs
- Push model: Jobs ping a URL when they complete; Healthchecks alerts if no ping arrives
- Grace period: Configurable window — a job running 5 min late won't trigger a false alert
- Schedules: Simple (every N minutes/hours/days) or cron expression
- Integrations: Email, Slack, Discord, PagerDuty, ntfy, Telegram, webhooks, and many more
- Teams: Multiple team members per project, audit log
Part 1: Docker Setup
# docker-compose.yml
services:
  db:
    image: postgres:15-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: hc
      POSTGRES_USER: hc
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
    volumes:
      - db_data:/var/lib/postgresql/data

  healthchecks:
    image: healthchecks/healthchecks:latest
    container_name: healthchecks
    restart: unless-stopped
    ports:
      - "127.0.0.1:8000:8000"   # bind to localhost only; Caddy handles public HTTPS
    environment:
      SECRET_KEY: "${SECRET_KEY}"
      ALLOWED_HOSTS: "hc.yourdomain.com"
      DEFAULT_FROM_EMAIL: "hc@yourdomain.com"
      EMAIL_HOST: "mail.yourdomain.com"
      EMAIL_PORT: 587
      EMAIL_USE_TLS: "True"
      EMAIL_HOST_USER: "hc@yourdomain.com"
      EMAIL_HOST_PASSWORD: "${EMAIL_PASSWORD}"
      DB: postgres
      DB_HOST: db
      DB_USER: hc
      DB_PASSWORD: "${POSTGRES_PASSWORD}"
      DB_NAME: hc
      SITE_ROOT: "https://hc.yourdomain.com"
      SITE_NAME: "Healthchecks"
      REGISTRATION_OPEN: "False"  # Disable public registration
    depends_on:
      - db

volumes:
  db_data:
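The ${...} references are read from a .env file in the same directory as docker-compose.yml. A minimal sketch, with variable names matching the compose file above (substitute your own values):
# .env (keep out of version control; chmod 600 .env)
SECRET_KEY=change-me            # e.g. output of: openssl rand -base64 32
POSTGRES_PASSWORD=change-me     # e.g. output of: openssl rand -base64 32
EMAIL_PASSWORD=your-smtp-password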
# Start the stack:
docker compose up -d
# Create the initial superuser (once the containers are running):
docker exec -it healthchecks python manage.py createsuperuser
Part 2: HTTPS with Caddy
hc.yourdomain.com {
reverse_proxy localhost:8000
}
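Once Caddy has obtained a certificate, a quick check from any machine confirms the proxy is wired up (expect a 200, or a 302 redirect to the login page):
curl -I https://hc.yourdomain.com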
Part 3: Create Your First Check
Simple schedule
- + New Check
- Name: Nightly Database Backup
- Schedule: Simple → every 1 day
- Grace time: 1 hour (allow the job to run up to 1 hour late before alerting)
- Save
You get a unique URL like:
https://hc.yourdomain.com/ping/UNIQUE-UUID
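You can send a test ping from any shell before wiring it into cron; the check should flip from "New" to "Up" on the dashboard:
curl -fsS -m 10 "https://hc.yourdomain.com/ping/UNIQUE-UUID"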
Cron schedule
For jobs with specific run times:
- Schedule: Cron
- Cron expression: 0 2 * * * (daily at 2:00 AM)
- Timezone: America/Los_Angeles
- Grace time: 30 minutes
Part 4: Integrate with Cron Jobs
Basic ping on success
# /etc/cron.d/backup:
0 2 * * * root /opt/scripts/backup.sh && curl -fsS -m 10 "https://hc.yourdomain.com/ping/YOUR-UUID" >/dev/null
# Breakdown:
# && → only ping if backup.sh succeeds (exit code 0)
# -fsS → fail on HTTP errors (-f), run silently (-s), but still show errors (-S)
# -m 10 → timeout after 10 seconds
# >/dev/null → suppress output
Ping with start signal (detect long-running jobs)
# Signal start:
curl -fsS -m 10 "https://hc.yourdomain.com/ping/YOUR-UUID/start"
# Run the job:
/opt/scripts/slow-job.sh
# Signal success:
curl -fsS -m 10 "https://hc.yourdomain.com/ping/YOUR-UUID"
Healthchecks measures the duration between start and success pings.
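The same pattern fits in a single crontab entry. A sketch, reusing the slow-job.sh path from above (shown wrapped here; keep it on one line in the real crontab):
0 4 * * * root curl -fsS -m 10 "https://hc.yourdomain.com/ping/YOUR-UUID/start" >/dev/null; /opt/scripts/slow-job.sh && curl -fsS -m 10 "https://hc.yourdomain.com/ping/YOUR-UUID" >/dev/null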
Ping with failure signal
#!/bin/bash
# /opt/scripts/backup-with-monitoring.sh
UUID="YOUR-UUID"
HC_URL="https://hc.yourdomain.com/ping/${UUID}"
# Signal start:
curl -fsS -m 10 "${HC_URL}/start"
# Run backup:
if /opt/scripts/backup.sh; then
# Success:
curl -fsS -m 10 "${HC_URL}"
else
# Explicit failure signal:
curl -fsS -m 10 "${HC_URL}/fail"
fi
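Healthchecks also accepts the job's exit status appended to the ping URL: /ping/UUID/0 records success, and any non-zero code records a failure. A sketch of the same backup wrapper using that form (the script path is hypothetical):
#!/bin/bash
# /opt/scripts/backup-exit-status.sh
UUID="YOUR-UUID"
HC_URL="https://hc.yourdomain.com/ping/${UUID}"

# Signal start:
curl -fsS -m 10 "${HC_URL}/start"

# Run the backup and capture its exit code:
/opt/scripts/backup.sh
rc=$?

# Report the exit code (0 = success, anything else = failure) and propagate it to cron:
curl -fsS -m 10 "${HC_URL}/${rc}"
exit $rc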
Ping with log output
# Send last 10KB of job output with the ping:
/opt/scripts/job.sh 2>&1 | tail -c 10000 | curl -fsS -m 10 \
--data-binary @- "https://hc.yourdomain.com/ping/YOUR-UUID"
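A common variant is to attach output only when the job fails. A sketch, assuming a hypothetical /opt/scripts/job.sh:
#!/bin/bash
UUID="YOUR-UUID"
HC_URL="https://hc.yourdomain.com/ping/${UUID}"

# Capture combined stdout/stderr; $? reflects the job's exit status
OUTPUT=$(/opt/scripts/job.sh 2>&1)
if [ $? -eq 0 ]; then
    curl -fsS -m 10 "${HC_URL}"
else
    # Attach the last 10KB of output to the failure ping
    printf '%s' "$OUTPUT" | tail -c 10000 | curl -fsS -m 10 --data-binary @- "${HC_URL}/fail"
fi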
Part 5: Notification Channels
Email
- Settings → Notifications → + Add email integration
- Enter the destination email address
- All alerts for your project go here
Slack
Type: Slack
Webhook URL: https://hooks.slack.com/services/...
Telegram
Type: Telegram
Bot Token: from @BotFather
Chat ID: your chat ID
ntfy (self-hosted push)
Type: ntfy
Topic URL: https://ntfy.yourdomain.com/cron-alerts
Priority: High
PagerDuty
Type: PagerDuty
Routing Key: your-pagerduty-integration-key
Part 6: Check Management
Check status meanings
| Status | Meaning |
|---|---|
| New | Never pinged (newly created) |
| Up | Last ping within schedule + grace |
| Late | Past schedule + grace, not yet alerted |
| Down | Alerted — job missed its window |
| Paused | Temporarily paused (maintenance) |
Pause during maintenance
# Pause a check via API:
curl -X POST "https://hc.yourdomain.com/api/v3/checks/UUID/pause" \
-H "X-Api-Key: your-api-key"
# Resume: the next request to /ping/UUID resumes the check automatically
# (unless the check has "Manual resume" enabled in its settings)
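To pause a whole group before planned maintenance, you can loop over the API's check list. A sketch, assuming the checks are tagged "backup" and that your Healthchecks version includes a pause_url field in the check objects (recent releases do):
API_KEY="your-api-key"
BASE="https://hc.yourdomain.com/api/v3"

# Pause every check carrying the "backup" tag
for url in $(curl -fsS -H "X-Api-Key: $API_KEY" "$BASE/checks/?tag=backup" | jq -r '.checks[].pause_url'); do
    curl -fsS -X POST -H "X-Api-Key: $API_KEY" "$url"
done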
Part 7: REST API
API_KEY="your-api-key"
BASE="https://hc.yourdomain.com/api/v3"
# List all checks:
curl "$BASE/checks/" \
-H "X-Api-Key: $API_KEY" | jq '.[].name'
# Get check status:
curl "$BASE/checks/UUID" \
-H "X-Api-Key: $API_KEY" | jq '.status'
# Create a check programmatically:
curl -X POST "$BASE/checks/" \
-H "X-Api-Key: $API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Weekly Report",
"schedule": "0 9 * * 1",
"tz": "America/Los_Angeles",
"grace": 3600,
"channels": "*"
}' | jq '.ping_url'
# Get check's ping log:
curl "$BASE/checks/UUID/pings/" \
-H "X-Api-Key: $API_KEY" | jq '.[0:5]'
Part 8: Common Use Cases
Backup monitoring
# Database backup (the wrapped entries in this section must each be a single line
# in the real crontab, and % must be escaped as \% inside cron entries):
0 2 * * * root pg_dump -U postgres mydb | gzip > /backups/mydb-$(date +\%Y\%m\%d).sql.gz \
&& curl -fsS "https://hc.yourdomain.com/ping/DB-BACKUP-UUID"
# File backup (restic):
0 3 * * * root restic backup /important/data \
&& curl -fsS "https://hc.yourdomain.com/ping/RESTIC-UUID"
Certificate renewal
# Certbot auto-renewal:
0 0 */2 * * root certbot renew --quiet \
&& curl -fsS "https://hc.yourdomain.com/ping/CERT-UUID"
Data sync jobs
# Nightly sync:
0 4 * * * root /opt/scripts/sync-data.py \
&& curl -fsS "https://hc.yourdomain.com/ping/SYNC-UUID" \
|| curl -fsS "https://hc.yourdomain.com/ping/SYNC-UUID/fail"
Maintenance
# Update:
docker compose pull
docker compose up -d
# Backup (run from the directory containing docker-compose.yml; -T disables the TTY so the pipe stays clean):
docker compose exec -T db pg_dump -U hc hc \
| gzip > healthchecks-db-$(date +%Y%m%d).sql.gz
# Logs:
docker compose logs -f healthchecks
# Prune old ping records beyond each check's retention limit:
docker exec healthchecks python manage.py prunepings
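Pruning can itself be scheduled, and (fittingly) monitored with its own check. A sketch for /etc/cron.d, assuming a hypothetical PRUNE-UUID check (one line in the real crontab):
0 5 * * 0 root docker exec healthchecks python manage.py prunepings && curl -fsS -m 10 "https://hc.yourdomain.com/ping/PRUNE-UUID" >/dev/null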
Why Self-Host Healthchecks?
The case for self-hosting Healthchecks comes down to three practical factors: data ownership, cost at scale, and operational control.
Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting Healthchecks means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.
Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.
Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.
The honest tradeoff: you're responsible for updates, backups, and availability. For teams running any production workloads, this is familiar territory. For individuals, the learning curve is real but the tooling (Docker, Caddy, automated backups) is well-documented and widely supported.
Server Requirements and Sizing
Before deploying Healthchecks, assess your server capacity against expected workload.
Minimum viable setup: A 1 vCPU, 1GB RAM VPS with 20GB SSD is sufficient for personal use or small teams. Most consumer VPS providers — Hetzner, DigitalOcean, Linode, Vultr — offer machines in this range for $5-10/month. Hetzner offers excellent price-to-performance for European and US regions.
Recommended production setup: 2 vCPUs with 4GB RAM and 40GB SSD handles most medium deployments without resource contention. This gives Healthchecks headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.
Storage planning: The Docker volumes in this docker-compose.yml store all persistent Healthchecks data. Estimate your storage growth rate early — for data-intensive tools, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.
Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.
Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.
Backup and Disaster Recovery
Running Healthchecks without a tested backup strategy is an unacceptable availability risk. Docker volumes are not automatically backed up — if you delete a volume or the host fails, data is gone with no recovery path.
What to back up: The named Docker volumes containing Healthchecks's data (database files, user uploads, application state), your docker-compose.yml and any customized configuration files, and .env files containing secrets.
Backup approach: For simple setups, stop the container, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or database dump commands (PostgreSQL pg_dump, SQLite .backup, MySQL mysqldump) that produce consistent backups without downtime.
For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.
Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.
Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your Healthchecks backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.
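A monthly restore test can be as simple as loading the latest dump into a throwaway Postgres container. A sketch, assuming the pg_dump backup produced in the Maintenance section; the api_check table name comes from the upstream schema, so adjust if your version differs:
# Spin up a scratch Postgres with the same user/db names as production
docker run -d --name hc-restore-test -e POSTGRES_USER=hc -e POSTGRES_PASSWORD=test -e POSTGRES_DB=hc postgres:15-alpine
sleep 10  # give Postgres a moment to initialize

# Load the most recent dump and sanity-check that check definitions are present
gunzip -c healthchecks-db-YYYYMMDD.sql.gz | docker exec -i hc-restore-test psql -U hc hc
docker exec hc-restore-test psql -U hc -d hc -c "SELECT count(*) FROM api_check;"

# Clean up
docker rm -f hc-restore-test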
Security Hardening
Self-hosting means you are responsible for Healthchecks's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.
Always use a reverse proxy: Never expose Healthchecks's internal port directly to the internet. The docker-compose.yml binds to localhost; Caddy or Nginx provides HTTPS termination. Direct HTTP access transmits credentials in plaintext. A reverse proxy also centralizes TLS management, rate limiting, and access logging.
Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.
Firewall configuration:
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.
Network isolation: Docker Compose named networks keep Healthchecks's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.
VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.
Update discipline: Subscribe to Healthchecks's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.
Troubleshooting Common Issues
Container exits immediately or won't start
Check logs first — they almost always explain the failure:
docker compose logs -f healthchecks
Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change Healthchecks's port mapping in docker-compose.yml.
Cannot reach the web interface
Work through this checklist:
- Confirm the container is running: docker compose ps
- Test locally on the server: curl -I http://localhost:8000
- If local access works but external doesn't, check your firewall: ufw status
- If using a reverse proxy, verify it's running and the config is valid: caddy validate --config /etc/caddy/Caddyfile
Permission errors on volume mounts
Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the tool's documentation for its expected UID, and apply correct ownership:
chown -R 1000:1000 /var/lib/docker/volumes/your_volume/_data
High resource usage over time
Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbound cache, or accumulated data needing pruning. Check current usage with docker stats healthchecks. Add resource limits in docker-compose.yml to prevent one container from starving others. For ongoing visibility into resource trends, deploy Prometheus + Grafana or Netdata.
Data disappears after container restart
Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.
Keeping Healthchecks Updated
Healthchecks follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:
docker compose pull # Download updated images
docker compose up -d # Restart with new images
docker image prune -f # Remove old image layers (optional)
Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.
Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
Post-update verification: After updating, confirm Healthchecks is functioning correctly. Load the dashboard in a browser, or curl the site root from the server and check for a successful response (see the smoke test below).
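A minimal smoke test (expects a 200, or a 302 redirect to the login page):
curl -sS -o /dev/null -w "%{http_code}\n" https://hc.yourdomain.com/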
Frequently Asked Questions
How much does it cost to self-host Healthchecks?
The primary cost is your server. A Hetzner CAX11 (2 vCPU ARM, 4GB RAM) runs about $5/month — enough for Healthchecks plus a few companion services. Add a domain ($12/year) and you're under $75/year for a complete self-hosted deployment. Compare that to SaaS pricing that typically starts at $5-15/user/month.
Can I run Healthchecks on a VPS with other services?
Yes. The docker-compose.yml above isolates Healthchecks on its own named Docker network. As long as your server has sufficient RAM and disk — 4GB RAM and 20GB disk handles most combinations — running multiple self-hosted services on one VPS is both practical and common. Tools like Dozzle and Portainer make monitoring multi-container setups manageable.
How do I migrate data if I switch servers?
Stop the Healthchecks container, export the Docker volumes (using docker run --rm -v VOLUME:/data -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz /data), transfer to the new server, restore the volumes, and update your DNS. Most migrations complete in under an hour. Test the restoration on the new server before decommissioning the old one.
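Expanded into concrete commands, the migration looks roughly like this. A sketch, assuming the compose project lives in a directory named healthchecks (so the database volume is healthchecks_db_data) and the new host is reachable as newserver:
# On the old server: stop the stack and export the database volume
docker compose down
docker run --rm -v healthchecks_db_data:/data -v "$(pwd)":/backup alpine tar czf /backup/db_data.tar.gz -C /data .
scp db_data.tar.gz docker-compose.yml .env newserver:~/healthchecks/

# On the new server: create the empty volume, restore the data, then start for real
cd ~/healthchecks && docker compose up -d && docker compose down
docker run --rm -v healthchecks_db_data:/data -v "$(pwd)":/backup alpine sh -c "cd /data && tar xzf /backup/db_data.tar.gz"
docker compose up -d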
What happens if Healthchecks releases a breaking update?
Pin your docker-compose.yml to a specific image tag (e.g., a versioned healthchecks/healthchecks release tag instead of latest). Subscribe to the GitHub releases page for advance notice. When you're ready to upgrade, read the release notes, back up first, test on a staging instance, then update production.
Is Healthchecks suitable for production use?
Yes, with the hardening described above: reverse proxy for HTTPS, firewall rules, regular backups, and a pinned image tag. Many teams run Healthchecks in production successfully. The main requirement is treating your self-hosted instance with the same operational discipline you'd apply to any business-critical service.
See also: Uptime Kuma — for monitoring websites and services
See all open source monitoring tools at OSSAlt.com/categories/devops.