Self-Host n8n: The Zapier Alternative 2026
Zapier charges $600/month for 2,000 tasks. n8n gives you unlimited tasks, unlimited workflows, and 400+ integrations for the cost of a $5–15/month VPS. It's the most popular open-source workflow automation tool in the world — 100,000+ GitHub stars — and it's built for developers who want code-level control alongside visual workflow building.
This guide walks through production-grade self-hosting: Docker Compose with PostgreSQL, nginx reverse proxy, and the configuration options that matter for real workloads.
Quick Verdict
Use Zapier if automation is an occasional tool for non-technical teammates who need zero setup. Self-host n8n if you're running any significant number of workflows, paying Zapier more than $50/month, or want AI agent capabilities, custom code nodes, and full data ownership.
Why n8n Over Zapier
| Factor | n8n (self-hosted) | Zapier |
|---|---|---|
| Cost | ~$10/mo infrastructure | $20–600+/mo |
| Workflows | Unlimited | Capped by plan |
| Executions | Unlimited | 750–50,000+/mo |
| Code nodes | ✅ JavaScript + Python | ❌ |
| AI agent nodes | ✅ (built-in, any LLM) | Limited |
| Self-host | ✅ | ❌ |
| Custom integrations | ✅ (HTTP + code) | ❌ |
| Data stays on your server | ✅ | ❌ |
| Webhooks | Unlimited | Capped |
| Sub-workflows | ✅ | Limited |
At scale, the difference is stark: a company running 50,000 executions/month pays $0 extra on self-hosted n8n vs $400–600/month on Zapier.
Requirements
- Docker + Docker Compose installed
- A VPS or server (2GB RAM minimum; 4GB recommended for heavy workloads)
- A domain name with DNS access
- Basic comfort with the command line
Step 1: Production Docker Compose Setup
The default SQLite setup works for testing. For production, use PostgreSQL — it handles concurrency, large workflow histories, and doesn't lock during writes.
```yaml
# docker-compose.yml
version: "3.8"

services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: n8n
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U n8n"]
      interval: 10s
      retries: 5

  n8n:
    image: docker.n8n.io/n8nio/n8n:latest  # pin a specific tag for production
    restart: unless-stopped
    ports:
      # Bind to localhost only; the reverse proxy (Step 2) handles public HTTPS
      - "127.0.0.1:5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_HOST}/
      - GENERIC_TIMEZONE=America/New_York
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres_data:
  n8n_data:
```
Create your .env file:
```
# .env
POSTGRES_PASSWORD=choose_a_strong_password_here
N8N_HOST=n8n.yourdomain.com
# .env files don't run shell commands — generate with `openssl rand -hex 32`
# and paste the literal value:
N8N_ENCRYPTION_KEY=paste_64_char_hex_value_here
```
The N8N_ENCRYPTION_KEY encrypts credentials stored in the database. Generate this once and never change it — changing it invalidates all stored credentials and you'll need to re-enter every integration's auth.
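Because `.env` files are read literally (no command substitution), one option is a small bootstrap script that generates the secrets and writes the file. A sketch, assuming `openssl` is installed and using the placeholder hostname from above:

```shell
#!/usr/bin/env sh
# Sketch: bootstrap .env with freshly generated secrets.
# n8n.yourdomain.com is a placeholder host — replace with your own.
set -eu
cat > .env <<EOF
POSTGRES_PASSWORD=$(openssl rand -base64 24)
N8N_HOST=n8n.yourdomain.com
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
EOF
chmod 600 .env
echo "Wrote .env"
```

Store a copy of the generated encryption key somewhere safe (a password manager) — losing it means losing access to every stored credential.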
Step 2: Set Up HTTPS with Caddy
Create a Caddyfile alongside your docker-compose.yml:
```
n8n.yourdomain.com {
    reverse_proxy n8n:5678
}
```
Add Caddy to your docker-compose.yml:
```yaml
  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
    depends_on:
      - n8n
```
Add caddy_data to your volumes section. Point your domain's A record to your server IP, then:
```bash
docker compose up -d
```
n8n will be accessible at https://n8n.yourdomain.com with automatic SSL.
Step 3: Initial Setup
On first access, n8n prompts you to create an owner account. Complete this immediately — leaving n8n without an account exposes your instance publicly.
After logging in:
- Create a workspace (or use the default)
- Set up credentials — go to Settings → Credentials and configure your integrations (Slack, Gmail, Airtable, etc.)
- Enable community nodes (optional) — Settings → Community Nodes → turn on to install third-party node packages
Step 4: Your First Workflow
n8n workflows consist of trigger nodes (what starts the workflow) and action nodes (what it does).
Example: Slack notification when GitHub issue is created
```
[GitHub Trigger] → [Slack node]
```
- Create new workflow → Add node → Trigger → GitHub
- Configure: Repository, Event: "Issues", Action: "opened"
- Add a Slack node: Channel `#engineering`, Message: `New issue: {{$json.title}} by {{$json.user.login}}`
- Activate the workflow → flip the toggle at top-right
Webhooks are created automatically in your GitHub repo when you activate the workflow.
Core n8n Concepts
Code Nodes
n8n's most powerful feature: run custom JavaScript or Python inside any workflow step.
```javascript
// Code node example: transform and filter data
const items = $input.all();
return items
  .filter(item => item.json.status === 'active')
  .map(item => ({
    json: {
      id: item.json.id,
      name: item.json.name.toUpperCase(),
      processed_at: new Date().toISOString(),
    },
  }));
```
This lets you handle any data transformation, API response parsing, or business logic that no-code nodes can't cover.
AI Agent Nodes (2026 Key Feature)
n8n has first-class AI agent support. Build agents that:
- Use any LLM (OpenAI, Anthropic, Groq, local Ollama)
- Call tools (search the web, query databases, send Slack messages)
- Loop until a task is complete
- Pass results to downstream workflow nodes
```
[Chat Trigger] → [AI Agent node] → [Send Email]
                       ↓
         [Tool: HTTP Request]
         [Tool: Postgres Query]
         [Tool: Slack]
```
Connect to your local Ollama for a fully self-hosted AI automation stack: zero API costs, zero data leaving your server.
Sub-Workflows
Call workflows from other workflows using the Execute Sub-Workflow node. This enables modular automation architecture — shared logic in one workflow called from many others.
Webhooks
Any workflow can start with a Webhook trigger. n8n generates a URL like https://n8n.yourdomain.com/webhook/abc123. Send a POST request to it, and your workflow fires. Essential for integrating with external services that push data (Stripe webhooks, GitHub events, custom apps).
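A quick smoke test of a Webhook trigger is a curl POST. In this sketch the request line is left commented out because the URL above is a placeholder, not a live instance:

```shell
#!/usr/bin/env sh
# Smoke-test payload for an n8n Webhook trigger.
# WEBHOOK_URL is the placeholder n8n generated above — replace with yours.
WEBHOOK_URL="https://n8n.yourdomain.com/webhook/abc123"
PAYLOAD='{"event":"deploy.finished","service":"api","ok":true}'

# Against a live instance, uncomment:
# curl -fsS -X POST "$WEBHOOK_URL" \
#   -H "Content-Type: application/json" \
#   -d "$PAYLOAD"

echo "POST $WEBHOOK_URL"
echo "body: $PAYLOAD"
```

The workflow receives the JSON body under `$json`, so a downstream node can reference `{{$json.service}}` directly.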
400+ Built-In Integrations
n8n's integration library covers every common SaaS tool:
- Productivity: Notion, Airtable, Google Sheets, Coda, Basecamp
- Communication: Slack, Discord, Telegram, Teams, Email (IMAP/SMTP)
- CRM: Salesforce, HubSpot, Pipedrive, Twenty, EspoCRM
- Development: GitHub, GitLab, Jira, Linear, PagerDuty
- Data: PostgreSQL, MySQL, MongoDB, Redis, Elasticsearch
- Files: Google Drive, Dropbox, Nextcloud, S3
- AI: OpenAI, Anthropic, Groq, Hugging Face, Ollama, LangChain
- Finance: Stripe, PayPal, QuickBooks, Xero
For anything not covered, use the HTTP Request node — it can call any REST API with full control over headers, auth, and body.
Production Hardening
Limit Concurrent Executions
Prevent runaway workflows from exhausting server resources:
```
# Add to .env
# (EXECUTIONS_PROCESS=own was removed in n8n v1; the concurrency limit
# is the supported control)
N8N_CONCURRENCY_PRODUCTION_LIMIT=20
```
Configure Execution History Pruning
By default, n8n keeps all execution logs. On high-volume instances this grows quickly:
```
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=168            # Keep last 7 days (value is in hours)
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_SUCCESS=none   # Don't save successful runs (saves disk)
```
SMTP for Email Notifications
```
N8N_EMAIL_MODE=smtp
N8N_SMTP_HOST=smtp.yourprovider.com
N8N_SMTP_PORT=587
N8N_SMTP_USER=your@email.com
N8N_SMTP_PASS=your_password
N8N_SMTP_SENDER=n8n@yourdomain.com
```
Backup Your Instance
Back up two things: PostgreSQL data and n8n credentials/workflow files.
```bash
#!/bin/bash
# Daily backup — adjust the container and volume names to match
# `docker compose ps` and `docker volume ls` (Compose prefixes both
# with the project/directory name).
DATE=$(date +%Y-%m-%d)

# Database backup
docker exec n8n-postgres-1 pg_dump -U n8n n8n > "/opt/backups/n8n-db-$DATE.sql"

# n8n data volume
docker run --rm -v n8n_data:/data -v /opt/backups:/backup \
  alpine tar czf "/backup/n8n-data-$DATE.tar.gz" /data
```
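Backups accumulate forever without a retention policy. A companion pruning helper is sketched below — the `n8n-*` filename pattern matches the backup script, and the 30-day window is an assumption to adjust:

```shell
#!/usr/bin/env sh
# Delete n8n backup files older than a given number of days.
# Usage: prune_backups /opt/backups 30
prune_backups() {
  dir="$1"
  days="$2"
  # -mtime +N matches files last modified more than N days ago
  find "$dir" -name 'n8n-*' -type f -mtime +"$days" -print -delete
}

# Example cron entry (daily at 03:30, path is hypothetical):
# 30 3 * * * /usr/local/bin/prune-backups.sh
```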
n8n vs Make vs Zapier: The Full Picture
| | n8n (self-hosted) | Make | Zapier |
|---|---|---|---|
| Free tier | Unlimited (self-host) | 1,000 ops/mo | 100 tasks/mo |
| Paid plans | Enterprise only | $9–$29+/mo | $20–$600+/mo |
| Code nodes | ✅ JS + Python | ❌ | ❌ |
| AI agents | ✅ Native | Via HTTP | Limited |
| Webhook triggers | Unlimited | Capped | Capped |
| Data privacy | Your server | Make servers | Zapier servers |
| Learning curve | Medium | Low | Low |
| Integrations | 400+ | 2,000+ | 6,000+ |
| Self-host option | ✅ | ❌ | ❌ |
n8n has fewer pre-built integrations than Zapier, but the HTTP Request node and Code nodes fill the gap for any technical user. If you're passing data through from Zapier to a custom script anyway, n8n replaces both steps.
Migrating from Zapier
- Audit your Zaps — identify your most-used and most-expensive workflows
- Start with simple triggers — webhook-based Zaps migrate in minutes
- Recreate in n8n — n8n's UI is workflow-centric (closer to Make than Zapier; expect a short learning curve)
- Test in parallel — run Zapier and n8n side-by-side for 1–2 weeks before cancelling
- Import/export — n8n workflows are JSON; you can share and version-control them
Most teams complete the migration in a weekend. The workflows that require re-learning are multi-step ones with complex filters — plan extra time for those.
Scaling Beyond a Single Instance
For teams with high execution volumes or multi-region requirements, n8n supports a queue mode that separates the main process (UI + job scheduling) from worker processes (execution):
```
# Additional env vars for queue mode
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis
QUEUE_BULL_REDIS_PORT=6379
```
Add Redis to your docker-compose and spin up worker containers alongside the main n8n instance. Workers pick up jobs from the Redis queue — this lets you scale horizontally and prevents heavy workflows from blocking the UI.
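A sketch of what the worker and Redis services might look like in docker-compose.yml — service names are assumptions, and workers need the same DB_* credentials and N8N_ENCRYPTION_KEY as the main n8n service:

```yaml
  n8n-worker:
    image: docker.n8n.io/n8nio/n8n:latest
    command: worker
    restart: unless-stopped
    environment:
      # Repeat the DB_* and N8N_ENCRYPTION_KEY values from the main service,
      # plus the queue-mode variables:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - QUEUE_BULL_REDIS_PORT=6379
    depends_on:
      - redis

  redis:
    image: redis:7-alpine
    restart: unless-stopped
```

Scale workers horizontally with `docker compose up -d --scale n8n-worker=3`.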
A single n8n instance handles most teams' needs up to ~100k executions/month. Queue mode becomes relevant at larger volumes or when workflows with long execution times (multi-minute API calls, large data processing) would otherwise block others.
Error Handling and Alerting
n8n's error workflows are a production must. Create a dedicated error-handling workflow:
- Create a new workflow named "Error Handler"
- Add an Error Trigger node
- Add a Slack (or email) notification containing `{{ $json.execution.error.message }}`
- In each production workflow's Settings panel, set Error Workflow to "Error Handler"
Now any workflow that throws an uncaught error will page you via Slack with the workflow name, error message, and execution ID for debugging.
For structured monitoring, pair n8n with Uptime Kuma (another OSS tool) to ping your n8n health endpoint every 60 seconds:
```
GET https://n8n.yourdomain.com/healthz
# Returns: { "status": "ok" }
```
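If you'd rather script the check than run another service, a minimal shell sketch that parses the /healthz body (grep-based to avoid a jq dependency; the alerting line is a commented assumption):

```shell
#!/usr/bin/env sh
# Return success iff a /healthz JSON body reports status "ok".
healthz_ok() {
  printf '%s' "$1" | grep -Eq '"status"[[:space:]]*:[[:space:]]*"ok"'
}

# Live usage (assumes curl and a configured mail command):
# body=$(curl -fsS https://n8n.yourdomain.com/healthz) && healthz_ok "$body" \
#   || echo "n8n health check failed" | mail -s "n8n down" you@example.com
```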
Template Library
n8n's template library (available in-product at Templates → Explore) includes 1,000+ community-contributed workflows. Common starting points:
- Lead enrichment: Enrich new HubSpot contacts with Clearbit or Apollo data
- Invoice processing: Parse PDF invoices with AI, extract line items, push to QuickBooks
- Content pipeline: Monitor RSS feeds, summarize with Claude/GPT, post to Slack
- Database sync: Sync Airtable → PostgreSQL on a schedule
- Incident response: PagerDuty alert → create Linear issue → Slack thread
Templates import as JSON and serve as working starting points — modify for your environment rather than building from scratch.
n8n Cloud vs. Self-Hosted
If managing Docker infrastructure isn't your priority, n8n Cloud is worth knowing:
| | n8n Cloud Starter | n8n Cloud Pro | Self-Hosted |
|---|---|---|---|
| Price | $20/month | $50/month | ~$10/month (VPS) |
| Executions | 2,500/month | 10,000/month | Unlimited |
| Users | 1 | 5 | Unlimited |
| Support | | Priority | Community |
| Setup | Zero | Zero | 30 min |
Self-hosted wins on price at any meaningful execution volume. Cloud makes sense when you want n8n running in 2 minutes with zero ops overhead.
When Self-Hosting n8n Makes Sense
Self-host n8n if:
- You're paying Zapier/Make more than $30/month
- You need custom code in your automations
- Privacy matters (customer data shouldn't pass through third-party servers)
- You want to build AI agent workflows with local or cloud LLMs
- You have a developer on the team who can manage Docker
Don't self-host if:
- Your team is non-technical and needs Zapier's polish and 6,000+ integrations
- You only have 5–10 simple automations with no scaling plans
- Downtime in your automations would be business-critical and you'd rather rely on Zapier's SLA than your own ops
n8n's Fair-Code License
n8n uses a "fair-code" license (Sustainable Use License). It's free for internal use — including internal tools, personal projects, and running automations for your own organization. You cannot host n8n as a paid service for external customers without a commercial license.
For the vast majority of self-hosters, this makes zero practical difference. If you're building an automation SaaS platform on top of n8n, contact n8n for a commercial license.
Why Self-Host n8n?
The case for self-hosting n8n comes down to three practical factors: data ownership, cost at scale, and operational control.
Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting n8n means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.
Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.
Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.
The honest tradeoff: you're responsible for updates, backups, and availability. For teams running any production workloads, this is familiar territory. For individuals, the learning curve is real but the tooling (Docker, Caddy, automated backups) is well-documented and widely supported.
Server Requirements and Sizing
Before deploying n8n, assess your server capacity against expected workload.
Minimum viable setup: A 1 vCPU, 2GB RAM VPS with 20GB SSD is sufficient for personal use or small teams. Most consumer VPS providers — Hetzner, DigitalOcean, Linode, Vultr — offer machines in this range for $5-10/month. Hetzner offers excellent price-to-performance for European and US regions.
Recommended production setup: 2 vCPUs with 4GB RAM and 40GB SSD handles most medium deployments without resource contention. This gives n8n headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.
Storage planning: The Docker volumes in this docker-compose.yml store all persistent n8n data. Estimate your storage growth rate early — for data-intensive tools, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.
Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.
Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.
Backup and Disaster Recovery
Running n8n without a tested backup strategy is an unacceptable availability risk. Docker volumes are not automatically backed up — if you delete a volume or the host fails, data is gone with no recovery path.
What to back up: The named Docker volumes containing n8n's data (database files, user uploads, application state), your docker-compose.yml and any customized configuration files, and .env files containing secrets.
Backup approach: For simple setups, stop the container, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or database dump commands (PostgreSQL pg_dump, SQLite .backup, MySQL mysqldump) that produce consistent backups without downtime.
For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.
Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.
Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your n8n backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.
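To make the monthly drill repeatable, a small helper can locate the newest dump before restoring it into the scratch stack. A sketch — the backup path and scratch container name are assumptions:

```shell
#!/usr/bin/env sh
# Print the newest database dump in a backup directory.
latest_dump() {
  # ls -1t sorts newest-first by modification time
  ls -1t "$1"/n8n-db-*.sql 2>/dev/null | head -n 1
}

# Restore drill against a scratch Postgres (container name is hypothetical):
# dump=$(latest_dump /opt/backups)
# docker exec -i n8n-restore-test psql -U n8n -d n8n < "$dump"
```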
Security Hardening
Self-hosting means you are responsible for n8n's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.
Always use a reverse proxy: Never expose n8n's internal port directly to the internet. The docker-compose.yml binds to localhost; Caddy or Nginx provides HTTPS termination. Direct HTTP access transmits credentials in plaintext. A reverse proxy also centralizes TLS management, rate limiting, and access logging.
Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.
Firewall configuration:
```bash
ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
```
Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.
Network isolation: Docker Compose named networks keep n8n's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.
VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.
Update discipline: Subscribe to n8n's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.
Troubleshooting Common Issues
Container exits immediately or won't start
Check logs first — they almost always explain the failure:
```bash
docker compose logs -f n8n
```
Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change n8n's port mapping in docker-compose.yml.
Cannot reach the web interface
Work through this checklist:
- Confirm the container is running: `docker compose ps`
- Test locally on the server: `curl -I http://localhost:5678`
- If local access works but external doesn't, check your firewall: `ufw status`
- If using a reverse proxy, verify it's running and the config is valid: `caddy validate --config /etc/caddy/Caddyfile`
Permission errors on volume mounts
Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the tool's documentation for its expected UID, and apply correct ownership:
chown -R 1000:1000 /var/lib/docker/volumes/your_volume/_data
High resource usage over time
Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbound cache, or accumulated data needing pruning. Check current usage with docker stats n8n. Add resource limits in docker-compose.yml to prevent one container from starving others. For ongoing visibility into resource trends, deploy Prometheus + Grafana or Netdata.
Data disappears after container restart
Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.
Keeping n8n Updated
n8n follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:
```bash
docker compose pull       # Download updated images
docker compose up -d      # Restart with new images
docker image prune -f     # Remove old image layers (optional)
```
Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.
Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
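A small helper can make tag bumps deliberate and reviewable. This sketch assumes your compose file pins the image line shown in Step 1 (the tag passed in is an example, not a recommendation):

```shell
#!/usr/bin/env sh
# Rewrite the pinned n8n image tag in a compose file.
# Usage: pin_n8n_tag docker-compose.yml 1.70.0
pin_n8n_tag() {
  file="$1"
  tag="$2"
  # Keeps a .bak copy of the previous file for easy rollback
  sed -i.bak -E "s|(image: docker\.n8n\.io/n8nio/n8n:).*|\1${tag}|" "$file"
}

# After editing, review the change, then:
# docker compose pull && docker compose up -d
```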
Post-update verification: After updating, confirm n8n is functioning correctly. n8n exposes a /healthz endpoint that returns HTTP 200 — curl it from the server or monitor it with your uptime tool.
Browse all Zapier alternatives at OSSAlt.
Related: 10 Open-Source Tools to Replace SaaS in 2026 · Coolify vs Vercel: Cost Comparison 2026
See open source alternatives to n8n on OSSAlt.