
How to Self-Host Directus — Open Source CMS 2026

OSSAlt Team
Tags: directus · headless-cms · self-hosting · data-platform · airtable-alternative

TL;DR

Directus is a "data engine" — it wraps any SQL database (PostgreSQL, MySQL, SQLite, MSSQL, Oracle) and automatically generates a REST API, a GraphQL API, and a visual data management interface around your existing tables. Unlike Strapi or Contentful, Directus doesn't force a proprietary data model — it works with your schema as-is, making it a zero-migration headless CMS layer on top of your existing database. Self-hosting is free under the BSL 1.1 license for organizations below its revenue threshold ($5M/year at the time of writing); larger companies need a commercial license.

Key Takeaways

  • Directus: 28K+ GitHub stars, TypeScript, works with any existing SQL database without migration
  • Dual purpose: headless CMS for content teams + admin data panel for developers — replaces both Contentful and Retool
  • Zero vendor lock-in: data stays in your standard SQL tables, readable by any other tool
  • Flows: no-code/low-code automation builder (like Zapier or n8n, built into Directus)
  • Extensions: custom endpoints, interfaces, displays, modules via npm packages
  • Alternatives: Strapi (opinionated schema builder), Payload CMS (code-first TypeScript), KeystoneJS, Hasura (GraphQL-first)

Why Directus Instead of Strapi?

The key difference: Strapi owns the schema — you build content types in Strapi's admin and it creates the database structure for you. Directus wraps your existing database and adds an API and admin UI on top.

Choose Directus if:

  • You have an existing PostgreSQL database and want an instant API + admin panel
  • Your team includes non-technical content editors who need a clean UI
  • You need to manage relational data visually (not just content blobs)
  • You want to avoid a separate CMS database — Directus uses your app's DB

Choose Strapi if:

  • You're starting fresh with no existing database
  • You prefer a more opinionated CMS with a content-type builder workflow
  • You need a plugin marketplace with pre-built integrations

Prerequisites

  • Docker + Docker Compose
  • PostgreSQL 14+ (Directus also supports MySQL 8+, SQLite, MSSQL, Oracle)
  • VPS: 1 vCPU / 2GB RAM minimum (Hetzner's entry-level cloud instances start around €4/mo)
  • Domain with SSL for production

Docker Compose Setup

Create /opt/directus/docker-compose.yml:

services:
  db:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: directus
      POSTGRES_USER: directus
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U directus"]
      interval: 10s

  cache:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - ./redis:/data

  directus:
    image: directus/directus:latest  # pin an exact version tag in production
    restart: unless-stopped
    ports:
      - "127.0.0.1:8055:8055"
    depends_on:
      db:
        condition: service_healthy
    environment:
      SECRET: ${SECRET}
      DB_CLIENT: pg
      DB_HOST: db
      DB_PORT: 5432
      DB_DATABASE: directus
      DB_USER: directus
      DB_PASSWORD: ${DB_PASSWORD}
      CACHE_ENABLED: "true"
      CACHE_STORE: redis
      REDIS: redis://cache:6379
      ADMIN_EMAIL: ${ADMIN_EMAIL}
      ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      # File storage (local or S3)
      STORAGE_LOCATIONS: local
      STORAGE_LOCAL_DRIVER: local
      STORAGE_LOCAL_ROOT: ./uploads
      # For S3:
      # STORAGE_LOCATIONS: s3
      # STORAGE_S3_DRIVER: s3
      # STORAGE_S3_KEY: ${S3_KEY}
      # STORAGE_S3_SECRET: ${S3_SECRET}
      # STORAGE_S3_BUCKET: ${S3_BUCKET}
      # STORAGE_S3_REGION: ${S3_REGION}
      PUBLIC_URL: https://cms.example.com
      # Email
      EMAIL_TRANSPORT: smtp
      EMAIL_FROM: cms@example.com
      EMAIL_SMTP_HOST: smtp.resend.com
      EMAIL_SMTP_PORT: 587
      EMAIL_SMTP_USER: resend
      EMAIL_SMTP_PASSWORD: ${SMTP_PASSWORD}
    volumes:
      - ./uploads:/directus/uploads
      - ./extensions:/directus/extensions


Create .env:

DB_PASSWORD=strong-db-password
ADMIN_EMAIL=admin@example.com
ADMIN_PASSWORD=strong-admin-password

# Generate: node -e "console.log(require('crypto').randomBytes(32).toString('base64'))"
SECRET=your-secret-key-here

SMTP_PASSWORD=your-smtp-api-key

Then start the stack:

docker compose up -d
# Admin app at http://localhost:8055

Connecting to an Existing Database

Directus's unique power is wrapping an existing database. To use it with your app's database instead of creating a new one:

directus:
  environment:
    DB_CLIENT: pg
    DB_HOST: your-existing-db-host
    DB_PORT: 5432
    DB_DATABASE: your_app_database
    DB_USER: directus_readonly_user  # or a limited-permission user
    DB_PASSWORD: ${APP_DB_PASSWORD}

On first startup, Directus creates its own system tables (directus_*) in your database but doesn't touch your existing tables. Navigate to Settings → Data Model to see your existing tables listed. Click any table to enable it as a Directus collection — instantly getting a REST API and admin UI for that table.

This is enormously useful for adding a content editing interface to an existing application without building a custom admin panel.
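The database user matters here: Directus must be able to create and write its directus_* system tables, so a strictly read-only role will fail on first boot. A hedged PostgreSQL sketch of a limited-permission user (names and grants are illustrative — tighten them to your needs):

```sql
-- Illustrative names: directus_app / your_app_database
CREATE USER directus_app WITH PASSWORD 'change-me';
GRANT CONNECT ON DATABASE your_app_database TO directus_app;

-- CREATE is needed so Directus can make its directus_* system tables
GRANT USAGE, CREATE ON SCHEMA public TO directus_app;

-- Read-only access to existing content tables; add INSERT/UPDATE/DELETE
-- per table where editors should write through Directus
GRANT SELECT ON ALL TABLES IN SCHEMA public TO directus_app;
```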


REST API Usage

// Fetch published articles
const response = await fetch('https://cms.example.com/items/articles?' + new URLSearchParams({
  'filter[status][_eq]': 'published',
  'fields': 'id,title,slug,content,date_created,author.name,author.avatar',
  'sort': '-date_created',
  'limit': '10',
  'offset': '0',
}), {
  headers: {
    Authorization: `Bearer ${process.env.DIRECTUS_TOKEN}`,
  },
})

const { data, meta } = await response.json()

// Create an item
await fetch('https://cms.example.com/items/articles', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.DIRECTUS_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    title: 'New Article',
    slug: 'new-article',
    content: 'Article content...',
    status: 'draft',
  }),
})
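Those bracketed filter parameters get verbose quickly. Directus also accepts the whole filter as a JSON-encoded filter parameter, which is easier to build programmatically. A small sketch (the helper is ours, not part of any Directus SDK):

```typescript
// Hypothetical helper (not part of the Directus SDK): builds an
// /items/:collection URL using Directus's JSON-encoded `filter` parameter.
type ListQuery = {
  filter?: Record<string, unknown>
  fields?: string[]
  sort?: string[]
  limit?: number
  offset?: number
}

function buildItemsUrl(base: string, collection: string, q: ListQuery): string {
  const params = new URLSearchParams()
  if (q.filter) params.set('filter', JSON.stringify(q.filter)) // JSON filter syntax
  if (q.fields) params.set('fields', q.fields.join(','))
  if (q.sort) params.set('sort', q.sort.join(','))
  if (q.limit !== undefined) params.set('limit', String(q.limit))
  if (q.offset !== undefined) params.set('offset', String(q.offset))
  return `${base}/items/${collection}?${params}`
}

const articlesUrl = buildItemsUrl('https://cms.example.com', 'articles', {
  filter: { status: { _eq: 'published' } },
  fields: ['id', 'title', 'slug', 'author.name'],
  sort: ['-date_created'],
  limit: 10,
})
// Pass articlesUrl to fetch() with the same Authorization header as above
```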

GraphQL API

GraphQL is enabled by default — no settings change needed. Query it at /graphql (your collections) and /graphql/system (system collections):

query GetArticles($limit: Int!, $offset: Int!) {
  articles(
    filter: { status: { _eq: "published" } }
    sort: ["-date_created"]
    limit: $limit
    offset: $offset
  ) {
    id
    title
    slug
    content
    date_created
    author {
      name
      avatar {
        id
      }
    }
  }

  articles_aggregated(filter: { status: { _eq: "published" } }) {
    count { id }
  }
}
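Directus serves this at the /graphql endpoint of your instance, so you can POST it with plain fetch. A minimal sketch (the URL and token handling are placeholders):

```typescript
// Sketch: POST the GetArticles query to Directus's /graphql endpoint.
// The URL and token below are placeholders.
type GraphQLRequest = {
  method: 'POST'
  headers: Record<string, string>
  body: string
}

function buildGraphQLRequest(
  query: string,
  variables: Record<string, unknown>,
  token: string,
): GraphQLRequest {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${token}`,
    },
    body: JSON.stringify({ query, variables }),
  }
}

// const res = await fetch('https://cms.example.com/graphql',
//   buildGraphQLRequest(GET_ARTICLES, { limit: 10, offset: 0 }, token))
// const { data, errors } = await res.json()
```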

Directus Flows (Automation)

Flows is Directus's built-in automation engine — trigger actions on data events without leaving the admin UI:

Example: Send Slack notification when a new article is published

  1. Settings → Flows → Create Flow
  2. Trigger: Event Hook — items.update on collection articles, firing when status changes to published
  3. Operation 1: Read Data — fetch the full article record
  4. Operation 2: Webhook — POST to https://hooks.slack.com/services/your-webhook with {"text": "New article published: {{$last.title}}"}

Flows supports conditional logic, loops, transform operations, and running custom code snippets. For complex automation, it's not as powerful as n8n but covers most CMS workflow needs without an external tool.
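Those custom code snippets run in Flows' Run Script operation, which receives the flow's accumulated data (e.g. $last holds the previous operation's result). A hedged sketch of shaping the Slack payload from step 4 — the title/slug field names are assumptions based on the articles collection above:

```typescript
// Sketch of a Flows "Run Script" operation body. $last holds the article
// fetched by the Read Data operation; title/slug fields are assumptions.
type FlowData = { $last?: { title?: string; slug?: string } }

function buildSlackPayload(data: FlowData): { text: string } {
  const title = data.$last?.title ?? 'untitled'
  const slug = data.$last?.slug
  return {
    text: slug
      ? `New article published: ${title} (https://example.com/articles/${slug})`
      : `New article published: ${title}`,
  }
}

// Inside Directus, the operation would export this as its module:
// module.exports = async (data) => buildSlackPayload(data)
```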


Roles and Permissions

Directus's permission system is field-level — you can control which users can read/create/update/delete individual fields:

Settings → Roles & Permissions → Create Role: Editor

  • articles: Read all, Create (status: draft only), Update own (except status field)
  • media: Read all, Create

Public role (unauthenticated API access):

  • articles: Read where status = published, fields: id, title, slug, content, date_created
  • All other collections: No access

This enables a fully public REST API for your frontend while keeping draft content and admin data private.


Webhooks for Cache Invalidation

Configure Directus webhooks to trigger Next.js ISR or CDN cache purges when content changes:

Settings → Webhooks → Create Webhook:

  • Name: Next.js Revalidate
  • Method: POST
  • URL: https://your-site.com/api/revalidate
  • Actions: items.create, items.update, items.delete
  • Collections: articles, pages (whichever collections your frontend uses)
  • Headers: { "Authorization": "Bearer your-revalidation-secret" }

Next.js handler:

// app/api/revalidate/route.ts
import { revalidateTag, revalidatePath } from 'next/cache'
import { NextRequest, NextResponse } from 'next/server'

export async function POST(req: NextRequest) {
  if (req.headers.get('authorization') !== `Bearer ${process.env.REVALIDATION_SECRET}`) {
    return NextResponse.json({ error: 'Unauthorized' }, { status: 401 })
  }

  const body = await req.json()
  const { collection, payload } = body

  // Revalidate the tag for this content type
  revalidateTag(collection)

  // If the item has a slug, revalidate the specific path too
  if (payload?.slug) {
    revalidatePath(`/${collection}/${payload.slug}`)
  }

  return NextResponse.json({ revalidated: true })
}

This gives you incremental static regeneration — your Next.js pages are statically generated at build time and automatically updated whenever editors publish changes in Directus.
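One detail worth calling out: revalidateTag only invalidates fetches that were tagged when the page was rendered, so the frontend's Directus requests must opt in via Next.js's next.tags fetch option. A sketch (the helper name is ours):

```typescript
// Hypothetical helper: fetch options that tag a Directus request with its
// collection name, so revalidateTag(collection) in the webhook handler
// can invalidate it. `next.tags` is Next.js's fetch extension.
type TaggedFetchInit = {
  headers: Record<string, string>
  next: { tags: string[] }
}

function directusFetchInit(collection: string, token: string): TaggedFetchInit {
  return {
    headers: { Authorization: `Bearer ${token}` },
    next: { tags: [collection] },
  }
}

// const res = await fetch('https://cms.example.com/items/articles',
//   directusFetchInit('articles', process.env.DIRECTUS_TOKEN!))
```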


Extensions

Directus supports custom extensions for interfaces, displays, layouts, modules, and hooks:

# Install extension from npm
cd /opt/directus/extensions
npm install directus-extension-wpslug  # WordPress-style slugs
npm install directus-extension-tree   # hierarchical tree view
npm install @directus/extensions-sdk  # for building custom extensions

# Restart to load
docker compose restart directus

Custom server-side hooks (in /opt/directus/extensions/hooks/my-hook/index.js):

export default ({ action }) => {
  action('items.create', ({ collection, key, payload }) => {
    if (collection === 'articles') {
      console.log(`New article created: ${key}`)
      // Send to search index, CDN purge, etc.
    }
  })
}
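Besides action (which runs after the write), hooks can register filter handlers that run before the write and may mutate the payload. The sketch below pairs a slug-filling filter hook with a tiny stand-in registrar so it runs standalone — the registrar simulates Directus's hook loader and is not Directus code:

```typescript
// Sketch of a Directus `filter` hook that auto-fills a slug before create.
// The registrar at the bottom is a minimal stand-in for Directus's hook
// loader, included only so the example runs standalone.
type Payload = Record<string, unknown>
type FilterHandler = (payload: Payload, meta: { collection: string }) => Payload

const hook = ({ filter }: { filter: (event: string, h: FilterHandler) => void }) => {
  filter('items.create', (payload, { collection }) => {
    if (collection === 'articles' && payload.title && !payload.slug) {
      // Naive slugify for illustration only
      payload.slug = String(payload.title).toLowerCase().replace(/\s+/g, '-')
    }
    return payload
  })
}

// Stand-in registrar simulating how Directus would invoke the handler
const handlers: Record<string, FilterHandler> = {}
hook({ filter: (event, h) => { handlers[event] = h } })
const created = handlers['items.create']({ title: 'Hello World' }, { collection: 'articles' })
```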

Nginx Configuration

server {
  listen 443 ssl http2;
  server_name cms.example.com;

  ssl_certificate /etc/letsencrypt/live/cms.example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/cms.example.com/privkey.pem;

  client_max_body_size 50m;

  location / {
    proxy_pass http://127.0.0.1:8055;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_cache_bypass $http_upgrade;
  }
}

Backup Strategy

#!/bin/bash
DATE=$(date +%Y%m%d-%H%M)
BACKUP_DIR=/opt/backups/directus

mkdir -p $BACKUP_DIR

# Database
docker compose exec -T db pg_dump -U directus directus | gzip > "$BACKUP_DIR/db-$DATE.sql.gz"

# Uploads (if not using S3)
tar -czf "$BACKUP_DIR/uploads-$DATE.tar.gz" /opt/directus/uploads/

# Keep 14 days
find $BACKUP_DIR -name "*.gz" -mtime +14 -delete

If you configured S3 storage, only the database backup is needed — uploads are already in object storage with versioning enabled. Test your backup restoration process at least once to verify the dump is complete and restores cleanly. A corrupt or empty backup is worse than no backup — you'd be unaware of the failure until disaster strikes.


Troubleshooting Common Issues

Relations not loading (nested fields returning null): The fields query parameter must explicitly request nested fields:

# Wrong — returns author as null
/items/articles?fields=title,author

# Correct — returns author as object with name
/items/articles?fields=title,author.name,author.avatar.id

Permission denied for public API: Verify the Public role has read permission on the collection. In Settings → Roles → Public, check that the collection is enabled and the field filters aren't overly restrictive. Directus defaults to no access for the public role.

Admin panel slow on large datasets: Add appropriate indexes to your database for fields you sort/filter by:

CREATE INDEX idx_articles_status_created ON articles(status, date_created DESC);
CREATE INDEX idx_articles_slug ON articles(slug);

Admin app blocked in an iframe: Directus supports CONTENT_SECURITY_POLICY_DIRECTIVES environment variables for relaxing the CSP when embedding the admin app in third-party tools.

File uploads failing: Check that the uploads volume is mounted correctly and writable:

docker compose exec directus ls -la /directus/uploads
# Should show the uploads directory writable by the node user

For S3 uploads, verify CORS is configured on your bucket to allow requests from your Directus domain.
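A typical bucket policy for that, in AWS S3's CORS JSON format (the origin is a placeholder for your Directus domain):

```json
[
  {
    "AllowedOrigins": ["https://cms.example.com"],
    "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
    "AllowedHeaders": ["*"],
    "MaxAgeSeconds": 3000
  }
]
```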


Cost vs Contentful and Airtable

| Service | Monthly Cost | Records/API Limit |
| --- | --- | --- |
| Contentful Micro | $300/month | 2M API calls |
| Airtable Pro | $20/user/month | 50K rows/base |
| Retool | $12/user/month | Cloud-only |
| Directus Cloud | $15-99/month | Varies |
| Directus Self-Hosted | ~€5/month VPS | Unlimited |

Directus self-hosted is particularly compelling as an Airtable replacement for structured data management — unlimited rows, API calls, and users on a single €5/month VPS.


Upgrading Directus

Directus ships releases frequently — review the changelog before upgrading:

cd /opt/directus
docker compose pull directus
docker compose up -d --force-recreate directus
# Directus runs migrations automatically on startup
docker compose logs directus | grep -i "migrated\|error"

Methodology

  • Directus documentation: docs.directus.io
  • GitHub: github.com/directus/directus (28K+ stars)
  • Tested with Directus 11.x, PostgreSQL 16, Docker Compose v2

Why Self-Host Directus?

The case for self-hosting Directus comes down to three practical factors: data ownership, cost at scale, and operational control.

Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure, subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting Directus means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.

Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.

Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.

The honest tradeoff: you're responsible for updates, backups, and availability. For teams running any production workloads, this is familiar territory. For individuals, the learning curve is real but the tooling (Docker, Caddy, automated backups) is well-documented and widely supported.

Server Requirements and Sizing

Before deploying Directus, assess your server capacity against expected workload.

Minimum viable setup: A 1 vCPU, 2GB RAM VPS with 20GB SSD is sufficient for personal use or small teams (matching the prerequisites above). Most consumer VPS providers — Hetzner, DigitalOcean, Linode, Vultr — offer machines in this range for $5-10/month. Hetzner offers excellent price-to-performance for European and US regions.

Recommended production setup: 2 vCPUs with 4GB RAM and 40GB SSD handles most medium deployments without resource contention. This gives Directus headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.

Storage planning: The bind-mounted directories in this docker-compose.yml (./postgres, ./redis, ./uploads) store all persistent Directus data. Estimate your storage growth rate early — for media-heavy projects, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.

Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.

Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.

Backup and Disaster Recovery

Running Directus without a tested backup strategy is an unacceptable availability risk. Docker volumes and bind mounts are not automatically backed up — if you delete the data directory or the host fails, data is gone with no recovery path.

What to back up: The Docker volumes or bind-mounted directories containing Directus's data (database files, user uploads, application state), your docker-compose.yml and any customized configuration files, and the .env file containing secrets.

Backup approach: For simple setups, stop the container, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or database dump commands (PostgreSQL pg_dump, SQLite .backup, MySQL mysqldump) that produce consistent backups without downtime.

For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.

Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.

Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your Directus backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.

Security Hardening

Self-hosting means you are responsible for Directus's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.

Always use a reverse proxy: Never expose Directus's internal port directly to the internet. The docker-compose.yml binds port 8055 to localhost; Nginx (configured above) or Caddy provides HTTPS termination. Direct HTTP access transmits credentials in plaintext. A reverse proxy also centralizes TLS management, rate limiting, and access logging.

Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.

Firewall configuration:

ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.

Network isolation: Docker Compose named networks keep Directus — Open Source CMS's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.

VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.

Update discipline: Watch Directus's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.

General Docker Troubleshooting

Container exits immediately or won't start

Check logs first — they almost always explain the failure:

docker compose logs -f directus

Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change Directus's port mapping in docker-compose.yml.

Cannot reach the web interface

Work through this checklist:

  1. Confirm the container is running: docker compose ps
  2. Test locally on the server: curl -I http://localhost:PORT
  3. If local access works but external doesn't, check your firewall: ufw status
  4. If using a reverse proxy, verify it's running and the config is valid: nginx -t (or caddy validate --config /etc/caddy/Caddyfile for Caddy)

Permission errors on volume mounts

Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the tool's documentation for its expected UID, and apply correct ownership:

chown -R 1000:1000 /var/lib/docker/volumes/your_volume/_data

High resource usage over time

Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbound cache, or accumulated data needing pruning. Check current usage with docker stats directus. Add resource limits in docker-compose.yml to prevent one container from starving others. For ongoing visibility into resource trends, deploy Prometheus + Grafana or Netdata.

Data disappears after container restart

Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.

Keeping Directus Updated

Directus follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:

docker compose pull          # Download updated images
docker compose up -d         # Restart with new images
docker image prune -f        # Remove old image layers (optional)

Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.

Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
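For Directus, pinning means replacing the latest tag used earlier with an exact release (the version number below is illustrative — check the releases page for the current one):

```yaml
services:
  directus:
    # Pin an exact version; bump deliberately after reading the changelog
    image: directus/directus:11.1.1
```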

Post-update verification: After updating, confirm Directus is functioning correctly. Directus exposes a health endpoint at /server/health that returns HTTP 200 when healthy — curl it from the server or monitor it with your uptime tool.


Browse open source headless CMS and data platform alternatives on OSSAlt.

Related: How to Self-Host Strapi — Headless CMS 2026 · Best Open Source Alternatives to Firebase 2026
