
How to Self-Host Dify in 2026: Docker Setup

OSSAlt Team

What Dify Is

Dify (80K+ GitHub stars, MIT license) is an open source platform for building production-ready AI applications. Think of it as an all-in-one workspace for:

  • AI chatbots: Build and deploy conversational AI with custom knowledge bases
  • Agentic workflows: Create multi-step AI pipelines that use tools, APIs, and code
  • RAG applications: Chat with your documents, PDFs, wikis, and data sources
  • LLM management: Connect and switch between OpenAI, Anthropic, Google, Ollama (local), and 100+ other providers

Self-hosting Dify means your prompts, documents, and conversations don't pass through Dify's servers. You control the data and can connect it to entirely local models (via Ollama) for full privacy.

Server Requirements

Minimum

  • 2 CPU cores
  • 4GB RAM (8GB recommended)
  • 20GB storage for application data and model caches

Recommended (Production)

  • 4+ CPU cores
  • 8-16GB RAM
  • 50GB+ storage
  • External database (PostgreSQL) for production reliability

Hetzner Cloud examples:

| Use Case | Server | Monthly |
| --- | --- | --- |
| Personal/testing | CAX21 (4GB ARM) | $6 |
| Small team | CPX31 (8GB) | $10 |
| Production | CPX41 (16GB) | $19 |

If you want to run local models with Ollama alongside Dify (for zero-cost LLM inference), you need more RAM — budget 8GB per 7B model plus Dify's overhead.
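You can sanity-check a candidate server against these numbers from the shell (standard Linux tools, nothing Dify-specific):

```shell
# Compare the machine against the minimums above
# (2+ cores, 4GB+ RAM, 20GB+ free disk).
nproc                                             # CPU core count
free -h | awk '/^Mem:/ {print "RAM total: " $2}'
df -h / | awk 'NR==2 {print "Free on /: " $4}'
```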

Step 1: Prepare Your Server

# Update system
sudo apt update && sudo apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
newgrp docker

# Verify
docker --version
docker compose version

Step 2: Clone Dify

git clone https://github.com/langgenius/dify.git
cd dify/docker

The docker directory contains everything needed for deployment.

Step 3: Configure Environment Variables

cp .env.example .env
nano .env

Critical settings to configure:

# Generate a strong secret key (run: openssl rand -base64 32)
SECRET_KEY=your-generated-secret-key-here

# Postgres settings
DB_USERNAME=dify
DB_PASSWORD=YourStrongPassword123
DB_HOST=db
DB_PORT=5432
DB_DATABASE=dify

# Redis settings
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=0

# Storage settings (local by default)
STORAGE_TYPE=local
STORAGE_LOCAL_PATH=storage

# If using S3 for file storage:
# STORAGE_TYPE=s3
# S3_ENDPOINT=https://s3.amazonaws.com
# S3_BUCKET_NAME=your-bucket
# S3_ACCESS_KEY=...
# S3_SECRET_KEY=...

# Your server URL (important for OAuth callbacks and file URLs)
CONSOLE_WEB_URL=https://dify.yourdomain.com
APP_WEB_URL=https://dify.yourdomain.com

# SMTP for email notifications (optional)
MAIL_TYPE=smtp
SMTP_SERVER=smtp.example.com
SMTP_PORT=587
SMTP_USERNAME=your@email.com
SMTP_PASSWORD=yourpassword
MAIL_DEFAULT_SEND_FROM=noreply@yourdomain.com

Step 4: Start Dify

docker compose up -d

Dify starts multiple containers:

  • api: Backend API server
  • worker: Background job processor
  • web: Frontend Next.js application
  • db: PostgreSQL database
  • redis: Cache and job queue
  • nginx: Reverse proxy (routes to api/web)
  • sandbox: Code execution sandbox (for Python/JS in workflows)
  • ssrf_proxy: Security proxy for external requests
  • weaviate: Vector database for embeddings (optional, can use pgvector instead)

Initial startup downloads container images and takes 2-5 minutes. Monitor:

docker compose logs -f api

Wait until you see the API server report it's ready.
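If you script your deployments, a small polling helper avoids guessing when the stack is ready. A sketch, assuming the bundled nginx is listening on port 80 of the host:

```shell
# Poll an HTTP endpoint until it answers or a timeout (seconds) expires.
wait_for_http() {
  url=$1
  timeout=${2:-300}
  waited=0
  until curl -fsS -o /dev/null "$url"; do
    waited=$((waited + 5))
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 5
  done
}

# After `docker compose up -d`:
# wait_for_http http://localhost 300 && echo "Dify is answering"
```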

Step 5: Initial Setup

Navigate to http://your-server-ip or http://your-server-ip/install

Create Admin Account

Enter your email and password to create the admin account. This is the workspace owner account.

Access the Dashboard

After account creation, you land on Dify's main dashboard. The key sections:

  • Studio: Build and test AI applications
  • Knowledge: Create knowledge bases from documents
  • Models: Configure LLM providers
  • Monitoring: Observe application usage and logs

Step 6: Connect LLM Providers

This is where you configure which AI models Dify can use.

Settings → Model Providers

Connect OpenAI (Cloud)

  1. Go to Settings → Model Provider
  2. Click OpenAI
  3. Enter your API key: sk-...
  4. Save and test connection

Available models after connecting: GPT-4o, GPT-4o-mini, o1, o3, text-embedding-ada-002, dall-e-3

Connect Anthropic (Cloud)

  1. Click Anthropic
  2. Enter API key: sk-ant-...
  3. Available: Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku

Connect Local Models via Ollama

Use local models for free inference — no API costs, full privacy.

First, install Ollama on your server (or a separate machine):

curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.1:8b
ollama pull nomic-embed-text  # For embeddings

In Dify:

  1. Settings → Model Provider → Ollama
  2. Enter base URL: http://host-ip:11434 (or http://host.docker.internal:11434 if Ollama runs on the same Docker host)
  3. Add models: enter the model name exactly as in Ollama (llama3.1:8b, mistral, etc.)

Embedding model: Add nomic-embed-text as an embedding model for knowledge base search.

Now Dify can use your local Llama models for zero-cost AI inference.
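Connection problems between Dify and Ollama are common, so verify the base URL before entering it. A small check, assuming Ollama's default port (11434) and its /api/tags model-list endpoint:

```shell
# Return success if an Ollama server answers at the given base URL.
ollama_ok() {
  curl -fsS "${1:-http://localhost:11434}/api/tags" >/dev/null 2>&1
}

if ollama_ok "http://localhost:11434"; then
  echo "Ollama reachable"
else
  echo "Ollama NOT reachable: check the service, port, and firewall"
fi
```

Run the same check from inside the api container (`docker compose exec api curl ...`) to confirm the address works from Docker's network namespace, since `localhost` inside a container is not the host.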

Connect Other Providers

Dify supports many providers — each has a similar setup:

  • Google AI Studio: Gemini models
  • Azure OpenAI: Enterprise OpenAI deployments
  • Mistral: Mistral models
  • OpenRouter: Access 100+ models via one API
  • Groq: Fast inference for open models

Step 7: Build Your First Application

Create a Chatbot

  1. Studio → Create App → Chatbot
  2. Name it (e.g., "Customer Support Bot")
  3. Choose an Orchestration mode:
    • Basic: Simple chatbot with a system prompt
    • Advanced (Workflow): Visual pipeline with conditional logic, tools, and multi-step processing

For a basic chatbot:

  1. Set the System Prompt: Define the bot's persona and behavior
  2. Select the Model: Choose from your connected providers
  3. Set Context (optional): Attach knowledge bases for RAG
  4. Debug and Preview: Test in the right panel

Create a Knowledge Base (RAG)

  1. Knowledge → Create Knowledge
  2. Upload documents: PDF, TXT, MD, DOCX supported
  3. Configure chunking: chunk size affects retrieval quality
  4. Choose embedding model: the model that converts text to searchable vectors
  5. After indexing, attach the knowledge base to any application

Test with: "What does [document] say about X?"

Build a Workflow Application

Workflows are Dify's most powerful feature — multi-step AI pipelines that can:

  • Call external APIs
  • Execute Python or JavaScript code
  • Branch based on conditions
  • Use multiple AI models in sequence
  • Generate images with DALL-E or Stable Diffusion

Example: Document Summarization Workflow

  1. Input: Document file upload
  2. Extract text: Parse PDF/document
  3. Summarize: Send text to LLM with summarization prompt
  4. Format: Structure the summary
  5. Output: Return formatted summary

The visual canvas lets you connect these nodes by dragging between them.

Step 8: Configure HTTPS

For team access and security, set up HTTPS with a reverse proxy.

Using Caddy

sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update && sudo apt install caddy

/etc/caddy/Caddyfile:

dify.yourdomain.com {
    reverse_proxy localhost:80
}

sudo systemctl restart caddy

Update your .env file with the HTTPS URL and restart Dify:

# Update in .env:
CONSOLE_WEB_URL=https://dify.yourdomain.com
APP_WEB_URL=https://dify.yourdomain.com

docker compose up -d

Step 9: Invite Team Members

  1. Settings → Members → Invite Member
  2. Enter email address
  3. Select role: Owner, Admin, Editor, or Member
  4. Invited users receive an email with a setup link

Roles determine what members can create, modify, and access across the workspace.

Step 10: Deploy Applications as APIs or Chatbots

Every Dify application can be deployed in multiple ways:

Embedded Chatbot Widget

Add Dify as a chat widget to any website:

  1. Open your application
  2. Publish → Embed into Site
  3. Copy the JavaScript snippet and paste into your website HTML
<script>
 window.difyChatbotConfig = {
  token: 'your-app-token'
 }
</script>
<script
 src="https://dify.yourdomain.com/embed.min.js"
 id="your-app-token"
 defer>
</script>

REST API

Every Dify application exposes a REST API:

curl -X POST 'https://dify.yourdomain.com/v1/chat-messages' \
  -H 'Authorization: Bearer app-token-here' \
  -H 'Content-Type: application/json' \
  -d '{
    "inputs": {},
    "query": "What is the return policy?",
    "response_mode": "blocking",
    "conversation_id": "",
    "user": "user-123"
  }'

Use this to integrate Dify applications into your own products and services.
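When scripting against this endpoint, building the JSON body separately keeps shell quoting manageable. A sketch (the domain and token are placeholders, as above; the printf quoting is naive and assumes the query contains no double quotes):

```shell
# Build the chat-messages request body for a given question and user id.
make_payload() {
  printf '{"inputs": {}, "query": "%s", "response_mode": "blocking", "conversation_id": "", "user": "%s"}' \
    "$1" "$2"
}

payload=$(make_payload "What is the return policy?" "user-123")
echo "$payload"

# curl -X POST 'https://dify.yourdomain.com/v1/chat-messages' \
#   -H 'Authorization: Bearer app-token-here' \
#   -H 'Content-Type: application/json' \
#   -d "$payload"
```

Dify also accepts response_mode "streaming" to receive the answer incrementally as server-sent events instead of one blocking response.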

Production Considerations

Use External PostgreSQL

For production, use a managed PostgreSQL instance instead of the Docker container:

DB_HOST=your-postgres-host
DB_PORT=5432
DB_USERNAME=dify
DB_PASSWORD=strong-password
DB_DATABASE=dify

This enables proper backups, monitoring, and reliability SLAs.

Configure File Storage

For production, use S3-compatible storage (Hetzner Object Storage, Backblaze B2, AWS S3) instead of local storage:

STORAGE_TYPE=s3
S3_ENDPOINT=https://s3.amazonaws.com
S3_BUCKET_NAME=dify-storage
S3_ACCESS_KEY=your-access-key
S3_SECRET_KEY=your-secret-key
S3_REGION=us-east-1

Backup

Back up regularly:

# Database backup (the container name may differ; check with: docker ps)
docker exec dify-db-1 pg_dump -U dify dify | gzip > dify-backup-$(date +%Y%m%d).sql.gz

# Storage backup (if local)
rsync -av /path/to/dify/docker/volumes/app/storage/ /backup/dify-storage/
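The two commands above can be wrapped into a small daily job with retention. A sketch; the backup path and the db container name are assumptions to adjust for your host:

```shell
BACKUP_DIR="${BACKUP_DIR:-/backup/dify}"

backup_db() {
  # Dump Postgres from the compose db container and compress it.
  # The container name may differ; check with: docker ps
  docker exec dify-db-1 pg_dump -U dify dify \
    | gzip > "$BACKUP_DIR/dify-$(date +%Y%m%d).sql.gz"
}

prune_old() {
  # Delete dumps older than $1 days (default 30).
  find "$BACKUP_DIR" -name 'dify-*.sql.gz' -mtime +"${1:-30}" -delete
}

# Example cron entry (03:00 daily), assuming you save this as a script:
# 0 3 * * * BACKUP_DIR=/backup/dify /usr/local/bin/dify-backup.sh
```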

Cost Analysis

Dify Cloud vs Self-Hosted

| Dify Cloud | Monthly | Annual |
| --- | --- | --- |
| Sandbox (free tier) | $0 | $0 |
| Professional | $59 | $708 |
| Team | $159 | $1,908 |

| Self-Hosted | Monthly | Annual |
| --- | --- | --- |
| Hetzner CPX31 | $10 | $120 |
| + Ollama local models | $0 | $0 |
| + OpenAI API (usage) | Variable | Variable |

Self-hosted Dify with local Ollama models = $120/year for the server with no per-query AI costs. Add cloud API costs if you connect OpenAI/Anthropic for their specific capabilities.

Find More AI Platform Alternatives

Browse all AI development platform alternatives on OSSAlt — compare Dify, Flowise, LangFlow, AnythingLLM, and every other open source AI application platform with deployment guides.

Why Self-Host Dify in 2026?

The case for self-hosting Dify in 2026 comes down to three practical factors: data ownership, cost at scale, and operational control.

Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting Dify in 2026 means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.

Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.

Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.

The honest tradeoff: you're responsible for updates, backups, and availability. For teams running any production workloads, this is familiar territory. For individuals, the learning curve is real but the tooling (Docker, Caddy, automated backups) is well-documented and widely supported.

Server Requirements and Sizing

Before deploying Dify, assess your server capacity against expected workload.

Minimum viable setup: A 2 vCPU, 4GB RAM VPS with 20GB SSD (the minimums above) is sufficient for personal use or small teams. Most consumer VPS providers (Hetzner, DigitalOcean, Linode, Vultr) offer machines in this range for $5-10/month. Hetzner offers excellent price-to-performance for European and US regions.

Recommended production setup: 4 vCPUs with 8GB RAM and 50GB+ SSD handles most medium deployments without resource contention. This gives Dify headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.

Storage planning: The Docker volumes in this docker-compose.yml store all persistent Dify data. Estimate your storage growth rate early; for document-heavy knowledge bases, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.

Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.

Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.

Backup and Disaster Recovery

Running Dify without a tested backup strategy is an unacceptable availability risk. Docker volumes are not automatically backed up; if you delete a volume or the host fails, the data is gone with no recovery path.

What to back up: the named Docker volumes containing Dify's data (database files, user uploads, application state), your docker-compose.yml and any customized configuration files, and the .env file containing secrets.

Backup approach: For simple setups, stop the containers, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or pg_dump against the running PostgreSQL database to produce consistent backups without downtime.

For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.

Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.

Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your Dify backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.

Security Hardening

Self-hosting means you are responsible for Dify's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.

Always use a reverse proxy: Never expose Dify's HTTP port directly to the internet. Let Caddy or Nginx terminate HTTPS in front of the stack; direct HTTP access transmits credentials in plaintext. A reverse proxy also centralizes TLS management, rate limiting, and access logging.

Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.

Firewall configuration:

ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.

Network isolation: Docker Compose named networks keep Dify's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.
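In Compose terms, that isolation looks roughly like this (service and network names are illustrative, not Dify's shipped file):

```yaml
services:
  web:
    networks: [frontend]
  api:
    networks: [frontend, backend]
  db:
    # Only on the backend network: unreachable from "web"
    # and from unrelated containers on the host.
    networks: [backend]

networks:
  frontend:
  backend:
    internal: true   # containers only on this network get no internet access
```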

VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.

Update discipline: Subscribe to Dify's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.

Troubleshooting Common Issues

Container exits immediately or won't start

Check logs first — they almost always explain the failure:

docker compose logs -f            # or a single service, e.g.: docker compose logs -f api

Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change the port mapping in docker-compose.yml.

Cannot reach the web interface

Work through this checklist:

  1. Confirm the container is running: docker compose ps
  2. Test locally on the server: curl -I http://localhost:PORT
  3. If local access works but external doesn't, check your firewall: ufw status
  4. If using a reverse proxy, verify it's running and the config is valid: caddy validate --config /etc/caddy/Caddyfile

Permission errors on volume mounts

Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the tool's documentation for its expected UID, and apply correct ownership:

chown -R 1000:1000 /var/lib/docker/volumes/your_volume/_data

High resource usage over time

Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbound cache, or accumulated data needing pruning. Check current per-container usage with docker stats. Add resource limits in docker-compose.yml to prevent one container from starving others. For ongoing visibility into resource trends, deploy Prometheus + Grafana or Netdata.
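Limits and log rotation can both be set per service in docker-compose.yml; the values below are illustrative, not Dify defaults:

```yaml
services:
  api:
    mem_limit: 2g        # hard memory ceiling for the container
    cpus: 1.5            # cap at 1.5 CPU cores
    logging:
      driver: json-file
      options:
        max-size: "10m"  # rotate container logs at 10 MB
        max-file: "3"    # keep at most three rotated files
```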

Data disappears after container restart

Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.
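A minimal example of the difference, using a generic Postgres service rather than Dify's actual compose file:

```yaml
services:
  db:
    image: postgres:15
    volumes:
      # Named volume: survives `docker compose down` and container recreation.
      # If the container path here were misspelled, Postgres would write
      # into the container's writable layer and lose data on removal.
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```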

Keeping Dify Updated

Dify follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:

docker compose pull          # Download updated images
docker compose up -d         # Restart with new images
docker image prune -f        # Remove old image layers (optional)

Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.

Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
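For example, in docker-compose.yml (the tag shown is hypothetical; pick an actual release from Dify's GitHub releases page):

```yaml
services:
  api:
    image: langgenius/dify-api:1.0.0   # pinned: update this tag deliberately
  web:
    image: langgenius/dify-web:1.0.0   # keep api and web on the same version
```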

Post-update verification: After updating, confirm Dify is functioning correctly. Most services expose a /health endpoint that returns HTTP 200; curl it from the server or monitor it with your uptime tool.
