

OSSAlt Team
Tags: observability · monitoring · self-hosting · docker · devops

How to Self-Host OpenObserve: The Open Source Datadog Alternative in 2026

TL;DR

OpenObserve is a Rust-built observability platform that handles logs, metrics, and traces in a single tool — and stores data with up to 140x better efficiency than Elasticsearch. Where Datadog charges $0.10–$0.25/GB/month for log ingestion plus storage, OpenObserve on a $20/month VPS can handle millions of log events per day essentially for free. It ships as a single binary or a tiny Docker image, has an OpenTelemetry-native ingestion API, and includes pre-built dashboards, alerts, and a SQL-based query language. If you're paying Datadog bills that make you wince, OpenObserve is the first alternative worth seriously evaluating.

Key Takeaways

  • Single binary: OpenObserve ships as one binary (~40MB), not a stack of 5 services like the ELK stack
  • 140x storage reduction: uses columnar storage (Parquet format) vs Elasticsearch's row-based indexes
  • OpenTelemetry native: accepts OTLP directly — no collector translation layer needed for most setups
  • Logs + Metrics + Traces: one tool, one storage backend, one query interface
  • SQL query language: query logs with SQL (SELECT * FROM logs WHERE level='error' LIMIT 100)
  • GitHub stars: 13,000+ (growing fast since its 2023 launch)
  • License: AGPL v3 (community) / commercial (enterprise features)

Why OpenObserve Instead of the ELK Stack or Loki?

The traditional self-hosted observability options have real problems:

Elasticsearch/Kibana (ELK): Powerful but resource-hungry. A production ELK stack needs 3+ nodes with 16GB+ RAM each and becomes complex fast. Storage efficiency is poor: indexing overhead can inflate your raw log volume 3-5x, and the indexing itself burns significant CPU.

Grafana Loki: Much lighter than ELK, but logs only — you need Prometheus for metrics and Tempo for traces. Managing three separate systems, three retention policies, and three query languages (LogQL, PromQL, TraceQL) has real operational overhead.

OpenObserve: One binary, three signal types, one SQL-like query language. Not a replacement for every Loki use case (Loki's label-based indexing is better for very high-cardinality scenarios), but for the majority of self-hosters, OpenObserve's unified approach wins on simplicity.


Architecture Options

Single Node (Most Self-Hosters)

Your Apps
    ↓ (OTLP/HTTP or Fluent Bit or Vector)
OpenObserve (single container)
    ↓
Local disk or S3-compatible storage

Single-node handles millions of log events per day on a $20/month VPS. Suitable for teams up to ~50 engineers.

Cluster Mode (High Availability)

OpenObserve supports distributed mode for production clusters — separate ingester, querier, and compactor nodes backed by S3/MinIO. Most self-hosters don't need this.


Self-Hosting with Docker Compose

docker-compose.yml

# The top-level `version:` key is obsolete in Docker Compose v2 and can be omitted

services:
  openobserve:
    image: public.ecr.aws/zinclabs/openobserve:latest
    container_name: openobserve
    restart: unless-stopped
    ports:
      - "5080:5080"       # Web UI + HTTP API
      - "5081:5081"       # gRPC (OTLP traces)
    environment:
      ZO_ROOT_USER_EMAIL: "admin@example.com"
      ZO_ROOT_USER_PASSWORD: "changeme-strong-password"
      ZO_DATA_DIR: "/data"
      ZO_TELEMETRY: "false"             # Disable usage telemetry
      # Optional: Use S3 for storage instead of local disk
      # ZO_S3_BUCKET_NAME: "my-observability-bucket"
      # ZO_S3_REGION_NAME: "us-east-1"
      # ZO_S3_ACCESS_KEY: "AKIAIOSFODNN7EXAMPLE"
      # ZO_S3_SECRET_KEY: "secret"
    volumes:
      - openobserve_data:/data

  # Optional: Fluent Bit for log shipping
  fluent-bit:
    image: fluent/fluent-bit:latest
    volumes:
      - ./fluent-bit.conf:/fluent/etc/fluent-bit.conf:ro
      - /var/log:/var/log:ro      # Ship host logs
    depends_on:
      - openobserve

volumes:
  openobserve_data:

Start It

docker compose up -d
# Access UI at http://localhost:5080
# Login with the email/password from ZO_ROOT_USER_EMAIL/ZO_ROOT_USER_PASSWORD

Shipping Logs to OpenObserve

Option 1: OpenTelemetry Collector (If You Already Run One)

# otel-collector.yaml — add an OpenObserve exporter
exporters:
  otlphttp/openobserve:
    endpoint: http://openobserve:5080/api/default/
    headers:
      Authorization: "Basic BASE64(email:password)"
    compression: gzip

service:
  pipelines:
    logs:
      receivers: [otlp, filelog]
      processors: [batch]
      exporters: [otlphttp/openobserve]
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch]
      exporters: [otlphttp/openobserve]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/openobserve]
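The `BASE64(email:password)` placeholder above is standard HTTP Basic auth: the base64 encoding of `email:password`. One way to generate it — note the `printf`, since the trailing newline `echo` adds would corrupt the token:

```shell
# Generate the value for "Authorization: Basic <value>".
# printf '%s' emits the credentials without a trailing newline.
printf '%s' 'admin@example.com:changeme-strong-password' | base64 | tr -d '\n'
```

Paste the output after `Basic ` in the Authorization header; the same value works for the Fluent Bit and curl examples in this guide.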

Option 2: Fluent Bit (For Existing Log Files)

# fluent-bit.conf
[SERVICE]
    Flush         5
    Log_Level     info

[INPUT]
    Name          tail
    Path          /var/log/*.log,/var/log/app/*.log
    Tag           app.logs
    Refresh_Interval 5
    Mem_Buf_Limit 50MB

[OUTPUT]
    Name          http
    Match         *
    Host          openobserve
    Port          5080
    URI           /api/default/logs/_json
    Format        json
    Header        Authorization Basic BASE64(email:password)
    Header        Content-Type application/json
    Compress      gzip

Option 3: Vector (For High-Volume Pipelines)

# vector.toml
[sources.app_logs]
type = "file"
include = ["/var/log/app/*.log"]

[sources.docker_logs]
type = "docker_logs"

[sinks.openobserve]
type = "http"
inputs = ["app_logs", "docker_logs"]
uri = "http://openobserve:5080/api/default/logs/_json"
method = "post"
encoding.codec = "json"
auth.strategy = "basic"
auth.user = "admin@example.com"
auth.password = "changeme-strong-password"

Option 4: Direct HTTP API (For Custom Applications)

// Send structured logs directly from your app
async function sendLog(level: string, message: string, metadata: object) {
  const log = {
    level,
    message,
    timestamp: new Date().toISOString(),
    service: 'my-api',
    ...metadata,
  }

  await fetch('http://openobserve:5080/api/default/logs/_json', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Basic ' + btoa('admin@example.com:changeme-strong-password'),
    },
    body: JSON.stringify([log]),
  })
}

// In your Express error handler (fire-and-forget so logging never delays the response):
app.use((err, req, res, next) => {
  sendLog('error', err.message, {
    stack: err.stack,
    path: req.path,
    method: req.method,
    statusCode: err.status ?? 500,
  }).catch(() => {})   // a failed log ship should never crash the handler
  res.status(err.status ?? 500).json({ error: err.message })
})

Querying Logs with SQL

OpenObserve uses SQL for log queries — a major DX improvement over LogQL or Lucene syntax:

-- Basic queries
SELECT * FROM logs WHERE level = 'error' LIMIT 100

-- Aggregate error counts by service
SELECT service, COUNT(*) as error_count
FROM logs
WHERE level = 'error'
  AND _timestamp >= NOW() - INTERVAL '1 hour'
GROUP BY service
ORDER BY error_count DESC

-- Find slow API requests
SELECT path, method, duration_ms, user_id
FROM logs
WHERE duration_ms > 500
  AND _timestamp >= NOW() - INTERVAL '24 hours'
ORDER BY duration_ms DESC
LIMIT 50

-- Search log message text
SELECT *
FROM logs
WHERE message LIKE '%database connection%'
  AND level IN ('error', 'warn')
  AND _timestamp >= NOW() - INTERVAL '6 hours'

-- Count events per minute (for trend analysis)
SELECT
  date_trunc('minute', _timestamp) as minute,
  COUNT(*) as events
FROM logs
WHERE _timestamp >= NOW() - INTERVAL '1 hour'
GROUP BY minute
ORDER BY minute

Setting Up Alerts

OpenObserve has a built-in alerting system with Slack, PagerDuty, and webhook destinations:

// POST /api/default/alerts — Create an alert via API
{
  "name": "High Error Rate",
  "stream": "logs",
  "query": {
    "sql": "SELECT COUNT(*) as error_count FROM logs WHERE level='error' AND _timestamp >= NOW() - INTERVAL '5 minutes'",
    "start_time": "now-5m",
    "end_time": "now"
  },
  "condition": {
    "column": "error_count",
    "operator": ">",
    "value": 50
  },
  "duration": 5,
  "frequency": 1,
  "destination": "slack-webhook"
}

Configure via the UI: Alerts → Create Alert → Set query → Set threshold → Choose destination.


Production Setup: Nginx Reverse Proxy with TLS

server {
    listen 443 ssl http2;
    server_name observe.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/observe.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/observe.yourdomain.com/privkey.pem;

    # Restrict UI access to your team IPs
    allow 10.0.0.0/8;
    allow 192.168.0.0/16;
    deny all;

    location / {
        proxy_pass http://localhost:5080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Required for large log ingestion payloads
        proxy_read_timeout 300s;
        client_max_body_size 50m;
    }
}

Pre-Built Dashboards and Visualizations

OpenObserve ships with a dashboard system similar to Grafana — you can build visualizations using its query builder or SQL editor. For common patterns, import community dashboards:

# Import a Node.js dashboard via API
curl -X POST http://localhost:5080/api/default/dashboards \
  -H "Authorization: Basic BASE64(email:password)" \
  -H "Content-Type: application/json" \
  -d @nodejs-dashboard.json

The UI supports line charts, bar charts, heatmaps, stat panels, and tables. For teams deeply invested in Grafana's visualization ecosystem, OpenObserve also exposes a Grafana-compatible data source plugin — meaning you can use Grafana for dashboards while using OpenObserve for storage and query.


Cost Comparison: Datadog vs OpenObserve Self-Hosted

Scenario               Datadog          OpenObserve (Self-Hosted)
1GB logs/day           ~$90/month       $0 (fits on $6/month VPS)
10GB logs/day          ~$900/month      ~$20/month (VPS)
100GB logs/day         ~$9,000/month    ~$60/month (VPS + storage)
Metrics (100K series)  ~$250/month      $0
APM traces             ~$400/month      $0

The self-hosted infrastructure cost is almost entirely your VPS. Storage is cheap (S3/Backblaze B2) and OpenObserve's Parquet columnar format compresses logs aggressively.


OpenObserve vs Grafana Stack (Loki + Prometheus + Tempo)

Aspect            OpenObserve          Grafana Stack
Setup complexity  Low (1 container)    High (3+ services)
Query language    SQL                  LogQL/PromQL/TraceQL
Resource usage    Low                  Medium-High
Visualization     Built-in             Grafana (excellent)
Ecosystem         Growing              Massive
Best for          Simplicity, unified  Power users, existing Grafana

Shipping Application Traces (APM)

OpenObserve accepts distributed traces via OTLP. Here's how to instrument a Node.js app:

// instrumentation.ts — run before your app starts
import { NodeSDK } from '@opentelemetry/sdk-node'
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http'
import { getNodeAutoInstrumentations } from '@opentelemetry/auto-instrumentations-node'

const traceExporter = new OTLPTraceExporter({
  url: 'http://openobserve:5080/api/default/v1/traces',
  headers: {
    Authorization: 'Basic ' + Buffer.from('admin@example.com:password').toString('base64'),
  },
})

const sdk = new NodeSDK({
  serviceName: 'my-api',
  traceExporter,
  instrumentations: [
    getNodeAutoInstrumentations({
      '@opentelemetry/instrumentation-http': { enabled: true },
      '@opentelemetry/instrumentation-express': { enabled: true },
      '@opentelemetry/instrumentation-pg': { enabled: true },
    }),
  ],
})

sdk.start()

Once running, every HTTP request, database query, and external API call generates a trace visible in OpenObserve's trace viewer. You can correlate traces with logs using the trace_id field — click a log line and jump to its trace.


Data Retention and Storage Management

OpenObserve lets you configure retention per stream:

# Set 30-day retention via API
curl -X PUT http://localhost:5080/api/default/streams/logs/settings \
  -H "Authorization: Basic BASE64(email:password)" \
  -H "Content-Type: application/json" \
  -d '{
    "data_retention": 30,
    "max_query_range": 7
  }'

For S3 storage, OpenObserve automatically compacts old data into Parquet files and applies lifecycle rules. A typical production setup:

  • Hot tier (last 7 days): local SSD for fast queries
  • Cold tier (7-90 days): S3 or MinIO (cheap object storage)
  • Archive (90+ days): Glacier or Backblaze B2 deep archive

For most self-hosters, a simple local disk with 30-day retention is sufficient.


Backup and Disaster Recovery

Since OpenObserve stores data as files (Parquet + WAL), backup is straightforward:

#!/bin/bash
# backup-openobserve.sh — daily backup cron

BACKUP_DATE=$(date +%Y%m%d)
BACKUP_DIR="/backups/openobserve/$BACKUP_DATE"

# Stop ingestion briefly for a clean snapshot (optional — OpenObserve tolerates
# concurrent writes, but a stopped copy is unambiguous)
docker compose stop openobserve

# Rsync data to the backup location. Adjust the source path to wherever your
# data lives — the named volume from the compose file above resolves to
# /var/lib/docker/volumes/<project>_openobserve_data/_data on the host.
rsync -av /opt/openobserve/data/ $BACKUP_DIR/

docker compose start openobserve

# Upload to S3/B2
rclone copy $BACKUP_DIR b2:my-backups/openobserve/$BACKUP_DATE

# Cleanup backups older than 7 days locally
find /backups/openobserve -type d -mtime +7 -exec rm -rf {} +

echo "Backup complete: $BACKUP_DIR"

Methodology

  • GitHub stars and community data from github.com/openobserve/openobserve, March 2026
  • Storage efficiency comparisons from OpenObserve documentation and community benchmarks
  • Pricing data from Datadog pricing page (datadoghq.com/pricing), March 2026
  • OpenObserve version: latest (check GitHub releases for current version)

Why Self-Host OpenObserve?

The case for self-hosting OpenObserve comes down to three practical factors: data ownership, cost at scale, and operational control.

Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting OpenObserve means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.

Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.

Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.

The honest tradeoff: you're responsible for updates, backups, and availability. For teams running any production workloads, this is familiar territory. For individuals, the learning curve is real but the tooling (Docker, Caddy, automated backups) is well-documented and widely supported.

Server Requirements and Sizing

Before deploying OpenObserve, assess your server capacity against expected workload.

Minimum viable setup: A 1 vCPU, 1GB RAM VPS with 20GB SSD is sufficient for personal use or small teams. Most consumer VPS providers — Hetzner, DigitalOcean, Linode, Vultr — offer machines in this range for $5-10/month. Hetzner offers excellent price-to-performance for European and US regions.

Recommended production setup: 2 vCPUs with 4GB RAM and 40GB SSD handles most medium deployments without resource contention. This gives OpenObserve headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.

Storage planning: The Docker volumes in this docker-compose.yml store all persistent OpenObserve data. Estimate your storage growth rate early — for data-intensive tools, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.
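A rough sizing formula is daily ingest × retention ÷ compression ratio. The 10x compression figure below is an assumption in the ballpark of what columnar Parquet typically achieves — measure your own streams before trusting it:

```shell
DAILY_GB=10          # raw log volume per day
RETENTION_DAYS=30
COMPRESSION=10       # assumed Parquet compression ratio — measure yours
NEEDED_GB=$(( DAILY_GB * RETENTION_DAYS / COMPRESSION ))
echo "~${NEEDED_GB} GB on disk for ${RETENTION_DAYS}-day retention"   # ~30 GB
```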

Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.

Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.
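Concretely, the port mappings in the compose file above can be bound to the loopback interface so only the reverse proxy on the same host can reach them (the Fluent Bit sidecar still connects over the internal Docker network):

```yaml
services:
  openobserve:
    ports:
      - "127.0.0.1:5080:5080"   # UI/API: reachable only from this host
      - "127.0.0.1:5081:5081"   # gRPC: likewise
```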

Backup Strategy and Restore Testing

Running OpenObserve without a tested backup strategy is an unacceptable availability risk. Docker volumes are not automatically backed up — if you delete a volume or the host fails, data is gone with no recovery path.

What to back up: The named Docker volumes containing OpenObserve's data (database files, user uploads, application state), your docker-compose.yml and any customized configuration files, and .env files containing secrets.

Backup approach: For simple setups, stop the container, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or database dump commands (PostgreSQL pg_dump, SQLite .backup, MySQL mysqldump) that produce consistent backups without downtime.

For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.

Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.

Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your OpenObserve backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.
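A minimal drill can be scripted. This sketch assumes directory-per-date backups like the rsync script earlier; it copies a backup into a scratch directory and refuses an empty restore:

```shell
# Copy a dated backup into a scratch directory and verify it is non-empty
# before pointing a throwaway Compose stack at it. Prints the restore path.
restore_and_verify() {
  local backup_dir="$1"
  local restore_dir
  restore_dir=$(mktemp -d) || return 1
  cp -a "$backup_dir"/. "$restore_dir"/ || return 1
  # Refuse an empty restore: at least one file must be present.
  [ -n "$(find "$restore_dir" -type f | head -n 1)" ] || return 1
  printf '%s\n' "$restore_dir"
}
```

Usage might look like `dir=$(restore_and_verify /backups/openobserve/20260301)` (path is an example), then mount `$dir` into a test stack on different ports and spot-check a few queries.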

Security Hardening

Self-hosting means you are responsible for OpenObserve's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.

Always use a reverse proxy: Never expose OpenObserve's internal port directly to the internet. Bind the port mappings in docker-compose.yml to 127.0.0.1 so only the reverse proxy can reach them, and let Caddy or Nginx terminate HTTPS — direct HTTP access transmits credentials in plaintext. A reverse proxy also centralizes TLS management, rate limiting, and access logging.

Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.

Firewall configuration:

ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.

Network isolation: Docker Compose named networks keep OpenObserve's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.
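A sketch of that isolation in Compose (service and network names here are illustrative, not part of the OpenObserve setup above):

```yaml
networks:
  frontend:   # reverse proxy <-> application
  backend:    # application <-> database only

services:
  app:
    networks: [frontend, backend]
  db:
    networks: [backend]   # invisible to anything on the frontend network
```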

VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.

Update discipline: Subscribe to OpenObserve's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.

Troubleshooting Common Issues

Container exits immediately or won't start

Check logs first — they almost always explain the failure:

docker compose logs -f openobserve

Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change OpenObserve's port mapping in docker-compose.yml.

Cannot reach the web interface

Work through this checklist:

  1. Confirm the container is running: docker compose ps
  2. Test locally on the server: curl -I http://localhost:PORT
  3. If local access works but external doesn't, check your firewall: ufw status
  4. If using a reverse proxy, verify it's running and the config is valid: caddy validate --config /etc/caddy/Caddyfile

Permission errors on volume mounts

Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the tool's documentation for its expected UID, and apply correct ownership:

chown -R 1000:1000 /var/lib/docker/volumes/your_volume/_data

High resource usage over time

Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbounded cache, or accumulated data that needs pruning. Check current usage with docker stats openobserve. Add resource limits in docker-compose.yml to prevent one container from starving others. For ongoing visibility into resource trends, deploy Prometheus + Grafana or Netdata.

Data disappears after container restart

Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.

Keeping OpenObserve Updated

OpenObserve follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:

docker compose pull          # Download updated images
docker compose up -d         # Restart with new images
docker image prune -f        # Remove old image layers (optional)

Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.

Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
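For example (the tag shown is a placeholder — pick the current release from the GitHub releases page):

```yaml
services:
  openobserve:
    # Pinned release instead of :latest; update this line deliberately.
    image: public.ecr.aws/zinclabs/openobserve:v0.10.0
```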

Post-update verification: After updating, confirm OpenObserve is functioning correctly. Most services expose a /health endpoint that returns HTTP 200 — curl it from the server or monitor it with your uptime tool.
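A smoke test along those lines (the `/healthz` path is an assumption — substitute whatever health endpoint your OpenObserve version exposes):

```shell
# check_health: healthy when the endpoint answered HTTP 200
check_health() {
  [ "$1" = "200" ]
}

# `|| true` keeps a transient network failure from aborting a `set -e` script.
status=$(curl -s -o /dev/null -w '%{http_code}' 'http://localhost:5080/healthz' || true)
if check_health "$status"; then
  echo "OpenObserve healthy"
else
  echo "health check failed: HTTP $status" >&2
fi
```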


Explore more open source Datadog alternatives on OSSAlt — community ratings, self-hosting difficulty, and feature comparisons.

Related: Best Open Source Alternatives to Datadog 2026 · Grafana + Prometheus + Loki: Self-Hosted Observability Stack 2026 · How to Self-Host Prometheus + Grafana 2026

