
Self-Host Hoarder: AI-Powered Bookmark Manager 2026

OSSAlt Team
hoarder · bookmarks · ai · self-hosting · docker · ollama · 2026

TL;DR

Hoarder (AGPL 3.0, ~10K GitHub stars, Next.js) is a self-hosted bookmark manager with AI superpowers. Save a URL and Hoarder automatically fetches the full page, takes a screenshot, extracts text, and uses a local AI (Ollama) to generate tags — zero manual organization required. Raindrop.io Pro ($3/month) offers similar AI features but sends your data to their cloud. Hoarder runs locally with complete privacy.

Key Takeaways

  • Hoarder: AGPL 3.0, ~10K stars, Next.js — AI-first bookmark manager with automatic tagging
  • Ollama integration: Uses local LLMs (llama3.2, mistral, etc.) for private AI tagging
  • Full-page archiving: Saves snapshots of pages so they're available even if the original goes down
  • Screenshots: Visual thumbnails for every bookmark — makes browsing bookmarks fast
  • Browser extensions: Chrome and Firefox — one-click saving with instant AI processing
  • Mobile apps: iOS and Android — save links from any app via share sheet

Hoarder vs Linkding vs Raindrop.io

Feature             Hoarder         Linkding   Raindrop.io Pro
AI auto-tagging     Yes (Ollama)    No         Yes (cloud)
Full-page archive   Yes             No         Yes
Screenshots         Yes             No         Yes
Privacy             Local           Local      Cloud
Setup complexity    Medium          Simple     None
Mobile apps         Yes             PWA        Yes
Resource usage      ~500MB+         ~50MB      N/A
Price               Free            Free       $3/mo

Part 1: Docker Setup

Hoarder requires three services: the app itself, MeiliSearch (fast full-text search), and Chromium (for page archiving and screenshots). The Ollama container below is optional; it enables local AI tagging.

# docker-compose.yml
services:
  web:
    image: ghcr.io/hoarder-app/hoarder:latest
    container_name: hoarder
    restart: unless-stopped
    ports:
      - "127.0.0.1:3000:3000"   # bind to localhost only; Caddy handles public HTTPS
    volumes:
      - hoarder_data:/data
    environment:
      # Required:
      NEXTAUTH_SECRET: "${NEXTAUTH_SECRET}"    # openssl rand -base64 36
      NEXTAUTH_URL: "https://hoarder.yourdomain.com"

      # MeiliSearch:
      MEILI_ADDR: "http://meilisearch:7700"
      MEILI_MASTER_KEY: "${MEILI_MASTER_KEY}"  # openssl rand -base64 24

      # Chromium for archiving:
      BROWSER_WEB_URL: "http://chrome:9222"

      # Data directory:
      DATA_DIR: /data

      # Optional: Ollama for AI tagging:
      OLLAMA_BASE_URL: "http://ollama:11434"
      INFERENCE_TEXT_MODEL: "llama3.2"

      # Optional: disable signup after first user:
      # DISABLE_SIGNUPS: "true"

    depends_on:
      - meilisearch
      - chrome

  meilisearch:
    image: getmeili/meilisearch:v1.6
    container_name: hoarder-meilisearch
    restart: unless-stopped
    volumes:
      - meilisearch_data:/meili_data
    environment:
      MEILI_MASTER_KEY: "${MEILI_MASTER_KEY}"
      MEILI_NO_ANALYTICS: "true"

  chrome:
    image: gcr.io/zenika-hub/alpine-chrome:latest
    container_name: hoarder-chrome
    restart: unless-stopped
    command: >
      --no-sandbox
      --disable-gpu
      --disable-dev-shm-usage
      --remote-debugging-address=0.0.0.0
      --remote-debugging-port=9222
      --hide-scrollbars

  # Optional: local AI with Ollama
  ollama:
    image: ollama/ollama:latest
    container_name: hoarder-ollama
    restart: unless-stopped
    volumes:
      - ollama_data:/root/.ollama
    # Uncomment for GPU:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: 1
    #           capabilities: [gpu]

volumes:
  hoarder_data:
  meilisearch_data:
  ollama_data:

# Generate secrets:
echo "NEXTAUTH_SECRET=$(openssl rand -base64 36)" >> .env
echo "MEILI_MASTER_KEY=$(openssl rand -base64 24)" >> .env

docker compose up -d

# Pull the AI model (after Ollama starts):
docker exec hoarder-ollama ollama pull llama3.2
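
To confirm the pull succeeded, list the models Ollama has available:

# Verify the model is installed:
docker exec hoarder-ollama ollama list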

Part 2: HTTPS with Caddy

hoarder.yourdomain.com {
    reverse_proxy localhost:3000
}
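
After adding the site block, validate the config and reload; paths assume the standard Debian/Ubuntu Caddy package:

caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy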

Visit https://hoarder.yourdomain.com → create your account.


Part 3: Browser Extensions

Chrome

  1. Install Hoarder Extension
  2. Extension → Options:
    • Server URL: https://hoarder.yourdomain.com
    • Click Login → authenticates via your account
  3. Click extension icon on any page → Save

Firefox

  1. Install from Mozilla Add-ons: search "Hoarder"
  2. Same configuration

When you save a link:

  1. Hoarder immediately stores the URL
  2. Chromium fetches the full page + screenshot (background)
  3. Ollama generates tags from the content (background)
  4. Full text becomes searchable

Part 4: Mobile Apps

iOS

  1. Install Hoarder for iOS
  2. Server: https://hoarder.yourdomain.com
  3. Login with your credentials
  4. Share Sheet → Hoarder — save any link from Safari, Twitter, etc.

Android

  1. Install from Play Store: search "Hoarder"
  2. Same setup
  3. Share links from any app directly to Hoarder

Part 5: AI Tagging Configuration

Using Ollama (local, private)

# In docker-compose.yml:
environment:
  OLLAMA_BASE_URL: "http://ollama:11434"
  INFERENCE_TEXT_MODEL: "llama3.2"    # or mistral, phi3, etc.
# Available models for tagging (CPU-friendly):
docker exec hoarder-ollama ollama pull llama3.2       # 2GB, good quality
docker exec hoarder-ollama ollama pull phi3:mini       # 2.3GB, fast
docker exec hoarder-ollama ollama pull mistral:7b      # 4.1GB, best quality

# Switch models: update INFERENCE_TEXT_MODEL in docker-compose.yml, then
# recreate the web container so it picks up the new environment:
docker compose up -d web

Using OpenAI API (if you prefer cloud)

environment:
  OPENAI_API_KEY: "sk-..."
  OPENAI_BASE_URL: "https://api.openai.com/v1"    # or any OpenAI-compatible API
  INFERENCE_TEXT_MODEL: "gpt-4o-mini"
  INFERENCE_IMAGE_MODEL: "gpt-4o-mini"

Prompt customization

Hoarder uses a default system prompt for tag generation. Override in settings:

# Settings → AI → Custom Prompt
You are a bookmark tagging assistant. Generate 3-5 short, lowercase tags
for the provided content. Focus on: topic, type (article/tool/video/paper),
and primary programming language or technology stack.
Return only comma-separated tags.

Part 6: Lists and Organization

Lists (manual organization)

Beyond AI tags, organize bookmarks into lists:

  1. Lists → + New List: Reading Queue, Work Research, Tools to Try
  2. Drag bookmarks into lists, or save directly to a list from the extension
  3. Lists are shareable — each gets a public URL

Filtering

# In the search bar:
#docker              — filter by tag
list:work            — filter by list
is:archived          — show archived
is:unread            — show unread

# Combine:
#kubernetes is:unread

Part 7: REST API

# Get API key:
# Settings → API Keys → Generate

API_KEY="your-api-key"
BASE="https://hoarder.yourdomain.com"

# List bookmarks:
curl "$BASE/api/v1/bookmarks" \
  -H "Authorization: Bearer $API_KEY" | jq '.bookmarks[].title'

# Add a bookmark:
curl -X POST "$BASE/api/v1/bookmarks" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "link",
    "url": "https://example.com",
    "title": "Optional override title"
  }'

# Get bookmark with AI tags:
curl "$BASE/api/v1/bookmarks/BOOKMARK_ID" \
  -H "Authorization: Bearer $API_KEY" | jq '{title: .title, tags: .tags}'

# Archive a bookmark:
curl -X PATCH "$BASE/api/v1/bookmarks/BOOKMARK_ID" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"archived": true}'

Part 8: Import from Other Services

Import from Pocket

  1. Pocket → Export → downloads ril_export.html
  2. Hoarder → Settings → Import → select Pocket HTML file
  3. All links import with original tags preserved

Import from Raindrop.io

  1. Raindrop.io → Export → CSV or HTML
  2. Hoarder → Settings → Import → select file

Import bookmarks from browser

  1. Chrome/Firefox: Bookmarks Manager → Export → HTML
  2. Hoarder → Settings → Import → select HTML file

All imported bookmarks are queued for AI processing automatically.


Resource Requirements

Component            Minimum RAM        Recommended RAM
Hoarder web          256MB              512MB
MeiliSearch          256MB              512MB
Chromium             512MB              1GB
Ollama (llama3.2)    4GB                8GB
Total                ~1GB (no Ollama)   ~6-10GB (with Ollama)

If RAM is limited, use OpenAI API instead of Ollama — it requires no local inference.


Maintenance

# Update:
docker compose pull
docker compose up -d

# Backup:
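# Note: volume names are prefixed with the compose project name (usually the
# directory holding docker-compose.yml); adjust "hoarder_" if yours differs.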
tar -czf hoarder-backup-$(date +%Y%m%d).tar.gz \
  $(docker volume inspect hoarder_hoarder_data --format '{{.Mountpoint}}')

# Backup MeiliSearch indexes:
tar -czf hoarder-search-$(date +%Y%m%d).tar.gz \
  $(docker volume inspect hoarder_meilisearch_data --format '{{.Mountpoint}}')

# Trigger re-processing of all bookmarks (after AI model change):
# Settings → Re-run AI tagging on all bookmarks

# Logs:
docker compose logs -f web
docker compose logs -f chrome

Why Self-Host Hoarder?

The case for self-hosting Hoarder comes down to three practical factors: data ownership, cost at scale, and operational control.

Data ownership is the fundamental argument. When you use a SaaS version of any tool, your data lives on someone else's infrastructure subject to their terms of service, their security practices, and their business continuity. If the vendor raises prices, gets acquired, changes API limits, or shuts down, you're left scrambling. Self-hosting Hoarder means your data and configuration stay on infrastructure you control — whether that's a VPS, a bare metal server, or a home lab.

Cost at scale matters once you move beyond individual use. Most SaaS equivalents charge per user or per data volume. A self-hosted instance on a $10-20/month VPS typically costs less than per-user SaaS pricing for teams of five or more — and the cost doesn't scale linearly with usage. One well-configured server handles dozens of users for a flat monthly fee.

Operational control is the third factor. The Docker Compose configuration above exposes every setting that commercial equivalents often hide behind enterprise plans: custom networking, environment variables, storage backends, and authentication integrations. You decide when to update, how to configure backups, and what access controls to apply.

The honest tradeoff: you're responsible for updates, backups, and availability. For teams running any production workloads, this is familiar territory. For individuals, the learning curve is real but the tooling (Docker, Caddy, automated backups) is well-documented and widely supported.

Server Requirements and Sizing

Before deploying Hoarder, assess your server capacity against expected workload.

Minimum viable setup: A 1 vCPU, 1GB RAM VPS with 20GB SSD is enough for personal use without Ollama; local AI tagging pushes the requirement to 6GB+ (see the resource table above). Most consumer VPS providers — Hetzner, DigitalOcean, Linode, Vultr — offer machines in this range for $5-10/month. Hetzner offers excellent price-to-performance in European and US regions.

Recommended production setup: 2 vCPUs with 4GB RAM and 40GB SSD handles most medium deployments without resource contention. This gives Hoarder headroom for background tasks, caching, and concurrent users while leaving capacity for other services on the same host.

Storage planning: The Docker volumes in this docker-compose.yml store all persistent Hoarder data. Estimate your storage growth rate early — for data-intensive tools, budget for 3-5x your initial estimate. Hetzner Cloud and Vultr both support online volume resizing without stopping your instance.

Operating system: Any modern 64-bit Linux distribution works. Ubuntu 22.04 LTS and Debian 12 are the most commonly tested configurations. Ensure Docker Engine 24.0+ and Docker Compose v2 are installed — verify with docker --version and docker compose version. Avoid Docker Desktop on production Linux servers; it adds virtualization overhead and behaves differently from Docker Engine in ways that cause subtle networking issues.

Network: Only ports 80 and 443 need to be publicly accessible when running behind a reverse proxy. Internal service ports should be bound to localhost only. A minimal UFW firewall that blocks all inbound traffic except SSH, HTTP, and HTTPS is the single most effective security measure for a self-hosted server.

Backup and Disaster Recovery

Running Hoarder without a tested backup strategy is an unacceptable data-loss risk. Docker volumes are not automatically backed up; if you delete a volume or the host fails, the data is gone with no recovery path.

What to back up: The named Docker volumes containing Hoarder's data (database files, user uploads, application state), your docker-compose.yml and any customized configuration files, and .env files containing secrets.

Backup approach: For simple setups, stop the container, archive the volume contents, then restart. For production environments where stopping causes disruption, use filesystem snapshots or database dump commands (PostgreSQL pg_dump, SQLite .backup, MySQL mysqldump) that produce consistent backups without downtime.
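
A minimal cold-backup sketch of the stop-archive-restart approach, assuming the compose project is named hoarder and a /backups directory exists:

# Cold backup: stop the app, archive the data volume, restart
cd /opt/hoarder    # assumption: wherever your docker-compose.yml lives
docker compose stop web
tar -czf /backups/hoarder-$(date +%Y%m%d).tar.gz \
  "$(docker volume inspect hoarder_hoarder_data --format '{{.Mountpoint}}')"
docker compose start web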

For a complete automated backup workflow that ships snapshots to S3-compatible object storage, see the Restic + Rclone backup guide. Restic handles deduplication and encryption; Rclone handles multi-destination uploads. The same setup works for any Docker volume.

Backup cadence: Daily backups to remote storage are a reasonable baseline for actively used tools. Use a 30-day retention window minimum — long enough to recover from mistakes discovered weeks later. For critical data, extend to 90 days and use a secondary destination.

Restore testing: A backup that has never been restored is a backup you cannot trust. Once a month, restore your Hoarder backup to a separate Docker Compose stack on different ports and verify the data is intact. This catches silent backup failures, script errors, and volume permission issues before they matter in a real recovery.
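
A sketch of that drill, assuming the archive from above; docker-compose.test.yml is a hypothetical copy of the stack remapped to spare ports:

# Restore drill: unpack the archive and verify contents before wiring it up
mkdir -p /tmp/hoarder-restore
tar -xzf hoarder-backup-YYYYMMDD.tar.gz -C /tmp/hoarder-restore
find /tmp/hoarder-restore -type d -name _data    # locate the extracted data directory
# Bind-mount that _data directory into the test stack, then:
#   docker compose -p hoarder-test -f docker-compose.test.yml up -d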

Security Hardening

Self-hosting means you are responsible for Hoarder's security posture. The Docker Compose setup provides a functional base; production deployments need additional hardening.

Always use a reverse proxy: Never expose Hoarder's internal port directly to the internet. The docker-compose.yml binds to localhost; Caddy or Nginx provides HTTPS termination. Direct HTTP access transmits credentials in plaintext. A reverse proxy also centralizes TLS management, rate limiting, and access logging.

Strong credentials: Change default passwords immediately after first login. For secrets in docker-compose environment variables, generate random values with openssl rand -base64 32 rather than reusing existing passwords.

Firewall configuration:

ufw default deny incoming
ufw allow 22/tcp
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

Internal service ports (databases, admin panels, internal APIs) should only be reachable from localhost or the Docker network, never directly from the internet.

Network isolation: Docker Compose named networks keep Hoarder's services isolated from other containers on the same host. Database containers should not share networks with containers that don't need direct database access.
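
A minimal compose fragment sketching that separation (merge into the file above if you adopt it; the internal flag blocks routing between that network and the outside world):

# Internal-only network so MeiliSearch is unreachable from outside Docker
networks:
  backend:
    internal: true

services:
  meilisearch:
    networks: [backend]
  web:
    networks: [backend, default]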

VPN access for sensitive services: For internal-only tools, restricting access to a VPN adds a strong second layer. Headscale is an open source Tailscale control server that puts your self-hosted stack behind a WireGuard mesh, eliminating public internet exposure for internal tools.

Update discipline: Subscribe to Hoarder's GitHub releases page to receive security advisory notifications. Schedule a monthly maintenance window to pull updated images. Running outdated container images is the most common cause of self-hosted service compromises.

Troubleshooting Common Issues

Container exits immediately or won't start

Check logs first — they almost always explain the failure:

docker compose logs -f web    # use the compose service name, not the container name

Common causes: a missing required environment variable, a port already in use, or a volume permission error. Port conflicts appear as bind: address already in use. Find the conflicting process with ss -tlpn | grep PORT and either stop it or change Hoarder's port mapping in docker-compose.yml.

Cannot reach the web interface

Work through this checklist:

  1. Confirm the container is running: docker compose ps
  2. Test locally on the server: curl -I http://localhost:3000
  3. If local access works but external doesn't, check your firewall: ufw status
  4. If using a reverse proxy, verify it's running and the config is valid: caddy validate --config /etc/caddy/Caddyfile

Permission errors on volume mounts

Some containers run as a non-root user. If the Docker volume is owned by root, the container process cannot write to it. Find the volume's host path with docker volume inspect VOLUME_NAME, check the tool's documentation for its expected UID, and apply correct ownership:

chown -R 1000:1000 /var/lib/docker/volumes/your_volume/_data

High resource usage over time

Memory or CPU growing continuously usually indicates unconfigured log rotation, an unbound cache, or accumulated data needing pruning. Check current usage with docker stats hoarder. Add resource limits in docker-compose.yml to prevent one container from starving others. For ongoing visibility into resource trends, deploy Prometheus + Grafana or Netdata.
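
For example, a hard cap on the Chromium service, usually the hungriest container in this stack (the value is illustrative; size it to your host):

# Fragment for docker-compose.yml: cap chrome's memory
  chrome:
    mem_limit: 1g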

Data disappears after container restart

Data stored in the container's writable layer — rather than a named volume — is lost when the container is removed or recreated. This happens when the volume mount path in docker-compose.yml doesn't match where the application writes data. Verify mount paths against the tool's documentation and correct the mapping. Named volumes persist across container removal; only docker compose down -v deletes them.
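
To see exactly which host paths a container persists, inspect its mounts and compare them with the app's documented data directory (/data for Hoarder):

# List the container's volume mounts (jq is optional, for readability):
docker inspect hoarder --format '{{json .Mounts}}' | jq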

Keeping Hoarder Updated

Hoarder follows a regular release cadence. Staying current matters for security patches and compatibility. The update process with Docker Compose is straightforward:

docker compose pull          # Download updated images
docker compose up -d         # Restart with new images
docker image prune -f        # Remove old image layers (optional)

Read the changelog before major version updates. Some releases include database migrations or breaking configuration changes. For major version bumps, test in a staging environment first — run a copy of the service on different ports with the same volume data to validate the migration before touching production.

Version pinning: For stability, pin to a specific image tag in docker-compose.yml instead of latest. Update deliberately after reviewing the changelog. This trades automatic patch delivery for predictable behavior — the right call for business-critical services.
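
The pin itself is one line in docker-compose.yml; substitute a tag you have reviewed on the releases page:

    image: ghcr.io/hoarder-app/hoarder:<version>    # instead of :latest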

Post-update verification: After updating, confirm Hoarder is functioning correctly. Most services expose a /health endpoint that returns HTTP 200 — curl it from the server or monitor it with your uptime tool.
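
A sketch of that probe; the /api/health path is an assumption, so confirm the exact endpoint in Hoarder's docs before wiring it into monitoring:

# Hypothetical health probe (endpoint path not confirmed):
curl -fsS https://hoarder.yourdomain.com/api/health && echo "healthy"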


See also: Linkding — simpler, minimal bookmark manager with lower resource requirements

See all open source productivity tools at OSSAlt.com/categories/productivity.
