OMV Configuration Guide for Docker Swarm Integration

This guide outlines the setup for an OpenMediaVault (OMV) virtual machine and its integration with a Docker Swarm cluster for providing network storage to services like Jellyfin, Nextcloud, Immich, and others.


1. OMV Virtual Machine Configuration

The OMV instance is configured as a virtual machine with the following specifications:

  • RAM: 2-4 GB
  • CPU: 2 Cores
  • System Storage: 32 GB
  • Data Storage: A 512GB SATA SSD is passed through directly from the Proxmox host. This SSD is dedicated to network shares.
  • Network: Static IP address 192.168.1.70 on the 192.168.1.0/24 subnet
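
For reference, passing the data SSD through to the VM is typically done on the Proxmox host with qm set; the VM ID (100) and the disk identifier below are placeholders for illustration, so substitute the values from your own host:

# On the Proxmox host: find the SSD's stable identifier, then attach it to the OMV VM
ls -l /dev/disk/by-id/ | grep ata
qm set 100 -scsi1 /dev/disk/by-id/ata-YOUR_SSD_ID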

2. Network Share Setup in OMV

The primary purpose of this OMV instance is to serve files to other applications and services on the network, particularly Docker Swarm containers.

Shared Folders Overview

The following shared folders should be created in OMV (via Storage → Shared Folders):

| Folder Name | Purpose | Protocol | Permissions |
|---|---|---|---|
| Media | Media files for Jellyfin | SMB | swarm-user: RW |
| ImmichUploads | Photo uploads for Immich | NFS | UID 999: RW |
| TraefikLetsEncrypt | SSL certificates for Traefik | NFS | Root: RW |
| ImmichDB | Immich PostgreSQL database | NFS | Root: RW |
| NextcloudDB | Nextcloud PostgreSQL database | NFS | Root: RW |
| NextcloudApps | Nextcloud custom apps | NFS | www-data (33): RW |
| NextcloudConfig | Nextcloud configuration | NFS | www-data (33): RW |
| NextcloudData | Nextcloud user data | NFS | www-data (33): RW |

SMB (Server Message Block) Shares

SMB is used for file-based media access, particularly for shares that need to be reachable from multiple platforms (Windows, Linux, macOS).

Media Share

  • Shared Folder: Media
  • Purpose: Stores media files for Jellyfin and other media servers
  • SMB Configuration:
    • Share Name: Media
    • Public: No (authentication required)
    • Browseable: Yes
    • Read-only: No
    • Guest Access: No
    • Permissions: swarm-user has read/write access
  • Path on OMV: /srv/dev-disk-by-uuid-fd2daa6f-bd75-4ac1-9c4c-9e4d4b84d845/Media
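
Once configured, the share can be checked from any Linux client with smbclient (from the smbclient package), using the swarm-user account described above:

# List the shares OMV exposes, then browse the Media share
smbclient -L //192.168.1.70 -U swarm-user
smbclient //192.168.1.70/Media -U swarm-user -c 'ls'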

NFS (Network File System) Shares

NFS is used for services that need native POSIX permissions and ownership handling, or better performance for containerized Linux applications.

Nextcloud Shares

  • Shared Folders: NextcloudApps, NextcloudConfig, NextcloudData
  • Purpose: Application files, configuration, and user data for Nextcloud
  • NFS Configuration:
    • Client: 192.168.1.0/24 (Accessible to the entire subnet)
    • Privilege: Read/Write
    • Extra Options: all_squash,anongid=33,anonuid=33,sync,no_subtree_check
      • all_squash: Maps all client UIDs/GIDs to anonymous user
      • anonuid=33,anongid=33: Maps to www-data user/group (Nextcloud/Apache/Nginx)
      • sync: Ensures data is written to disk before acknowledging (data integrity)
      • no_subtree_check: Improves reliability for directory exports
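
With these settings, the export OMV generates should look roughly like the line below; OMV publishes NFS shares under /export, but the exact path and option order may differ on your install:

/export/NextcloudData 192.168.1.0/24(rw,all_squash,anonuid=33,anongid=33,sync,no_subtree_check)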

Database Shares

  • Shared Folders: ImmichDB, NextcloudDB
  • Purpose: PostgreSQL database storage for Immich and Nextcloud
  • NFS Configuration:
    • Client: 192.168.1.0/24
    • Privilege: Read/Write
    • Extra Options: rw,sync,no_subtree_check,no_root_squash
      • no_root_squash: Allows root on client to be treated as root on server (needed for database operations)
      • sync: Critical for database integrity

Application Data Shares

  • Shared Folder: ImmichUploads

  • Purpose: Photo and video uploads for Immich

  • NFS Configuration:

    • Client: 192.168.1.0/24
    • Privilege: Read/Write
    • Extra Options: rw,sync,no_subtree_check,all_squash,anonuid=999,anongid=999
      • Maps to Immich's internal user (typically UID/GID 999)
  • Shared Folder: TraefikLetsEncrypt

  • Purpose: SSL certificate storage for Traefik reverse proxy

  • NFS Configuration:

    • Client: 192.168.1.0/24
    • Privilege: Read/Write
    • Extra Options: rw,sync,no_subtree_check,no_root_squash
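
All of the exports described above can be listed from any machine with the NFS client utilities installed, which is a quick sanity check before configuring the Docker nodes:

# List the exports OMV offers to this client
showmount -e 192.168.1.70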

3. Integrating OMV Shares with Docker Swarm Services

To use the OMV network shares with Docker Swarm services, the shares must be mounted on the Docker worker nodes where the service containers will run. The mounted path on the node is then passed into the container as a volume.

Prerequisites on Docker Nodes

All Docker nodes that will mount shares need the appropriate client utilities installed:

# For SMB shares
sudo apt-get update
sudo apt-get install cifs-utils

# For NFS shares
sudo apt-get update
sudo apt-get install nfs-common
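
Before proceeding, a quick check that the mount helpers are actually present (both simply print their version and exit):

# Verify the NFS and CIFS mount helpers are installed
sudo mount.nfs -V
sudo mount.cifs -V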

Example 1: Jellyfin Media Access via SMB

Jellyfin, running as a Docker Swarm service, requires access to the media files stored on the OMV Media share.

Step 1: Create SMB Credentials File

Create a credentials file on the Docker node to avoid storing passwords in /etc/fstab:

# Create credentials file
sudo nano /root/.smbcredentials

Add the following content:

username=swarm-user
password=YOUR_PASSWORD_HERE

Secure the file:

sudo chmod 600 /root/.smbcredentials

Step 2: Mount the SMB Share on the Docker Node

# Create mount point
sudo mkdir -p /mnt/media

# Test the mount first
sudo mount -t cifs //192.168.1.70/Media /mnt/media -o credentials=/root/.smbcredentials,iocharset=utf8,vers=3.0

# Verify it works
ls -la /mnt/media

# Unmount test
sudo umount /mnt/media

Step 3: Add Permanent Mount to /etc/fstab

sudo nano /etc/fstab

Add this line:

//192.168.1.70/Media /mnt/media cifs credentials=/root/.smbcredentials,iocharset=utf8,vers=3.0,file_mode=0755,dir_mode=0755 0 0

Mount all entries:

sudo mount -a
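
To confirm the fstab entry is picked up correctly, the mount can be inspected with findmnt (part of util-linux, installed by default on Debian-based nodes):

# Show how the share is now mounted, including the effective cifs options
findmnt /mnt/media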

Step 4: Configure the Jellyfin Docker Swarm Service

In the Docker Compose YAML file for your Jellyfin service:

services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - /mnt/media:/media:ro  # Read-only access to prevent accidental deletion
    deploy:
      placement:
        constraints:
          - node.labels.media==true  # Deploy only on nodes with media mount
    # ... other configurations
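
The node.labels.media constraint assumes the label has already been applied to every node that carries the /mnt/media mount. As a sketch, labels are managed from a manager node (replace worker-1 with your node's hostname):

# On a Swarm manager: label the node(s) that have the SMB share mounted
docker node update --label-add media=true worker-1

# Confirm the label was applied
docker node inspect worker-1 --format '{{ .Spec.Labels }}'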

Example 2: Nextcloud Data Access via NFS

Nextcloud, running as a Docker Swarm service, requires access to its application, configuration, and data files stored on the OMV NFS shares.

Step 1: Create Mount Points

sudo mkdir -p /mnt/nextcloud/{apps,config,data}

Step 2: Test NFS Mounts

# Test each mount
sudo mount -t nfs 192.168.1.70:/NextcloudApps /mnt/nextcloud/apps -o vers=4.2
sudo mount -t nfs 192.168.1.70:/NextcloudConfig /mnt/nextcloud/config -o vers=4.2
sudo mount -t nfs 192.168.1.70:/NextcloudData /mnt/nextcloud/data -o vers=4.2

# Verify
ls -la /mnt/nextcloud/apps
ls -la /mnt/nextcloud/config
ls -la /mnt/nextcloud/data

# Unmount tests
sudo umount /mnt/nextcloud/apps
sudo umount /mnt/nextcloud/config
sudo umount /mnt/nextcloud/data

Step 3: Add Permanent Mounts to /etc/fstab

sudo nano /etc/fstab

Add these lines:

192.168.1.70:/NextcloudApps /mnt/nextcloud/apps nfs auto,nofail,noatime,rw,vers=4.2 0 0
192.168.1.70:/NextcloudConfig /mnt/nextcloud/config nfs auto,nofail,noatime,rw,vers=4.2 0 0
192.168.1.70:/NextcloudData /mnt/nextcloud/data nfs auto,nofail,noatime,rw,vers=4.2 0 0

Mount Options Explained:

  • auto: Mount at boot
  • nofail: Don't fail boot if mount fails
  • noatime: Don't update access times (performance)
  • rw: Read-write
  • vers=4.2: Use NFSv4.2 (better performance and security)
  • Note: the mapping of every client user to www-data (all_squash,anonuid=33,anongid=33) is applied server-side by the OMV export options from Section 2; it is not a valid client mount option and is therefore not repeated here

Mount all entries:

sudo mount -a

Step 4: Configure the Nextcloud Docker Swarm Service

services:
  nextcloud:
    image: nextcloud:latest
    volumes:
      - /mnt/nextcloud/apps:/var/www/html/custom_apps
      - /mnt/nextcloud/config:/var/www/html/config
      - /mnt/nextcloud/data:/var/www/html/data
    deploy:
      placement:
        constraints:
          - node.labels.nextcloud==true
    # ... other configurations
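
After labeling the target node (the same docker node update pattern as in Example 1, here with nextcloud=true), the service is deployed as a stack; the nextcloud.yml file name and stack name are just examples:

# Deploy (or update) the Nextcloud stack from a manager node
docker stack deploy -c nextcloud.yml nextcloud

# Check that the service landed on the labeled node
docker service ps nextcloud_nextcloud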

Example 3: Database Storage via NFS

For stateful services like databases, storing their data on a resilient network share is critical for data integrity and high availability.

Step 1: Create Mount Points

sudo mkdir -p /mnt/database/{immich,nextcloud}

Step 2: Test NFS Mounts

# Test mounts
sudo mount -t nfs 192.168.1.70:/ImmichDB /mnt/database/immich -o vers=4.2
sudo mount -t nfs 192.168.1.70:/NextcloudDB /mnt/database/nextcloud -o vers=4.2

# Verify
ls -la /mnt/database/immich
ls -la /mnt/database/nextcloud

# Unmount tests
sudo umount /mnt/database/immich
sudo umount /mnt/database/nextcloud

Step 3: Add Permanent Mounts to /etc/fstab

sudo nano /etc/fstab

Add these lines:

192.168.1.70:/ImmichDB /mnt/database/immich nfs auto,nofail,noatime,rw,vers=4.2,sync 0 0
192.168.1.70:/NextcloudDB /mnt/database/nextcloud nfs auto,nofail,noatime,rw,vers=4.2,sync 0 0

Critical for Databases:

  • sync: Ensures writes are committed to disk before acknowledgment (prevents data corruption)
  • no_root_squash: Set on the OMV export (see Section 2) so that database containers running as root keep proper ownership; it is an export-side option and is not repeated in the client mount options

Mount all entries:

sudo mount -a

Step 4: Configure Database Docker Swarm Services

Immich Database:

services:
  immich-db:
    image: tensorchord/pgvecto-rs:pg14-v0.2.0
    volumes:
      - /mnt/database/immich:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: immich
      POSTGRES_DB: immich
    deploy:
      placement:
        constraints:
          - node.labels.database==true

Nextcloud Database:

services:
  nextcloud-db:
    image: postgres:15-alpine
    volumes:
      - /mnt/database/nextcloud:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: nextcloud
      POSTGRES_DB: nextcloud
    deploy:
      placement:
        constraints:
          - node.labels.database==true
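
After the first start it is worth confirming that PostgreSQL initialized its cluster on the NFS mount rather than on local storage; a freshly initialized data directory contains a PG_VERSION file, so a quick check on the labeled node looks like this:

# On the node holding the mount: the data directory should now contain PostgreSQL files
sudo ls /mnt/database/nextcloud
sudo cat /mnt/database/nextcloud/PG_VERSION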

Example 4: Immich Upload Storage via NFS

# Create mount point
sudo mkdir -p /mnt/immich/uploads

# Add to /etc/fstab
192.168.1.70:/ImmichUploads /mnt/immich/uploads nfs auto,nofail,noatime,rw,vers=4.2,sync 0 0
# (mapping to UID/GID 999 is handled by the all_squash/anonuid/anongid options on the OMV export)

# Mount
sudo mount -a

Docker Service:

services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    volumes:
      - /mnt/immich/uploads:/usr/src/app/upload
    # ... other configurations
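
Because the export maps every client user to UID/GID 999, a quick way to confirm Immich will be able to write is to test as that UID from the Docker node (sudo accepts a numeric UID with the # prefix):

# Write and remove a test file as UID 999, the identity the export squashes clients to
sudo -u '#999' touch /mnt/immich/uploads/.write-test
sudo -u '#999' rm /mnt/immich/uploads/.write-test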

Example 5: Traefik Certificate Storage via NFS

# Create mount point
sudo mkdir -p /mnt/traefik/letsencrypt

# Add to /etc/fstab
192.168.1.70:/TraefikLetsEncrypt /mnt/traefik/letsencrypt nfs auto,nofail,noatime,rw,vers=4.2,sync 0 0

# Mount
sudo mount -a

Docker Service:

services:
  traefik:
    image: traefik:latest
    volumes:
      - /mnt/traefik/letsencrypt:/letsencrypt
    # ... other configurations
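
Traefik keeps its ACME data in a single JSON file and refuses to use it if the permissions are too open, so it helps to pre-create the file on the share with the expected 600 mode; the acme.json name must match whatever your Traefik configuration points at:

# Pre-create the certificate store with the permissions Traefik expects
sudo touch /mnt/traefik/letsencrypt/acme.json
sudo chmod 600 /mnt/traefik/letsencrypt/acme.json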

4. Best Practices and Recommendations

Security

  1. Use dedicated service accounts with minimal required permissions
  2. Secure credential files with chmod 600
  3. Limit NFS exports to specific subnets or IPs when possible
  4. Use NFSv4.2 for improved security and performance

Reliability

  1. Use nofail in fstab to prevent boot failures if NFS is unavailable
  2. Test mounts manually before adding to fstab
  3. Monitor NFS/SMB services on OMV server
  4. Regular backups of configuration and data

Performance

  1. Use NFS for containerized applications (better performance than SMB)
  2. Use noatime to reduce write operations
  3. Use sync for databases to ensure data integrity
  4. Consider async for media files if performance is critical (with backup strategy)

Verification Commands

# Check all mounts
mount | grep -E 'nfs|cifs'

# Check NFS statistics
nfsstat -m

# Test write permissions
touch /mnt/media/test.txt && rm /mnt/media/test.txt

# Check OMV exports (from OMV server)
sudo exportfs -v

# Check SMB status (from OMV server)
sudo smbstatus

5. Troubleshooting

Issue: Mount hangs at boot

Solution: Add nofail option to fstab entries

Issue: Permission denied errors

Solution:

  • Verify UID/GID mappings match between NFS options and container user
  • Check folder permissions on OMV server
  • Ensure no_root_squash is set for services requiring root access
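
A quick way to compare the identity that owns the files with the identity the container runs as (numeric IDs are what matter, since user names may not exist on every host):

# On the Docker node: show numeric owner, group and mode of the mounted path
stat -c '%u:%g %a %n' /mnt/nextcloud/data

# Inside the running container (container ID is a placeholder): show its effective UID/GID
docker exec <container-id> id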

Issue: Stale NFS handles

Solution:

# Unmount forcefully
sudo umount -f /mnt/path

# Or lazy unmount
sudo umount -l /mnt/path

# Restart NFS client
sudo systemctl restart nfs-client.target

Issue: SMB connection refused

Solution:

  • Verify SMB credentials
  • Check SMB service status on OMV: sudo systemctl status smbd
  • Verify firewall rules allow SMB traffic (ports 445, 139)
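
Basic reachability and configuration checks (nc is provided by the netcat package on the Docker node; testparm ships with Samba on the OMV server):

# From the Docker node: confirm the SMB port is reachable
nc -zv 192.168.1.70 445

# On the OMV server: validate the generated Samba configuration
sudo testparm -s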

Your OMV server is now fully integrated with your Docker Swarm cluster, providing robust, centralized storage for all your containerized services.