# Homelab Optimization Recommendations - Refined
This document explains the rationale for organizing your Docker Compose services into `standalone` and `swarm` categories within the `optimized` directory. The primary goals are to **reduce duplication**, **choose the optimal deployment method for each service**, and **streamline your homelab configuration** for better resource utilization (especially with Podman for standalone services) and efficient Docker Swarm orchestration.
## General Principles
* **Minimization:** Only include necessary `docker-compose.yml` files, eliminating redundancy.
* **Optimal Placement:** Services are placed in `standalone` only if they have strict host-level dependencies (`network_mode: host`, device access, `privileged` mode) or are single-instance infrastructure components that do not benefit from Swarm orchestration. All other suitable services move to `swarm`.
* **Consolidation:** Where a service appeared in multiple stacks, the most comprehensive, up-to-date, or Swarm-optimized version was selected.
* **Podman Compatibility:** Standalone services are documented with `podman-compose` or `podman run` instructions to ease the transition to Podman.

---
## Standalone Services
These services have been moved to `optimized/standalone/` due to their intrinsic host-specific requirements or their nature as single-instance infrastructure components. Each subdirectory contains the original `docker-compose.yml` and a `README.md` with instructions for running with `podman-compose` or `podman run`.
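As a sketch of what each standalone `README.md` describes, a service can be brought up with `podman-compose` from its subdirectory. The directory and container names below are illustrative, not taken from the actual READMEs:

```shell
# Bring up a standalone service with podman-compose
# (run from the service's subdirectory; "Pihole" and the container
# name "pihole" are illustrative examples).
cd optimized/standalone/Pihole
podman-compose up -d

# Inspect running containers and follow logs
podman ps
podman logs -f pihole

# Tear the service down again
podman-compose down
```

The same services can alternatively be started with an equivalent `podman run` command, which the per-service READMEs document.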
### List of Selected Standalone Services:
1. **`alpine-unbound` (from `builds/alpine-unbound/docker-compose.yml`)**
    * **Rationale:** Primarily a build definition for a custom Unbound DNS server. While the resulting image *could* be used in Swarm, a single, dedicated DNS instance is usually best managed outside a dynamic Swarm for network stability and direct host integration.
2. **`ubuntu-unbound` (from `builds/ubuntu-unbound/docker-compose.yml`)**
    * **Rationale:** Similar to `alpine-unbound`, this is a build definition. The resulting `ubuntu-server` service uses `network_mode: host` and `privileged: true`, making it strictly unsuitable for Docker Swarm and necessitating standalone deployment on a dedicated host.
3. **`Caddy` (from `services/standalone/Caddy/docker-compose.yml`)**
    * **Rationale:** With Traefik designated as the primary Swarm ingress controller, this Caddy instance is preserved for a specific, likely local or non-Swarm, reverse-proxy or fallback role.
4. **`MacOS` (from `services/standalone/MacOS/docker-compose.yaml`)**
    * **Rationale:** Runs a macOS virtual machine, which requires direct hardware device access (`/dev/kvm`) and is highly resource-intensive. This is inherently a host-specific, standalone application and cannot be orchestrated by Swarm.
5. **`Pihole` (from `services/standalone/Pihole/docker-compose.yml`)**
    * **Rationale:** Uses `network_mode: host` to function as a network-wide DNS ad blocker. DNS services are generally most effective and stable on dedicated hosts with direct network access, rather than inside a dynamic Swarm overlay network.
6. **`Pihole_Adguard` (from `services/standalone/Pihole/pihole_adguard/docker-compose.yml`)**
    * **Rationale:** A chained DNS setup (AdGuard Home -> Pi-hole) in which both services explicitly require `network_mode: host` for proper network integration, making this a definitive standalone deployment.
7. **`Portainer_Agent_Standalone` (from `services/standalone/Portainer Agent/docker-compose.yml`)**
    * **Rationale:** This `docker-compose.yml` deploys a single Portainer agent to manage a *standalone* Docker host. It is distinct from the Portainer stack designed for Swarm, which deploys agents globally across Swarm nodes.
8. **`RustDesk` (from `services/standalone/RustDesk/docker-compose.yml`)**
    * **Rationale:** The RustDesk signaling and relay servers are self-contained backend services for remote access. They gain little from Swarm's orchestration features (such as automatic scaling) for their core purpose, making standalone deployment appropriate.
9. **`Traefik_Standalone` (from `services/standalone/Traefik/docker-compose.yml`)**
    * **Rationale:** This Traefik instance is explicitly configured as a reverse proxy for services on a *single Docker host*, using the local `docker.sock` directly. It is not designed for a multi-node Swarm environment, which uses a separate, dedicated Traefik configuration.
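The host-dependency criterion that keeps these services standalone can be seen in a minimal compose fragment. This is an illustrative sketch modeled on the Pi-hole setup, not the actual file; image tag, paths, and environment values are assumptions:

```yaml
# Illustrative host-bound DNS service (not the real Pihole compose file).
# network_mode: host binds the container directly to the host's network
# stack so it can serve DNS on port 53 -- exactly the property that makes
# this kind of deployment unsuitable for a Swarm overlay network.
services:
  pihole:
    image: pihole/pihole:latest
    network_mode: host          # direct host networking, no port mapping
    environment:
      TZ: "Etc/UTC"             # illustrative value
    volumes:
      - ./etc-pihole:/etc/pihole
    restart: unless-stopped
```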
### Services Removed from Standalone:
* **`Nextcloud` (original `services/standalone/Nextcloud/docker-compose.yml`)**
    * **Reason for Removal:** This large, integrated stack contained several services (Plex, Jellyfin, TSDProxy) that relied on `network_mode: host`, and Nextcloud itself was duplicated by a Swarm-optimized version. To streamline the configuration and avoid conflicts, the entire standalone `Nextcloud` stack was removed; the media services (Plex, Jellyfin) and Nextcloud now have dedicated, Swarm-optimized stacks, making this file redundant and suboptimal.

---
## Swarm Services
These services were selected because they benefit from Docker Swarm's orchestration capabilities, including load balancing, service discovery, and high availability across your cluster. They are located in `optimized/swarm/`, and only the most complete, Swarm-native configurations have been retained.
### List of Selected Swarm Services:
1. **`media-stack.yml` (from `services/swarm/stacks/media/media-stack.yml`)**
    * **Rationale:** Provides comprehensive media services (`Homarr`, `Plex`, `Jellyfin`, `Immich` components). This version was chosen for its use of `sterl.xyz` domains and its robust Swarm configuration without `network_mode: host` constraints, making it fully Swarm-native.
2. **`networking-stack.yml` (from `services/swarm/stacks/networking/networking-stack.yml`)**
    * **Rationale:** The definitive Swarm networking stack. It now includes **DockTail** as the preferred Tailscale integration, replacing `tsdproxy`. DockTail was chosen for its stateless design, explicit Tailscale Funnel support, and "zero-configuration service mesh" approach; it uses label-based configuration to automatically expose Docker containers as Tailscale services. The stack also contains Traefik (implied v3.8 features, `cfresolver`, `sterl.xyz` domains) and a `whoami` test service, consolidating all core Swarm networking components.
3. **`productivity-stack.yml` (from `services/swarm/stacks/productivity/productivity-stack.yml`)**
    * **Rationale:** Contains a Swarm-optimized Nextcloud deployment with its PostgreSQL database and Redis, using `sterl.xyz` domains. This provides a highly available, scalable Nextcloud instance within your Swarm.
4. **`ai-stack.yml` (from `services/swarm/stacks/ai/ai.yml`)**
    * **Rationale:** `openwebui` is configured with Swarm `deploy` sections, resource limits, and node placement constraints (`heavy`, `ai`), ensuring it runs optimally within your Swarm cluster.
5. **`applications-stack.yml` (from `services/swarm/stacks/applications/applications-stack.yml`)**
    * **Rationale:** Bundles `paperless` (with its Redis and PostgreSQL), `stirling-pdf`, and `searxng`. All services are configured for Swarm deployment, providing centralized document management, PDF tools, and a privacy-focused search engine with Traefik integration.
6. **`infrastructure-stack.yml` (from `services/swarm/stacks/infrastructure/infrastructure-stack.yml`)**
    * **Rationale:** Retains the core `komodo` components (`komodo-mongo`, `komodo-core`, `komodo-periphery`) essential for your infrastructure. **Redundant `tsdproxy` and `watchtower` services have been removed** from this file: `tsdproxy` is replaced by DockTail in `networking-stack.yml`, and `watchtower` is handled by `monitoring-stack.yml`.
7. **`monitoring-stack.yml` (from `services/swarm/stacks/monitoring/monitoring-stack.yml`)**
    * **Rationale:** A comprehensive monitoring solution for your Swarm cluster, including `Prometheus`, `Grafana`, `Alertmanager`, a global `node-exporter`, and `cAdvisor`. This consolidates all monitoring-related services.
8. **`n8n-stack.yml` (from `services/swarm/stacks/productivity/n8n-stack.yml`)**
    * **Rationale:** The n8n workflow automation platform, fully configured for robust, highly available operation within your Docker Swarm.
9. **`gitea-stack.yml` (from `services/swarm/stacks/tools/gitea-stack.yml`)**
    * **Rationale:** Deploys Gitea (a self-hosted Git service) along with its PostgreSQL database, optimized for Swarm to provide a resilient, accessible code repository.
10. **`portainer-stack.yml` (from `services/swarm/stacks/tools/portainer-stack.yml`)**
    * **Rationale:** Deploys the Portainer Server and its agents globally across your Swarm nodes, enabling centralized management of the entire cluster.
11. **`tools-stack.yml` (from `services/swarm/stacks/tools/tools-stack.yml`)**
    * **Rationale:** Includes `dozzle` for efficient, centralized log viewing across all containers in your Swarm environment.
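For contrast with the host-bound standalone services, the Swarm-native pattern used by stacks like `ai-stack.yml` relies on a `deploy` section rather than host bindings. The sketch below follows the standard Compose `deploy` keys; the image reference, resource limits, label syntax, and network name are illustrative assumptions (only the `ai` node label is mentioned in the rationale above):

```yaml
# Illustrative Swarm-native service definition (not the actual ai-stack.yml).
services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main   # illustrative image reference
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.labels.ai == true   # schedule only on nodes labeled for AI work
      resources:
        limits:
          cpus: "4.0"                # illustrative limits
          memory: 8G
    networks:
      - traefik-public               # illustrative overlay network name

networks:
  traefik-public:
    external: true
```

Because scheduling, scaling, and networking are expressed declaratively, Swarm can place, restart, and load-balance the service anywhere in the cluster, which is precisely what `network_mode: host` services cannot allow.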
### Services Removed from Swarm (due to redundancy or suboptimal configuration):
* **`services/swarm/omv_volume_stacks/docker-swarm-media-stack.yml`**: Superseded by the more comprehensive `media-stack.yml` in `services/swarm/stacks/media/`.
* **`services/swarm/omv_volume_stacks/networking-stack.yml`**: Superseded by the more robust `networking-stack.yml` in `services/swarm/stacks/networking/`.
* **`services/swarm/omv_volume_stacks/productivity-stack.yml`**: Superseded by the `productivity-stack.yml` in `services/swarm/stacks/productivity/`.
* **`services/swarm/stacks/archive/full-stack-complete.yml`**: A large, consolidated stack whose individual components are better managed in their respective, more focused, optimized stacks.
* **`services/swarm/stacks/archive/tsdproxy-stack.yml`**: The `tsdproxy` service is now consistently replaced by DockTail within `networking-stack.yml` for unified management.
* **`services/swarm/stacks/monitoring/node-exporter-stack.yml`**: `node-exporter` is already integrated as a global service within `monitoring-stack.yml`.
* **`services/swarm/traefik/stack.yml`**: An older Traefik `v2.10` configuration, superseded by the more recent, feature-rich Traefik in `networking-stack.yml`.
* **`services/swarm/traefik/traefik.yml`**: Largely duplicated the configuration chosen for the `networking-stack.yml` Traefik and was removed for consolidation.

---
## Kubernetes Placeholder
* **`kubernetes/README.md`**: A `kubernetes/` directory containing a placeholder README has been created to acknowledge future plans for Kubernetes migration or deployment, as requested.

---
This refined structure significantly reduces redundancy, ensures that each service is deployed via the most appropriate method (standalone for host-dependent services, Swarm for orchestrated ones), and provides a cleaner, more manageable configuration for your homelab. It also adopts DockTail as a modern, efficient solution for Tailscale integration.