Refactor: Reorganize services into standalone structure
1 .gitignore vendored Normal file
@@ -0,0 +1 @@
homelab-backup-*.tar.gz
94 optimized/RECOMMENDATIONS.md Normal file
@@ -0,0 +1,94 @@
# Homelab Optimization Recommendations - Refined

This document outlines the refined rationale behind organizing your Docker Compose services into `standalone` and `swarm` categories within the `optimized` directory. The primary goal is to **reduce duplication**, **select optimal deployment methods**, and **streamline your homelab configuration** for better resource utilization (especially with Podman for standalone services) and efficient Docker Swarm orchestration.

## General Principles

* **Minimization:** Only include necessary `docker-compose.yml` files, eliminating redundancy.
* **Optimal Placement:** Services are placed in `standalone` only if they have strict host-level dependencies (`network_mode: host`, device access, `privileged` mode) or are single-instance infrastructure components that don't benefit from Swarm orchestration. All other suitable services are moved to `swarm`.
* **Consolidation:** Where a service appeared in multiple stacks, the most comprehensive, up-to-date, or Swarm-optimized version was selected.
* **Podman Compatibility:** Standalone services are documented with `podman-compose` or `podman run` instructions to aid in transitioning to Podman (see the example below).
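As a quick illustration of the Podman workflow (a minimal sketch; the per-service READMEs carry the authoritative commands, and Pi-hole here is just one example):

```bash
# Bring up any standalone service with podman-compose from its directory
cd optimized/standalone/Pihole
podman-compose up -d

# Confirm the container is running
podman ps --filter name=pihole
```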
---

## Standalone Services

These services have been moved to `optimized/standalone/` due to their intrinsic host-specific requirements or their nature as single-instance infrastructure components. Each subdirectory contains the original `docker-compose.yml` and a `README.md` with instructions for running with `podman-compose` or `podman run`.

### List of Selected Standalone Services:

1. **`alpine-unbound` (from `builds/alpine-unbound/docker-compose.yml`)**
    * **Rationale:** Primarily a build definition for a custom Unbound DNS server. While the resulting image *could* be used in Swarm, a single, dedicated DNS instance is often best managed outside a dynamic Swarm for network stability and direct host integration.
2. **`ubuntu-unbound` (from `builds/ubuntu-unbound/docker-compose.yml`)**
    * **Rationale:** Similar to `alpine-unbound`, this is a build definition. The resulting `ubuntu-server` service uses `network_mode: host` and `privileged: true`, making it strictly unsuitable for Docker Swarm and necessitating a standalone deployment on a dedicated host.
3. **`Caddy` (from `services/standalone/Caddy/docker-compose.yml`)**
    * **Rationale:** Given that Traefik is designated as the primary Swarm ingress controller, this Caddy instance is preserved for a specific, likely local or non-Swarm, reverse proxy or fallback function.
4. **`MacOS` (from `services/standalone/MacOS/docker-compose.yaml`)**
    * **Rationale:** Runs a macOS virtual machine, which requires direct hardware device access (`/dev/kvm`) and is highly resource-intensive. This is inherently a host-specific, standalone application and cannot be orchestrated by Swarm.
5. **`Pihole` (from `services/standalone/Pihole/docker-compose.yml`)**
    * **Rationale:** Uses `network_mode: host` to function as a network-wide DNS ad blocker. DNS services are generally most effective and stable when run on dedicated hosts with direct network access, rather than within a dynamic Swarm overlay network.
6. **`Pihole_Adguard` (from `services/standalone/Pihole/pihole_adguard/docker-compose.yml`)**
    * **Rationale:** This is a chained DNS setup (AdGuard Home -> Pi-hole) where both services explicitly require `network_mode: host` for proper network integration, making it a definitive standalone deployment.
7. **`Portainer_Agent_Standalone` (from `services/standalone/Portainer Agent/docker-compose.yml`)**
    * **Rationale:** This `docker-compose.yml` deploys a single Portainer agent to manage a *standalone* Docker host. This is distinct from the Portainer stack designed for Swarm, which deploys agents globally across Swarm nodes.
8. **`RustDesk` (from `services/standalone/RustDesk/docker-compose.yml`)**
    * **Rationale:** The RustDesk signaling and relay servers are backend services for remote access. They are typically self-contained and do not significantly benefit from Swarm's orchestration features (such as automatic scaling) for their core purpose, making them suitable for standalone deployment.
9. **`Traefik_Standalone` (from `services/standalone/Traefik/docker-compose.yml`)**
    * **Rationale:** This Traefik instance is explicitly configured to act as a reverse proxy for services running on a *single Docker host*, directly using the local `docker.sock`. It is not designed for a multi-node Swarm environment, which uses a separate, dedicated Traefik configuration.

### Services Removed from Standalone:

* **`Nextcloud` (original `services/standalone/Nextcloud/docker-compose.yml`)**
    * **Reason for Removal:** This large, integrated stack contained several services (Plex, Jellyfin, TSDProxy) that relied on `network_mode: host`, and `Nextcloud` itself was duplicated by a Swarm-optimized version. To streamline and avoid conflicts, the entire standalone `Nextcloud` stack was removed. The media services (Plex, Jellyfin) and Nextcloud have dedicated, Swarm-optimized stacks, making this file redundant and suboptimal.

---

## Swarm Services

These services are selected for their suitability and benefit from Docker Swarm's orchestration capabilities, including load balancing, service discovery, and high availability across your cluster. They are located in `optimized/swarm/`. Only the most complete and Swarm-native configurations have been retained.
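Each of these files is a standard Swarm stack file, so deployment follows the usual pattern (a sketch; the stack name is illustrative, and the `traefik-public` network plus any external secrets must exist first):

```bash
# Deploy (or update) a stack from a manager node
docker stack deploy -c optimized/swarm/media-stack.yml media

# Verify the services came up
docker stack services media
```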
### List of Selected Swarm Services:

1. **`media-stack.yml` (from `services/swarm/stacks/media/media-stack.yml`)**
    * **Rationale:** Provides comprehensive media services (`Homarr`, `Plex`, `Jellyfin`, `Immich` components). This version was chosen for its use of `sterl.xyz` domains and its robust Swarm configuration without `network_mode: host` constraints, making it fully Swarm-native.
2. **`networking-stack.yml` (from `services/swarm/stacks/networking/networking-stack.yml`)**
    * **Rationale:** This is the definitive Swarm networking stack. It now includes **DockTail** as the preferred Tailscale integration, replacing `tsdproxy`. DockTail (chosen over `tsdproxy` for its stateless nature, explicit Tailscale Funnel support, and "zero-configuration service mesh" approach) uses label-based configuration to automatically expose Docker containers as Tailscale services. This stack also contains Traefik (Compose v3.8 features, `cfresolver`, `sterl.xyz` domains) and a `whoami` test service, consolidating all core Swarm networking components.
3. **`productivity-stack.yml` (from `services/swarm/stacks/productivity/productivity-stack.yml`)**
    * **Rationale:** Contains a Swarm-optimized Nextcloud deployment with its PostgreSQL database and Redis, using `sterl.xyz` domains. This provides a highly available and scalable Nextcloud instance within your Swarm.
4. **`ai-stack.yml` (from `services/swarm/stacks/ai/ai.yml`)**
    * **Rationale:** `openwebui` is configured with Swarm `deploy` sections, resource limits, and node placement constraints (`heavy`, `ai`), ensuring it runs optimally within your Swarm cluster.
5. **`applications-stack.yml` (from `services/swarm/stacks/applications/applications-stack.yml`)**
    * **Rationale:** This stack bundles `paperless` (with its Redis and PostgreSQL), `stirling-pdf`, and `searxng`. All services are configured for Swarm deployment, providing centralized document management, PDF tools, and a privacy-focused search engine with Traefik integration.
6. **`infrastructure-stack.yml` (from `services/swarm/stacks/infrastructure/infrastructure-stack.yml`)**
    * **Rationale:** Retains the core `komodo` components (`komodo-mongo`, `komodo-core`, `komodo-periphery`) essential for your infrastructure. **Redundant `tsdproxy` and `watchtower` services have been removed** from this file: `tsdproxy` is replaced by DockTail in `networking-stack.yml`, and `watchtower` is handled by `monitoring-stack.yml`.
7. **`monitoring-stack.yml` (from `services/swarm/stacks/monitoring/monitoring-stack.yml`)**
    * **Rationale:** A comprehensive monitoring solution for your Swarm cluster, including `Prometheus`, `Grafana`, `Alertmanager`, a global `node-exporter`, and `cAdvisor`. This consolidates all monitoring-related services.
8. **`n8n-stack.yml` (from `services/swarm/stacks/productivity/n8n-stack.yml`)**
    * **Rationale:** The n8n workflow automation platform, fully configured for robust and highly available operation within your Docker Swarm.
9. **`gitea-stack.yml` (from `services/swarm/stacks/tools/gitea-stack.yml`)**
    * **Rationale:** Deploys Gitea (a self-hosted Git service) along with its PostgreSQL database, optimized for Swarm to provide a resilient and accessible code repository.
10. **`portainer-stack.yml` (from `services/swarm/stacks/tools/portainer-stack.yml`)**
    * **Rationale:** This stack deploys the Portainer Server and its agents globally across your Swarm nodes, enabling centralized management of your entire cluster.
11. **`tools-stack.yml` (from `services/swarm/stacks/tools/tools-stack.yml`)**
    * **Rationale:** Includes `dozzle` for efficient, centralized log viewing across all containers in your Swarm environment.
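Note that several of these stacks (for example `applications-stack.yml` and `gitea-stack.yml`) declare external Docker secrets, which must exist before deployment. A minimal sketch (the secret values are placeholders):

```bash
# Create the external secrets referenced by the stacks, on a manager node
printf 'CHANGE_ME' | docker secret create paperless_db_password -
printf 'CHANGE_ME' | docker secret create paperless_secret_key -
printf 'CHANGE_ME' | docker secret create gitea_db_password -
```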
### Services Removed from Swarm (due to redundancy or suboptimal configuration):

* **`services/swarm/omv_volume_stacks/docker-swarm-media-stack.yml`**: Superseded by the more comprehensive `media-stack.yml` in `services/swarm/stacks/media/`.
* **`services/swarm/omv_volume_stacks/networking-stack.yml`**: Superseded by the more robust `networking-stack.yml` in `services/swarm/stacks/networking/`.
* **`services/swarm/omv_volume_stacks/productivity-stack.yml`**: Superseded by the `productivity-stack.yml` in `services/swarm/stacks/productivity/`.
* **`services/swarm/stacks/archive/full-stack-complete.yml`**: A large, consolidated stack whose individual components are better managed within their respective, more focused, optimized stacks.
* **`services/swarm/stacks/archive/tsdproxy-stack.yml`**: The `tsdproxy` service is now consistently replaced by DockTail within the `networking-stack.yml` for unified management.
* **`services/swarm/stacks/monitoring/node-exporter-stack.yml`**: Node-exporter is already integrated as a global service within the `monitoring-stack.yml`.
* **`services/swarm/traefik/stack.yml`**: An older Traefik `v2.10` configuration, superseded by the more recent and feature-rich Traefik in `networking-stack.yml`.
* **`services/swarm/traefik/traefik.yml`**: Largely duplicated the functionality and configuration chosen for the `networking-stack.yml` Traefik, so it was removed for consolidation.

---

## Kubernetes Placeholder

* **`kubernetes/README.md`**: An empty directory with a placeholder README has been created to acknowledge future plans for Kubernetes migration or deployment, aligning with your request.

---

This refined optimized structure significantly reduces redundancy, ensures that each service is deployed using the most appropriate method (standalone for host-dependent services, Swarm for orchestrated ones), and provides a cleaner, more manageable configuration for your homelab. It also incorporates DockTail as a modern and efficient solution for Tailscale integration.
13 optimized/kubernetes/README.md Normal file
@@ -0,0 +1,13 @@
# Kubernetes Configurations

This directory is reserved for future Kubernetes configurations.

As your homelab evolves, services currently running in Docker Swarm or as standalone containers may be migrated to Kubernetes for more advanced orchestration, scaling, and management capabilities.

## Future Plans

* Placeholder for `.yaml` deployment files.
* Instructions on how to deploy services to Kubernetes.
* Notes on Kubernetes-specific considerations (e.g., storage, ingress).

Stay tuned for future updates!
40 optimized/standalone/Caddy/README.md Normal file
@@ -0,0 +1,40 @@
# Caddy Fallback Server

This directory contains the `docker-compose.yml` for running a standalone Caddy server, potentially as a fallback or for specific local proxy needs.

## Running with Podman Compose

To run this service using `podman-compose`:

1. Navigate to this directory:
    ```bash
    cd optimized/standalone/Caddy
    ```
2. Ensure `Caddyfile` and `maintenance.html` exist in this directory, as they are mounted as volumes.
3. Start the service:
    ```bash
    podman-compose up -d
    ```

## Running with Podman

You can run the Caddy service directly with Podman. For proper function, the `Caddyfile`, `maintenance.html`, and volume mounts are crucial.

```bash
podman run -d \
  --name caddy_fallback \
  --restart unless-stopped \
  -p "8080:80" \
  -p "8443:443" \
  -v ./Caddyfile:/etc/caddy/Caddyfile \
  -v ./maintenance.html:/srv/maintenance/maintenance.html \
  -v caddy_data:/data \
  -v caddy_config:/config \
  -v caddy_logs:/var/log/caddy \
  caddy:latest
```

## Notes

* Ensure the `Caddyfile` and `maintenance.html` are configured correctly for your use case; a minimal sketch follows below.
* The Caddy service was categorized as `standalone` because Traefik is designated for Swarm ingress, implying Caddy has a specialized, non-Swarm role here.
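Since the compose file expects a `Caddyfile` to be present, here is a minimal sketch (the site block and maintenance-page wiring are illustrative assumptions, not the repository's actual Caddyfile):

```
# Minimal illustrative Caddyfile: serve the maintenance page on any host.
# The container listens on :80/:443, published on the host as 8080/8443.
:80 {
	root * /srv/maintenance
	rewrite * /maintenance.html
	file_server
}
```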
27 optimized/standalone/Caddy/docker-compose.yml Normal file
@@ -0,0 +1,27 @@
version: '3.8'

services:
  caddy:
    image: caddy:latest
    container_name: caddy_fallback
    restart: unless-stopped
    ports:
      - "8080:80"
      - "8443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./maintenance.html:/srv/maintenance/maintenance.html
      - caddy_data:/data
      - caddy_config:/config
      - caddy_logs:/var/log/caddy
    networks:
      - caddy_net

volumes:
  caddy_data:
  caddy_config:
  caddy_logs:

networks:
  caddy_net:
    driver: bridge
55 optimized/standalone/MacOS/README.md Normal file
@@ -0,0 +1,55 @@
# macOS VM

This directory contains the `docker-compose.yaml` for running a macOS virtual machine with Podman (or Docker). This setup is highly hardware-specific due to its use of `/dev/kvm` and direct device access, making it unsuitable for a Swarm environment.

## Running with Podman Compose

To run this service using `podman-compose`:

1. **Important**: Ensure your host system meets the requirements for running KVM-accelerated VMs (e.g., `/dev/kvm` is available and configured).
2. Navigate to this directory:
    ```bash
    cd optimized/standalone/MacOS
    ```
3. Start the service:
    ```bash
    podman-compose up -d
    ```

## Running with Podman

You can run the macOS VM directly with Podman. Pay close attention to the device mappings and network configuration.

```bash
podman run -d \
  --name macos \
  --restart always \
  -e VERSION="15" \
  -e DISK_SIZE="50G" \
  -e RAM_SIZE="6G" \
  -e CPU_CORES="4" \
  --device /dev/kvm \
  --device /dev/net/tun \
  --cap-add NET_ADMIN \
  -p 8006:8006 \
  -p 5900:5900/tcp \
  -p 5900:5900/udp \
  -v ./macos:/storage \
  dockurr/macos
```

**Note**: The original `docker-compose.yaml` defines a custom network with a specific `ipv4_address`. To replicate this with `podman run`, first create the network:

```bash
podman network create --subnet 172.70.20.0/29 macos
```

Then attach the container to this network and specify the IP:

```bash
# ... (previous podman run command parts)
  --network macos --ip 172.70.20.3 \
  dockurr/macos
```

## Notes

* This service requires significant host resources and direct hardware access.
* The `stop_grace_period` of 2 minutes gives the VM time to shut down cleanly.
* Ensure the `./macos` directory exists and has appropriate permissions for the VM storage.
34 optimized/standalone/MacOS/docker-compose.yaml Normal file
@@ -0,0 +1,34 @@
# https://github.com/dockur/macos
services:
  macos:
    image: dockurr/macos
    container_name: macos
    environment:
      VERSION: "15"
      DISK_SIZE: "50G"
      RAM_SIZE: "6G"
      CPU_CORES: "4"
      # DHCP: "Y" # if enabled you must create a macvlan
    devices:
      - /dev/kvm
      - /dev/net/tun
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 5900:5900/tcp
      - 5900:5900/udp
    volumes:
      - ./macos:/storage
    restart: always
    stop_grace_period: 2m
    networks:
      macos:
        ipv4_address: 172.70.20.3

networks:
  macos:
    ipam:
      config:
        - subnet: 172.70.20.0/29
    name: macos
45 optimized/standalone/Pihole/README.md Normal file
@@ -0,0 +1,45 @@
# Pi-hole DNS Blocker

This directory contains the `docker-compose.yml` for running a standalone Pi-hole DNS ad blocker.

## Running with Podman Compose

To run this service using `podman-compose`:

1. Navigate to this directory:
    ```bash
    cd optimized/standalone/Pihole
    ```
2. Ensure you have replaced placeholder values like `WEBPASSWORD` with your actual secure password.
3. Ensure the necessary host directories for volumes (`./etc-pihole`, `./etc-dnsmasq.d`) exist, or create them.
4. Start the service:
    ```bash
    podman-compose up -d
    ```

## Running with Podman

Because of `network_mode: host`, this service shares the host's network namespace and directly uses the host's IP address.

```bash
podman run -d \
  --name pihole \
  --network host \
  --restart unless-stopped \
  -e TZ="America/Chicago" \
  -e WEBPASSWORD="YOURSECUREPASSWORD" \
  -e FTLCONF_webserver_enabled="true" \
  -e FTLCONF_webserver_port="7300" \
  -e WEB_BIND_ADDR="0.0.0.0" \
  -e DNS1="127.0.0.1#5335" \
  -e DNS2="0.0.0.0" \
  -v ./etc-pihole:/etc/pihole \
  -v ./etc-dnsmasq.d:/etc/dnsmasq.d \
  pihole/pihole:latest
```

## Notes

* `network_mode: host` is essential for Pi-hole to function correctly as a DNS server for your local network.
* The `WEBPASSWORD` environment variable is critical for securing your Pi-hole web interface.
* Ensure the volume bind mounts (`./etc-pihole`, `./etc-dnsmasq.d`) point to correct and persistent locations on your host.
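Once the container is up, a quick way to confirm it is answering DNS queries (a sketch; `<host-ip>` stands in for your Pi-hole host's address):

```bash
# Query Pi-hole directly; with the default NULL blocking mode,
# a blocked domain should resolve to 0.0.0.0
dig @<host-ip> doubleclick.net +short
```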
17 optimized/standalone/Pihole/docker-compose.yml Normal file
@@ -0,0 +1,17 @@
services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    network_mode: host
    environment:
      TZ: "America/Chicago"
      WEBPASSWORD: "YOURPASSWORD"
      FTLCONF_webserver_enabled: "true"
      FTLCONF_webserver_port: "7300"
      WEB_BIND_ADDR: "0.0.0.0"
      # DNS1/DNS2 are deprecated in Pi-hole v6+; see the Pihole_Adguard
      # stack for the newer FTLCONF_dns_upstreams form
      DNS1: "127.0.0.1#5335"
      DNS2: "0.0.0.0"
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
25 optimized/standalone/Pihole_Adguard/README.md Normal file
@@ -0,0 +1,25 @@
# Pi-hole and AdGuard Home Chained DNS

This directory contains the `docker-compose.yml` for running a chained DNS setup with Pi-hole and AdGuard Home. Both services use `network_mode: host`, making this stack suitable for standalone deployment on a dedicated host.

## Running with Podman Compose

To run this stack using `podman-compose`:

1. Navigate to this directory:
    ```bash
    cd optimized/standalone/Pihole_Adguard
    ```
2. Ensure you have replaced placeholder values like `WEBPASSWORD` with your actual secure password.
3. Ensure the necessary volumes (`pihole_etc`, `pihole_dnsmasq`, `adguard_conf`, `adguard_work`, `adguard_certs`) exist, or create them.
4. Start the services:
    ```bash
    podman-compose up -d
    ```

## Notes

* This setup provides advanced DNS features, including ad blocking (Pi-hole) and encrypted DNS (AdGuard Home).
* `network_mode: host` is crucial for both services to integrate seamlessly with your host's network and act as primary DNS resolvers.
* After installation, AdGuard Home's upstream DNS must be pointed at Pi-hole; a sketch follows below.
* Ensure the volume mounts point to correct and persistent locations on your host.
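The compose header notes that AdGuard's upstream should point at Pi-hole on port 5053 post-install. This can be set in the AdGuard Home UI (Settings → DNS settings) or directly in `AdGuardHome.yaml`; a minimal sketch of the relevant fragment (an assumption about your final config, not a file shipped in this repo):

```yaml
# Fragment of AdGuardHome.yaml: forward all queries to Pi-hole on the host
dns:
  upstream_dns:
    - <host-ip>:5053   # replace with your host's IP
```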
47 optimized/standalone/Pihole_Adguard/docker-compose.yml Normal file
@@ -0,0 +1,47 @@
# =============================================================================
# DNS Chain: Router(:53) → AdGuard(:53,DOH,DOT) → Pi-hole(:5053) → Unbound(:5335)
# =============================================================================
# NOTE: For HAOS, use the run_command file instead - compose doesn't work there
# NOTE: Post-install: Configure AdGuard upstream to <host-ip>:5053
# NOTE: Pi-hole handles blocking/caching, AdGuard handles DOH/DOT encryption
# =============================================================================

services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    network_mode: host
    environment:
      TZ: "America/Chicago"
      WEBPASSWORD: "YOURPASSWORD"
      FTLCONF_webserver_enabled: "true"
      FTLCONF_webserver_port: "7300"
      WEB_BIND_ADDR: "0.0.0.0"
      FTLCONF_dns_port: "5053"
      # DNS1/DNS2 are deprecated in Pi-hole v6+, use FTLCONF_dns_upstreams
      FTLCONF_dns_upstreams: "127.0.0.1#5335"
    volumes:
      - pihole_etc:/etc/pihole:rw
      - pihole_dnsmasq:/etc/dnsmasq.d:rw
    restart: unless-stopped

  adguardhome:
    image: adguard/adguardhome:latest
    container_name: adguardhome
    network_mode: host
    environment:
      TZ: "America/Chicago"
    volumes:
      - adguard_conf:/opt/adguardhome/conf:rw
      - adguard_work:/opt/adguardhome/work:rw
      - adguard_certs:/opt/adguardhome/conf/certs:ro
    restart: unless-stopped
    depends_on:
      - pihole

volumes:
  pihole_etc:
  pihole_dnsmasq:
  adguard_conf:
  adguard_work:
  adguard_certs:
39 optimized/standalone/Portainer_Agent_Standalone/README.md Normal file
@@ -0,0 +1,39 @@
# Portainer Agent (Standalone Host)

This directory contains the `docker-compose.yml` for deploying a Portainer Agent on a standalone Docker (or Podman) host. The agent allows a central Portainer instance (potentially running in a Swarm) to manage this individual host.

## Running with Podman Compose

To deploy the Portainer Agent using `podman-compose`:

1. Navigate to this directory:
    ```bash
    cd optimized/standalone/Portainer_Agent_Standalone
    ```
2. **Important**: Replace `192.168.1.81` with the actual IP address or resolvable hostname of your Portainer Server instance in the `docker-compose.yml`.
3. Start the agent:
    ```bash
    podman-compose up -d
    ```

## Running with Podman

You can run the Portainer Agent directly with Podman:

```bash
podman run -d \
  --name portainer-agent \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  -e AGENT_CLUSTER_ADDR=192.168.1.81 \
  -e AGENT_PORT=9001 \
  -p "9001:9001" \
  portainer/agent:latest
```

## Notes

* This agent is specifically for managing a *standalone* Docker/Podman host. To manage a Swarm cluster, use the Portainer Swarm stack (`optimized/swarm/portainer-stack.yml`), which deploys agents globally across the Swarm nodes.
* The mounts `/var/run/docker.sock` and `/var/lib/docker/volumes` are critical for the agent to communicate with and manage the Docker/Podman daemon.
* Ensure `AGENT_CLUSTER_ADDR` points to your actual Portainer Server.
14 optimized/standalone/Portainer_Agent_Standalone/docker-compose.yml Normal file
@@ -0,0 +1,14 @@
version: '3.8'
services:
  portainer-agent:
    image: portainer/agent:latest
    container_name: portainer-agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    environment:
      AGENT_CLUSTER_ADDR: 192.168.1.81 # Replace with the actual IP address
      AGENT_PORT: 9001
    ports:
      - "9001:9001" # Port for agent communication
    restart: always
63 optimized/standalone/RustDesk/README.md Normal file
@@ -0,0 +1,63 @@
# RustDesk Server

This directory contains the `docker-compose.yml` for deploying the RustDesk hbbs (rendezvous) and hbbr (relay) servers. These servers facilitate peer-to-peer remote control connections.

## Running with Podman Compose

To run these services using `podman-compose`:

1. Navigate to this directory:
    ```bash
    cd optimized/standalone/RustDesk
    ```
2. **Important**: Review and update the `--relay-servers` IP address in the `hbbs` command, and the environment variables on `hbbr`, if necessary.
3. Start the services:
    ```bash
    podman-compose up -d
    ```

## Running with Podman

You can run each RustDesk component directly with Podman.

**For `rustdesk-hbbs`:**

```bash
podman run -d \
  --name rustdesk-hbbs \
  --restart unless-stopped \
  --platform linux/arm64 \
  -v rustdesk_data:/root \
  -p "21115:21115/tcp" \
  -p "21115:21115/udp" \
  -p "21116:21116/tcp" \
  -p "21116:21116/udp" \
  rustdesk/rustdesk-server:latest hbbs --relay-servers "192.168.1.245:21117"
```

**For `rustdesk-hbbr`:**

```bash
podman run -d \
  --name rustdesk-hbbr \
  --restart unless-stopped \
  --platform linux/arm64 \
  -v rustdesk_data:/root \
  -p "21117:21117/tcp" \
  -p "21118:21118/udp" \
  -p "21119:21119/tcp" \
  -p "21119:21119/udp" \
  -e TOTAL_BANDWIDTH=20480 \
  -e SINGLE_BANDWIDTH=128 \
  -e LIMIT_SPEED="100Mb/s" \
  -e DOWNGRADE_START_CHECK=600 \
  -e DOWNGRADE_THRESHOLD=0.9 \
  rustdesk/rustdesk-server:latest hbbr
```

## Notes

* RustDesk servers are suitable for standalone deployment: they provide specific backend functionality for remote connections and don't inherently require Swarm orchestration for their core purpose.
* Ensure the `rustdesk_data` volume is persistent for configuration and state.
* Make sure the specified ports are open on your firewall (see the sketch below).
* The `--platform linux/arm64` flag is important if you are running on an ARM-based system.
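If the host runs `ufw`, opening the published RustDesk ports might look like this (a sketch; adjust to your firewall of choice):

```bash
# hbbs: 21115-21116 (tcp/udp); hbbr: 21117/21119 (tcp), 21118/21119 (udp)
sudo ufw allow 21115:21116/tcp
sudo ufw allow 21115:21116/udp
sudo ufw allow 21117:21119/tcp
sudo ufw allow 21118:21119/udp
```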
39 optimized/standalone/RustDesk/docker-compose.yml Normal file
@@ -0,0 +1,39 @@
version: '3.8'

services:
  rustdesk-hbbs:
    image: rustdesk/rustdesk-server:latest
    container_name: rustdesk-hbbs
    restart: unless-stopped
    platform: linux/arm64
    command: ["hbbs", "--relay-servers", "192.168.1.245:21117"]
    volumes:
      - rustdesk_data:/root
    ports:
      - "21115:21115/tcp"
      - "21115:21115/udp"
      - "21116:21116/tcp"
      - "21116:21116/udp"

  rustdesk-hbbr:
    image: rustdesk/rustdesk-server:latest
    container_name: rustdesk-hbbr
    restart: unless-stopped
    platform: linux/arm64
    command: ["hbbr"]
    volumes:
      - rustdesk_data:/root
    ports:
      - "21117:21117/tcp"
      - "21118:21118/udp"
      - "21119:21119/tcp"
      - "21119:21119/udp"
    environment:
      - TOTAL_BANDWIDTH=20480
      - SINGLE_BANDWIDTH=128
      - LIMIT_SPEED=100Mb/s
      - DOWNGRADE_START_CHECK=600
      - DOWNGRADE_THRESHOLD=0.9

volumes:
  rustdesk_data:
57 optimized/standalone/Traefik_Standalone/README.md Normal file
@@ -0,0 +1,57 @@
# Traefik (Standalone Docker/Podman Host)

This directory contains the `docker-compose.yml` for a Traefik instance configured to run on a single Docker or Podman host. It acts as a reverse proxy and load balancer for services running on that specific host, using the local `docker.sock` for provider discovery.

## Running with Podman Compose

To run this Traefik instance using `podman-compose`:

1. Navigate to this directory:
    ```bash
    cd optimized/standalone/Traefik_Standalone
    ```
2. **Important**: Replace the `DUCKDNS_TOKEN` placeholder with your actual DuckDNS token in the `docker-compose.yml`.
3. Ensure the `./letsencrypt` directory exists and has appropriate permissions for ACME certificate storage.
4. Ensure `traefik_dynamic.yml` exists and contains your dynamic configuration.
5. Start the services:
    ```bash
    podman-compose up -d
    ```

## Running with Podman

You can run Traefik directly with Podman. Given the extensive command-line arguments and volume mounts, `podman-compose` is generally recommended for this setup.

A simplified `podman run` example for Traefik (you would need to adapt the command arguments and volumes fully):

```bash
podman run -d \
  --name traefik \
  --restart unless-stopped \
  -e DUCKDNS_TOKEN="YOUR_DUCKDNS_TOKEN" \
  -p "80:80" -p "443:443" -p "8089:8089" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./letsencrypt:/letsencrypt \
  -v ./traefik_dynamic.yml:/etc/traefik/traefik_dynamic.yml:ro \
  traefik:latest \
  --api.insecure=false \
  --api.dashboard=true \
  --entrypoints.web.address=:80 \
  --entrypoints.websecure.address=:443 \
  --entrypoints.dashboard.address=:8089 \
  --providers.docker=true \
  --providers.docker.endpoint=unix:///var/run/docker.sock \
  --providers.docker.exposedbydefault=false \
  --providers.file.filename=/etc/traefik/traefik_dynamic.yml \
  --providers.file.watch=true \
  --certificatesresolvers.duckdns.acme.email=your@email.com \
  --certificatesresolvers.duckdns.acme.storage=/letsencrypt/acme.json \
  --certificatesresolvers.duckdns.acme.dnschallenge.provider=duckdns \
  --certificatesresolvers.duckdns.acme.dnschallenge.disablepropagationcheck=true
```

## Notes

* This Traefik instance is for a single host. Your Swarm environment has its own Traefik instance for cluster-wide routing.
* Ensure that `traefik_dynamic.yml` and the `letsencrypt` directory are correctly configured and persistent; a minimal dynamic-config sketch follows below.
* The `whoami` service is a simple test service and will be automatically discovered by Traefik if correctly configured.
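Since the compose file mounts `traefik_dynamic.yml` for the file provider, here is a minimal sketch of what such a dynamic configuration can look like (the middleware name and header values are illustrative assumptions, not the repository's actual file):

```yaml
# traefik_dynamic.yml: dynamic configuration loaded by the file provider
http:
  middlewares:
    secure-headers:
      headers:
        stsSeconds: 31536000
        browserXssFilter: true
```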
53 optimized/standalone/Traefik_Standalone/docker-compose.yml Normal file
@@ -0,0 +1,53 @@
version: "3.9"

services:
  traefik:
    image: traefik:latest
    container_name: traefik
    restart: unless-stopped
    environment:
      # Replace this placeholder with your DuckDNS token
      - DUCKDNS_TOKEN=YOUR_DUCKDNS_TOKEN
    networks:
      - web
    ports:
      - "80:80"     # http
      - "443:443"   # https
      - "8089:8089" # traefik dashboard (secure it if exposed)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - ./letsencrypt:/letsencrypt # <-- keep this directory inside WSL filesystem
      - ./traefik_dynamic.yml:/etc/traefik/traefik_dynamic.yml:ro
    command:
      - --api.insecure=false
      - --api.dashboard=true
      - --entrypoints.web.address=:80
      - --entrypoints.websecure.address=:443
      - --entrypoints.dashboard.address=:8089
      - --providers.docker=true
      - --providers.docker.endpoint=unix:///var/run/docker.sock
      - --providers.docker.exposedbydefault=false
      - --providers.file.filename=/etc/traefik/traefik_dynamic.yml
      - --providers.file.watch=true
      - --certificatesresolvers.duckdns.acme.email=sterlenjohnson6@gmail.com
      - --certificatesresolvers.duckdns.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.duckdns.acme.dnschallenge.provider=duckdns
      - --certificatesresolvers.duckdns.acme.dnschallenge.disablepropagationcheck=true

  whoami:
    image: containous/whoami:latest
    container_name: whoami
    restart: unless-stopped
    networks:
      - web
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.sj98.duckdns.org`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls=true"
      - "traefik.http.routers.whoami.tls.certresolver=duckdns"

networks:
  web:
    external: true
45 optimized/standalone/alpine-unbound/README.md Normal file
@@ -0,0 +1,45 @@
# Alpine Unbound

This directory contains the `docker-compose.yml` for building and running an Alpine-based Unbound DNS resolver.

## Running with Podman Compose

To run this service using `podman-compose`:

1. Navigate to this directory:
    ```bash
    cd optimized/standalone/alpine-unbound
    ```
2. Build the image (if not already built by the original `build.sh`):
    ```bash
    podman-compose build
    ```
3. Start the service:
    ```bash
    podman-compose up -d
    ```

## Running with Podman (if built elsewhere)

If you have already built the `alpine-unbound:latest` image, you can run it directly with Podman. Note that translating a full `docker-compose.yml` to a single `podman run` command can be complex due to network and volume declarations.

A simplified `podman run` example (adjust networks and volumes as needed for your specific setup):

```bash
podman run -d \
  --name alpine_unbound \
  --network dns_net \
  -p 5335:5335/tcp \
  -p 5335:5335/udp \
  -v unbound_config:/etc/unbound/unbound.conf.d \
  -v unbound_data:/var/lib/unbound \
  alpine-unbound:latest
```

Ensure the `dns_net` network and necessary volumes exist before running.

## Notes

* Remember to replace any placeholder values (e.g., timezone, ports) with your actual configuration.
* The original `build.sh` file might contain additional steps or configurations relevant to the build process.
* For persistent configuration, ensure the `unbound_config` volume is correctly managed; a minimal config sketch follows below.
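For reference, a minimal Unbound configuration matching the port used here might look like this (an illustrative sketch dropped into the `unbound_config` volume, not the repository's actual config):

```
# unbound.conf.d/local.conf: listen on 5335 for the Pi-hole upstream chain
server:
    interface: 0.0.0.0@5335
    access-control: 127.0.0.0/8 allow
    access-control: 192.168.0.0/16 allow
    do-ip6: no
```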
42 optimized/standalone/alpine-unbound/docker-compose.yml Normal file
@@ -0,0 +1,42 @@
version: "3.9"

services:
  alpine-unbound:
    build:
      context: .
      dockerfile: Dockerfile
    image: alpine-unbound:latest
    container_name: alpine_unbound
    restart: unless-stopped
    environment:
      - TZ=America/New_York
    volumes:
      - unbound_config:/etc/unbound/unbound.conf.d
      - unbound_data:/var/lib/unbound
    ports:
      - "5335:5335/tcp"
      - "5335:5335/udp"
    networks:
      - dns_net
    healthcheck:
      test: [ "CMD", "/usr/local/bin/healthcheck.sh" ]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 5s
    deploy:
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 32M

networks:
  dns_net:
    driver: bridge

volumes:
  unbound_config:
    driver: local
  unbound_data:
    driver: local
44 optimized/standalone/ubuntu-unbound/README.md Normal file
@@ -0,0 +1,44 @@
# Ubuntu Unbound

This directory contains the `docker-compose.yml` for building and running an Ubuntu-based server with Unbound DNS.

## Running with Podman Compose

To run this service using `podman-compose`:

1. Navigate to this directory:
    ```bash
    cd optimized/standalone/ubuntu-unbound
    ```
2. Build the image (if not already built by the original `build.sh`):
    ```bash
    podman-compose build
    ```
3. Start the service:
    ```bash
    podman-compose up -d
    ```

## Running with Podman

Because of `network_mode: host` and `privileged: true`, directly translating this `docker-compose.yml` into a single `podman run` command can be complex and may require manual setup of host network configuration.

A basic `podman run` example (adapt carefully, as `network_mode: host` has specific implications):

```bash
podman run -d \
  --name ubuntu_server \
  --network host \
  --privileged \
  -e TZ=America/New_York \
  -v ubuntu_data:/data \
  -v ubuntu_config:/config \
  ubuntu-server:latest # Assuming 'ubuntu-server:latest' is the built image name
```

## Notes

* Remember to replace any placeholder values (e.g., timezone) with your actual configuration.
* The original `build.sh` file might contain additional steps or configurations relevant to the build process.
* `network_mode: host` means the container shares the host's network namespace, using the host's IP address directly.
* `privileged: true` grants the container nearly all capabilities of the host machine, so use it with extreme caution.
23 optimized/standalone/ubuntu-unbound/docker-compose.yml Normal file
@@ -0,0 +1,23 @@
version: "3.9"

services:
  ubuntu-server:
    build: .
    container_name: ubuntu_server
    restart: unless-stopped
    network_mode: host
    privileged: true
    environment:
      - TZ=America/New_York # Change to your timezone
    volumes:
      - ubuntu_data:/data
      - ubuntu_config:/config
    ports: # NOTE: published ports are ignored under network_mode: host
      - "2222:2222" # SSH
      - "5335:5335" # Unbound DNS

volumes:
  ubuntu_data:
    driver: local
  ubuntu_config:
    driver: local
55 optimized/swarm/ai-stack.yml Normal file
@@ -0,0 +1,55 @@
version: '3.8'

networks:
  traefik-public:
    external: true

volumes:
  openwebui_data:

services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    volumes:
      - openwebui_data:/app/backend/data
    networks:
      - traefik-public
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    deploy:
      placement:
        constraints:
          - node.labels.heavy == true
      resources:
        limits:
          memory: 4G
          cpus: '4.0'
        reservations:
          memory: 2G
          cpus: '1.0'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.openwebui.rule=Host(`ai.sterl.xyz`)"
        - "traefik.http.routers.openwebui.entrypoints=websecure"
        - "traefik.http.routers.openwebui.tls.certresolver=cfresolver"
        - "traefik.http.services.openwebui.loadbalancer.server.port=8080"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=openwebui"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
251 optimized/swarm/applications-stack.yml Normal file
@@ -0,0 +1,251 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  homelab-backend:
    driver: overlay

volumes:
  paperless_data:
  paperless_media:
  paperless_db:
  paperless_redis:
  stirling_pdf_data:
  searxng_data:

secrets:
  paperless_db_password:
    external: true
  paperless_secret_key:
    external: true

services:
  paperless-redis:
    image: redis:7-alpine
    volumes:
      - paperless_redis:/data
    networks:
      - homelab-backend
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 30s
      timeout: 3s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 256M
          cpus: '0.5'
        reservations:
          memory: 64M
          cpus: '0.1'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  paperless-db:
    image: postgres:15-alpine
    volumes:
      - paperless_db:/var/lib/postgresql/data
    networks:
      - homelab-backend
    environment:
      - POSTGRES_DB=paperless
      - POSTGRES_USER=paperless
      - POSTGRES_PASSWORD_FILE=/run/secrets/paperless_db_password
    secrets:
      - paperless_db_password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U paperless"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.25'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  paperless:
    image: ghcr.io/paperless-ngx/paperless-ngx:latest
    volumes:
      - paperless_data:/usr/src/paperless/data
      - paperless_media:/usr/src/paperless/media
    environment:
      - PAPERLESS_REDIS=redis://paperless-redis:6379
      - PAPERLESS_DBHOST=paperless-db
      - PAPERLESS_DBNAME=paperless
      - PAPERLESS_DBUSER=paperless
      - PAPERLESS_DBPASS_FILE=/run/secrets/paperless_db_password
      - PAPERLESS_URL=https://paperless.sterl.xyz
      - PAPERLESS_SECRET_KEY_FILE=/run/secrets/paperless_secret_key
      - TZ=America/Chicago
    secrets:
      - paperless_db_password
      - paperless_secret_key
    depends_on:
      - paperless-redis
      - paperless-db
    networks:
      - traefik-public
      - homelab-backend
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8000/api/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 90s
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 1536M
          cpus: '2.0'
        reservations:
          memory: 768M
          cpus: '0.5'
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.paperless.rule=Host(`paperless.sterl.xyz`)"
        - "traefik.http.routers.paperless.entrypoints=websecure"
        - "traefik.http.routers.paperless.tls.certresolver=cfresolver"
        - "traefik.http.services.paperless.loadbalancer.server.port=8000"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=paperless"
        - "docktail.container_port=8000"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  stirling-pdf:
    image: stirlingtools/stirling-pdf:latest
    volumes:
      - stirling_pdf_data:/configs
    environment:
      - DOCKER_ENABLE_SECURITY=false
      - INSTALL_BOOK_AND_ADVANCED_HTML_OPS=false
      - LANGS=en_US
    networks:
      - traefik-public
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 1536M
          cpus: '2.0'
        reservations:
          memory: 768M
          cpus: '0.5'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.pdf.rule=Host(`pdf.sterl.xyz`)"
        - "traefik.http.routers.pdf.entrypoints=websecure"
        - "traefik.http.routers.pdf.tls.certresolver=cfresolver"
        - "traefik.http.services.pdf.loadbalancer.server.port=8080"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=pdf"
        - "docktail.container_port=8080"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  searxng:
    image: searxng/searxng:latest
    volumes:
      - searxng_data:/etc/searxng
    environment:
      - SEARXNG_BASE_URL=https://search.sterl.xyz/
    networks:
      - traefik-public
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/healthz"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    deploy:
      replicas: 2
      resources:
        limits:
          memory: 1536M
          cpus: '2.0'
        reservations:
          memory: 512M
          cpus: '0.5'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.searxng.rule=Host(`search.sterl.xyz`)"
        - "traefik.http.routers.searxng.entrypoints=websecure"
        - "traefik.http.routers.searxng.tls.certresolver=cfresolver"
        - "traefik.http.services.searxng.loadbalancer.server.port=8080"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=search"
        - "docktail.container_port=8080"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
107 optimized/swarm/gitea-stack.yml Normal file
@@ -0,0 +1,107 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  gitea-internal:
    driver: overlay
    attachable: true

volumes:
  gitea_data:
  gitea_db_data:

secrets:
  gitea_db_password:
    external: true

services:
  gitea:
    image: gitea/gitea:latest
    volumes:
      - gitea_data:/data
    networks:
      - traefik-public
      - gitea-internal
    ports:
      - "2222:22"
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=gitea-db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD_FILE=/run/secrets/gitea_db_password
      - GITEA__server__DOMAIN=git.sterl.xyz
      - GITEA__server__ROOT_URL=https://git.sterl.xyz
      - GITEA__server__SSH_DOMAIN=git.sterl.xyz
      - GITEA__server__SSH_PORT=2222
      - GITEA__service__DISABLE_REGISTRATION=false
    secrets:
      - gitea_db_password
    depends_on:
      - gitea-db
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider http://localhost:3000 || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 1G
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.2'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.gitea.rule=Host(`git.sterl.xyz`)"
        - "traefik.http.routers.gitea.entrypoints=websecure"
        - "traefik.http.routers.gitea.tls.certresolver=cfresolver"
        - "traefik.http.services.gitea.loadbalancer.server.port=3000"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=gitea"
        - "docktail.container_port=3000"

  gitea-db:
    image: postgres:15-alpine
    volumes:
      - gitea_db_data:/var/lib/postgresql/data
    networks:
      - gitea-internal
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD_FILE=/run/secrets/gitea_db_password
      - POSTGRES_DB=gitea
    secrets:
      - gitea_db_password
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U gitea"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 128M
          cpus: '0.1'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
109 optimized/swarm/infrastructure-stack.yml Normal file
@@ -0,0 +1,109 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  homelab-backend:
    driver: overlay

volumes:
  komodo_data:
  komodo_mongo_data:

services:
  komodo-mongo:
    image: mongo:7
    volumes:
      - komodo_mongo_data:/data/db
    networks:
      - homelab-backend
    deploy:
      placement:
        constraints:
          - node.labels.leader == true
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 128M
          cpus: '0.1'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  komodo-core:
    image: ghcr.io/moghtech/komodo:latest
    depends_on:
      - komodo-mongo
    environment:
      - KOMODO_DATABASE_ADDRESS=komodo-mongo:27017
    volumes:
      - komodo_data:/config
    networks:
      - traefik-public
      - homelab-backend
    deploy:
      placement:
        constraints:
          - node.labels.leader == true
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 128M
          cpus: '0.1'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.komodo.rule=Host(`komodo.sterl.xyz`)"
        - "traefik.http.routers.komodo.entrypoints=websecure"
        - "traefik.http.routers.komodo.tls.certresolver=cfresolver"
        - "traefik.http.services.komodo.loadbalancer.server.port=9120"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=komodo"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  komodo-periphery:
    image: ghcr.io/moghtech/komodo-periphery:latest
    environment:
      - PERIPHERY_Id=periphery-{{.Node.Hostname}}
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    deploy:
      mode: global
      resources:
        limits:
          memory: 128M
          cpus: '0.5'
        reservations:
          memory: 32M
          cpus: '0.05'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
198
optimized/swarm/media-stack.yml
Normal file
@@ -0,0 +1,198 @@
version: '3.9'

networks:
  traefik-public:
    external: true
  media-backend:
    driver: overlay
    attachable: true

volumes:
  plex_config:
  jellyfin_config:
  immich_upload:
  immich_model_cache:
  immich_db:
  immich_redis:
  homarr_config:

services:

  ############################################
  # HOMARR
  ############################################
  homarr:
    image: ghcr.io/ajnart/homarr:latest
    networks:
      - traefik-public
      - media-backend
    volumes:
      - homarr_config:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=America/Chicago
    deploy:
      placement:
        constraints:
          - node.role == manager
          - node.labels.leader == true
      labels:
        - "traefik.enable=true"
        - "traefik.swarm.network=traefik-public"

        - "traefik.http.routers.homarr.rule=Host(`homarr.sterl.xyz`)"
        - "traefik.http.routers.homarr.entrypoints=websecure"
        - "traefik.http.routers.homarr.tls.certresolver=cfresolver"

        - "traefik.http.services.homarr-svc.loadbalancer.server.port=7575"
        - "docktail.enable=true"
        - "docktail.name=homarr"
        - "docktail.container_port=7575"
      restart_policy:
        condition: on-failure
        max_attempts: 3

  ############################################
  # JELLYFIN
  ############################################
  jellyfin:
    image: jellyfin/jellyfin:latest
    networks:
      - traefik-public
      - media-backend
    volumes:
      - jellyfin_config:/config
      - /mnt/media:/media:ro
    environment:
      - TZ=America/Chicago
    deploy:
      placement:
        constraints:
          - node.role == manager
      labels:
        - "traefik.enable=true"
        - "traefik.swarm.network=traefik-public"

        - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.sterl.xyz`)"
        - "traefik.http.routers.jellyfin.entrypoints=websecure"
        - "traefik.http.routers.jellyfin.tls.certresolver=cfresolver"

        - "traefik.http.services.jellyfin-svc.loadbalancer.server.port=8096"
        - "docktail.enable=true"
        - "docktail.name=jellyfin"
        - "docktail.container_port=8096"
      restart_policy:
        condition: on-failure
        max_attempts: 3

  ############################################
  # IMMICH SERVER
  ############################################
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    networks:
      - traefik-public
      - media-backend
    volumes:
      - immich_upload:/usr/src/app/upload
      - /mnt/media/Photos:/usr/src/app/upload/library:rw
      - /etc/localtime:/etc/localtime:ro
    environment:
      - DB_HOSTNAME=immich-db
      - DB_USERNAME=immich
      - DB_PASSWORD=immich
      - DB_DATABASE_NAME=immich
      - REDIS_HOSTNAME=immich-redis
      - TZ=America/Chicago
      - IMMICH_MEDIA_LOCATION=/usr/src/app/upload/library
    depends_on:
      - immich-redis
      - immich-db
    deploy:
      placement:
        constraints:
          - node.role == manager
      labels:
        - "traefik.enable=true"
        - "traefik.swarm.network=traefik-public"

        - "traefik.http.routers.immich.rule=Host(`immich.sterl.xyz`)"
        - "traefik.http.routers.immich.entrypoints=websecure"
        - "traefik.http.routers.immich.tls.certresolver=cfresolver"

        - "traefik.http.services.immich-svc.loadbalancer.server.port=2283"
        - "docktail.enable=true"
        - "docktail.name=immich"
        - "docktail.container_port=2283"
        - "traefik.http.routers.immich.middlewares=immich-headers"
        - "traefik.http.middlewares.immich-headers.headers.customrequestheaders.X-Forwarded-Proto=https"

      restart_policy:
        condition: on-failure
        max_attempts: 0

  ############################################
  # IMMICH MACHINE LEARNING
  ############################################
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    networks:
      - media-backend
    volumes:
      - immich_model_cache:/cache
    environment:
      - TZ=America/Chicago
    depends_on:
      - immich-server
    deploy:
      placement:
        constraints:
          - node.labels.heavy == true
          - node.labels.ai == true
      restart_policy:
        condition: on-failure
        max_attempts: 0

  ############################################
  # IMMICH REDIS
  ############################################
  immich-redis:
    image: redis:7-alpine
    networks:
      - media-backend
    volumes:
      - immich_redis:/data
    deploy:
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
        max_attempts: 0

  ############################################
  # IMMICH DATABASE
  ############################################
  immich-db:
    image: tensorchord/pgvecto-rs:pg14-v0.2.0
    networks:
      - media-backend
    volumes:
      - immich_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=immich
      - POSTGRES_USER=immich
      - POSTGRES_DB=immich
    deploy:
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
        max_attempts: 0
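The Immich machine-learning service above only schedules onto nodes carrying both the `heavy` and `ai` labels. A sketch of labeling a node, assuming `fedora` is the ML-capable host:

```sh
# Tag the node that should run ML workloads
docker node update --label-add heavy=true --label-add ai=true fedora
docker node inspect fedora --format '{{ .Spec.Labels }}'   # verify
```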
248
optimized/swarm/monitoring-stack.yml
Normal file
@@ -0,0 +1,248 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  monitoring:
    driver: overlay

volumes:
  prometheus_data:
  grafana_data:
  alertmanager_data:

secrets:
  grafana_admin_password:
    external: true

configs:
  prometheus_config:
    external: true
    name: prometheus.yml
  alertmanager_config:
    external: true
    name: alertmanager.yml

services:
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - prometheus_data:/prometheus
    configs:
      - source: prometheus_config
        target: /etc/prometheus/prometheus.yml
    networks:
      - monitoring
      - traefik-public
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9090/-/healthy"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 2G
          cpus: '1.0'
        reservations:
          memory: 512M
          cpus: '0.25'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.prometheus.rule=Host(`prometheus.sterl.xyz`)"
        - "traefik.http.routers.prometheus.entrypoints=websecure"
        - "traefik.http.routers.prometheus.tls.certresolver=cfresolver"
        - "traefik.http.services.prometheus.loadbalancer.server.port=9090"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=prometheus"
        - "docktail.container_port=9090"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  grafana:
    image: grafana/grafana:latest
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SERVER_ROOT_URL=https://grafana.sterl.xyz
      - GF_SECURITY_ADMIN_PASSWORD__FILE=/run/secrets/grafana_admin_password
    secrets:
      - grafana_admin_password
    networks:
      - monitoring
      - traefik-public
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 1G
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.25'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.grafana.rule=Host(`grafana.sterl.xyz`)"
        - "traefik.http.routers.grafana.entrypoints=websecure"
        - "traefik.http.routers.grafana.tls.certresolver=cfresolver"
        - "traefik.http.services.grafana.loadbalancer.server.port=3000"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=grafana"
        - "docktail.container_port=3000"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  alertmanager:
    image: prom/alertmanager:latest
    volumes:
      - alertmanager_data:/alertmanager
    configs:
      - source: alertmanager_config
        target: /etc/alertmanager/config.yml
    command:
      - '--config.file=/etc/alertmanager/config.yml'
      - '--storage.path=/alertmanager'
    networks:
      - monitoring
      - traefik-public
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9093/-/healthy"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 15s
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 256M
          cpus: '0.25'
        reservations:
          memory: 64M
          cpus: '0.05'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.alertmanager.rule=Host(`alertmanager.sterl.xyz`)"
        - "traefik.http.routers.alertmanager.entrypoints=websecure"
        - "traefik.http.routers.alertmanager.tls.certresolver=cfresolver"
        - "traefik.http.services.alertmanager.loadbalancer.server.port=9093"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=alertmanager"
        - "docktail.container_port=9093"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  node-exporter:
    image: prom/node-exporter:latest
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    networks:
      - monitoring
    deploy:
      mode: global
      resources:
        limits:
          memory: 128M
          cpus: '0.2'
        reservations:
          memory: 32M
          cpus: '0.05'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "2"

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    command:
      - '--docker_only=true'
      - '--housekeeping_interval=30s'
    networks:
      - monitoring
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
    deploy:
      mode: global
      resources:
        limits:
          memory: 256M
          cpus: '0.3'
        reservations:
          memory: 64M
          cpus: '0.1'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "2"
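The monitoring stack expects its configs and the Grafana admin password to exist as external Swarm objects before the first deploy. A one-time bootstrap sketch, assuming the files from `services/standalone/Monitoring/config/` are reused:

```sh
# Names must match the `name:` fields declared in the stack file
docker config create prometheus.yml services/standalone/Monitoring/config/prometheus.yml
docker config create alertmanager.yml services/standalone/Monitoring/config/alertmanager.yml

# Read from stdin so the password stays out of shell history
printf '%s' 'replace_me' | docker secret create grafana_admin_password -
```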
77
optimized/swarm/n8n-stack.yml
Normal file
@@ -0,0 +1,77 @@
version: '3.8'

networks:
  traefik-public:
    external: true

volumes:
  n8n_data:

services:
  n8n:
    image: n8nio/n8n:latest
    volumes:
      - n8n_data:/home/node/.n8n
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-public
    extra_hosts:
      - "gateway:192.168.1.1"
      - "proxmox:192.168.1.57"
      - "omv:192.168.1.70"
      - "swarm-manager:192.168.1.196"
      - "swarm-leader:192.168.1.245"
      - "swarm-worker-light:192.168.1.62"
      - "lm-studio:192.168.1.81"
      - "fedora:192.168.1.81"
      - "n8n.sterl.xyz:192.168.1.196"
    environment:
      - N8N_HOST=n8n.sterl.xyz
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://n8n.sterl.xyz/
      - N8N_EDITOR_BASE_URL=https://n8n.sterl.xyz/
      - N8N_PUSH_BACKEND=websocket
      - N8N_PROXY_HOPS=1
      - N8N_SECURE_COOKIE=false
      - N8N_METRICS=false
      - N8N_SKIP_WEBHOOK_CSRF_CHECK=true
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      # Database configuration (fix deprecation warning)
      - DB_SQLITE_POOL_SIZE=10
      # Task runners (fix deprecation warning)
      - N8N_RUNNERS_ENABLED=true
      # Security settings (fix deprecation warnings)
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=false
      - N8N_GIT_NODE_DISABLE_BARE_REPOS=true
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider http://localhost:5678/healthz || exit 1"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 4G
          cpus: '2.0'
        reservations:
          memory: 512M
          cpus: '0.5'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.n8n.rule=Host(`n8n.sterl.xyz`)"
        - "traefik.http.routers.n8n.entrypoints=websecure"
        - "traefik.http.routers.n8n.tls.certresolver=cfresolver"
        - "traefik.http.services.n8n.loadbalancer.server.port=5678"
        - "traefik.http.services.n8n.loadbalancer.sticky.cookie=true"
        - "traefik.http.services.n8n.loadbalancer.sticky.cookie.name=n8n_sticky"
        - "traefik.http.services.n8n.loadbalancer.sticky.cookie.secure=true"
        - "traefik.swarm.network=traefik-public"
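A quick deploy-and-verify sketch for the n8n stack (the stack name `n8n` is arbitrary):

```sh
docker stack deploy -c optimized/swarm/n8n-stack.yml n8n
docker service ps n8n_n8n       # confirm it landed on a manager node
docker service logs -f n8n_n8n  # watch startup and healthcheck output
```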
138
optimized/swarm/networking-stack.yml
Normal file
@@ -0,0 +1,138 @@
version: '3.8'

networks:
  traefik-public:
    external: true

volumes:
  traefik_letsencrypt:
    external: true

configs:
  traefik_dynamic:
    external: true

secrets:
  cf_api_token:
    external: true

services:
  traefik:
    image: traefik:latest
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik_letsencrypt:/letsencrypt
    networks:
      - traefik-public
    secrets:
      - cf_api_token
    configs:
      # Dynamic config consumed by the file provider at /dynamic.yml
      - source: traefik_dynamic
        target: /dynamic.yml
    environment:
      # Cloudflare API Token (with DNS edit permissions for your domain)
      - CF_DNS_API_TOKEN_FILE=/run/secrets/cf_api_token
      - CF_ZONE_API_TOKEN_FILE=/run/secrets/cf_api_token

    # Optional: keep your Pi-hole resolvers as upstream DNS
    dns:
      - 192.168.1.196
      - 192.168.1.245
      - 1.1.1.1

    command:
      # Entrypoints
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"

      # Swarm Provider
      - "--providers.swarm=true"
      - "--providers.swarm.network=traefik-public"
      - "--providers.swarm.exposedbydefault=false"

      # File Provider (Dynamic Config)
      - "--providers.file.filename=/dynamic.yml"
      - "--providers.file.watch=true"

      # Dashboard
      - "--api.dashboard=true"
      - "--api.insecure=false"

      # HTTP -> HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"

      # Let's Encrypt / ACME Cloudflare DNS Challenge
      - "--certificatesresolvers.cfresolver.acme.email=sterlenjohnson6@gmail.com"
      - "--certificatesresolvers.cfresolver.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.cfresolver.acme.dnschallenge=true"
      - "--certificatesresolvers.cfresolver.acme.dnschallenge.provider=cloudflare"

      # Optional: increase delay for propagation
      - "--certificatesresolvers.cfresolver.acme.dnschallenge.propagation.delayBeforeChecks=60"
      # Logging
      - "--log.level=INFO"

    deploy:
      placement:
        constraints:
          - node.role == manager
      labels:
        # Dashboard Router
        - "traefik.enable=true"
        - "traefik.http.routers.traefik.rule=Host(`traefik.sterl.xyz`)"
        - "traefik.http.routers.traefik.entrypoints=websecure"
        - "traefik.http.routers.traefik.tls.certresolver=cfresolver"
        - "traefik.http.services.traefik.loadbalancer.server.port=8080"
        - "traefik.http.routers.traefik.service=api@internal"

  whoami:
    image: traefik/whoami
    networks:
      - traefik-public
    deploy:
      labels:
        # Whoami Router
        - "traefik.enable=true"
        - "traefik.http.routers.whoami.rule=Host(`whoami.sterl.xyz`)"
        - "traefik.http.routers.whoami.entrypoints=websecure"
        - "traefik.http.routers.whoami.tls.certresolver=cfresolver"
        - "traefik.http.services.whoami.loadbalancer.server.port=80"

  docktail:
    image: ghcr.io/marvinvr/docktail:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-public
    environment:
      # Optional: Your Tailscale Auth Key if not using tag approval flow
      - TAILSCALE_AUTH_KEY=tskey-auth-ksGqv9DLDZ11CNTRL-TPrRcTiHWYUyuVskzy4nYUGwm2bxPM2d
      # Optional: Set log level
      # - DOCKTAIL_LOG_LEVEL=info
      # Optional: Specify a Tailnet IP for the container itself if needed
      # - DOCKTAIL_TAILNET_IP=100.x.y.z
    deploy:
      mode: global  # Run DockTail on all Swarm nodes
      resources:
        limits:
          memory: 128M
          cpus: '0.2'
        reservations:
          memory: 32M
          cpus: '0.05'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s
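The external objects declared above must exist before the first deploy. A one-time bootstrap sketch (local file paths are illustrative):

```sh
docker volume create traefik_letsencrypt            # persists acme.json across updates
docker config create traefik_dynamic ./dynamic.yml  # dynamic TLS/middleware config
printf '%s' "$CF_TOKEN" | docker secret create cf_api_token -
```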
143
optimized/swarm/portainer-stack.yml
Normal file
@@ -0,0 +1,143 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  portainer-agent:
    driver: overlay
    attachable: true

volumes:
  portainer_data:

services:
  portainer:
    image: portainer/portainer-ce:latest
    command:
      - "-H"
      - "tcp://tasks.agent:9001"
      - "--tlsskipverify"
    ports:
      - "9000:9000"
      - "9443:9443"
    volumes:
      - portainer_data:/data
    networks:
      - traefik-public
      - portainer-agent
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9000/api/status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      resources:
        limits:
          memory: 512M
          cpus: '0.5'
        reservations:
          memory: 256M
          cpus: '0.25'
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
      update_config:
        parallelism: 1
        delay: 10s
        failure_action: rollback
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.portainer.rule=Host(`portainer.sterl.xyz`)"
        - "traefik.http.routers.portainer.entrypoints=websecure"
        - "traefik.http.routers.portainer.tls.certresolver=cfresolver"
        - "traefik.http.routers.portainer.service=portainer"
        - "traefik.http.routers.portainer.tls=true"
        - "traefik.http.services.portainer.loadbalancer.server.port=9000"
        - "traefik.http.services.portainer.loadbalancer.sticky.cookie=true"
        - "traefik.swarm.network=traefik-public"
        - "traefik.docker.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=portainer"
        - "docktail.container_port=9000"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Linux agent
  agent:
    image: portainer/agent:latest
    environment:
      AGENT_CLUSTER_ADDR: tasks.agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    networks:
      - portainer-agent
    deploy:
      mode: global
      placement:
        constraints:
          - node.platform.os == linux
      resources:
        limits:
          memory: 128M
          cpus: '0.25'
        reservations:
          memory: 64M
          cpus: '0.1'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "2"

  # Windows agent (optional - only deploys if Windows node exists)
  agent-windows:
    image: portainer/agent:latest
    environment:
      AGENT_CLUSTER_ADDR: tasks.agent
    volumes:
      - type: npipe
        source: \\\\.\\pipe\\docker_engine
        target: \\\\.\\pipe\\docker_engine
      - type: bind
        source: C:\\ProgramData\\docker\\volumes
        target: C:\\ProgramData\\docker\\volumes
    networks:
      portainer-agent:
        aliases:
          - agent
    deploy:
      mode: global
      placement:
        constraints:
          - node.platform.os == windows
      resources:
        limits:
          memory: 128M
          cpus: '0.25'
        reservations:
          memory: 64M
          cpus: '0.1'
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "2"
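With `mode: global` plus the platform constraints, each matching node should run exactly one agent. A quick check after deploying:

```sh
docker stack deploy -c optimized/swarm/portainer-stack.yml portainer
docker service ps portainer_agent --format 'table {{.Node}}\t{{.CurrentState}}'
```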
115
optimized/swarm/productivity-stack.yml
Normal file
@@ -0,0 +1,115 @@
version: '3.9'

networks:
  traefik-public:
    external: true
  productivity-backend:
    driver: overlay

volumes:
  nextcloud_data:
  nextcloud_db:
  nextcloud_redis:

services:
  nextcloud-db:
    image: postgres:15-alpine
    volumes:
      - nextcloud_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD} # Replace with a secure password in production
    networks:
      - productivity-backend
    deploy:
      placement:
        constraints:
          - node.labels.leader == true
      resources:
        limits:
          memory: 1G
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.25'
      restart_policy:
        condition: on-failure

  nextcloud-redis:
    image: redis:7-alpine
    volumes:
      - nextcloud_redis:/data
    networks:
      - productivity-backend
    deploy:
      placement:
        constraints:
          - node.labels.leader == true
      resources:
        limits:
          memory: 256M
          cpus: '0.5'
        reservations:
          memory: 64M
          cpus: '0.1'
      restart_policy:
        condition: on-failure

  nextcloud:
    image: nextcloud:latest
    volumes:
      - nextcloud_data:/var/www/html
    environment:
      - POSTGRES_HOST=nextcloud-db
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD} # Replace with a secure password in production
      - REDIS_HOST=nextcloud-redis
      - NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER} # Replace with your desired admin username
      - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD} # Replace with a secure password
      - NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.sterl.xyz
      - OVERWRITEPROTOCOL=https
      - OVERWRITEHOST=nextcloud.sterl.xyz
      - TRUSTED_PROXIES=172.16.0.0/12
    depends_on:
      - nextcloud-db
      - nextcloud-redis
    networks:
      - traefik-public
      - productivity-backend
    deploy:
      placement:
        constraints:
          - node.labels.leader == true
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 512M
      restart_policy:
        condition: on-failure
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.nextcloud.rule=Host(`nextcloud.sterl.xyz`)"
        - "traefik.http.routers.nextcloud.entrypoints=websecure"
        - "traefik.http.routers.nextcloud.tls.certresolver=cfresolver"
        - "traefik.http.services.nextcloud.loadbalancer.server.port=80"
        - "traefik.swarm.network=traefik-public"
        # Nextcloud-specific middlewares
        - "traefik.http.routers.nextcloud.middlewares=nextcloud-chain"
        - "traefik.http.middlewares.nextcloud-chain.chain.middlewares=nextcloud-caldav,nextcloud-headers"
        # CalDAV/CardDAV redirect
        - "traefik.http.middlewares.nextcloud-caldav.redirectregex.regex=^https://(.*)/.well-known/(card|cal)dav"
        - "traefik.http.middlewares.nextcloud-caldav.redirectregex.replacement=https://$$1/remote.php/dav/"
        - "traefik.http.middlewares.nextcloud-caldav.redirectregex.permanent=true"
        # Security headers
        - "traefik.http.middlewares.nextcloud-headers.headers.stsSeconds=31536000"
        - "traefik.http.middlewares.nextcloud-headers.headers.stsIncludeSubdomains=true"
        - "traefik.http.middlewares.nextcloud-headers.headers.stsPreload=true"
        - "traefik.http.middlewares.nextcloud-headers.headers.forceSTSHeader=true"
        - "traefik.http.middlewares.nextcloud-headers.headers.customFrameOptionsValue=SAMEORIGIN"
        - "traefik.http.middlewares.nextcloud-headers.headers.customResponseHeaders.X-Robots-Tag=noindex,nofollow"
        - "docktail.enable=true"
        - "docktail.name=nextcloud"
        - "docktail.container_port=80"
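One caveat with the `${...}` placeholders: `docker stack deploy` substitutes them from the current shell environment only; unlike `docker compose`, it does not read a `.env` file. A minimal sketch:

```sh
export POSTGRES_PASSWORD='change-me'
export NEXTCLOUD_ADMIN_USER='admin'
export NEXTCLOUD_ADMIN_PASSWORD='change-me'
docker stack deploy -c optimized/swarm/productivity-stack.yml productivity
```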
52
optimized/swarm/tools-stack.yml
Normal file
@@ -0,0 +1,52 @@
version: '3.8'

networks:
  traefik-public:
    external: true

services:
  dozzle:
    image: amir20/dozzle:latest
    user: "0:0"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-public
    environment:
      - DOZZLE_MODE=swarm
      - DOZZLE_LEVEL=debug
      - DOZZLE_NO_ANALYTICS=true
    logging:
      driver: "json-file"
      options:
        max-size: "5m"
        max-file: "2"
    deploy:
      mode: global
      resources:
        limits:
          memory: 256M
          cpus: '0.25'
        reservations:
          memory: 64M
          cpus: '0.05'
      restart_policy:
        condition: any
        delay: 5s
      labels:
        - "traefik.enable=true"
        - "traefik.http.routers.dozzle.rule=Host(`dozzle.sterl.xyz`)"
        - "traefik.http.routers.dozzle.entrypoints=websecure"
        - "traefik.http.routers.dozzle.tls.certresolver=cfresolver"
        - "traefik.http.services.dozzle.loadbalancer.server.port=8080"
        - "traefik.swarm.network=traefik-public"
        - "docktail.enable=true"
        - "docktail.name=logs"
        - "docktail.container_port=8080"
    healthcheck:
      test: ["CMD-SHELL", "if [ -S /var/run/docker.sock ]; then exit 0; else exit 1; fi"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
32
services/standalone/AI/docker-compose.yml
Normal file
@@ -0,0 +1,32 @@
version: '3.8'

networks:
  traefik-public:
    external: true

volumes:
  openwebui_data:

services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: openwebui
    restart: unless-stopped
    volumes:
      - openwebui_data:/app/backend/data
    networks:
      - traefik-public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.openwebui.rule=Host(`ai.sterl.xyz`)"
      - "traefik.http.routers.openwebui.entrypoints=websecure"
      - "traefik.http.routers.openwebui.tls.certresolver=cfresolver"
      - "traefik.http.services.openwebui.loadbalancer.server.port=8080"
      - "docktail.enable=true"
      - "docktail.name=openwebui"
      - "docktail.container_port=8080"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
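The standalone services are intended to be Podman-friendly; a sketch of bringing this one up with `podman-compose`, assuming the `traefik-public` network already exists on the host:

```sh
cd services/standalone/AI
podman network create traefik-public   # skip if it already exists
podman-compose up -d
```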
1
services/standalone/Gitea/.env.example
Normal file
@@ -0,0 +1 @@
GITEA_DB_PASSWORD=replace_me
63
services/standalone/Gitea/docker-compose.yml
Normal file
@@ -0,0 +1,63 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  gitea-internal:
    driver: bridge

volumes:
  gitea_data:
  gitea_db_data:

services:
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    restart: unless-stopped
    volumes:
      - gitea_data:/data
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
    networks:
      - traefik-public
      - gitea-internal
    ports:
      - "2222:22"
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=gitea-db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=${GITEA_DB_PASSWORD}
      - GITEA__server__DOMAIN=git.sterl.xyz
      - GITEA__server__ROOT_URL=https://git.sterl.xyz
      - GITEA__server__SSH_DOMAIN=git.sterl.xyz
      - GITEA__server__SSH_PORT=2222
      - GITEA__service__DISABLE_REGISTRATION=false
    depends_on:
      - gitea-db
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.gitea.rule=Host(`git.sterl.xyz`)"
      - "traefik.http.routers.gitea.entrypoints=websecure"
      - "traefik.http.routers.gitea.tls.certresolver=cfresolver"
      - "traefik.http.services.gitea.loadbalancer.server.port=3000"
      - "docktail.enable=true"
      - "docktail.name=gitea"
      - "docktail.container_port=3000"

  gitea-db:
    image: postgres:15-alpine
    container_name: gitea-db
    restart: unless-stopped
    volumes:
      - gitea_db_data:/var/lib/postgresql/data
    networks:
      - gitea-internal
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=${GITEA_DB_PASSWORD}
      - POSTGRES_DB=gitea
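Unlike `docker stack deploy`, plain `docker compose` automatically reads a `.env` file next to the compose file, which is what the `.env.example` files here are for. Typical first run:

```sh
cd services/standalone/Gitea
cp .env.example .env    # then set a real GITEA_DB_PASSWORD
docker compose up -d
```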
67
services/standalone/Infrastructure/docker-compose.yml
Normal file
@@ -0,0 +1,67 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  homelab-backend:
    driver: bridge

volumes:
  komodo_data:
  komodo_mongo_data:

services:
  komodo-mongo:
    image: mongo:7
    container_name: komodo-mongo
    restart: unless-stopped
    volumes:
      - komodo_mongo_data:/data/db
    networks:
      - homelab-backend
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  komodo-core:
    image: ghcr.io/moghtech/komodo:latest
    container_name: komodo-core
    depends_on:
      - komodo-mongo
    environment:
      - KOMODO_DATABASE_ADDRESS=komodo-mongo:27017
    volumes:
      - komodo_data:/config
    networks:
      - traefik-public
      - homelab-backend
    restart: unless-stopped
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.komodo.rule=Host(`komodo.sterl.xyz`)"
      - "traefik.http.routers.komodo.entrypoints=websecure"
      - "traefik.http.routers.komodo.tls.certresolver=cfresolver"
      - "traefik.http.services.komodo.loadbalancer.server.port=9120"
      - "docktail.enable=true"
      - "docktail.name=komodo"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  komodo-periphery:
    image: ghcr.io/moghtech/komodo-periphery:latest
    container_name: komodo-periphery
    environment:
      - PERIPHERY_Id=periphery-standalone
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
198
services/standalone/Media/docker-compose.yml
Normal file
@@ -0,0 +1,198 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  media-backend:
    driver: bridge

volumes:
  homarr_config:
  jellyfin_config:
  immich_upload:
  immich_model_cache:
  immich_db:
  immich_redis:

services:
  homarr:
    image: ghcr.io/ajnart/homarr:latest
    container_name: homarr
    restart: unless-stopped
    volumes:
      - homarr_config:/app/data
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      - TZ=America/Chicago
    networks:
      - traefik-public
      - media-backend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.homarr.rule=Host(`homarr.sterl.xyz`)"
      - "traefik.http.routers.homarr.entrypoints=websecure"
      - "traefik.http.routers.homarr.tls.certresolver=cfresolver"
      - "traefik.http.services.homarr-svc.loadbalancer.server.port=7575"
      - "docktail.enable=true"
      - "docktail.name=homarr"
      - "docktail.container_port=7575"

  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    restart: unless-stopped
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == fedora
    volumes:
      - jellyfin_config:/config
      - /mnt/media:/media:ro
    environment:
      - TZ=America/Chicago
    networks:
      - traefik-public
      - media-backend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.jellyfin.rule=Host(`jellyfin.sterl.xyz`)"
      - "traefik.http.routers.jellyfin.entrypoints=websecure"
      - "traefik.http.routers.jellyfin.tls.certresolver=cfresolver"
      - "traefik.http.services.jellyfin-svc.loadbalancer.server.port=8096"
      - "docktail.enable=true"
      - "docktail.name=jellyfin"
      - "docktail.container_port=8096"

  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    container_name: immich-server
    restart: unless-stopped
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.labels.role == photo
    depends_on:
      - immich-redis
      - immich-db
    volumes:
      - immich_upload:/usr/src/app/upload
      - /mnt/media/Photos:/usr/src/app/upload/library:rw
      - /etc/localtime:/etc/localtime:ro
    environment:
      - DB_HOSTNAME=immich-db
      - DB_USERNAME=immich
      - DB_PASSWORD=immich
      - DB_DATABASE_NAME=immich
      - REDIS_HOSTNAME=immich-redis
      - TZ=America/Chicago
      - IMMICH_MEDIA_LOCATION=/usr/src/app/upload/library
    networks:
      - traefik-public
      - media-backend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.immich.rule=Host(`immich.sterl.xyz`)"
      - "traefik.http.routers.immich.entrypoints=websecure"
      - "traefik.http.routers.immich.tls.certresolver=cfresolver"
      - "traefik.http.services.immich-svc.loadbalancer.server.port=2283"
      - "docktail.enable=true"
      - "docktail.name=immich"
      - "docktail.container_port=2283"
      - "traefik.http.routers.immich.middlewares=immich-headers"
      - "traefik.http.middlewares.immich-headers.headers.customrequestheaders.X-Forwarded-Proto=https"

  # HAOS variant of the Immich server; run either this service or
  # immich-server above, not both (they share routers and volumes)
  immich-server-haos:
    image: ghcr.io/immich-app/immich-server:release
    container_name: immich-server-haos
    restart: unless-stopped
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.hostname == homeassistant
          - node.labels.role == photo
    depends_on:
      - immich-redis
      - immich-db
    volumes:
      - immich_upload:/usr/src/app/upload
      - /mnt/data/supervisor/media/Media/Photos:/usr/src/app/upload/library:rw
      - /etc/localtime:/etc/localtime:ro
    environment:
      - DB_HOSTNAME=immich-db
      - DB_USERNAME=immich
      - DB_PASSWORD=immich
      - DB_DATABASE_NAME=immich
      - REDIS_HOSTNAME=immich-redis
      - TZ=America/Chicago
      - IMMICH_MEDIA_LOCATION=/usr/src/app/upload/library
    networks:
      - traefik-public
      - media-backend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.immich.rule=Host(`immich.sterl.xyz`)"
      - "traefik.http.routers.immich.entrypoints=websecure"
      - "traefik.http.routers.immich.tls.certresolver=cfresolver"
      - "traefik.http.services.immich-svc.loadbalancer.server.port=2283"
      - "docktail.enable=true"
      - "docktail.name=immich"
      - "docktail.container_port=2283"
      - "traefik.http.routers.immich.middlewares=immich-headers"
      - "traefik.http.middlewares.immich-headers.headers.customrequestheaders.X-Forwarded-Proto=https"

  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:release
    container_name: immich-machine-learning
    restart: unless-stopped
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.labels.ai == true
          - node.labels.role == photo
    volumes:
      - immich_model_cache:/cache
    environment:
      - TZ=America/Chicago
    networks:
      - media-backend

  immich-redis:
    image: redis:7-alpine
    container_name: immich-redis
    restart: unless-stopped
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.labels.role == photo
    volumes:
      - immich_redis:/data
    networks:
      - media-backend

  immich-db:
    image: tensorchord/pgvecto-rs:pg14-v0.2.0
    container_name: immich-db
    restart: unless-stopped
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.labels.role == photo
    volumes:
      - immich_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=immich
      - POSTGRES_USER=immich
      - POSTGRES_DB=immich
    networks:
      - media-backend
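Because the two Immich server variants are mutually exclusive (and the `deploy:` placement keys are ignored by plain `docker compose` outside Swarm), start only the services for the host at hand; a sketch for the non-HAOS box:

```sh
cd services/standalone/Media
docker compose up -d immich-db immich-redis immich-server immich-machine-learning
```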
1
services/standalone/Monitoring/.env.example
Normal file
@@ -0,0 +1 @@
GRAFANA_ADMIN_PASSWORD=replace_me
6
services/standalone/Monitoring/config/alertmanager.yml
Normal file
@@ -0,0 +1,6 @@
route:
  receiver: "web.hook"
receivers:
  - name: "web.hook"
    webhook_configs:
      - url: "http://127.0.0.1:5001/"
15
services/standalone/Monitoring/config/prometheus.yml
Normal file
@@ -0,0 +1,15 @@
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "prometheus"
    static_configs:
      - targets: ["localhost:9090"]

  - job_name: "node-exporter"
    static_configs:
      - targets: ["node-exporter:9100"]

  - job_name: "cadvisor"
    static_configs:
      - targets: ["cadvisor:8080"]
110
services/standalone/Monitoring/docker-compose.yml
Normal file
@@ -0,0 +1,110 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  monitoring:
    driver: bridge

volumes:
  prometheus_data:
  grafana_data:
  alertmanager_data:

services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    restart: unless-stopped
    volumes:
      - prometheus_data:/prometheus
      - ./config/prometheus.yml:/etc/prometheus/prometheus.yml:ro
    networks:
      - monitoring
      - traefik-public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.prometheus.rule=Host(`prometheus.sterl.xyz`)"
      - "traefik.http.routers.prometheus.entrypoints=websecure"
      - "traefik.http.routers.prometheus.tls.certresolver=cfresolver"
      - "traefik.http.services.prometheus.loadbalancer.server.port=9090"
      - "docktail.enable=true"
      - "docktail.name=prometheus"
      - "docktail.container_port=9090"

  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    volumes:
      - grafana_data:/var/lib/grafana
    environment:
      - GF_SERVER_ROOT_URL=https://grafana.sterl.xyz
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_ADMIN_PASSWORD}
    networks:
      - monitoring
      - traefik-public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.grafana.rule=Host(`grafana.sterl.xyz`)"
      - "traefik.http.routers.grafana.entrypoints=websecure"
      - "traefik.http.routers.grafana.tls.certresolver=cfresolver"
      - "traefik.http.services.grafana.loadbalancer.server.port=3000"
      - "docktail.enable=true"
      - "docktail.name=grafana"
      - "docktail.container_port=3000"

  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    restart: unless-stopped
    volumes:
      - alertmanager_data:/alertmanager
      - ./config/alertmanager.yml:/etc/alertmanager/config.yml:ro
    command:
      - '--config.file=/etc/alertmanager/config.yml'
      - '--storage.path=/alertmanager'
    networks:
      - monitoring
      - traefik-public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.alertmanager.rule=Host(`alertmanager.sterl.xyz`)"
      - "traefik.http.routers.alertmanager.entrypoints=websecure"
      - "traefik.http.routers.alertmanager.tls.certresolver=cfresolver"
      - "traefik.http.services.alertmanager.loadbalancer.server.port=9093"
      - "docktail.enable=true"
      - "docktail.name=alertmanager"
      - "docktail.container_port=9093"

  node-exporter:
    image: prom/node-exporter:latest
    container_name: node-exporter
    restart: unless-stopped
    volumes:
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /:/rootfs:ro
    command:
      - '--path.procfs=/host/proc'
      - '--path.rootfs=/rootfs'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
    networks:
      - monitoring

  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    command:
      - '--docker_only=true'
      - '--housekeeping_interval=30s'
    networks:
      - monitoring
123
services/standalone/Networking/docker-compose.yml
Normal file
@@ -0,0 +1,123 @@
version: '3.8'

networks:
  traefik-public:
    external: true

services:
  traefik:
    image: traefik:latest
    container_name: traefik
    restart: unless-stopped
    deploy:
      mode: replicated
      replicas: 1
      placement:
        constraints:
          - node.role == manager
    ports:
      - "80:80"
      - "443:443"
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - traefik_letsencrypt:/letsencrypt
    networks:
      - traefik-public
    environment:
      # Cloudflare API Token (with DNS edit permissions for your domain)
      - CF_DNS_API_TOKEN=WI8HAmOJhvDdhmm3XMpYPZs1o4uSG9gp4l66ncjr

    # Optional: keep your Pi-hole resolvers as upstream DNS
    dns:
      - 192.168.1.1
      - 192.168.1.245
      - 1.1.1.1

    command:
      # Entrypoints
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"

      # Docker Provider (Standalone)
      - "--providers.docker=true"
      - "--providers.docker.network=traefik-public"
      - "--providers.docker.exposedbydefault=false"
      # Watch for changes in Docker events
      - "--providers.docker.watch=true"

      # Dashboard
      - "--api.dashboard=true"
      - "--api.insecure=false"

      # HTTP -> HTTPS
      - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"

      # Let's Encrypt / ACME Cloudflare DNS Challenge
      - "--certificatesresolvers.cfresolver.acme.email=sterlenjohnson6@gmail.com"
      - "--certificatesresolvers.cfresolver.acme.storage=/letsencrypt/acme.json"
      - "--certificatesresolvers.cfresolver.acme.dnschallenge=true"
      - "--certificatesresolvers.cfresolver.acme.dnschallenge.provider=cloudflare"

      # Optional: increase delay for propagation
      - "--certificatesresolvers.cfresolver.acme.dnschallenge.propagation.delayBeforeChecks=60"
      # Logging
      - "--log.level=INFO"

    labels:
      # Dashboard Router
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`traefik.sterl.xyz`)"
      - "traefik.http.routers.traefik.entrypoints=websecure"
      - "traefik.http.routers.traefik.tls.certresolver=cfresolver"
      - "traefik.http.services.traefik.loadbalancer.server.port=8080"
      - "traefik.http.routers.traefik.service=api@internal"

  whoami:
    image: traefik/whoami
    container_name: whoami
    restart: unless-stopped
    networks:
      - traefik-public
    labels:
      # Whoami Router
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.sterl.xyz`)"
      - "traefik.http.routers.whoami.entrypoints=websecure"
      - "traefik.http.routers.whoami.tls.certresolver=cfresolver"
      - "traefik.http.services.whoami.loadbalancer.server.port=80"

  docktail:
    image: ghcr.io/marvinvr/docktail:latest
    container_name: docktail
    restart: on-failure:3
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-public
    environment:
      # Optional: Your Tailscale Auth Key if not using tag approval flow
      - TAILSCALE_AUTH_KEY=tskey-auth-ksGqv9DLDZ11CNTRL-TPrRcTiHWYUyuVskzy4nYUGwm2bxPM2d
      # Optional: Set log level
      # - DOCKTAIL_LOG_LEVEL=info
      # Optional: Specify a Tailnet IP for the container itself if needed
      # - DOCKTAIL_TAILNET_IP=100.x.y.z
    deploy:
      resources:
        limits:
          memory: 128M
          cpus: '0.2'
        reservations:
          memory: 32M
          cpus: '0.05'
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 15s

volumes:
  traefik_letsencrypt:
    external: true
@@ -0,0 +1,47 @@
# =============================================================================
# DNS Chain: Router(:53) → AdGuard(:53,DOH,DOT) → Pi-hole(:5053) → Unbound(:5335)
# =============================================================================
# NOTE: For HAOS, use the run_command file instead - compose doesn't work there
# NOTE: Post-install: Configure AdGuard upstream to <host-ip>:5053
# NOTE: Pi-hole handles blocking/caching, AdGuard handles DOH/DOT encryption
# =============================================================================

services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    network_mode: host
    environment:
      TZ: "America/Chicago"
      WEBPASSWORD: "YOURPASSWORD"
      FTLCONF_webserver_enabled: "true"
      FTLCONF_webserver_port: "7300"
      WEB_BIND_ADDR: "0.0.0.0"
      FTLCONF_dns_port: "5053"
      # DNS1/DNS2 are deprecated in Pi-hole v6+, use FTLCONF_dns_upstreams
      FTLCONF_dns_upstreams: "127.0.0.1#5335"
    volumes:
      - pihole_etc:/etc/pihole:rw
      - pihole_dnsmasq:/etc/dnsmasq.d:rw
    restart: unless-stopped

  adguardhome:
    image: adguard/adguardhome:latest
    container_name: adguardhome
    network_mode: host
    environment:
      TZ: "America/Chicago"
    volumes:
      - adguard_conf:/opt/adguardhome/conf:rw
      - adguard_work:/opt/adguardhome/work:rw
      - adguard_certs:/opt/adguardhome/conf/certs:ro
    restart: unless-stopped
    depends_on:
      - pihole

volumes:
  pihole_etc:
  pihole_dnsmasq:
  adguard_conf:
  adguard_work:
  adguard_certs:
@@ -1,3 +1,15 @@
# =============================================================================
# DNS Chain: Router(:53) → AdGuard(:53,DOH,DOT) → Pi-hole(:5053) → Unbound(:5335)
# =============================================================================
# BE9300 router points to this host on port 53
# AdGuard handles DOH(443), DOT(853), and standard DNS(53)
# Pi-hole runs on port 5053 to avoid conflict with AdGuard
# Unbound provides recursive DNS on 5335 (installed locally)
# =============================================================================

# Step 1: Start Pi-hole on port 5053 (5353 is used by mDNS/Avahi, 53 is AdGuard)
# Configure upstream to Unbound on 127.0.0.1#5335
# NOTE: DNS1/DNS2 are deprecated in Pi-hole v6+, use FTLCONF_dns_upstreams instead
docker run -d \
  --name pihole \
  --network host \
@@ -6,18 +18,24 @@ docker run -d \
  -e FTLCONF_webserver_enabled=true \
  -e FTLCONF_webserver_port=7300 \
  -e WEB_BIND_ADDR=0.0.0.0 \
  # Removed in this change (deprecated in Pi-hole v6+):
  #   -e DNS1=127.0.0.1#5335 \
  #   -e DNS2=0.0.0.0 \
  -e FTLCONF_dns_port=5053 \
  -e FTLCONF_dns_upstreams=127.0.0.1#5335 \
  -v pihole_etc:/etc/pihole:rw \
  -v pihole_dnsmasq:/etc/dnsmasq.d:rw \
  --restart=unless-stopped \
  pihole/pihole:latest

# Step 2: Start AdGuard Home on port 53 (what router sees)
# After first run, access http://<host-ip>:3000 to configure:
#   - Upstream DNS: 127.0.0.1:5053 (Pi-hole)
#   - DNS listen: 0.0.0.0:53
#   - Enable DOH (port 443) and DOT (port 853)
docker run -d \
  --name adguardhome \
  --network host \
  -e TZ=America/Chicago \
  -v adguard_conf:/opt/adguardhome/conf:rw \
  -v adguard_work:/opt/adguardhome/work:rw \
  -v adguard_certs:/opt/adguardhome/conf/certs:ro \
  --restart=unless-stopped \
  adguard/adguardhome:latest
37
services/standalone/Portainer/docker-compose.yml
Normal file
@@ -0,0 +1,37 @@
version: '3.8'

networks:
  traefik-public:
    external: true

volumes:
  portainer_data:

services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: unless-stopped
    ports:
      - "9000:9000"
      - "9443:9443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer_data:/data
    networks:
      - traefik-public
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.portainer.rule=Host(`portainer.sterl.xyz`)"
      - "traefik.http.routers.portainer.entrypoints=websecure"
      - "traefik.http.routers.portainer.tls.certresolver=cfresolver"
      - "traefik.http.routers.portainer.service=portainer"
      - "traefik.http.services.portainer.loadbalancer.server.port=9000"
      - "docktail.enable=true"
      - "docktail.name=portainer"
      - "docktail.container_port=9000"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
3
services/standalone/Productivity/.env.example
Normal file
@@ -0,0 +1,3 @@
POSTGRES_PASSWORD=replace_me
NEXTCLOUD_ADMIN_USER=admin
NEXTCLOUD_ADMIN_PASSWORD=replace_me
83
services/standalone/Productivity/docker-compose.yml
Normal file
@@ -0,0 +1,83 @@
version: '3.8'

networks:
  traefik-public:
    external: true
  productivity-backend:
    driver: bridge

volumes:
  nextcloud_data:
  nextcloud_db:
  nextcloud_redis:

services:
  nextcloud-db:
    image: postgres:15-alpine
    container_name: nextcloud-db
    restart: unless-stopped
    volumes:
      - nextcloud_db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD} # Set in .env
    networks:
      - productivity-backend

  nextcloud-redis:
    image: redis:7-alpine
    container_name: nextcloud-redis
    restart: unless-stopped
    volumes:
      - nextcloud_redis:/data
    networks:
      - productivity-backend

  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    restart: unless-stopped
    volumes:
      - nextcloud_data:/var/www/html
    environment:
      - POSTGRES_HOST=nextcloud-db
      - POSTGRES_DB=nextcloud
      - POSTGRES_USER=nextcloud
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - REDIS_HOST=nextcloud-redis
      - NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER}
      - NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD}
      - NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.sterl.xyz
      - OVERWRITEPROTOCOL=https
      - OVERWRITEHOST=nextcloud.sterl.xyz
      - TRUSTED_PROXIES=172.16.0.0/12
    depends_on:
      - nextcloud-db
      - nextcloud-redis
    networks:
      - traefik-public
      - productivity-backend
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.nextcloud.rule=Host(`nextcloud.sterl.xyz`)"
      - "traefik.http.routers.nextcloud.entrypoints=websecure"
      - "traefik.http.routers.nextcloud.tls.certresolver=cfresolver"
      - "traefik.http.services.nextcloud.loadbalancer.server.port=80"
      # Nextcloud-specific middlewares
      - "traefik.http.routers.nextcloud.middlewares=nextcloud-chain"
      - "traefik.http.middlewares.nextcloud-chain.chain.middlewares=nextcloud-caldav,nextcloud-headers"
      # CalDAV/CardDAV redirect
      - "traefik.http.middlewares.nextcloud-caldav.redirectregex.regex=^https://(.*)/.well-known/(card|cal)dav"
      - "traefik.http.middlewares.nextcloud-caldav.redirectregex.replacement=https://$$1/remote.php/dav/"
      - "traefik.http.middlewares.nextcloud-caldav.redirectregex.permanent=true"
      # Security headers
      - "traefik.http.middlewares.nextcloud-headers.headers.stsSeconds=31536000"
      - "traefik.http.middlewares.nextcloud-headers.headers.stsIncludeSubdomains=true"
      - "traefik.http.middlewares.nextcloud-headers.headers.stsPreload=true"
      - "traefik.http.middlewares.nextcloud-headers.headers.forceSTSHeader=true"
      - "traefik.http.middlewares.nextcloud-headers.headers.customFrameOptionsValue=SAMEORIGIN"
      - "traefik.http.middlewares.nextcloud-headers.headers.customResponseHeaders.X-Robots-Tag=noindex,nofollow"
      - "docktail.enable=true"
      - "docktail.name=nextcloud"
      - "docktail.container_port=80"
47
services/standalone/Technitium/docker-compose.yml
Normal file
47
services/standalone/Technitium/docker-compose.yml
Normal file
@@ -0,0 +1,47 @@
version: "3"
services:
  dns-server:
    container_name: technitium-dns
    hostname: technitium-dns
    image: technitium/dns-server:latest
    ports:
      - "5380:5380/tcp" # Web Console
      - "53:53/udp"     # DNS
      - "53:53/tcp"     # DNS
      - "853:853/tcp"   # DNS-over-TLS
      - "8443:443/tcp"  # DNS-over-HTTPS
      # Uncomment if using DHCP
      # - "67:67/udp"
    environment:
      - DNS_SERVER_DOMAIN=dns-server
      # - DNS_SERVER_ADMIN_PASSWORD=password # Set via UI on first login
    volumes:
      - ./config:/etc/dns/config
      # Mount AdGuard certs for migration/usage
      # Path in container: /etc/dns/certs
      - adguard_certs:/etc/dns/certs:ro
    restart: unless-stopped
    sysctls:
      - net.ipv4.ip_local_port_range=1024 65000
    networks:
      - traefik-public
    labels:
      - "traefik.enable=true"
      # Web Console
      - "traefik.http.routers.technitium.rule=Host(`dns.sterl.xyz`)"
      - "traefik.http.routers.technitium.entrypoints=websecure"
      - "traefik.http.routers.technitium.tls.certresolver=cfresolver"
      - "traefik.http.services.technitium.loadbalancer.server.port=5380"
      - "docktail.enable=true"
      - "docktail.name=technitium"
      - "docktail.container_port=5380"

networks:
  traefik-public:
    external: true

volumes:
  adguard_certs:
    external: true
    # Volume created by docker run
    name: adguard_certs
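As with the other standalone services, this one is intended to run under Podman as well. A minimal `podman run` sketch derived from the compose file above (the registry prefix and the absolute config path are assumptions, not confirmed by the repo):

```sh
# Mirrors the compose definition: ports, config bind mount, cert volume, sysctl.
podman run -d \
  --name technitium-dns \
  --hostname technitium-dns \
  -p 5380:5380/tcp -p 53:53/udp -p 53:53/tcp -p 853:853/tcp -p 8443:443/tcp \
  -e DNS_SERVER_DOMAIN=dns-server \
  -v "$(pwd)/config:/etc/dns/config" \
  -v adguard_certs:/etc/dns/certs:ro \
  --sysctl 'net.ipv4.ip_local_port_range=1024 65000' \
  --restart unless-stopped \
  docker.io/technitium/dns-server:latest
```

Note that publishing port 53 from rootless Podman typically requires lowering `net.ipv4.ip_unprivileged_port_start` on the host, or running the container rootful.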
27
services/standalone/Tools/docker-compose.yml
Normal file
@@ -0,0 +1,27 @@
version: '3.8'

networks:
  traefik-public:
    external: true

services:
  dozzle:
    image: amir20/dozzle:latest
    container_name: dozzle
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - traefik-public
    environment:
      - DOZZLE_LEVEL=debug
      - DOZZLE_NO_ANALYTICS=true
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.dozzle.rule=Host(`dozzle.sterl.xyz`)"
      - "traefik.http.routers.dozzle.entrypoints=websecure"
      - "traefik.http.routers.dozzle.tls.certresolver=cfresolver"
      - "traefik.http.services.dozzle.loadbalancer.server.port=8080"
      - "docktail.enable=true"
      - "docktail.name=logs"
      - "docktail.container_port=8080"
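Dozzle reads container logs through the Docker API socket it mounts read-only above, so the same compose file works under Podman once that mount points at Podman's Docker-compatible socket instead. A hedged sketch (socket paths differ between rootful and rootless setups):

```sh
# Rootful: enable Podman's API socket, then remap the volume in the compose file:
#   - /run/podman/podman.sock:/var/run/docker.sock:ro
sudo systemctl enable --now podman.socket
podman-compose -f docker-compose.yml up -d
```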
57
services/standalone/n8n/docker-compose.yml
Normal file
@@ -0,0 +1,57 @@
version: '3.8'

networks:
  traefik-public:
    external: true

volumes:
  n8n_data:

services:
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: unless-stopped
    volumes:
      - n8n_data:/home/node/.n8n
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - traefik-public
    extra_hosts:
      - "gateway:192.168.1.1"
      - "proxmox:192.168.1.57"
      - "omv:192.168.1.70"
      - "swarm-manager:192.168.1.196"
      - "swarm-leader:192.168.1.245"
      - "swarm-worker-light:192.168.1.62"
      - "lm-studio:192.168.1.81"
      - "fedora:192.168.1.81"
      - "n8n.sterl.xyz:127.0.0.1" # Mapped to localhost since it's standalone
    environment:
      - N8N_HOST=n8n.sterl.xyz
      - N8N_PROTOCOL=https
      - NODE_ENV=production
      - WEBHOOK_URL=https://n8n.sterl.xyz/
      - N8N_EDITOR_BASE_URL=https://n8n.sterl.xyz/
      - N8N_PUSH_BACKEND=websocket
      - N8N_PROXY_HOPS=1
      - N8N_SECURE_COOKIE=false
      - N8N_METRICS=false
      - N8N_SKIP_WEBHOOK_CSRF_CHECK=true
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      - DB_SQLITE_POOL_SIZE=10
      - N8N_RUNNERS_ENABLED=true
      - N8N_BLOCK_ENV_ACCESS_IN_NODE=false
      - N8N_GIT_NODE_DISABLE_BARE_REPOS=true
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.n8n.rule=Host(`n8n.sterl.xyz`)"
      - "traefik.http.routers.n8n.entrypoints=websecure"
      - "traefik.http.routers.n8n.tls.certresolver=cfresolver"
      - "traefik.http.services.n8n.loadbalancer.server.port=5678"
      - "traefik.http.services.n8n.loadbalancer.sticky.cookie=true"
      - "traefik.http.services.n8n.loadbalancer.sticky.cookie.name=n8n_sticky"
      - "traefik.http.services.n8n.loadbalancer.sticky.cookie.secure=true"
      - "docktail.enable=true"
      - "docktail.name=n8n"
      - "docktail.container_port=5678"
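With `N8N_PROXY_HOPS=1` and the sticky-cookie labels, this n8n instance is configured to sit behind exactly one reverse-proxy hop (Traefik). A quick smoke test after bringing the container up (a hedged sketch; `/healthz` is n8n's liveness endpoint and should return a small JSON status body):

```sh
# A 200 here confirms both Traefik routing and the n8n container are healthy.
curl -s -o /dev/null -w '%{http_code}\n' https://n8n.sterl.xyz/healthz
```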
@@ -41,9 +41,8 @@ services:
      retries: 3
      start_period: 30s
    deploy:
      placement:
        constraints:
          - node.role == manager
      replicas: 2

      resources:
        limits:
          memory: 2G
@@ -94,9 +93,8 @@ services:
      retries: 3
      start_period: 30s
    deploy:
      placement:
        constraints:
          - node.role == manager
      replicas: 2

      resources:
        limits:
          memory: 1G
@@ -148,9 +146,8 @@ services:
      retries: 3
      start_period: 15s
    deploy:
      placement:
        constraints:
          - node.role == manager
      replicas: 2

      resources:
        limits:
          memory: 256M
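The three `deploy` blocks above follow one pattern: pin tasks to manager nodes, run two replicas, and cap memory per service. After editing a Swarm stack file like this, the change is reconciled with a single redeploy (a hedged sketch; the stack name and file path are placeholders, not confirmed by the diff):

```sh
# Swarm diffs the desired state against running tasks and rolls the services.
docker stack deploy -c docker-compose.yml mystack
```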