Refactor: Reorganize services into standalone structure
40  optimized/standalone/Caddy/README.md  Normal file
@@ -0,0 +1,40 @@
# Caddy Fallback Server

This directory contains the `docker-compose.yml` for running a standalone Caddy server, either as a fallback or for specific local proxy needs.

## Running with Podman Compose

To run this service using `podman-compose`:

1. Navigate to this directory:

   ```bash
   cd optimized/standalone/Caddy
   ```

2. Ensure `Caddyfile` and `maintenance.html` exist in this directory, as they are mounted as volumes (a minimal sketch follows the Podman example below).

3. Start the service:

   ```bash
   podman-compose up -d
   ```
## Running with Podman

You can run the Caddy service directly with Podman. Note that for proper function, the `Caddyfile`, `maintenance.html`, and volume mounts are crucial.

```bash
podman run -d \
  --name caddy_fallback \
  --restart unless-stopped \
  -p "8080:80" \
  -p "8443:443" \
  -v ./Caddyfile:/etc/caddy/Caddyfile \
  -v ./maintenance.html:/srv/maintenance/maintenance.html \
  -v caddy_data:/data \
  -v caddy_config:/config \
  -v caddy_logs:/var/log/caddy \
  caddy:latest
```
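Both methods assume the mounted files already exist. A minimal sketch for creating them (the site block below is an assumption that serves the maintenance page on port 80; adapt it to your actual setup):

```bash
# Hypothetical minimal Caddyfile: serve maintenance.html for every request
cat > Caddyfile <<'EOF'
:80 {
    root * /srv/maintenance
    rewrite * /maintenance.html
    file_server
}
EOF

echo '<h1>Service temporarily unavailable</h1>' > maintenance.html
```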
## Notes

* Ensure the `Caddyfile` and `maintenance.html` are configured correctly for your use case.
* The Caddy service was categorized as `standalone` because Traefik is designated for Swarm ingress, implying Caddy has a specialized, non-Swarm role here.
27  optimized/standalone/Caddy/docker-compose.yml  Normal file
@@ -0,0 +1,27 @@
version: '3.8'

services:
  caddy:
    image: caddy:latest
    container_name: caddy_fallback
    restart: unless-stopped
    ports:
      - "8080:80"
      - "8443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./maintenance.html:/srv/maintenance/maintenance.html
      - caddy_data:/data
      - caddy_config:/config
      - caddy_logs:/var/log/caddy
    networks:
      - caddy_net

volumes:
  caddy_data:
  caddy_config:
  caddy_logs:

networks:
  caddy_net:
    driver: bridge
55  optimized/standalone/MacOS/README.md  Normal file
@@ -0,0 +1,55 @@
# macOS VM

This directory contains the `docker-compose.yaml` for running a macOS virtual machine within Podman (or Docker). This setup is highly hardware-specific due to the use of `/dev/kvm` and direct device access, making it unsuitable for a Swarm environment.

## Running with Podman Compose

To run this service using `podman-compose`:

1. **Important**: Ensure your host system meets the requirements for running KVM-accelerated VMs (e.g., `/dev/kvm` is available and configured); a quick check is sketched after this list.
2. Navigate to this directory:

   ```bash
   cd optimized/standalone/MacOS
   ```

3. Start the service:

   ```bash
   podman-compose up -d
   ```
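A quick way to confirm the host prerequisites from step 1 (a sketch, assuming a Linux host):

```bash
ls -l /dev/kvm /dev/net/tun          # both device nodes must exist
grep -c -E 'vmx|svm' /proc/cpuinfo   # >0 means hardware virtualization is exposed
```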
## Running with Podman

You can run the macOS VM directly with Podman. Pay close attention to the device mappings and network configuration.

```bash
podman run -d \
  --name macos \
  --restart always \
  -e VERSION="15" \
  -e DISK_SIZE="50G" \
  -e RAM_SIZE="6G" \
  -e CPU_CORES="4" \
  --device /dev/kvm \
  --device /dev/net/tun \
  --cap-add NET_ADMIN \
  -p 8006:8006 \
  -p 5900:5900/tcp \
  -p 5900:5900/udp \
  -v ./macos:/storage \
  dockurr/macos
```
**Note**: The original `docker-compose.yaml` defines a custom network with a specific `ipv4_address`. To replicate this with `podman run`, first create the network:

```bash
podman network create --subnet 172.70.20.0/29 macos
```

Then attach the container to this network and specify the IP:

```bash
# ... (previous podman run command parts)
  --network macos --ip 172.70.20.3 \
  dockurr/macos
```
## Notes

* This service requires significant host resources and direct hardware access.
* The `stop_grace_period: 2m` gives the VM time to shut down cleanly before the container is stopped forcibly.
* Ensure the `./macos` directory exists and has appropriate permissions for the VM storage.
34  optimized/standalone/MacOS/docker-compose.yaml  Normal file
@@ -0,0 +1,34 @@
# https://github.com/dockur/macos
services:
  macos:
    image: dockurr/macos
    container_name: macos
    environment:
      VERSION: "15"
      DISK_SIZE: "50G"
      RAM_SIZE: "6G"
      CPU_CORES: "4"
      # DHCP: "Y" # if enabled you must create a macvlan
    devices:
      - /dev/kvm
      - /dev/net/tun
    cap_add:
      - NET_ADMIN
    ports:
      - 8006:8006
      - 5900:5900/tcp
      - 5900:5900/udp
    volumes:
      - ./macos:/storage
    restart: always
    stop_grace_period: 2m
    networks:
      macos:
        ipv4_address: 172.70.20.3

networks:
  macos:
    ipam:
      config:
        - subnet: 172.70.20.0/29
    name: macos
45  optimized/standalone/Pihole/README.md  Normal file
@@ -0,0 +1,45 @@
# Pi-hole DNS Blocker

This directory contains the `docker-compose.yml` for running a standalone Pi-hole DNS ad blocker.

## Running with Podman Compose

To run this service using `podman-compose`:

1. Navigate to this directory:

   ```bash
   cd optimized/standalone/Pihole
   ```

2. Ensure you have replaced placeholder values like `WEBPASSWORD` with your actual secure password.
3. Ensure the host directories for the bind mounts (`./etc-pihole`, `./etc-dnsmasq.d`) exist, or create them (see the sketch after this list).
4. Start the service:

   ```bash
   podman-compose up -d
   ```
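A sketch for step 3, creating the bind-mount directories next to the compose file:

```bash
mkdir -p ./etc-pihole ./etc-dnsmasq.d
```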
## Running with Podman

Due to `network_mode: host`, this service shares the host's network namespace and directly uses the host's IP address.

```bash
podman run -d \
  --name pihole \
  --network host \
  --restart unless-stopped \
  -e TZ="America/Chicago" \
  -e WEBPASSWORD="YOURSECUREPASSWORD" \
  -e FTLCONF_webserver_enabled="true" \
  -e FTLCONF_webserver_port="7300" \
  -e WEB_BIND_ADDR="0.0.0.0" \
  -e DNS1="127.0.0.1#5335" \
  -e DNS2="0.0.0.0" \
  -v ./etc-pihole:/etc/pihole \
  -v ./etc-dnsmasq.d:/etc/dnsmasq.d \
  pihole/pihole:latest
```
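Once the container is up, a quick sanity check that DNS and the web UI are answering (a sketch; the ports come from the compose file):

```bash
dig @127.0.0.1 example.com +short                  # DNS via Pi-hole on port 53
curl -sI http://127.0.0.1:7300/admin/ | head -n 1  # web UI on FTLCONF_webserver_port
```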
## Notes

* `network_mode: host` is essential for Pi-hole to function correctly as a DNS server for your local network.
* The `WEBPASSWORD` environment variable is critical for securing your Pi-hole web interface.
* `DNS1`/`DNS2` are deprecated in Pi-hole v6+; prefer `FTLCONF_dns_upstreams` (see the `Pihole_Adguard` stack for an example).
* Ensure the volume bind mounts (`./etc-pihole`, `./etc-dnsmasq.d`) are pointing to correct and persistent locations on your host.
17  optimized/standalone/Pihole/docker-compose.yml  Normal file
@@ -0,0 +1,17 @@
services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    network_mode: host
    environment:
      TZ: "America/Chicago"
      WEBPASSWORD: "YOURPASSWORD"
      FTLCONF_webserver_enabled: "true"
      FTLCONF_webserver_port: "7300"
      WEB_BIND_ADDR: "0.0.0.0"
      # DNS1/DNS2 are deprecated in Pi-hole v6+; prefer FTLCONF_dns_upstreams
      DNS1: "127.0.0.1#5335"
      DNS2: "0.0.0.0"
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
25  optimized/standalone/Pihole_Adguard/README.md  Normal file
@@ -0,0 +1,25 @@
# Pi-hole and AdGuard Home Chained DNS

This directory contains the `docker-compose.yml` for running a chained DNS setup with Pi-hole and AdGuard Home. Both services utilize `network_mode: host`, making this stack suitable for standalone deployment on a dedicated host.

## Running with Podman Compose

To run this stack using `podman-compose`:

1. Navigate to this directory:

   ```bash
   cd optimized/standalone/Pihole_Adguard
   ```

2. Ensure you have replaced placeholder values like `WEBPASSWORD` with your actual secure password.
3. The stack uses named volumes (`pihole_etc`, `pihole_dnsmasq`, `adguard_conf`, `adguard_work`, `adguard_certs`); Podman creates these automatically on first run.
4. Start the services:

   ```bash
   podman-compose up -d
   ```
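After both containers are up and AdGuard's upstream has been pointed at Pi-hole, each hop of the chain can be probed individually (a sketch; ports taken from the compose file's comments):

```bash
dig @127.0.0.1 -p 53   example.com +short   # AdGuard Home, front of the chain
dig @127.0.0.1 -p 5053 example.com +short   # Pi-hole (FTLCONF_dns_port)
dig @127.0.0.1 -p 5335 example.com +short   # Unbound, if it runs on this host
```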
## Notes

* This setup provides advanced DNS features, including ad-blocking (Pi-hole) and encrypted DNS (AdGuard Home).
* `network_mode: host` is crucial for both services to integrate seamlessly with your host's network and act as primary DNS resolvers.
* Careful configuration of upstream DNS in AdGuard Home (pointing to Pi-hole on port 5053) is required post-installation.
* The stack stores state in named volumes rather than bind mounts; ensure these volumes persist across container recreation.
47  optimized/standalone/Pihole_Adguard/docker-compose.yml  Normal file
@@ -0,0 +1,47 @@
# =============================================================================
# DNS Chain: Router(:53) → AdGuard(:53,DOH,DOT) → Pi-hole(:5053) → Unbound(:5335)
# =============================================================================
# NOTE: For HAOS, use the run_command file instead - compose doesn't work there
# NOTE: Post-install: Configure AdGuard upstream to <host-ip>:5053
# NOTE: Pi-hole handles blocking/caching, AdGuard handles DOH/DOT encryption
# =============================================================================

services:
  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    network_mode: host
    environment:
      TZ: "America/Chicago"
      WEBPASSWORD: "YOURPASSWORD"
      FTLCONF_webserver_enabled: "true"
      FTLCONF_webserver_port: "7300"
      WEB_BIND_ADDR: "0.0.0.0"
      FTLCONF_dns_port: "5053"
      # DNS1/DNS2 are deprecated in Pi-hole v6+, use FTLCONF_dns_upstreams
      FTLCONF_dns_upstreams: "127.0.0.1#5335"
    volumes:
      - pihole_etc:/etc/pihole:rw
      - pihole_dnsmasq:/etc/dnsmasq.d:rw
    restart: unless-stopped

  adguardhome:
    image: adguard/adguardhome:latest
    container_name: adguardhome
    network_mode: host
    environment:
      TZ: "America/Chicago"
    volumes:
      - adguard_conf:/opt/adguardhome/conf:rw
      - adguard_work:/opt/adguardhome/work:rw
      - adguard_certs:/opt/adguardhome/conf/certs:ro
    restart: unless-stopped
    depends_on:
      - pihole

volumes:
  pihole_etc:
  pihole_dnsmasq:
  adguard_conf:
  adguard_work:
  adguard_certs:
39  optimized/standalone/Portainer_Agent_Standalone/README.md  Normal file
@@ -0,0 +1,39 @@
# Portainer Agent (Standalone Host)

This directory contains the `docker-compose.yml` for deploying a Portainer Agent on a standalone Docker (or Podman) host. This agent allows a central Portainer instance (potentially running in a Swarm) to manage this individual host.

## Running with Podman Compose

To deploy the Portainer Agent using `podman-compose`:

1. Navigate to this directory:

   ```bash
   cd optimized/standalone/Portainer_Agent_Standalone
   ```

2. **Important**: Replace `192.168.1.81` in the `docker-compose.yml` with an IP address or resolvable hostname at which this agent host can be reached (see the notes below on `AGENT_CLUSTER_ADDR`).
3. Start the agent:

   ```bash
   podman-compose up -d
   ```
## Running with Podman

You can run the Portainer Agent directly with Podman:

```bash
podman run -d \
  --name portainer-agent \
  --restart always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/docker/volumes:/var/lib/docker/volumes \
  -e AGENT_CLUSTER_ADDR=192.168.1.81 \
  -e AGENT_PORT=9001 \
  -p "9001:9001" \
  portainer/agent:latest
```
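After starting, a quick sanity check that the agent is up and listening (a sketch; container name as above):

```bash
podman logs portainer-agent   # should show the agent starting without errors
ss -tlnp | grep 9001          # agent API listening on port 9001
```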
## Notes

* This agent is specifically for managing a *standalone* Docker/Podman host. If you intend to manage a Swarm cluster, the Portainer Swarm stack (found in `optimized/swarm/Portainer`) should be used, which typically deploys agents globally across the Swarm nodes.
* The volumes `/var/run/docker.sock` and `/var/lib/docker/volumes` are critical for the agent to communicate with and manage the Docker/Podman daemon.
* `AGENT_CLUSTER_ADDR` is used for agent-to-agent discovery in clustered (Swarm) deployments; on a single standalone host, the central Portainer Server connects *to* this agent, so what matters is that port 9001 is reachable from the server.
14  optimized/standalone/Portainer_Agent_Standalone/docker-compose.yml  Normal file
@@ -0,0 +1,14 @@
version: '3.8'
services:
  portainer-agent:
    image: portainer/agent:latest
    container_name: portainer-agent
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/lib/docker/volumes:/var/lib/docker/volumes
    environment:
      AGENT_CLUSTER_ADDR: 192.168.1.81 # Replace with the actual IP address
      AGENT_PORT: 9001
    ports:
      - "9001:9001" # Port for agent communication
    restart: always
63  optimized/standalone/RustDesk/README.md  Normal file
@@ -0,0 +1,63 @@
# RustDesk Server

This directory contains the `docker-compose.yml` for deploying the RustDesk hbbs (rendezvous) and hbbr (relay) servers. These servers facilitate peer-to-peer remote control connections.

## Running with Podman Compose

To run these services using `podman-compose`:

1. Navigate to this directory:

   ```bash
   cd optimized/standalone/RustDesk
   ```

2. **Important**: Review and update the `--relay-servers` IP address in the `hbbs` command, and the bandwidth-related environment variables on `hbbr`, if necessary.
3. Start the services:

   ```bash
   podman-compose up -d
   ```
## Running with Podman

You can run each RustDesk component directly with Podman.

**For `rustdesk-hbbs`:**

```bash
podman run -d \
  --name rustdesk-hbbs \
  --restart unless-stopped \
  --platform linux/arm64 \
  -v rustdesk_data:/root \
  -p "21115:21115/tcp" \
  -p "21115:21115/udp" \
  -p "21116:21116/tcp" \
  -p "21116:21116/udp" \
  rustdesk/rustdesk-server:latest hbbs --relay-servers "192.168.1.245:21117"
```

**For `rustdesk-hbbr`:**

```bash
podman run -d \
  --name rustdesk-hbbr \
  --restart unless-stopped \
  --platform linux/arm64 \
  -v rustdesk_data:/root \
  -p "21117:21117/tcp" \
  -p "21118:21118/udp" \
  -p "21119:21119/tcp" \
  -p "21119:21119/udp" \
  -e TOTAL_BANDWIDTH=20480 \
  -e SINGLE_BANDWIDTH=128 \
  -e LIMIT_SPEED="100Mb/s" \
  -e DOWNGRADE_START_CHECK=600 \
  -e DOWNGRADE_THRESHOLD=0.9 \
  rustdesk/rustdesk-server:latest hbbr
```
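Clients need the server's public key, which `hbbs` generates in its data directory on first start. A sketch to print it (assuming the default `id_ed25519.pub` filename):

```bash
podman exec rustdesk-hbbs cat /root/id_ed25519.pub
```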
## Notes

* RustDesk servers are suitable for standalone deployment as they provide specific backend functionality for remote connections and don't inherently require Swarm orchestration for their core purpose.
* Ensure the `rustdesk_data` volume is persistent; it holds the server's key pair and state and is shared between `hbbs` and `hbbr`.
* Make sure the specified ports are open on your firewall.
* `--platform linux/arm64` pins the image to ARM64; drop or change it on x86_64 hosts.
39  optimized/standalone/RustDesk/docker-compose.yml  Normal file
@@ -0,0 +1,39 @@
version: '3.8'

services:
  rustdesk-hbbs:
    image: rustdesk/rustdesk-server:latest
    container_name: rustdesk-hbbs
    restart: unless-stopped
    platform: linux/arm64
    command: ["hbbs", "--relay-servers", "192.168.1.245:21117"]
    volumes:
      - rustdesk_data:/root
    ports:
      - "21115:21115/tcp"
      - "21115:21115/udp"
      - "21116:21116/tcp"
      - "21116:21116/udp"

  rustdesk-hbbr:
    image: rustdesk/rustdesk-server:latest
    container_name: rustdesk-hbbr
    restart: unless-stopped
    platform: linux/arm64
    command: ["hbbr"]
    volumes:
      - rustdesk_data:/root
    ports:
      - "21117:21117/tcp"
      - "21118:21118/udp"
      - "21119:21119/tcp"
      - "21119:21119/udp"
    environment:
      - TOTAL_BANDWIDTH=20480
      - SINGLE_BANDWIDTH=128
      - LIMIT_SPEED=100Mb/s
      - DOWNGRADE_START_CHECK=600
      - DOWNGRADE_THRESHOLD=0.9

volumes:
  rustdesk_data:
57  optimized/standalone/Traefik_Standalone/README.md  Normal file
@@ -0,0 +1,57 @@
# Traefik (Standalone Docker/Podman Host)

This directory contains the `docker-compose.yml` for a Traefik instance configured to run on a single Docker or Podman host. It acts as a reverse proxy and load balancer for services running on that specific host, utilizing the local `docker.sock` for provider discovery.

## Running with Podman Compose

To run this Traefik instance using `podman-compose`:

1. Navigate to this directory:

   ```bash
   cd optimized/standalone/Traefik_Standalone
   ```

2. **Important**: Replace the `DUCKDNS_TOKEN` placeholder with your actual DuckDNS token in the `docker-compose.yml`.
3. Ensure the `./letsencrypt` directory exists and has appropriate permissions for ACME certificate storage.
4. Ensure `traefik_dynamic.yml` exists and contains your dynamic configuration.
5. Start the services:

   ```bash
   podman-compose up -d
   ```
## Running with Podman

You can run Traefik directly with Podman, although given the extensive command-line arguments and volume mounts, `podman-compose` is generally recommended for this setup.

A simplified `podman run` example (adapt the command arguments and volumes fully to your needs):

```bash
podman run -d \
  --name traefik \
  --restart unless-stopped \
  -e DUCKDNS_TOKEN="YOUR_DUCKDNS_TOKEN" \
  -p "80:80" -p "443:443" -p "8089:8089" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  -v ./letsencrypt:/letsencrypt \
  -v ./traefik_dynamic.yml:/etc/traefik/traefik_dynamic.yml:ro \
  traefik:latest \
  --api.insecure=false \
  --api.dashboard=true \
  --entrypoints.web.address=:80 \
  --entrypoints.websecure.address=:443 \
  --entrypoints.dashboard.address=:8089 \
  --providers.docker=true \
  --providers.docker.endpoint=unix:///var/run/docker.sock \
  --providers.docker.exposedbydefault=false \
  --providers.file.filename=/etc/traefik/traefik_dynamic.yml \
  --providers.file.watch=true \
  --certificatesresolvers.duckdns.acme.email=your@email.com \
  --certificatesresolvers.duckdns.acme.storage=/letsencrypt/acme.json \
  --certificatesresolvers.duckdns.acme.dnschallenge.provider=duckdns \
  --certificatesresolvers.duckdns.acme.dnschallenge.disablepropagationcheck=true
```
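Once the stack (including the `whoami` test service from the compose file) is up, the routing chain can be smoke-tested locally (a sketch; `-k` skips certificate verification while the ACME certificate is still being issued):

```bash
curl -k --resolve whoami.sj98.duckdns.org:443:127.0.0.1 https://whoami.sj98.duckdns.org/
```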
## Notes

* This Traefik instance is for a single host. Your Swarm environment will have its own Traefik instance for cluster-wide routing.
* Ensure that `traefik_dynamic.yml` and the `letsencrypt` directory are correctly configured and persistent.
* The `whoami` service in the compose file is a simple test service; Traefik discovers it automatically once its labels and the external `web` network are in place.
53  optimized/standalone/Traefik_Standalone/docker-compose.yml  Normal file
@@ -0,0 +1,53 @@
version: "3.9"
|
||||
|
||||
services:
|
||||
traefik:
|
||||
image: traefik:latest
|
||||
container_name: traefik
|
||||
restart: unless-stopped
|
||||
environment:
|
||||
# Replace this placeholder with your DuckDNS token
|
||||
- DUCKDNS_TOKEN=03a4d8f7-695a-4f51-b66c-cc2fac555fc1
|
||||
networks:
|
||||
- web
|
||||
ports:
|
||||
- "80:80" # http
|
||||
- "443:443" # https
|
||||
- "8089:8089" # traefik dashboard (secure it if exposed)
|
||||
volumes:
|
||||
- /var/run/docker.sock:/var/run/docker.sock:ro
|
||||
- ./letsencrypt:/letsencrypt # <-- keep this directory inside WSL filesystem
|
||||
- ./traefik_dynamic.yml:/etc/traefik/traefik_dynamic.yml:ro
|
||||
command:
|
||||
|
||||
- --api.insecure=false
|
||||
- --api.dashboard=true
|
||||
- --entrypoints.web.address=:80
|
||||
- --entrypoints.websecure.address=:443
|
||||
- --entrypoints.dashboard.address=:8089
|
||||
- --providers.docker=true
|
||||
- --providers.docker.endpoint=unix:///var/run/docker.sock
|
||||
- --providers.docker.exposedbydefault=false
|
||||
- --providers.file.filename=/etc/traefik/traefik_dynamic.yml
|
||||
- --providers.file.watch=true
|
||||
- --certificatesresolvers.duckdns.acme.email=sterlenjohnson6@gmail.com
|
||||
- --certificatesresolvers.duckdns.acme.storage=/letsencrypt/acme.json
|
||||
- --certificatesresolvers.duckdns.acme.dnschallenge.provider=duckdns
|
||||
- --certificatesresolvers.duckdns.acme.dnschallenge.disablepropagationcheck=true
|
||||
|
||||
whoami:
|
||||
image: containous/whoami:latest
|
||||
container_name: whoami
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- web
|
||||
labels:
|
||||
- "traefik.enable=true"
|
||||
- "traefik.http.routers.whoami.rule=Host(`whoami.sj98.duckdns.org`)"
|
||||
- "traefik.http.routers.whoami.entrypoints=websecure"
|
||||
- "traefik.http.routers.whoami.tls=true"
|
||||
- "traefik.http.routers.whoami.tls.certresolver=duckdns"
|
||||
|
||||
networks:
|
||||
web:
|
||||
external: true
|
||||
45  optimized/standalone/alpine-unbound/README.md  Normal file
@@ -0,0 +1,45 @@
# Alpine Unbound

This directory contains the `docker-compose.yml` for building and running an Alpine-based Unbound DNS resolver.

## Running with Podman Compose

To run this service using `podman-compose`:

1. Navigate to this directory:

   ```bash
   cd optimized/standalone/alpine-unbound
   ```

2. Build the image (if not already built by the original `build.sh`):

   ```bash
   podman-compose build
   ```

3. Start the service:

   ```bash
   podman-compose up -d
   ```
## Running with Podman (if built elsewhere)

If you have already built the `alpine-unbound:latest` image, you can run it directly with Podman. Note that translating a full `docker-compose.yml` to a single `podman run` command can be complex due to network and volume declarations.

A simplified `podman run` example (adjust networks and volumes as needed for your specific setup):

```bash
podman run -d \
  --name alpine_unbound \
  --network dns_net \
  -p 5335:5335/tcp \
  -p 5335:5335/udp \
  -v unbound_config:/etc/unbound/unbound.conf.d \
  -v unbound_data:/var/lib/unbound \
  alpine-unbound:latest
```

Ensure the `dns_net` network and necessary volumes exist before running.
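A sketch for creating those prerequisites and verifying the resolver afterwards (names as in the compose file):

```bash
podman network create dns_net
podman volume create unbound_config
podman volume create unbound_data

# After the container is running:
dig @127.0.0.1 -p 5335 example.com +short
```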
## Notes

* Remember to replace any placeholder values (e.g., timezone, ports) with your actual configuration.
* The original `build.sh` file might contain additional steps or configurations relevant to the build process.
* For persistent configuration, ensure the `unbound_config` volume is correctly managed.
42  optimized/standalone/alpine-unbound/docker-compose.yml  Normal file
@@ -0,0 +1,42 @@
version: "3.9"
|
||||
|
||||
services:
|
||||
alpine-unbound:
|
||||
build:
|
||||
context: .
|
||||
dockerfile: Dockerfile
|
||||
image: alpine-unbound:latest
|
||||
container_name: alpine_unbound
|
||||
restart: unless-stopped
|
||||
environment:
|
||||
- TZ=America/New_York
|
||||
volumes:
|
||||
- unbound_config:/etc/unbound/unbound.conf.d
|
||||
- unbound_data:/var/lib/unbound
|
||||
ports:
|
||||
- "5335:5335/tcp"
|
||||
- "5335:5335/udp"
|
||||
networks:
|
||||
- dns_net
|
||||
healthcheck:
|
||||
test: [ "CMD", "/usr/local/bin/healthcheck.sh" ]
|
||||
interval: 30s
|
||||
timeout: 10s
|
||||
retries: 3
|
||||
start_period: 5s
|
||||
deploy:
|
||||
resources:
|
||||
limits:
|
||||
memory: 128M
|
||||
reservations:
|
||||
memory: 32M
|
||||
|
||||
networks:
|
||||
dns_net:
|
||||
driver: bridge
|
||||
|
||||
volumes:
|
||||
unbound_config:
|
||||
driver: local
|
||||
unbound_data:
|
||||
driver: local
|
||||
44  optimized/standalone/ubuntu-unbound/README.md  Normal file
@@ -0,0 +1,44 @@
# Ubuntu Unbound

This directory contains the `docker-compose.yml` for building and running an Ubuntu-based server with Unbound DNS.

## Running with Podman Compose

To run this service using `podman-compose`:

1. Navigate to this directory:

   ```bash
   cd optimized/standalone/ubuntu-unbound
   ```

2. Build the image (if not already built by the original `build.sh`):

   ```bash
   podman-compose build
   ```

3. Start the service:

   ```bash
   podman-compose up -d
   ```
## Running with Podman

Due to `network_mode: host` and `privileged: true`, directly translating this `docker-compose.yml` into a single `podman run` command can be complex and may require manual setup of host network configuration.

A basic `podman run` example (adapt carefully, as `network_mode: host` has specific implications):

```bash
podman run -d \
  --name ubuntu_server \
  --network host \
  --privileged \
  -e TZ=America/New_York \
  -v ubuntu_data:/data \
  -v ubuntu_config:/config \
  ubuntu-server:latest  # assuming 'ubuntu-server:latest' is the built image name
```
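With host networking the services listen directly on the host, so a quick verification sketch (ports per the compose file; the SSH user is hypothetical):

```bash
dig @127.0.0.1 -p 5335 example.com +short   # Unbound DNS
ssh -p 2222 youruser@127.0.0.1              # SSH, if the image runs sshd
```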
## Notes

* Remember to replace any placeholder values (e.g., timezone) with your actual configuration.
* The original `build.sh` file might contain additional steps or configurations relevant to the build process.
* `network_mode: host` means the container shares the host's network namespace, using the host's IP address directly; the `ports:` mappings in the compose file are therefore ignored and serve only as documentation.
* `privileged: true` grants the container nearly all capabilities of the host machine, which should be used with extreme caution.
23  optimized/standalone/ubuntu-unbound/docker-compose.yml  Normal file
@@ -0,0 +1,23 @@
version: "3.9"
|
||||
|
||||
services:
|
||||
ubuntu-server:
|
||||
build: .
|
||||
container_name: ubuntu_server
|
||||
restart: unless-stopped
|
||||
network_mode: host
|
||||
privileged: true
|
||||
environment:
|
||||
- TZ=America/New_York # Change to your timezone
|
||||
volumes:
|
||||
- ubuntu_data:/data
|
||||
- ubuntu_config:/config
|
||||
ports:
|
||||
- "2222:2222" # SSH
|
||||
- "5335:5335" # Unbound DNS
|
||||
|
||||
volumes:
|
||||
ubuntu_data:
|
||||
driver: local
|
||||
ubuntu_config:
|
||||
driver: local
|
||||