Add Pi-hole with AdGuard DOH/DOT integration, reorganize swarm stacks, add DNS/n8n docs

2025-12-18 15:38:57 +00:00
parent 827f8bbf9d
commit f0c525d0df
44 changed files with 3013 additions and 486 deletions


@@ -0,0 +1,237 @@
# Homelab n8n Monitoring Workflows
This directory contains intelligent n8n workflows for monitoring and integrating your homelab infrastructure using AI-powered analysis.
## 📋 Workflows
### 1. **Homelab Health Monitor** (`homelab-health-monitor.json`)
**Purpose:** Comprehensive health monitoring of all homelab services
**Schedule:** Every 15 minutes (or manual via webhook)
**Features:**
- Network connectivity checks (internet + internal DNS)
- Docker Swarm service status monitoring
- Service endpoint validation (Komodo, OpenWebUI, Paperless, Prometheus, LM Studio)
- AI-powered health analysis using LM Studio
- Health scoring (0-100) and automated alerting
**Webhook:** `POST https://n8n.sj98.duckdns.org/webhook/health-check`
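The 0-100 score is the share of checks that passed. A minimal shell sketch of that arithmetic (an assumed reimplementation; the workflow's Code node does the equivalent in JavaScript):

```bash
#!/bin/sh
# Health score = round(passed / total * 100), in integer arithmetic.
# Example: 12 of 14 checks passing.
passed=12
total=14
score=$(( (passed * 100 + total / 2) / total ))  # +total/2 rounds to nearest
echo "health score: ${score}/100"                # → health score: 86/100
```

With the default thresholds above, anything below 70 is treated as critical and below 90 as a warning.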
### 2. **Homelab Log Analyzer** (`homelab-log-analyzer.json`)
**Purpose:** Automated AI analysis of Docker service logs
**Schedule:** Every 6 hours
**Features:**
- Collects logs from critical services (Traefik, n8n, OpenWebUI, Komodo, Prometheus)
- Parses ERROR, WARN, CRITICAL patterns
- AI analysis of log patterns and issues
- Generates actionable recommendations
- Alerts on high error counts
> **💡 For Manual Log Viewing:** Use **Dozzle** at your configured URL for real-time, interactive log viewing with a beautiful web interface. This workflow is for automated AI-powered analysis and alerting.
### 3. **Homelab Integration Advisor** (`homelab-integration-advisor.json`)
**Purpose:** AI-powered service integration recommendations
**Schedule:** Daily at 9 AM (or manual via webhook)
**Features:**
- Discovers all running services and capabilities
- Identifies integration opportunities
- AI generates specific n8n workflow patterns
- Prioritizes by complexity and value
- Provides step-by-step implementation guidance
**Webhook:** `POST https://n8n.sj98.duckdns.org/webhook/integration-advisor`
## 🚀 Installation
### 1. Import Workflows
```bash
# Option A: Via n8n UI
#   1. Open n8n at https://n8n.sj98.duckdns.org
#   2. Click "Workflows" → "Import from File"
#   3. Select each JSON file from this directory

# Option B: Via API (if the API is enabled)
cd /workspace/homelab/services/n8n/workflows
curl -X POST https://n8n.sj98.duckdns.org/api/v1/workflows \
-H "Content-Type: application/json" \
-H "X-N8N-API-KEY: your-api-key" \
-d @homelab-health-monitor.json
```
### 2. Configure AI Model
Edit each workflow and set your preferred LM Studio model:
- **Health Monitor:** Uses `deepseek-r1-distill-llama-8b` (reasoning)
- **Log Analyzer:** Uses `qwen2.5-coder-7b-instruct` (technical analysis)
- **Integration Advisor:** Uses `deepseek-r1-distill-llama-8b` (planning)
Available models on your LM Studio instance (192.168.1.81:1234):
- `deepseek-r1-distill-llama-8b`
- `qwen2.5-coder-7b-instruct`
- `qwen/qwen3-coder-30b`
- `mistralai/codestral-22b-v0.1`
- `google/gemma-3-12b`
### 3. Activate Workflows
1. Open each workflow
2. Toggle "Active" switch in top right
3. Verify schedule trigger is enabled
## 🔧 Configuration
### LM Studio Connection
The workflows connect to LM Studio via the `lm-studio` hostname (mapped to 192.168.1.81:1234 via `extra_hosts` in n8n-stack.yml).
**Test connection:**
```bash
docker exec <n8n-container-id> curl http://lm-studio:1234/v1/models
```
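For reference, the `extra_hosts` mapping looks like this in `n8n-stack.yml` (an excerpt sketch; check the actual stack file — `extra_hosts` maps a hostname to an IP, and the port stays in the URL):

```yaml
# Assumed excerpt from n8n-stack.yml
services:
  n8n:
    extra_hosts:
      - "lm-studio:192.168.1.81"  # so http://lm-studio:1234 resolves inside the container
```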
### Notifications (Optional)
To enable alerts, add these nodes to each workflow:
- **Email:** Use n8n's Email node with SMTP credentials
- **Discord:** Use Webhook node with Discord webhook URL
- **Slack:** Use Slack node with OAuth credentials
- **Home Assistant:** Send to `http://homeassistant.local:8123/api/webhook/n8n-alert`
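For example, a Home Assistant alert could be posted from a shell step like this (the payload fields are illustrative, not a fixed n8n or Home Assistant schema):

```bash
#!/bin/sh
# Build a small JSON alert payload; field names are hypothetical.
score=62
payload=$(printf '{"message":"Homelab health degraded","health_score":%d}' "$score")
echo "$payload"
# Then deliver it to the webhook shown above:
# curl -s -X POST -H 'Content-Type: application/json' \
#   -d "$payload" http://homeassistant.local:8123/api/webhook/n8n-alert
```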
## 📊 Recommended Integration Patterns
Based on your homelab services, here are high-value integrations to implement:
### 1. **AI-Powered Document Processing**
**Services:** n8n → Paperless → OpenWebUI
**Pattern:** Auto-tag and summarize uploaded documents using AI
```
Trigger: Paperless webhook (new document)
→ Get document content
→ Send to LM Studio for tagging/summary
→ Update Paperless tags and notes
```
### 2. **Metric-Based Automation**
**Services:** Prometheus → n8n → Docker/Komodo
**Pattern:** Auto-restart services on high resource usage
```
Trigger: Prometheus AlertManager webhook
→ Parse alert (high CPU/memory)
→ Execute docker service update --force <service>
→ Send notification
```
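The "parse alert" step above can be sketched in shell. This assumes AlertManager's standard webhook body, where each alert carries a `labels.alertname`; `sed` is used instead of `jq` so it runs in a bare container, and the `service` label is an assumption about your alert rules:

```bash
#!/bin/sh
# Extract alertname and service from an AlertManager-style webhook body.
body='{"alerts":[{"labels":{"alertname":"HighCPU","service":"ai_openwebui"}}]}'
alert=$(printf '%s' "$body" | sed -n 's/.*"alertname":"\([^"]*\)".*/\1/p')
svc=$(printf '%s' "$body" | sed -n 's/.*"service":"\([^"]*\)".*/\1/p')
echo "restarting $svc due to $alert"
# docker service update --force "$svc"   # the actual remediation step
```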
### 3. **Smart Search Integration**
**Services:** SearXNG → OpenWebUI
**Pattern:** Enhanced AI chat with web search capability
```
Trigger: OpenWebUI webhook or manual
→ Query SearXNG for context
→ Send results + query to LM Studio
→ Return AI response with citations
```
### 4. **Backup Automation**
**Services:** n8n → All Services → Storage
**Pattern:** Automated backup verification and reporting
```
Schedule: Daily at 2 AM
→ Trigger OMV backup scripts
→ Verify backup completion
→ Calculate backup sizes
→ AI analysis of backup health
→ Send report
```
### 5. **Development Pipeline**
**Services:** Gitea → Komodo → n8n
**Pattern:** GitOps deployment automation
```
Trigger: Gitea webhook (push to main)
→ Parse commit info
→ Trigger Komodo deployment
→ Monitor deployment status
→ Run health checks
→ Send notification
```
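The "parse commit info" step can start from the `ref` field in Gitea's push payload (`refs/heads/<branch>`); POSIX parameter expansion is enough to gate the deployment on the branch:

```bash
#!/bin/sh
# Gitea push payloads carry ref like "refs/heads/main".
ref="refs/heads/main"
branch=${ref#refs/heads/}           # strip the prefix
if [ "$branch" = "main" ]; then
  echo "deploying $branch"          # → deploying main
  # trigger the Komodo deployment here
fi
```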
## 🐛 Troubleshooting
### Connection to LM Studio Fails
```bash
# Check if extra_hosts is configured
docker service inspect n8n_n8n | grep -A 5 ExtraHosts
# Test from n8n container
docker exec $(docker ps -q -f name=n8n) curl http://lm-studio:1234/v1/models
# Verify LM Studio is running on .81
curl http://192.168.1.81:1234/v1/models
```
### Docker Commands Fail
```bash
# Verify Docker socket is mounted
docker service inspect n8n_n8n | grep -A 2 docker.sock
# Test from n8n container
docker exec $(docker ps -q -f name=n8n) docker ps
```
### Workflows Don't Execute
- Check n8n logs: `docker service logs n8n_n8n --tail 100`
- Verify workflow is activated (toggle in UI)
- Check schedule trigger settings
- Ensure n8n has sufficient resources (increase memory/CPU limits)
## 📜 Log Viewing
### Interactive Log Viewing with Dozzle
For **manual, real-time log viewing**, use **Dozzle**, which is already part of your homelab:
**Access:** Check your Traefik/Portainer configuration for the Dozzle URL
**Features:**
- Real-time log streaming with color coding
- Multi-container view
- Search and filter logs
- No configuration needed - automatically discovers containers
- Beautiful, responsive web UI
**Use Dozzle when you need to:**
- Investigate specific issues in real-time
- Follow logs during deployments
- Debug container startup problems
- Search for specific error messages
### Automated Log Analysis (This Workflow)
The **Homelab Log Analyzer** workflow complements Dozzle by:
- Running periodically (every 6 hours) to catch issues you might miss
- Using AI to identify patterns across multiple services
- Sending proactive alerts before issues escalate
- Providing trend analysis over time
**Both tools serve different purposes and work great together!**
---
## 📈 Next Steps
1. **Import and test** each workflow manually
2. **Configure notifications** (email/Discord/Slack)
3. **Review AI recommendations** from Integration Advisor
4. **Implement priority integrations** suggested by AI
5. **Monitor health scores** and adjust thresholds
6. **Create custom workflows** based on your specific needs
## 🔗 Useful Links
- **n8n Documentation:** https://docs.n8n.io
- **LM Studio API:** http://lm-studio:1234 (OpenAI-compatible)
- **Prometheus API:** http://prometheus.sj98.duckdns.org/api/v1
- **Dozzle Logs:** Your Dozzle URL (real-time log viewer)
- **Docker API:** Unix socket at `/var/run/docker.sock`
## 💡 Tips
- **Use Dozzle for interactive debugging**, workflows for automated monitoring
- Start with manual triggers before enabling schedules
- Use an AI model whose context window is large enough for your data
- Monitor n8n resource usage - increase limits if needed
- Keep workflows modular - easier to debug and maintain
- Save successful execution results for reference


@@ -0,0 +1,778 @@
{
"name": "Homelab Health Monitor",
"nodes": [
{
"parameters": {
"rule": {
"interval": [
{
"field": "minutes",
"minutesInterval": 15
}
]
}
},
"id": "schedule-trigger",
"name": "Every 15 Minutes",
"type": "n8n-nodes-base.scheduleTrigger",
"typeVersion": 1.2,
"position": [
250,
300
]
},
{
"parameters": {
"httpMethod": "POST",
"path": "health-check",
"responseMode": "responseNode",
"options": {}
},
"id": "webhook-trigger",
"name": "Manual Trigger Webhook",
"type": "n8n-nodes-base.webhook",
"typeVersion": 2,
"position": [
250,
500
],
"webhookId": "homelab-health"
},
{
"parameters": {
"url": "=https://www.google.com",
"options": {
"timeout": 5000
}
},
"id": "check-internet-dns",
"name": "Check Google DNS",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
500,
200
],
"continueOnFail": true
},
{
"parameters": {
"url": "=https://1.1.1.1",
"options": {
"timeout": 5000
}
},
"id": "check-cloudflare",
"name": "Check Cloudflare DNS",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
500,
350
],
"continueOnFail": true
},
{
"parameters": {
"url": "=http://192.168.1.196:80",
"options": {
"timeout": 3000
}
},
"id": "check-internal-dns-1",
"name": "Check Pi-hole .196",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
500,
500
],
"continueOnFail": true
},
{
"parameters": {
"url": "=http://192.168.1.245:80",
"options": {
"timeout": 3000
}
},
"id": "check-internal-dns-2",
"name": "Check Pi-hole .245",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
500,
650
],
"continueOnFail": true
},
{
"parameters": {
"url": "=http://192.168.1.62:80",
"options": {
"timeout": 3000
}
},
"id": "check-internal-dns-3",
"name": "Check Pi-hole .62",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
500,
800
],
"continueOnFail": true
},
{
"parameters": {
"command": "docker service ls --format '{{json .}}'"
},
"id": "docker-service-list",
"name": "Get Docker Services",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
750,
300
]
},
{
"parameters": {
"command": "docker node ls --format '{{json .}}'"
},
"id": "docker-node-list",
"name": "Get Swarm Nodes",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
750,
450
]
},
{
"parameters": {
"url": "=https://komodo.sj98.duckdns.org",
"options": {
"timeout": 5000
}
},
"id": "check-komodo",
"name": "Check Komodo",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
1000,
200
],
"continueOnFail": true
},
{
"parameters": {
"url": "=https://ai.sj98.duckdns.org/health",
"options": {
"timeout": 5000
}
},
"id": "check-openwebui",
"name": "Check OpenWebUI",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
1000,
350
],
"continueOnFail": true
},
{
"parameters": {
"url": "=https://paperless.sj98.duckdns.org/api",
"options": {
"timeout": 5000
}
},
"id": "check-paperless",
"name": "Check Paperless",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
1000,
500
],
"continueOnFail": true
},
{
"parameters": {
"url": "=https://prometheus.sj98.duckdns.org/-/healthy",
"options": {
"timeout": 5000
}
},
"id": "check-prometheus",
"name": "Check Prometheus",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
1000,
650
],
"continueOnFail": true
},
{
"parameters": {
"command": "curl -sf http://192.168.1.1 > /dev/null && echo '{\"node\": \"Gateway\", \"status\": \"healthy\"}' || echo '{\"node\": \"Gateway\", \"error\": \"unreachable\"}'"
},
"id": "check-gateway",
"name": "Check Gateway",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
500,
950
]
},
{
"parameters": {
"command": "metrics=$(curl -s --connect-timeout 2 http://192.168.1.57:9100/metrics | grep -E \"node_load1 |node_memory_MemAvailable_bytes |node_memory_MemTotal_bytes \" | tr '\\n' ',' || echo \"failed\"); echo \"{\\\"node\\\": \\\"Proxmox Host\\\", \\\"metrics\\\": \\\"$metrics\\\"}\""
},
"id": "check-proxmox",
"name": "Check Proxmox",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
1000,
950
]
},
{
"parameters": {
"url": "=http://lm-studio:1234/v1/models",
"options": {
"timeout": 5000
}
},
"id": "check-lm-studio",
"name": "Check LM Studio",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
1000,
800
],
"continueOnFail": true
},
{
"parameters": {
"jsCode": "const items = $input.all();\n\nconst healthData = {\n timestamp: new Date().toISOString(),\n network: {\n internet: [],\n internal: [],\n gateway: {}\n },\n docker: {\n services: [],\n nodes: []\n },\n infrastructure: {\n proxmox: {}\n },\n services: []\n};\n\n// Process all health check results\nfor (const item of items) {\n let nodeName = item.json.node || 'unknown';\n const success = !item.json.error;\n \n // Handle Execute Command JSON output (Gateway/Proxmox)\n if (item.json.stdout && item.json.stdout.trim().startsWith('{')) {\n try {\n const parsed = JSON.parse(item.json.stdout);\n if (parsed.node) nodeName = parsed.node;\n if (parsed.metrics) item.json.metrics = parsed.metrics;\n if (parsed.status) item.json.status = parsed.status;\n } catch (e) {}\n }\n\n if (nodeName.includes('DNS') || nodeName.includes('Cloudflare')) {\n healthData.network.internet.push({\n name: nodeName,\n status: success ? 'healthy' : 'unhealthy',\n error: item.json.error || null\n });\n } else if (nodeName.includes('Gateway')) {\n healthData.network.gateway = {\n status: item.json.status || 'unhealthy',\n error: item.json.error || null\n };\n } else if (nodeName.includes('Pi-hole')) {\n healthData.network.internal.push({\n name: nodeName,\n status: success ? 'healthy' : 'unhealthy',\n error: item.json.error || null\n });\n } else if (nodeName.includes('Proxmox')) {\n healthData.infrastructure.proxmox = {\n status: item.json.metrics ? 'healthy' : 'unhealthy',\n metrics: item.json.metrics || null,\n error: item.json.error || null\n };\n } else if (nodeName.includes('Docker')) {\n try {\n const data = JSON.parse(item.json.stdout || '[]');\n if (nodeName.includes('Services')) {\n healthData.docker.services = data;\n } else if (nodeName.includes('Nodes')) {\n healthData.docker.nodes = data;\n }\n } catch (e) {\n healthData.docker.error = e.message;\n }\n } else {\n healthData.services.push({\n name: nodeName,\n status: success ? 
'healthy' : 'unhealthy',\n statusCode: item.json.statusCode,\n error: item.json.error || null\n });\n }\n}\n\n// Calculate overall health score (0-100)\nlet totalChecks = 0;\nlet passedChecks = 0;\n\nhealthData.network.internet.forEach(check => {\n totalChecks++;\n if (check.status === 'healthy') passedChecks++;\n});\n\nif (healthData.network.gateway.status === 'healthy') {\n totalChecks++;\n passedChecks++;\n} else if (healthData.network.gateway.status) {\n totalChecks++;\n}\n\nhealthData.network.internal.forEach(check => {\n totalChecks++;\n if (check.status === 'healthy') passedChecks++;\n});\n\nif (healthData.infrastructure.proxmox.status === 'healthy') {\n totalChecks++;\n passedChecks++;\n} else if (healthData.infrastructure.proxmox.status) {\n totalChecks++;\n}\n\nhealthData.services.forEach(service => {\n totalChecks++;\n if (service.status === 'healthy') passedChecks++;\n});\n\nhealthData.healthScore = totalChecks > 0 ? Math.round((passedChecks / totalChecks) * 100) : 0;\nhealthData.summary = `${passedChecks}/${totalChecks} checks passed`;\n\nreturn [{ json: healthData }];"
},
"id": "aggregate-health",
"name": "Aggregate Health Data",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
1250,
500
]
},
{
"parameters": {
"method": "POST",
"url": "=http://lm-studio:1234/v1/chat/completions",
"sendBody": true,
"bodyParameters": {
"parameters": [
{
"name": "model",
"value": "=deepseek-r1-distill-llama-8b"
},
{
"name": "messages",
"value": "={{ [{\"role\":\"system\",\"content\":\"You are a homelab infrastructure analyst. Analyze health check data and provide concise insights about system status, potential issues, and recommendations. Respond in JSON format with fields: overall_status, critical_issues (array), warnings (array), recommendations (array).\"}, {\"role\":\"user\",\"content\":\"Analyze this homelab health data:\\n\\n\" + JSON.stringify($json, null, 2)}] }}"
},
{
"name": "temperature",
"value": "=0.3"
},
{
"name": "max_tokens",
"value": "=1000"
}
]
},
"options": {
"timeout": 30000
}
},
"id": "ai-analysis",
"name": "AI Health Analysis",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
1500,
500
]
},
{
"parameters": {
"jsCode": "const healthData = $('Aggregate Health Data').item.json;\nconst aiResponse = $json.choices[0].message.content;\n\nlet analysis;\ntry {\n // Try to parse AI response as JSON\n analysis = JSON.parse(aiResponse);\n} catch (e) {\n // If not JSON, structure it\n analysis = {\n overall_status: aiResponse.includes('healthy') ? 'healthy' : 'needs attention',\n raw_response: aiResponse\n };\n}\n\nconst report = {\n generated_at: new Date().toISOString(),\n health_score: healthData.healthScore,\n summary: healthData.summary,\n network_status: {\n internet: healthData.network.internet,\n internal_dns: healthData.network.internal\n },\n docker_swarm: {\n nodes: healthData.docker.nodes.length || 0,\n services: healthData.docker.services.length || 0,\n services_list: healthData.docker.services\n },\n service_endpoints: healthData.services,\n ai_analysis: analysis,\n alert_level: healthData.healthScore < 70 ? 'critical' : healthData.healthScore < 90 ? 'warning' : 'normal'\n};\n\nreturn [{ json: report }];"
},
"id": "build-report",
"name": "Build Final Report",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
1750,
500
]
},
{
"parameters": {
"conditions": {
"options": {
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "or",
"conditions": [
{
"id": "alert-critical",
"leftValue": "={{ $json.alert_level }}",
"rightValue": "critical",
"operator": {
"type": "string",
"operation": "equals"
}
},
{
"id": "alert-warning",
"leftValue": "={{ $json.health_score }}",
"rightValue": 80,
"operator": {
"type": "number",
"operation": "lt"
}
}
]
}
},
"id": "should-alert",
"name": "Should Alert?",
"type": "n8n-nodes-base.if",
"typeVersion": 2,
"position": [
2000,
500
]
},
{
"parameters": {
"content": "🚨 **Homelab Health Alert**\\n\\n**Health Score:** {{ $json.health_score }}/100\\n**Status:** {{ $json.alert_level }}\\n**Time:** {{ $json.generated_at }}\\n\\n**Summary:** {{ $json.summary }}\\n\\n**AI Analysis:**\\n{{ $json.ai_analysis.overall_status }}\\n\\n{% if $json.ai_analysis.critical_issues %}**Critical Issues:**\\n{% for issue in $json.ai_analysis.critical_issues %}- {{ issue }}\\n{% endfor %}{% endif %}\\n\\n{% if $json.ai_analysis.recommendations %}**Recommendations:**\\n{% for rec in $json.ai_analysis.recommendations %}- {{ rec }}\\n{% endfor %}{% endif %}",
"options": {}
},
"id": "format-alert",
"name": "Format Alert Message",
"type": "n8n-nodes-base.markdown",
"typeVersion": 1,
"position": [
2250,
400
]
},
{
"parameters": {
"respondWith": "json",
"responseBody": "={{ $json }}"
},
"id": "webhook-response",
"name": "Webhook Response",
"type": "n8n-nodes-base.respondToWebhook",
"typeVersion": 1,
"position": [
2250,
600
]
}
],
"pinData": {},
"connections": {
"Every 15 Minutes": {
"main": [
[
{
"node": "Check Google DNS",
"type": "main",
"index": 0
},
{
"node": "Check Cloudflare DNS",
"type": "main",
"index": 0
},
{
"node": "Check Pi-hole .196",
"type": "main",
"index": 0
},
{
"node": "Check Pi-hole .245",
"type": "main",
"index": 0
},
{
"node": "Check Pi-hole .62",
"type": "main",
"index": 0
},
{
"node": "Get Docker Services",
"type": "main",
"index": 0
},
{
"node": "Get Swarm Nodes",
"type": "main",
"index": 0
},
{
"node": "Check Komodo",
"type": "main",
"index": 0
},
{
"node": "Check OpenWebUI",
"type": "main",
"index": 0
},
{
"node": "Check Paperless",
"type": "main",
"index": 0
},
{
"node": "Check Prometheus",
"type": "main",
"index": 0
},
{
"node": "Check LM Studio",
"type": "main",
"index": 0
},
{
"node": "Check Gateway",
"type": "main",
"index": 0
},
{
"node": "Check Proxmox",
"type": "main",
"index": 0
}
]
]
},
"Manual Trigger Webhook": {
"main": [
[
{
"node": "Check Google DNS",
"type": "main",
"index": 0
},
{
"node": "Check Cloudflare DNS",
"type": "main",
"index": 0
},
{
"node": "Check Pi-hole .196",
"type": "main",
"index": 0
},
{
"node": "Check Pi-hole .245",
"type": "main",
"index": 0
},
{
"node": "Check Pi-hole .62",
"type": "main",
"index": 0
},
{
"node": "Get Docker Services",
"type": "main",
"index": 0
},
{
"node": "Get Swarm Nodes",
"type": "main",
"index": 0
},
{
"node": "Check Komodo",
"type": "main",
"index": 0
},
{
"node": "Check OpenWebUI",
"type": "main",
"index": 0
},
{
"node": "Check Paperless",
"type": "main",
"index": 0
},
{
"node": "Check Prometheus",
"type": "main",
"index": 0
},
{
"node": "Check LM Studio",
"type": "main",
"index": 0
},
{
"node": "Check Gateway",
"type": "main",
"index": 0
},
{
"node": "Check Proxmox",
"type": "main",
"index": 0
}
]
]
},
"Check Google DNS": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Cloudflare DNS": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Pi-hole .196": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Pi-hole .245": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Pi-hole .62": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Get Docker Services": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Get Swarm Nodes": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Komodo": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check OpenWebUI": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Paperless": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Prometheus": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Gateway": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check Proxmox": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Check LM Studio": {
"main": [
[
{
"node": "Aggregate Health Data",
"type": "main",
"index": 0
}
]
]
},
"Aggregate Health Data": {
"main": [
[
{
"node": "AI Health Analysis",
"type": "main",
"index": 0
}
]
]
},
"AI Health Analysis": {
"main": [
[
{
"node": "Build Final Report",
"type": "main",
"index": 0
}
]
]
},
"Build Final Report": {
"main": [
[
{
"node": "Should Alert?",
"type": "main",
"index": 0
}
]
]
},
"Should Alert?": {
"main": [
[
{
"node": "Format Alert Message",
"type": "main",
"index": 0
}
],
[
{
"node": "Webhook Response",
"type": "main",
"index": 0
}
]
]
},
"Format Alert Message": {
"main": [
[
{
"node": "Webhook Response",
"type": "main",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "1",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "homelab"
},
"id": "homelab-health-monitor",
"tags": []
}


@@ -0,0 +1,288 @@
{
"name": "Homelab Integration Advisor",
"nodes": [
{
"parameters": {
"rule": {
"interval": [
{
"field": "days",
"daysInterval": 1,
"triggerAtHour": 9
}
]
}
},
"id": "daily-trigger",
"name": "Daily at 9 AM",
"type": "n8n-nodes-base.scheduleTrigger",
"typeVersion": 1.2,
"position": [
250,
400
]
},
{
"parameters": {
"httpMethod": "POST",
"path": "integration-advisor",
"responseMode": "responseNode",
"options": {}
},
"id": "webhook-trigger",
"name": "Manual Trigger",
"type": "n8n-nodes-base.webhook",
"typeVersion": 2,
"position": [
250,
600
],
"webhookId": "integration-advisor"
},
{
"parameters": {
"command": "docker service ls --format '{{.Name}}|{{.Mode}}|{{.Replicas}}|{{.Image}}|{{.Ports}}'"
},
"id": "get-services",
"name": "Get All Services",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
500,
400
]
},
{
"parameters": {
"url": "=http://prometheus:9090/api/v1/query?query=up",
"options": {
"timeout": 5000
}
},
"id": "get-prometheus-metrics",
"name": "Get Prometheus Metrics",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
500,
550
],
"continueOnFail": true
},
{
"parameters": {
"url": "=http://lm-studio:1234/v1/models",
"options": {}
},
"id": "get-ai-models",
"name": "Get Available AI Models",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
500,
700
],
"continueOnFail": true
},
{
"parameters": {
"jsCode": "const items = $input.all();\n\nconst inventory = {\n timestamp: new Date().toISOString(),\n services: [],\n capabilities: {\n ai: [],\n monitoring: [],\n automation: [],\n storage: [],\n productivity: [],\n media: [],\n development: []\n },\n integration_potential: []\n};\n\n// Parse service list\nconst serviceData = items.find(i => i.json.stdout);\nif (serviceData && serviceData.json.stdout) {\n const lines = serviceData.json.stdout.split('\\n').filter(l => l.trim());\n lines.forEach(line => {\n const [name, mode, replicas, image, ports] = line.split('|');\n const service = { name, mode, replicas, image, ports };\n inventory.services.push(service);\n \n // Categorize by capability\n if (name.includes('openwebui') || name.includes('lm-studio') || name.includes('ollama')) {\n inventory.capabilities.ai.push(name);\n } else if (name.includes('prometheus') || name.includes('grafana') || name.includes('alert')) {\n inventory.capabilities.monitoring.push(name);\n } else if (name.includes('n8n') || name.includes('komodo')) {\n inventory.capabilities.automation.push(name);\n } else if (name.includes('paperless') || name.includes('stirling') || name.includes('nextcloud')) {\n inventory.capabilities.productivity.push(name);\n } else if (name.includes('plex') || name.includes('jellyfin') || name.includes('immich')) {\n inventory.capabilities.media.push(name);\n } else if (name.includes('gitea') || name.includes('code-server')) {\n inventory.capabilities.development.push(name);\n } else if (name.includes('omv') || name.includes('samba')) {\n inventory.capabilities.storage.push(name);\n }\n });\n}\n\n// Get AI models\nconst aiModels = items.find(i => i.json.data);\nif (aiModels && aiModels.json.data) {\n inventory.ai_models = aiModels.json.data.map(m => m.id);\n}\n\n// Define integration opportunities\nconst integrations = [\n { from: 'n8n', to: 'paperless', type: 'Document automation', potential: 'high' },\n { from: 'n8n', to: 'prometheus', type: 'Metric-based 
triggers', potential: 'high' },\n { from: 'n8n', to: 'openwebui', type: 'AI-powered workflows', potential: 'high' },\n { from: 'openwebui', to: 'searxng', type: 'Enhanced search', potential: 'medium' },\n { from: 'prometheus', to: 'grafana', type: 'Visualization', potential: 'existing' },\n { from: 'gitea', to: 'komodo', type: 'CI/CD automation', potential: 'high' },\n { from: 'paperless', to: 'nextcloud', type: 'Document storage', potential: 'medium' },\n { from: 'immich', to: 'openwebui', type: 'Photo analysis', potential: 'medium' },\n { from: 'home-assistant', to: 'all', type: 'Smart home integration', potential: 'high' }\n];\n\ninventory.integration_potential = integrations.filter(i => {\n const fromExists = inventory.services.some(s => s.name.includes(i.from.split('-')[0]));\n const toExists = i.to === 'all' || inventory.services.some(s => s.name.includes(i.to.split('-')[0]));\n return fromExists && toExists;\n});\n\nreturn [{ json: inventory }];"
},
"id": "build-inventory",
"name": "Build Service Inventory",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
750,
500
]
},
{
"parameters": {
"method": "POST",
"url": "=http://lm-studio:1234/v1/chat/completions",
"sendBody": true,
"bodyParameters": {
"parameters": [
{
"name": "model",
"value": "=deepseek-r1-distill-llama-8b"
},
{
"name": "messages",
"value": "={{ [{\"role\":\"system\",\"content\":\"You are a homelab integration expert specializing in service orchestration with n8n, Docker, and modern DevOps tools. Analyze the provided service inventory and recommend specific integration workflows. For each recommendation provide: 1) Services involved 2) Integration type 3) Specific n8n workflow pattern 4) Expected benefits 5) Complexity (low/medium/high). Respond in JSON format with an array of recommendations.\"}, {\"role\":\"user\",\"content\":\"Analyze this homelab and recommend integration workflows:\\n\\nServices: \" + JSON.stringify($json.capabilities, null, 2) + \"\\n\\nAvailable AI Models: \" + JSON.stringify($json.ai_models || [], null, 2) + \"\\n\\nPotential Integrations Identified: \" + JSON.stringify($json.integration_potential, null, 2)}] }}"
},
{
"name": "temperature",
"value": "=0.4"
},
{
"name": "max_tokens",
"value": "=2000"
}
]
},
"options": {
"timeout": 40000
}
},
"id": "ai-integration-advisor",
"name": "AI Integration Advisor",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
1000,
500
]
},
{
"parameters": {
"jsCode": "const inventory = $('Build Service Inventory').item.json;\nconst aiResponse = $json.choices[0].message.content;\n\nlet recommendations;\ntry {\n const jsonMatch = aiResponse.match(/\\{[\\s\\S]*\\}|\\[[\\s\\S]*\\]/);\n recommendations = jsonMatch ? JSON.parse(jsonMatch[0]) : { raw: aiResponse };\n} catch (e) {\n recommendations = { raw: aiResponse, error: e.message };\n}\n\nconst report = {\n generated_at: new Date().toISOString(),\n homelab_summary: {\n total_services: inventory.services.length,\n capabilities: inventory.capabilities,\n ai_models_available: inventory.ai_models?.length || 0\n },\n integration_opportunities: inventory.integration_potential,\n ai_recommendations: recommendations,\n priority_integrations: [],\n quick_wins: []\n};\n\n// Extract priority integrations from AI response\nif (Array.isArray(recommendations)) {\n report.priority_integrations = recommendations\n .filter(r => r.complexity === 'low' || r.complexity === 'medium')\n .slice(0, 5);\n report.quick_wins = recommendations\n .filter(r => r.complexity === 'low')\n .slice(0, 3);\n} else if (recommendations.recommendations) {\n report.priority_integrations = recommendations.recommendations.slice(0, 5);\n}\n\nreturn [{ json: report }];"
},
"id": "build-integration-report",
"name": "Build Integration Report",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
1250,
500
]
},
{
"parameters": {
"respondWith": "json",
"responseBody": "={{ $json }}"
},
"id": "webhook-response",
"name": "Webhook Response",
"type": "n8n-nodes-base.respondToWebhook",
"typeVersion": 1,
"position": [
1500,
500
]
}
],
"pinData": {},
"connections": {
"Daily at 9 AM": {
"main": [
[
{
"node": "Get All Services",
"type": "main",
"index": 0
},
{
"node": "Get Prometheus Metrics",
"type": "main",
"index": 0
},
{
"node": "Get Available AI Models",
"type": "main",
"index": 0
}
]
]
},
"Manual Trigger": {
"main": [
[
{
"node": "Get All Services",
"type": "main",
"index": 0
},
{
"node": "Get Prometheus Metrics",
"type": "main",
"index": 0
},
{
"node": "Get Available AI Models",
"type": "main",
"index": 0
}
]
]
},
"Get All Services": {
"main": [
[
{
"node": "Build Service Inventory",
"type": "main",
"index": 0
}
]
]
},
"Get Prometheus Metrics": {
"main": [
[
{
"node": "Build Service Inventory",
"type": "main",
"index": 0
}
]
]
},
"Get Available AI Models": {
"main": [
[
{
"node": "Build Service Inventory",
"type": "main",
"index": 0
}
]
]
},
"Build Service Inventory": {
"main": [
[
{
"node": "AI Integration Advisor",
"type": "main",
"index": 0
}
]
]
},
"AI Integration Advisor": {
"main": [
[
{
"node": "Build Integration Report",
"type": "main",
"index": 0
}
]
]
},
"Build Integration Report": {
"main": [
[
{
"node": "Webhook Response",
"type": "main",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "1",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "homelab"
},
"id": "homelab-integration-advisor",
"tags": []
}
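The Build Integration Report node above pulls structured JSON out of a free-form model reply. As a standalone reference, the same extraction step can be sketched in plain JavaScript (a sketch of the node's parsing logic only, not the workflow; the sample replies are illustrative):

```javascript
// Sketch of the "Build Integration Report" parsing step: LLM replies often
// wrap JSON in markdown fences or prose, so grab the first {...} or [...]
// span and parse it, falling back to the raw text when parsing fails.
function extractJson(aiResponse) {
  try {
    const jsonMatch = aiResponse.match(/\{[\s\S]*\}|\[[\s\S]*\]/);
    return jsonMatch ? JSON.parse(jsonMatch[0]) : { raw: aiResponse };
  } catch (e) {
    return { raw: aiResponse, error: e.message };
  }
}

// Fenced reply: the fence and prose are stripped, the object survives.
const wrapped = 'Here you go:\n```json\n{"recommendations": [{"complexity": "low"}]}\n```';
console.log(extractJson(wrapped).recommendations[0].complexity); // low

// Non-JSON reply: returned under a raw key instead of throwing.
console.log('raw' in extractJson('no structured output')); // true
```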

@@ -0,0 +1,332 @@
{
"name": "Homelab Log Analyzer",
"nodes": [
{
"parameters": {
"rule": {
"interval": [
{
"field": "hours",
"hoursInterval": 6
}
]
}
},
"id": "schedule-trigger",
"name": "Every 6 Hours",
"type": "n8n-nodes-base.scheduleTrigger",
"typeVersion": 1.2,
"position": [
250,
400
]
},
{
"parameters": {
"command": "docker service logs --tail 100 --timestamps traefik_traefik 2>&1 || echo 'Service not found'"
},
"id": "logs-traefik",
"name": "Get Traefik Logs",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
500,
200
],
"continueOnFail": true
},
{
"parameters": {
"command": "docker service logs --tail 100 --timestamps n8n_n8n 2>&1 || echo 'Service not found'"
},
"id": "logs-n8n",
"name": "Get n8n Logs",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
500,
350
],
"continueOnFail": true
},
{
"parameters": {
"command": "docker service logs --tail 100 --timestamps ai_openwebui 2>&1 || echo 'Service not found'"
},
"id": "logs-openwebui",
"name": "Get OpenWebUI Logs",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
500,
500
],
"continueOnFail": true
},
{
"parameters": {
"command": "docker service logs --tail 100 --timestamps infrastructure_komodo-core 2>&1 || echo 'Service not found'"
},
"id": "logs-komodo",
"name": "Get Komodo Logs",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
500,
650
],
"continueOnFail": true
},
{
"parameters": {
"command": "docker service logs --tail 100 --timestamps monitoring_prometheus 2>&1 || echo 'Service not found'"
},
"id": "logs-prometheus",
"name": "Get Prometheus Logs",
"type": "n8n-nodes-base.executeCommand",
"typeVersion": 1,
"position": [
500,
800
],
"continueOnFail": true
},
{
"parameters": {
"jsCode": "const items = $input.all();\n\nconst logAnalysis = {\n timestamp: new Date().toISOString(),\n services: [],\n errors: [],\n warnings: [],\n summary: {}\n};\n\n// Note: no 'g' flag here -- a global RegExp keeps lastIndex between\n// .test() calls, which silently skips matches across lines\nconst errorPatterns = [\n /ERROR/i,\n /FATAL/i,\n /CRITICAL/i,\n /FAIL/i,\n /panic:/i,\n /exception/i\n];\n\nconst warningPatterns = [\n /WARN/i,\n /WARNING/i,\n /deprecated/i,\n /timeout/i,\n /retry/i\n];\n\nfor (const item of items) {\n const nodeName = item.json.node || 'unknown';\n const stdout = item.json.stdout || '';\n const lines = stdout.split('\\n').filter(l => l.trim());\n \n const serviceLog = {\n name: nodeName,\n totalLines: lines.length,\n errors: [],\n warnings: [],\n recentEntries: lines.slice(-10) // Last 10 lines\n };\n \n // Scan for errors and warnings\n lines.forEach(line => {\n const matchesError = errorPatterns.some(pattern => pattern.test(line));\n const matchesWarning = warningPatterns.some(pattern => pattern.test(line));\n \n if (matchesError) {\n const errorEntry = {\n service: nodeName,\n line: line,\n timestamp: line.match(/^\\d{4}-\\d{2}-\\d{2}T[\\d:]+\\.\\d+Z/) ? line.split(' ')[0] : null\n };\n serviceLog.errors.push(errorEntry);\n logAnalysis.errors.push(errorEntry);\n } else if (matchesWarning) {\n const warningEntry = {\n service: nodeName,\n line: line,\n timestamp: line.match(/^\\d{4}-\\d{2}-\\d{2}T[\\d:]+\\.\\d+Z/) ? line.split(' ')[0] : null\n };\n serviceLog.warnings.push(warningEntry);\n logAnalysis.warnings.push(warningEntry);\n }\n });\n \n logAnalysis.services.push(serviceLog);\n}\n\n// Generate summary\nlogAnalysis.summary = {\n totalServices: logAnalysis.services.length,\n totalErrors: logAnalysis.errors.length,\n totalWarnings: logAnalysis.warnings.length,\n servicesWithErrors: logAnalysis.services.filter(s => s.errors.length > 0).map(s => s.name),\n servicesWithWarnings: logAnalysis.services.filter(s => s.warnings.length > 0).map(s => s.name)\n};\n\nreturn [{ json: logAnalysis }];"
},
"id": "parse-logs",
"name": "Parse and Analyze Logs",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
750,
500
]
},
{
"parameters": {
"method": "POST",
"url": "=http://lm-studio:1234/v1/chat/completions",
"sendBody": true,
"bodyParameters": {
"parameters": [
{
"name": "model",
"value": "=qwen2.5-coder-7b-instruct"
},
{
"name": "messages",
"value": "={{ [{\"role\":\"system\",\"content\":\"You are a Docker/Kubernetes expert and log analyzer. Analyze these Docker service logs and identify: 1) Critical issues requiring immediate attention 2) Performance concerns 3) Configuration problems 4) Recommended actions. Respond in JSON format with: critical_issues (array), performance_concerns (array), config_issues (array), recommendations (array).\"}, {\"role\":\"user\",\"content\":\"Analyze these homelab service logs:\\n\\nSummary: \" + JSON.stringify($json.summary, null, 2) + \"\\n\\nErrors Found: \" + JSON.stringify($json.errors.slice(0, 20), null, 2) + \"\\n\\nWarnings Found: \" + JSON.stringify($json.warnings.slice(0, 20), null, 2)}] }}"
},
{
"name": "temperature",
"value": "=0.2"
},
{
"name": "max_tokens",
"value": "=1500"
}
]
},
"options": {
"timeout": 30000
}
},
"id": "ai-log-analysis",
"name": "AI Log Analysis",
"type": "n8n-nodes-base.httpRequest",
"typeVersion": 4.2,
"position": [
1000,
500
]
},
{
"parameters": {
"jsCode": "const logData = $('Parse and Analyze Logs').item.json;\nconst aiResponse = $json.choices[0].message.content;\n\nlet aiAnalysis;\ntry {\n // Extract JSON from response (AI might wrap it in markdown)\n const jsonMatch = aiResponse.match(/\\{[\\s\\S]*\\}/);\n aiAnalysis = jsonMatch ? JSON.parse(jsonMatch[0]) : { raw: aiResponse };\n} catch (e) {\n aiAnalysis = { raw: aiResponse };\n}\n\nconst report = {\n generated_at: new Date().toISOString(),\n period: '6 hours',\n summary: logData.summary,\n top_errors: logData.errors.slice(0, 10),\n top_warnings: logData.warnings.slice(0, 10),\n ai_analysis: aiAnalysis,\n action_required: logData.summary.totalErrors > 10 || (aiAnalysis.critical_issues && aiAnalysis.critical_issues.length > 0)\n};\n\nreturn [{ json: report }];"
},
"id": "build-log-report",
"name": "Build Log Report",
"type": "n8n-nodes-base.code",
"typeVersion": 2,
"position": [
1250,
500
]
},
{
"parameters": {
"conditions": {
"options": {
"leftValue": "",
"caseSensitive": true,
"typeValidation": "strict"
},
"combinator": "or",
"conditions": [
{
"id": "has-action-required",
"leftValue": "={{ $json.action_required }}",
"rightValue": true,
"operator": {
"type": "boolean",
"operation": "true"
}
},
{
"id": "many-errors",
"leftValue": "={{ $json.summary.totalErrors }}",
"rightValue": 5,
"operator": {
"type": "number",
"operation": "gt"
}
}
]
}
},
"id": "should-alert-logs",
"name": "Should Alert?",
"type": "n8n-nodes-base.if",
"typeVersion": 2,
"position": [
1500,
500
]
}
],
"pinData": {},
"connections": {
"Every 6 Hours": {
"main": [
[
{
"node": "Get Traefik Logs",
"type": "main",
"index": 0
},
{
"node": "Get n8n Logs",
"type": "main",
"index": 0
},
{
"node": "Get OpenWebUI Logs",
"type": "main",
"index": 0
},
{
"node": "Get Komodo Logs",
"type": "main",
"index": 0
},
{
"node": "Get Prometheus Logs",
"type": "main",
"index": 0
}
]
]
},
"Get Traefik Logs": {
"main": [
[
{
"node": "Parse and Analyze Logs",
"type": "main",
"index": 0
}
]
]
},
"Get n8n Logs": {
"main": [
[
{
"node": "Parse and Analyze Logs",
"type": "main",
"index": 0
}
]
]
},
"Get OpenWebUI Logs": {
"main": [
[
{
"node": "Parse and Analyze Logs",
"type": "main",
"index": 0
}
]
]
},
"Get Komodo Logs": {
"main": [
[
{
"node": "Parse and Analyze Logs",
"type": "main",
"index": 0
}
]
]
},
"Get Prometheus Logs": {
"main": [
[
{
"node": "Parse and Analyze Logs",
"type": "main",
"index": 0
}
]
]
},
"Parse and Analyze Logs": {
"main": [
[
{
"node": "AI Log Analysis",
"type": "main",
"index": 0
}
]
]
},
"AI Log Analysis": {
"main": [
[
{
"node": "Build Log Report",
"type": "main",
"index": 0
}
]
]
},
"Build Log Report": {
"main": [
[
{
"node": "Should Alert?",
"type": "main",
"index": 0
}
]
]
}
},
"active": false,
"settings": {
"executionOrder": "v1"
},
"versionId": "1",
"meta": {
"templateCredsSetupCompleted": true,
"instanceId": "homelab"
},
"id": "homelab-log-analyzer",
"tags": []
}
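For reference outside n8n, the log triage in the Parse and Analyze Logs node and the threshold in the Should Alert? node boil down to a few lines of JavaScript. A minimal sketch with illustrative log lines; the more-than-5-errors rule mirrors the IF node's condition:

```javascript
// Sketch of the workflow's log triage: case-insensitive error/warning
// patterns, first-match-wins (a line counted as an error is not also
// counted as a warning), and an alert decision on the error count.
const errorPatterns = [/ERROR/i, /FATAL/i, /CRITICAL/i, /FAIL/i, /panic:/i, /exception/i];
const warningPatterns = [/WARN/i, /deprecated/i, /timeout/i, /retry/i];

function triage(lines) {
  const errors = [];
  const warnings = [];
  for (const line of lines) {
    if (errorPatterns.some(p => p.test(line))) errors.push(line);
    else if (warningPatterns.some(p => p.test(line))) warnings.push(line);
  }
  // Mirrors the "Should Alert?" IF node: more than 5 errors triggers an alert.
  return { errors, warnings, shouldAlert: errors.length > 5 };
}

const sample = [
  '2025-12-18T10:00:00.000000000Z ERROR connection refused',
  '2025-12-18T10:00:01.000000000Z WARN retrying in 5s',
  '2025-12-18T10:00:02.000000000Z INFO server started',
];
const result = triage(sample);
console.log(result.errors.length, result.warnings.length, result.shouldAlert); // 1 1 false
```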

@@ -0,0 +1,17 @@
services:
pihole:
image: pihole/pihole:latest
container_name: pihole
network_mode: host
environment:
TZ: "America/Chicago"
WEBPASSWORD: "YOURPASSWORD"
FTLCONF_webserver_enabled: "true"
FTLCONF_webserver_port: "7300"
WEB_BIND_ADDR: "0.0.0.0"
DNS1: "127.0.0.1#5335"
DNS2: "0.0.0.0"
volumes:
- ./etc-pihole:/etc/pihole
- ./etc-dnsmasq.d:/etc/dnsmasq.d
restart: unless-stopped

@@ -0,0 +1,23 @@
docker run -d \
--name pihole \
--network host \
-e TZ=America/Chicago \
-e WEBPASSWORD=YOURPASSWORD \
-e FTLCONF_webserver_enabled=true \
-e FTLCONF_webserver_port=7300 \
-e WEB_BIND_ADDR=0.0.0.0 \
-e DNS1=127.0.0.1#5335 \
-e DNS2=0.0.0.0 \
-v pihole_etc:/etc/pihole:rw \
-v pihole_dnsmasq:/etc/dnsmasq.d:rw \
--restart=unless-stopped \
pihole/pihole:latest
docker run -d \
--name adguardhome \
--network host \
-e TZ=America/Chicago \
-v adguard_conf:/opt/adguardhome/conf:rw \
-v adguard_work:/opt/adguardhome/work:rw \
--restart=unless-stopped \
adguard/adguardhome:latest
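The adguardhome container above terminates the encrypted upstream leg of the DOH/DOT setup. If AdGuard Home (rather than Unbound) is what Pi-hole forwards to on `127.0.0.1#5335`, the relevant piece of its config is the `dns` section of `AdGuardHome.yaml`. A hedged sketch, assuming AdGuard Home's standard upstream URL syntax; the port and upstream servers here are examples, not values taken from this repo:

```yaml
# Sketch only: /opt/adguardhome/conf/AdGuardHome.yaml (dns section).
# upstream_dns entries use https:// for DoH and tls:// for DoT.
dns:
  bind_hosts:
    - 127.0.0.1
  port: 5335                                 # example: serve as Pi-hole's upstream
  upstream_dns:
    - https://dns.cloudflare.com/dns-query   # DNS-over-HTTPS
    - tls://1.1.1.1                          # DNS-over-TLS
```

Since both containers use host networking, only one resolver (AdGuard Home or Unbound) can own port 5335 at a time.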

@@ -0,0 +1 @@
docker run -d --name pihole --network host -e TZ=America/Chicago -e WEBPASSWORD=YOURPASSWORD -e FTLCONF_webserver_enabled=true -e FTLCONF_webserver_port=7300 -e WEB_BIND_ADDR=0.0.0.0 -e DNS1=127.0.0.1#5335 -e DNS2=0.0.0.0 -v pihole_etc:/etc/pihole -v pihole_dnsmasq:/etc/dnsmasq.d --restart=unless-stopped pihole/pihole:latest

@@ -0,0 +1,92 @@
; This file holds the information on root name servers needed to
; initialize cache of Internet domain name servers
; (e.g. reference this file in the "cache . <file>"
; configuration file of BIND domain name servers).
;
; This file is made available by InterNIC
; under anonymous FTP as
; file /domain/named.cache
; on server FTP.INTERNIC.NET
; -OR- RS.INTERNIC.NET
;
; last update: November 20, 2025
; related version of root zone: 2025112001
;
; FORMERLY NS.INTERNIC.NET
;
. 3600000 NS A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4
A.ROOT-SERVERS.NET. 3600000 AAAA 2001:503:ba3e::2:30
;
; FORMERLY NS1.ISI.EDU
;
. 3600000 NS B.ROOT-SERVERS.NET.
B.ROOT-SERVERS.NET. 3600000 A 170.247.170.2
B.ROOT-SERVERS.NET. 3600000 AAAA 2801:1b8:10::b
;
; FORMERLY C.PSI.NET
;
. 3600000 NS C.ROOT-SERVERS.NET.
C.ROOT-SERVERS.NET. 3600000 A 192.33.4.12
C.ROOT-SERVERS.NET. 3600000 AAAA 2001:500:2::c
;
; FORMERLY TERP.UMD.EDU
;
. 3600000 NS D.ROOT-SERVERS.NET.
D.ROOT-SERVERS.NET. 3600000 A 199.7.91.13
D.ROOT-SERVERS.NET. 3600000 AAAA 2001:500:2d::d
;
; FORMERLY NS.NASA.GOV
;
. 3600000 NS E.ROOT-SERVERS.NET.
E.ROOT-SERVERS.NET. 3600000 A 192.203.230.10
E.ROOT-SERVERS.NET. 3600000 AAAA 2001:500:a8::e
;
; FORMERLY NS.ISC.ORG
;
. 3600000 NS F.ROOT-SERVERS.NET.
F.ROOT-SERVERS.NET. 3600000 A 192.5.5.241
F.ROOT-SERVERS.NET. 3600000 AAAA 2001:500:2f::f
;
; FORMERLY NS.NIC.DDN.MIL
;
. 3600000 NS G.ROOT-SERVERS.NET.
G.ROOT-SERVERS.NET. 3600000 A 192.112.36.4
G.ROOT-SERVERS.NET. 3600000 AAAA 2001:500:12::d0d
;
; FORMERLY AOS.ARL.ARMY.MIL
;
. 3600000 NS H.ROOT-SERVERS.NET.
H.ROOT-SERVERS.NET. 3600000 A 198.97.190.53
H.ROOT-SERVERS.NET. 3600000 AAAA 2001:500:1::53
;
; FORMERLY NIC.NORDU.NET
;
. 3600000 NS I.ROOT-SERVERS.NET.
I.ROOT-SERVERS.NET. 3600000 A 192.36.148.17
I.ROOT-SERVERS.NET. 3600000 AAAA 2001:7fe::53
;
; OPERATED BY VERISIGN, INC.
;
. 3600000 NS J.ROOT-SERVERS.NET.
J.ROOT-SERVERS.NET. 3600000 A 192.58.128.30
J.ROOT-SERVERS.NET. 3600000 AAAA 2001:503:c27::2:30
;
; OPERATED BY RIPE NCC
;
. 3600000 NS K.ROOT-SERVERS.NET.
K.ROOT-SERVERS.NET. 3600000 A 193.0.14.129
K.ROOT-SERVERS.NET. 3600000 AAAA 2001:7fd::1
;
; OPERATED BY ICANN
;
. 3600000 NS L.ROOT-SERVERS.NET.
L.ROOT-SERVERS.NET. 3600000 A 199.7.83.42
L.ROOT-SERVERS.NET. 3600000 AAAA 2001:500:9f::42
;
; OPERATED BY WIDE
;
. 3600000 NS M.ROOT-SERVERS.NET.
M.ROOT-SERVERS.NET. 3600000 A 202.12.27.33
M.ROOT-SERVERS.NET. 3600000 AAAA 2001:dc3::35
; End of file

@@ -0,0 +1,56 @@
server:
# Listener (Pi-hole runs in host mode and queries localhost:5335)
interface: 127.0.0.1@5335
access-control: 127.0.0.1/32 allow
access-control: ::1 allow
# Protocols
do-ip4: yes
do-ip6: yes
do-udp: yes
do-tcp: yes
# Threads: match physical cores (not hyperthreads)
num-threads: 2
so-reuseport: yes
# Concurrency tuning
outgoing-range: 1024
incoming-num-tcp: 32
outgoing-num-tcp: 64
num-queries-per-thread: 4096
# Cache sizing (right-sized for ~200k Q/day, 4 GiB VM)
msg-cache-size: 128m
rrset-cache-size: 256m
infra-cache-numhosts: 10000
# TTL and prefetch to avoid cold-cache spikes
cache-min-ttl: 300
cache-max-ttl: 86400
prefetch: yes
prefetch-key: yes
serve-expired: yes # optional but smooths client behavior on slow upstreams
# Network socket buffers for bursts
so-rcvbuf: 16m
so-sndbuf: 16m
# DNSSEC (keep enabled)
root-hints: "/var/lib/unbound/root.hints"
auto-trust-anchor-file: "/var/lib/unbound/root.key"
# Hardening (lightweight)
harden-glue: yes
harden-dnssec-stripped: yes
harden-referral-path: yes
harden-algo-downgrade: yes
use-caps-for-id: yes
unwanted-reply-threshold: 10000
# Logging / verbosity (low in production)
verbosity: 1
logfile: "" # "" logs to stderr; messages go to syslog while use-syslog stays yes (the default)
log-queries: no
log-replies: no
log-servfail: yes

@@ -0,0 +1,5 @@
remote-control:
control-enable: yes
# by default the control interface is 127.0.0.1 and ::1 and port 8953
# it is possible to use a unix socket too
control-interface: /run/unbound.ctl

@@ -0,0 +1,4 @@
server:
# The following line will configure unbound to perform cryptographic
# DNSSEC validation using the root trust anchor.
auto-trust-anchor-file: "/var/lib/unbound/root.key"

@@ -9,7 +9,7 @@ volumes:
services:
openwebui:
image: ghcr.io/open-webui/open-webui:0.3.32
image: ghcr.io/open-webui/open-webui:main
volumes:
- openwebui_data:/app/backend/data
networks:
@@ -41,15 +41,15 @@ services:
failure_action: rollback
labels:
- "traefik.enable=true"
- "traefik.http.routers.openwebui.rule=Host(`ai.sj98.duckdns.org`)"
- "traefik.http.routers.openwebui.rule=Host(`ai.sterl.xyz`)"
- "traefik.http.routers.openwebui.entrypoints=websecure"
- "traefik.http.routers.openwebui.tls.certresolver=leresolver"
- "traefik.http.routers.openwebui.tls.certresolver=cfresolver"
- "traefik.http.services.openwebui.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=openwebui"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"

@@ -17,8 +17,7 @@ volumes:
secrets:
paperless_db_password:
external: true
paperless_secret_key:
external: true
services:
paperless-redis:
@@ -47,11 +46,11 @@ services:
condition: on-failure
delay: 5s
max_attempts: 3
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
paperless-db:
image: postgres:15-alpine
@@ -85,14 +84,14 @@ services:
condition: on-failure
delay: 5s
max_attempts: 3
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
paperless:
image: ghcr.io/paperless-ngx/paperless-ngx:2.19.3
image: ghcr.io/paperless-ngx/paperless-ngx:latest
volumes:
- paperless_data:/usr/src/paperless/data
- paperless_media:/usr/src/paperless/media
@@ -102,12 +101,12 @@ services:
- PAPERLESS_DBNAME=paperless
- PAPERLESS_DBUSER=paperless
- PAPERLESS_DBPASS_FILE=/run/secrets/paperless_db_password
- PAPERLESS_URL=https://paperless.sj98.duckdns.org
- PAPERLESS_SECRET_KEY_FILE=/run/secrets/paperless_secret_key
- PAPERLESS_URL=https://paperless.sterl.xyz
- PAPERLESS_SECRET_KEY=e83bed4e4604e760c0429188e1781b0a8f89de936336a53609340f6b3e2182b8
- TZ=America/Chicago
secrets:
- paperless_db_password
- paperless_secret_key
depends_on:
- paperless-redis
- paperless-db
@@ -141,21 +140,22 @@ services:
failure_action: rollback
labels:
- "traefik.enable=true"
- "traefik.http.routers.paperless.rule=Host(`paperless.sj98.duckdns.org`)"
- "traefik.http.routers.paperless.rule=Host(`paperless.sterl.xyz`)"
- "traefik.http.routers.paperless.entrypoints=websecure"
- "traefik.http.routers.paperless.tls.certresolver=leresolver"
- "traefik.http.routers.paperless.tls.certresolver=cfresolver"
- "traefik.http.services.paperless.loadbalancer.server.port=8000"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=paperless"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
- "tsdproxy.container_port=8000"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
stirling-pdf:
image: frooodle/s-pdf:0.18.1
image: stirlingtools/stirling-pdf:latest
volumes:
- stirling_pdf_data:/configs
environment:
@@ -191,25 +191,26 @@ services:
failure_action: rollback
labels:
- "traefik.enable=true"
- "traefik.http.routers.pdf.rule=Host(`pdf.sj98.duckdns.org`)"
- "traefik.http.routers.pdf.rule=Host(`pdf.sterl.xyz`)"
- "traefik.http.routers.pdf.entrypoints=websecure"
- "traefik.http.routers.pdf.tls.certresolver=leresolver"
- "traefik.http.routers.pdf.tls.certresolver=cfresolver"
- "traefik.http.services.pdf.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=pdf"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
- "tsdproxy.container_port=8080"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
searxng:
image: searxng/searxng:2024.11.20-e9f6095cc
image: searxng/searxng:latest
volumes:
- searxng_data:/etc/searxng
environment:
- SEARXNG_BASE_URL=https://search.sj98.duckdns.org/
- SEARXNG_BASE_URL=https://search.sterl.xyz/
networks:
- traefik-public
healthcheck:
@@ -239,15 +240,16 @@ services:
failure_action: rollback
labels:
- "traefik.enable=true"
- "traefik.http.routers.searxng.rule=Host(`search.sj98.duckdns.org`)"
- "traefik.http.routers.searxng.rule=Host(`search.sterl.xyz`)"
- "traefik.http.routers.searxng.entrypoints=websecure"
- "traefik.http.routers.searxng.tls.certresolver=leresolver"
- "traefik.http.routers.searxng.tls.certresolver=cfresolver"
- "traefik.http.services.searxng.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=search"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
- "tsdproxy.container_port=8080"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"

@@ -30,11 +30,36 @@ services:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- traefik-public
extra_hosts:
- "gateway:192.168.1.1"
- "proxmox:192.168.1.57"
- "omv:192.168.1.70"
- "swarm-manager:192.168.1.196"
- "swarm-leader:192.168.1.245"
- "swarm-worker-light:192.168.1.62"
- "lm-studio:192.168.1.81"
- "fedora:192.168.1.81"
- "n8n.sj98.duckdns.org:192.168.1.196"
environment:
- N8N_HOST=n8n.sj98.duckdns.org
- N8N_PROTOCOL=https
- NODE_ENV=production
- WEBHOOK_URL=https://n8n.sj98.duckdns.org/
- N8N_EDITOR_BASE_URL=https://n8n.sj98.duckdns.org/
- N8N_PUSH_BACKEND=websocket
# Fix X-Forwarded-For validation errors (trust Traefik proxy)
- N8N_PROXY_HOPS=1
- N8N_SECURE_COOKIE=false
- N8N_METRICS=false
- N8N_SKIP_WEBHOOK_CSRF_CHECK=true
- N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
# Database configuration (fix deprecation warning)
- DB_SQLITE_POOL_SIZE=10
# Task runners (fix deprecation warning)
- N8N_RUNNERS_ENABLED=true
# Security settings (fix deprecation warnings)
- N8N_BLOCK_ENV_ACCESS_IN_NODE=false
- N8N_GIT_NODE_DISABLE_BARE_REPOS=true
healthcheck:
test: ["CMD-SHELL", "wget -q --spider http://localhost:5678/healthz || exit 1"]
interval: 30s
@@ -46,11 +71,11 @@ services:
- node.role == manager
resources:
limits:
memory: 1G
cpus: '0.5'
memory: 4G
cpus: '2.0'
reservations:
memory: 256M
cpus: '0.1'
memory: 512M
cpus: '0.5'
restart_policy:
condition: on-failure
delay: 5s
@@ -61,7 +86,10 @@ services:
- "traefik.http.routers.n8n.entrypoints=websecure"
- "traefik.http.routers.n8n.tls.certresolver=leresolver"
- "traefik.http.services.n8n.loadbalancer.server.port=5678"
- "traefik.docker.network=traefik-public"
- "traefik.http.services.n8n.loadbalancer.sticky.cookie=true"
- "traefik.http.services.n8n.loadbalancer.sticky.cookie.name=n8n_sticky"
- "traefik.http.services.n8n.loadbalancer.sticky.cookie.secure=true"
- "traefik.swarm.network=traefik-public"
logging:
driver: "json-file"
options:
@@ -105,7 +133,7 @@ services:
- "traefik.http.routers.openwebui.entrypoints=websecure"
- "traefik.http.routers.openwebui.tls.certresolver=leresolver"
- "traefik.http.services.openwebui.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=openwebui"
logging:
@@ -238,7 +266,7 @@ services:
- "traefik.http.routers.paperless.entrypoints=websecure"
- "traefik.http.routers.paperless.tls.certresolver=leresolver"
- "traefik.http.services.paperless.loadbalancer.server.port=8000"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=paperless"
logging:
@@ -288,7 +316,7 @@ services:
- "traefik.http.routers.pdf.entrypoints=websecure"
- "traefik.http.routers.pdf.tls.certresolver=leresolver"
- "traefik.http.services.pdf.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=pdf"
logging:
@@ -336,7 +364,7 @@ services:
- "traefik.http.routers.searxng.entrypoints=websecure"
- "traefik.http.routers.searxng.tls.certresolver=leresolver"
- "traefik.http.services.searxng.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=search"
logging:
@@ -399,7 +427,7 @@ services:
- "traefik.http.routers.tsdproxy.entrypoints=websecure"
- "traefik.http.routers.tsdproxy.tls.certresolver=leresolver"
- "traefik.http.services.tsdproxy.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=tsdproxy"
logging:

@@ -7,11 +7,22 @@ networks:
volumes:
tsdproxydata:
configs:
tsdproxy-config:
external: true
name: tsdproxy.yaml
services:
tsdproxy:
image: almeidapaulopt/tsdproxy:latest
image: almeidapaulopt/tsdproxy:1.1.0
configs:
- source: tsdproxy-config
target: /config/tsdproxy.yaml
uid: "0"
gid: "0"
mode: 0444
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- /var/run/docker.sock:/var/run/docker.sock:ro
- tsdproxydata:/data
environment:
- TSDPROXY_AUTHKEY=${TSDPROXY_AUTHKEY}
@@ -26,7 +37,7 @@ services:
- node.role == manager
labels:
- "traefik.enable=true"
- "traefik.http.routers.tsdproxy.rule=Host(`proxy.sj98.duckdns.org`)"
- "traefik.http.routers.tsdproxy.rule=Host(`proxy.sterl.xyz`)"
- "traefik.http.routers.tsdproxy.entrypoints=websecure"
- "traefik.http.routers.tsdproxy.tls.certresolver=leresolver"
- "traefik.http.routers.tsdproxy.tls.certresolver=cfresolver"
- "traefik.http.services.tsdproxy.loadbalancer.server.port=8080"

@@ -68,11 +68,11 @@ services:
max_attempts: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.komodo.rule=Host(`komodo.sj98.duckdns.org`)"
- "traefik.http.routers.komodo.rule=Host(`komodo.sterl.xyz`)"
- "traefik.http.routers.komodo.entrypoints=websecure"
- "traefik.http.routers.komodo.tls.certresolver=leresolver"
- "traefik.http.routers.komodo.tls.certresolver=cfresolver"
- "traefik.http.services.komodo.loadbalancer.server.port=9120"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=komodo"
logging:
@@ -156,11 +156,11 @@ services:
max_attempts: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.tsdproxy.rule=Host(`tsdproxy.sj98.duckdns.org`)"
- "traefik.http.routers.tsdproxy.rule=Host(`tsdproxy.sterl.xyz`)"
- "traefik.http.routers.tsdproxy.entrypoints=websecure"
- "traefik.http.routers.tsdproxy.tls.certresolver=leresolver"
- "traefik.http.routers.tsdproxy.tls.certresolver=cfresolver"
- "traefik.http.services.tsdproxy.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=tsdproxy"
logging:

@@ -5,6 +5,7 @@ networks:
external: true
media-backend:
driver: overlay
attachable: true
volumes:
plex_config:
@@ -16,8 +17,12 @@ volumes:
homarr_config:
services:
############################################
# HOMARR
############################################
homarr:
image: ghcr.io/homarr-labs/homarr:1.43.0
image: ghcr.io/ajnart/homarr:latest
networks:
- traefik-public
- media-backend
@@ -29,26 +34,28 @@ services:
deploy:
placement:
constraints:
- node.labels.leader == true
- node.role == manager
- node.labels.leader == true
labels:
- "traefik.enable=true"
- "traefik.http.routers.homarr-router.rule=Host(`homarr.sj98.duckdns.org`)"
- "traefik.http.routers.homarr-router.entrypoints=websecure"
- "traefik.http.routers.homarr-router.tls.certresolver=leresolver"
- "traefik.http.services.homarr.loadbalancer.server.port=7575"
- "traefik.docker.network=traefik-public"
resources:
limits:
memory: 512M
cpus: '1.0'
reservations:
memory: 128M
cpus: '0.2'
- "traefik.swarm.network=traefik-public"
- "traefik.http.routers.homarr.rule=Host(`homarr.sterl.xyz`)"
- "traefik.http.routers.homarr.entrypoints=websecure"
- "traefik.http.routers.homarr.tls.certresolver=cfresolver"
- "traefik.http.services.homarr-svc.loadbalancer.server.port=7575"
- "tsdproxy.enable=true"
- "tsdproxy.name=homarr"
- "tsdproxy.container_port=7575"
restart_policy:
condition: on-failure
max_attempts: 3
############################################
# PLEX
############################################
plex:
image: plexinc/pms-docker:latest
hostname: plex
@@ -60,7 +67,7 @@ services:
- /mnt/media:/media:ro
environment:
- TZ=America/Chicago
- PLEX_CLAIM=${PLEX_CLAIM}
- PLEX_CLAIM=claim-xxxxxxxxxxxx
- ADVERTISE_IP=http://192.168.1.196:32400/
deploy:
placement:
@@ -68,22 +75,24 @@ services:
- node.role == manager
labels:
- "traefik.enable=true"
- "traefik.http.routers.plex-router.rule=Host(`plex.sj98.duckdns.org`)"
- "traefik.http.routers.plex-router.entrypoints=websecure"
- "traefik.http.routers.plex-router.tls.certresolver=leresolver"
- "traefik.http.services.plex.loadbalancer.server.port=32400"
- "traefik.docker.network=traefik-public"
resources:
limits:
memory: 1G
cpus: '2.0'
reservations:
memory: 512M
cpus: '0.5'
- "traefik.swarm.network=traefik-public"
- "traefik.http.routers.plex.rule=Host(`plex.sterl.xyz`)"
- "traefik.http.routers.plex.entrypoints=websecure"
- "traefik.http.routers.plex.tls.certresolver=cfresolver"
- "traefik.http.services.plex-svc.loadbalancer.server.port=32400"
- "tsdproxy.enable=true"
- "tsdproxy.name=plex"
- "tsdproxy.container_port=32400"
restart_policy:
condition: on-failure
max_attempts: 3
############################################
# JELLYFIN
############################################
jellyfin:
image: jellyfin/jellyfin:latest
networks:
@@ -100,22 +109,24 @@ services:
- node.role == manager
labels:
- "traefik.enable=true"
- "traefik.http.routers.jellyfin-router.rule=Host(`jellyfin.sj98.duckdns.org`)"
- "traefik.http.routers.jellyfin-router.entrypoints=websecure"
- "traefik.http.routers.jellyfin-router.tls.certresolver=leresolver"
- "traefik.http.services.jellyfin.loadbalancer.server.port=8096"
- "traefik.docker.network=traefik-public"
resources:
limits:
memory: 1G
cpus: '2.0'
reservations:
memory: 512M
cpus: '0.5'
- "traefik.swarm.network=traefik-public"
- "traefik.http.routers.jellyfin.rule=Host(`jellyfin.sterl.xyz`)"
- "traefik.http.routers.jellyfin.entrypoints=websecure"
- "traefik.http.routers.jellyfin.tls.certresolver=cfresolver"
- "traefik.http.services.jellyfin-svc.loadbalancer.server.port=8096"
- "tsdproxy.enable=true"
- "tsdproxy.name=jellyfin"
- "tsdproxy.container_port=8096"
restart_policy:
condition: on-failure
max_attempts: 3
############################################
# IMMICH SERVER
############################################
immich-server:
image: ghcr.io/immich-app/immich-server:release
networks:
@@ -142,26 +153,27 @@ services:
- node.role == manager
labels:
- "traefik.enable=true"
- "traefik.http.routers.immich-server-router.rule=Host(`immich.sj98.duckdns.org`)"
- "traefik.http.routers.immich-server-router.entrypoints=websecure"
- "traefik.http.routers.immich-server-router.tls.certresolver=leresolver"
- "traefik.http.services.immich-server.loadbalancer.server.port=2283"
- "traefik.docker.network=traefik-public"
# Immich-specific headers and settings
- "traefik.http.routers.immich-server-router.middlewares=immich-headers"
- "traefik.swarm.network=traefik-public"
- "traefik.http.routers.immich.rule=Host(`immich.sterl.xyz`)"
- "traefik.http.routers.immich.entrypoints=websecure"
- "traefik.http.routers.immich.tls.certresolver=cfresolver"
- "traefik.http.services.immich-svc.loadbalancer.server.port=2283"
- "tsdproxy.enable=true"
- "tsdproxy.name=immich"
- "tsdproxy.container_port=2283"
- "traefik.http.routers.immich.middlewares=immich-headers"
- "traefik.http.middlewares.immich-headers.headers.customrequestheaders.X-Forwarded-Proto=https"
- "traefik.http.services.immich-server.loadbalancer.passhostheader=true"
resources:
limits:
memory: 2G
cpus: '2.0'
reservations:
memory: 1G
cpus: '0.5'
restart_policy:
condition: on-failure
max_attempts: 3
############################################
# IMMICH MACHINE LEARNING
############################################
immich-machine-learning:
image: ghcr.io/immich-app/immich-machine-learning:release
networks:
@@ -175,19 +187,16 @@ services:
deploy:
placement:
constraints:
- node.labels.heavy == true
- node.labels.ai == true
resources:
limits:
memory: 4G
cpus: '4.0'
reservations:
memory: 2G
cpus: '2.0'
- node.labels.heavy == true
- node.labels.ai == true
restart_policy:
condition: on-failure
max_attempts: 3
############################################
# IMMICH REDIS
############################################
immich-redis:
image: redis:7-alpine
networks:
@@ -198,17 +207,14 @@ services:
placement:
constraints:
- node.role == manager
resources:
limits:
memory: 256M
cpus: '0.5'
reservations:
memory: 64M
cpus: '0.1'
restart_policy:
condition: on-failure
max_attempts: 3
############################################
# IMMICH DATABASE
############################################
immich-db:
image: tensorchord/pgvecto-rs:pg14-v0.2.0
networks:
@@ -223,13 +229,6 @@ services:
placement:
constraints:
- node.role == manager
resources:
limits:
memory: 512M
cpus: '1.0'
reservations:
memory: 256M
cpus: '0.25'
restart_policy:
condition: on-failure
max_attempts: 3

@@ -0,0 +1,14 @@
global:
resolve_timeout: 5m
route:
group_by: ['alertname']
group_wait: 10s
group_interval: 10s
repeat_interval: 1h
receiver: 'web.hook'
receivers:
- name: 'web.hook'
webhook_configs:
- url: 'http://127.0.0.1:5001/'

@@ -19,10 +19,13 @@ configs:
prometheus_config:
external: true
name: prometheus.yml
alertmanager_config:
external: true
name: alertmanager.yml
services:
prometheus:
image: prom/prometheus:v3.0.1
image: prom/prometheus:latest
volumes:
- prometheus_data:/prometheus
configs:
@@ -58,23 +61,26 @@ services:
failure_action: rollback
labels:
- "traefik.enable=true"
- "traefik.http.routers.prometheus.rule=Host(`prometheus.sj98.duckdns.org`)"
- "traefik.http.routers.prometheus.rule=Host(`prometheus.sterl.xyz`)"
- "traefik.http.routers.prometheus.entrypoints=websecure"
- "traefik.http.routers.prometheus.tls.certresolver=leresolver"
- "traefik.http.routers.prometheus.tls.certresolver=cfresolver"
- "traefik.http.services.prometheus.loadbalancer.server.port=9090"
- "traefik.docker.network=traefik-public"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=prometheus"
- "tsdproxy.container_port=9090"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
grafana:
image: grafana/grafana:11.3.1
image: grafana/grafana:latest
volumes:
- grafana_data:/var/lib/grafana
environment:
- GF_SERVER_ROOT_URL=https://grafana.sj98.duckdns.org
- GF_SERVER_ROOT_URL=https://grafana.sterl.xyz
- GF_SECURITY_ADMIN_PASSWORD__FILE=/run/secrets/grafana_admin_password
secrets:
- grafana_admin_password
@@ -108,21 +114,27 @@ services:
failure_action: rollback
labels:
- "traefik.enable=true"
- "traefik.http.routers.grafana.rule=Host(`grafana.sj98.duckdns.org`)"
- "traefik.http.routers.grafana.rule=Host(`grafana.sterl.xyz`)"
- "traefik.http.routers.grafana.entrypoints=websecure"
- "traefik.http.routers.grafana.tls.certresolver=leresolver"
- "traefik.http.routers.grafana.tls.certresolver=cfresolver"
- "traefik.http.services.grafana.loadbalancer.server.port=3000"
- "traefik.docker.network=traefik-public"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=grafana"
- "tsdproxy.container_port=3000"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
alertmanager:
image: prom/alertmanager:v0.27.0
image: prom/alertmanager:latest
volumes:
- alertmanager_data:/alertmanager
configs:
- source: alertmanager_config
target: /etc/alertmanager/config.yml
command:
- '--config.file=/etc/alertmanager/config.yml'
- '--storage.path=/alertmanager'
@@ -152,19 +164,22 @@ services:
max_attempts: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.alertmanager.rule=Host(`alertmanager.sj98.duckdns.org`)"
- "traefik.http.routers.alertmanager.rule=Host(`alertmanager.sterl.xyz`)"
- "traefik.http.routers.alertmanager.entrypoints=websecure"
- "traefik.http.routers.alertmanager.tls.certresolver=leresolver"
- "traefik.http.routers.alertmanager.tls.certresolver=cfresolver"
- "traefik.http.services.alertmanager.loadbalancer.server.port=9093"
- "traefik.docker.network=traefik-public"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=alertmanager"
- "tsdproxy.container_port=9093"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
node-exporter:
image: prom/node-exporter:v1.8.2
image: prom/node-exporter:latest
volumes:
- /proc:/host/proc:ro
- /sys:/host/sys:ro
@@ -189,14 +204,14 @@ services:
condition: on-failure
delay: 5s
max_attempts: 3
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
cadvisor:
image: gcr.io/cadvisor/cadvisor:v0.50.0
image: gcr.io/cadvisor/cadvisor:latest
volumes:
- /:/rootfs:ro
- /var/run:/var/run:ro
@@ -226,8 +241,8 @@ services:
condition: on-failure
delay: 5s
max_attempts: 3
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"

View File

@@ -31,8 +31,8 @@ services:
condition: on-failure
delay: 5s
max_attempts: 3
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"

View File

@@ -1,54 +0,0 @@
version: '3.8'
networks:
traefik-public:
external: true
volumes:
n8n_data:
services:
n8n:
image: n8nio/n8n:latest
volumes:
- n8n_data:/home/node/.n8n
- /var/run/docker.sock:/var/run/docker.sock
networks:
- traefik-public
environment:
- N8N_HOST=n8n.sj98.duckdns.org
- N8N_PROTOCOL=https
- NODE_ENV=production
- WEBHOOK_URL=https://n8n.sj98.duckdns.org/
healthcheck:
test: ["CMD-SHELL", "wget -q --spider http://localhost:5678/healthz || exit 1"]
interval: 30s
timeout: 10s
retries: 3
deploy:
placement:
constraints:
- node.role == manager
resources:
limits:
memory: 1G
cpus: '0.5'
reservations:
memory: 256M
cpus: '0.1'
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.n8n.rule=Host(`n8n.sj98.duckdns.org`)"
- "traefik.http.routers.n8n.entrypoints=websecure"
- "traefik.http.routers.n8n.tls.certresolver=leresolver"
- "traefik.http.services.n8n.loadbalancer.server.port=5678"
- "traefik.docker.network=traefik-public"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"

View File

@@ -1,110 +0,0 @@
version: '3.8'
networks:
traefik-public:
external: true
secrets:
duckdns_token:
external: true
volumes:
traefik_letsencrypt:
external: true
configs:
traefik_yml:
external: true
name: traefik.yml
services:
traefik:
image: traefik:v3.2.3
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- traefik_letsencrypt:/letsencrypt
networks:
- traefik-public
secrets:
- duckdns_token
configs:
- source: traefik_yml
target: /etc/traefik/traefik.yml
healthcheck:
test: ["CMD", "traefik", "healthcheck", "--ping"]
interval: 30s
timeout: 5s
retries: 3
start_period: 10s
deploy:
mode: replicated
replicas: 2
placement:
constraints:
- node.role == manager
resources:
limits:
memory: 512M
cpus: '0.5'
reservations:
memory: 128M
cpus: '0.1'
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
update_config:
parallelism: 1
delay: 10s
failure_action: rollback
order: start-first
labels:
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`traefik.sj98.duckdns.org`)"
- "traefik.http.routers.traefik.entrypoints=websecure"
- "traefik.http.routers.traefik.tls.certresolver=leresolver"
- "traefik.http.routers.traefik.service=api@internal"
- "traefik.http.services.traefik.loadbalancer.server.port=8080"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
whoami:
image: traefik/whoami:v1.10
networks:
- traefik-public
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:80/health"]
interval: 30s
timeout: 5s
retries: 3
deploy:
resources:
limits:
memory: 64M
cpus: '0.1'
reservations:
memory: 16M
cpus: '0.01'
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`whoami.sj98.duckdns.org`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
- "traefik.http.routers.whoami.tls.certresolver=leresolver"
- "traefik.http.services.whoami.loadbalancer.server.port=80"
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"

View File

@@ -0,0 +1,151 @@
version: '3.8'
networks:
traefik-public:
external: true
volumes:
traefik_letsencrypt:
external: true
tsdproxydata:
external: true
configs:
traefik_dynamic:
external: true
tsdproxy-config:
external: true
name: tsdproxy.yaml
secrets:
cf_api_token:
external: true
tsdproxy_authkey:
external: true
services:
traefik:
image: traefik:latest
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- traefik_letsencrypt:/letsencrypt
networks:
- traefik-public
secrets:
- cf_api_token
environment:
# Cloudflare API Token (with DNS edit permissions for your domain)
- CF_DNS_API_TOKEN_FILE=/run/secrets/cf_api_token
- CF_ZONE_API_TOKEN_FILE=/run/secrets/cf_api_token
# Optional: keep the Pi-hole resolvers as primary DNS
dns:
- 192.168.1.196
- 192.168.1.245
- 1.1.1.1
command:
# Entrypoints
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
# SWARM Provider
- "--providers.swarm=true"
- "--providers.swarm.network=traefik-public"
- "--providers.swarm.exposedbydefault=false"
# File Provider (Dynamic Config)
- "--providers.file.filename=/dynamic.yml"
- "--providers.file.watch=true"
# Dashboard
- "--api.dashboard=true"
- "--api.insecure=false"
# HTTP -> HTTPS
- "--entrypoints.web.http.redirections.entrypoint.to=websecure"
- "--entrypoints.web.http.redirections.entrypoint.scheme=https"
# Let's Encrypt / ACME Cloudflare DNS Challenge
- "--certificatesresolvers.cfresolver.acme.email=sterlenjohnson6@gmail.com"
- "--certificatesresolvers.cfresolver.acme.storage=/letsencrypt/acme.json"
- "--certificatesresolvers.cfresolver.acme.dnschallenge=true"
- "--certificatesresolvers.cfresolver.acme.dnschallenge.provider=cloudflare"
# Optional: increase delay for propagation
- "--certificatesresolvers.cfresolver.acme.dnschallenge.propagation.delayBeforeChecks=60"
# Logging
- "--log.level=INFO"
deploy:
placement:
constraints:
- node.role == manager
labels:
# Dashboard Router
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`traefik.sterl.xyz`)"
- "traefik.http.routers.traefik.entrypoints=websecure"
- "traefik.http.routers.traefik.tls.certresolver=cfresolver"
- "traefik.http.services.traefik.loadbalancer.server.port=8080"
- "traefik.http.routers.traefik.service=api@internal"
whoami:
image: traefik/whoami
networks:
- traefik-public
deploy:
labels:
# Whoami Router
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`whoami.sterl.xyz`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
- "traefik.http.routers.whoami.tls.certresolver=cfresolver"
- "traefik.http.services.whoami.loadbalancer.server.port=80"
tsdproxy:
image: almeidapaulopt/tsdproxy:1.1.0
networks:
- traefik-public
configs:
- source: tsdproxy-config
target: /config/tsdproxy.yaml
uid: "0"
gid: "0"
mode: 0444
secrets:
- tsdproxy_authkey
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- tsdproxydata:/data
environment:
- TSDPROXY_AUTHKEYFILE=/run/secrets/tsdproxy_authkey
- DOCKER_HOST=unix:///var/run/docker.sock
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 20s
deploy:
placement:
constraints:
- node.labels.leader == true
resources:
limits:
memory: 512M
reservations:
memory: 256M
labels:
- "traefik.enable=true"
- "traefik.http.routers.tsdproxy.rule=Host(`tsdproxy.sterl.xyz`)"
- "traefik.http.routers.tsdproxy.entrypoints=websecure"
- "traefik.http.routers.tsdproxy.tls.certresolver=cfresolver"
- "traefik.http.services.tsdproxy.loadbalancer.server.port=8080"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=tsdproxy"

View File

@@ -0,0 +1 @@
vxrT1xXkioj3Iw3D-emU0I_FcaMb-PeYs_TLiOma

View File

@@ -0,0 +1,77 @@
version: '3.8'
networks:
traefik-public:
external: true
volumes:
n8n_data:
services:
n8n:
image: n8nio/n8n:latest
volumes:
- n8n_data:/home/node/.n8n
- /var/run/docker.sock:/var/run/docker.sock
networks:
- traefik-public
extra_hosts:
- "gateway:192.168.1.1"
- "proxmox:192.168.1.57"
- "omv:192.168.1.70"
- "swarm-manager:192.168.1.196"
- "swarm-leader:192.168.1.245"
- "swarm-worker-light:192.168.1.62"
- "lm-studio:192.168.1.81"
- "fedora:192.168.1.81"
- "n8n.sterl.xyz:192.168.1.196"
environment:
- N8N_HOST=n8n.sterl.xyz
- N8N_PROTOCOL=https
- NODE_ENV=production
- WEBHOOK_URL=https://n8n.sterl.xyz/
- N8N_EDITOR_BASE_URL=https://n8n.sterl.xyz/
- N8N_PUSH_BACKEND=websocket
- N8N_PROXY_HOPS=1
- N8N_SECURE_COOKIE=false
- N8N_METRICS=false
- N8N_SKIP_WEBHOOK_CSRF_CHECK=true
- N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
# Database configuration (fix deprecation warning)
- DB_SQLITE_POOL_SIZE=10
# Task runners (fix deprecation warning)
- N8N_RUNNERS_ENABLED=true
# Security settings (fix deprecation warnings)
- N8N_BLOCK_ENV_ACCESS_IN_NODE=false
- N8N_GIT_NODE_DISABLE_BARE_REPOS=true
healthcheck:
test: ["CMD-SHELL", "wget -q --spider http://localhost:5678/healthz || exit 1"]
interval: 30s
timeout: 10s
retries: 3
deploy:
placement:
constraints:
- node.role == manager
resources:
limits:
memory: 4G
cpus: '2.0'
reservations:
memory: 512M
cpus: '0.5'
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.n8n.rule=Host(`n8n.sterl.xyz`)"
- "traefik.http.routers.n8n.entrypoints=websecure"
- "traefik.http.routers.n8n.tls.certresolver=cfresolver"
- "traefik.http.services.n8n.loadbalancer.server.port=5678"
- "traefik.http.services.n8n.loadbalancer.sticky.cookie=true"
- "traefik.http.services.n8n.loadbalancer.sticky.cookie.name=n8n_sticky"
- "traefik.http.services.n8n.loadbalancer.sticky.cookie.secure=true"
- "traefik.swarm.network=traefik-public"

View File

@@ -57,7 +57,7 @@ services:
condition: on-failure
nextcloud:
image: nextcloud:30.0.8
image: nextcloud:latest
volumes:
- nextcloud_data:/var/www/html
environment:
@@ -68,9 +68,9 @@ services:
- REDIS_HOST=nextcloud-redis
- NEXTCLOUD_ADMIN_USER=${NEXTCLOUD_ADMIN_USER} # Replace with your desired admin username
- NEXTCLOUD_ADMIN_PASSWORD=${NEXTCLOUD_ADMIN_PASSWORD} # Replace with a secure password
- NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.sj98.duckdns.org
- NEXTCLOUD_TRUSTED_DOMAINS=nextcloud.sterl.xyz
- OVERWRITEPROTOCOL=https
- OVERWRITEHOST=nextcloud.sj98.duckdns.org
- OVERWRITEHOST=nextcloud.sterl.xyz
- TRUSTED_PROXIES=172.16.0.0/12
depends_on:
- nextcloud-db
@@ -91,11 +91,11 @@ services:
condition: on-failure
labels:
- "traefik.enable=true"
- "traefik.http.routers.nextcloud.rule=Host(`nextcloud.sj98.duckdns.org`)"
- "traefik.http.routers.nextcloud.rule=Host(`nextcloud.sterl.xyz`)"
- "traefik.http.routers.nextcloud.entrypoints=websecure"
- "traefik.http.routers.nextcloud.tls.certresolver=leresolver"
- "traefik.http.routers.nextcloud.tls.certresolver=cfresolver"
- "traefik.http.services.nextcloud.loadbalancer.server.port=80"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
# Nextcloud-specific middlewares
- "traefik.http.routers.nextcloud.middlewares=nextcloud-chain"
- "traefik.http.middlewares.nextcloud-chain.chain.middlewares=nextcloud-caldav,nextcloud-headers"
@@ -109,4 +109,7 @@ services:
- "traefik.http.middlewares.nextcloud-headers.headers.stsPreload=true"
- "traefik.http.middlewares.nextcloud-headers.headers.forceSTSHeader=true"
- "traefik.http.middlewares.nextcloud-headers.headers.customFrameOptionsValue=SAMEORIGIN"
- "traefik.http.middlewares.nextcloud-headers.headers.customResponseHeaders.X-Robots-Tag=noindex,nofollow"
- "traefik.http.middlewares.nextcloud-headers.headers.customResponseHeaders.X-Robots-Tag=noindex,nofollow"
- "tsdproxy.enable=true"
- "tsdproxy.name=nextcloud"
- "tsdproxy.container_port=80"

View File

@@ -1,45 +0,0 @@
version: '3.8'
networks:
traefik-public:
external: true
services:
dozzle:
image: amir20/dozzle:v8.14.6
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
networks:
- traefik-public
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/healthcheck"]
interval: 30s
timeout: 5s
retries: 3
deploy:
placement:
constraints:
- node.role == manager
resources:
limits:
memory: 256M
cpus: '0.25'
reservations:
memory: 64M
cpus: '0.05'
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.dozzle.rule=Host(`dozzle.sj98.duckdns.org`)"
- "traefik.http.routers.dozzle.entrypoints=websecure"
- "traefik.http.routers.dozzle.tls.certresolver=leresolver"
- "traefik.http.services.dozzle.loadbalancer.server.port=8080"
- "traefik.docker.network=traefik-public"
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"

View File

@@ -33,9 +33,9 @@ services:
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__database__PASSWD_FILE=/run/secrets/gitea_db_password
- GITEA__server__DOMAIN=git.sj98.duckdns.org
- GITEA__server__ROOT_URL=https://git.sj98.duckdns.org
- GITEA__server__SSH_DOMAIN=git.sj98.duckdns.org
- GITEA__server__DOMAIN=git.sterl.xyz
- GITEA__server__ROOT_URL=https://git.sterl.xyz
- GITEA__server__SSH_DOMAIN=git.sterl.xyz
- GITEA__server__SSH_PORT=2222
- GITEA__service__DISABLE_REGISTRATION=false
secrets:
@@ -64,11 +64,14 @@ services:
max_attempts: 3
labels:
- "traefik.enable=true"
- "traefik.http.routers.gitea.rule=Host(`git.sj98.duckdns.org`)"
- "traefik.http.routers.gitea.rule=Host(`git.sterl.xyz`)"
- "traefik.http.routers.gitea.entrypoints=websecure"
- "traefik.http.routers.gitea.tls.certresolver=leresolver"
- "traefik.http.routers.gitea.tls.certresolver=cfresolver"
- "traefik.http.services.gitea.loadbalancer.server.port=3000"
- "traefik.docker.network=traefik-public"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=gitea"
- "tsdproxy.container_port=3000"
gitea-db:
image: postgres:15-alpine

View File

@@ -12,8 +12,11 @@ volumes:
services:
portainer:
image: portainer/portainer-ce:2.21.4
command: -H tcp://tasks.agent:9001 --tlsskipverify
image: portainer/portainer-ce:latest
command:
- "-H"
- "tcp://tasks.agent:9001"
- "--tlsskipverify"
ports:
- "9000:9000"
- "9443:9443"
@@ -51,20 +54,27 @@ services:
failure_action: rollback
labels:
- "traefik.enable=true"
- "traefik.http.routers.portainer.rule=Host(`portainer.sj98.duckdns.org`)"
- "traefik.http.routers.portainer.rule=Host(`portainer.sterl.xyz`)"
- "traefik.http.routers.portainer.entrypoints=websecure"
- "traefik.http.routers.portainer.tls.certresolver=leresolver"
- "traefik.http.routers.portainer.tls.certresolver=cfresolver"
- "traefik.http.routers.portainer.service=portainer"
- "traefik.http.routers.portainer.tls=true"
- "traefik.http.services.portainer.loadbalancer.server.port=9000"
- "traefik.http.services.portainer.loadbalancer.sticky.cookie=true"
- "traefik.swarm.network=traefik-public"
- "traefik.docker.network=traefik-public"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
- "tsdproxy.enable=true"
- "tsdproxy.name=portainer"
- "tsdproxy.container_port=9000"
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
# Linux agent
agent:
image: portainer/agent:2.21.4
image: portainer/agent:latest
environment:
AGENT_CLUSTER_ADDR: tasks.agent
volumes:
@@ -88,15 +98,15 @@ services:
condition: on-failure
delay: 5s
max_attempts: 3
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
# Windows agent (optional - only deploys if Windows node exists)
agent-windows:
image: portainer/agent:2.21.4
image: portainer/agent:latest
environment:
AGENT_CLUSTER_ADDR: tasks.agent
volumes:
@@ -126,8 +136,8 @@ services:
condition: on-failure
delay: 5s
max_attempts: 3
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"

View File

@@ -0,0 +1,52 @@
version: '3.8'
networks:
traefik-public:
external: true
services:
dozzle:
image: amir20/dozzle:latest
user: "0:0"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
networks:
- traefik-public
environment:
- DOZZLE_MODE=swarm
- DOZZLE_LEVEL=debug
- DOZZLE_NO_ANALYTICS=true
logging:
driver: "json-file"
options:
max-size: "5m"
max-file: "2"
deploy:
mode: global
resources:
limits:
memory: 256M
cpus: '0.25'
reservations:
memory: 64M
cpus: '0.05'
restart_policy:
condition: any
delay: 5s
labels:
- "traefik.enable=true"
- "traefik.http.routers.dozzle.rule=Host(`dozzle.sterl.xyz`)"
- "traefik.http.routers.dozzle.entrypoints=websecure"
- "traefik.http.routers.dozzle.tls.certresolver=cfresolver"
- "traefik.http.services.dozzle.loadbalancer.server.port=8080"
- "traefik.swarm.network=traefik-public"
- "tsdproxy.enable=true"
- "tsdproxy.name=logs"
- "tsdproxy.container_port=8080"
healthcheck:
test: ["CMD-SHELL", "if [ -S /var/run/docker.sock ]; then exit 0; else exit 1; fi"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s

View File

@@ -0,0 +1,123 @@
http:
middlewares:
# Middleware to redirect non-www to www (optional, valid for sterl.xyz if needed)
# my-www-redirect:
# redirectRegex:
# regex: "^https?://(?:www\\.)?(.+)"
# replacement: "https://www.$${1}"
# Secure Headers Middleware
security-headers:
headers:
customResponseHeaders:
X-Robots-Tag: "none,noarchive,nosnippet,notranslate,noimageindex"
server: ""
sslProxyHeaders:
X-Forwarded-Proto: https
referrerPolicy: "same-origin"
hostsProxyHeaders:
- "X-Forwarded-Host"
customRequestHeaders:
X-Forwarded-Proto: "https"
contentTypeNosniff: true
browserXssFilter: true
forceSTSHeader: true
stsIncludeSubdomains: true
stsSeconds: 63072000
stsPreload: true
# Basic Auth Middleware (Example)
# my-basic-auth:
# basicAuth:
# users:
# - "admin:$apr1$..."
tls:
options:
default:
minVersion: VersionTLS12
cipherSuites:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305
routers:
# Pi-hole
pihole:
rule: "Host(`pihole.sterl.xyz`)"
service: pihole
entryPoints:
- websecure
tls:
certResolver: cfresolver
# Pi-hole 2
pihole2:
rule: "Host(`pihole2.sterl.xyz`)"
service: pihole2
entryPoints:
- websecure
tls:
certResolver: cfresolver
# Proxmox (HTTPS)
proxmox:
rule: "Host(`proxmox.sterl.xyz`)"
service: proxmox
entryPoints:
- websecure
tls:
certResolver: cfresolver
# Proxmox Monitor
proxmox-monitor:
rule: "Host(`proxmox-monitor.sterl.xyz`)"
service: proxmox-monitor
entryPoints:
- websecure
tls:
certResolver: cfresolver
# OpenMediaVault (OMV)
omv:
rule: "Host(`omv.sterl.xyz`)"
service: omv
entryPoints:
- websecure
tls:
certResolver: cfresolver
services:
pihole:
loadBalancer:
servers:
- url: "http://192.168.1.196:7300"
pihole2:
loadBalancer:
servers:
- url: "http://192.168.1.245:7300"
proxmox:
loadBalancer:
servers:
# Proxmox typically runs on HTTPS with self-signed certs
- url: "https://192.168.1.57:8006"
serversTransport: "insecureSkipVerify"
proxmox-monitor:
loadBalancer:
servers:
- url: "http://192.168.1.57:8008"
omv:
loadBalancer:
servers:
- url: "http://192.168.1.70:80"
serversTransports:
insecureSkipVerify:
insecureSkipVerify: true
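Every file-provider router above must name a service defined in the same file, or Traefik marks the router as errored. A quick sanity check mirroring this dynamic config's router→service mapping (the check itself is a sketch, not a Traefik feature):

```python
# Router -> service references and defined services, mirroring dynamic.yml.
routers = {
    "pihole": "pihole",
    "pihole2": "pihole2",
    "proxmox": "proxmox",
    "proxmox-monitor": "proxmox-monitor",
    "omv": "omv",
}
services = {"pihole", "pihole2", "proxmox", "proxmox-monitor", "omv"}

missing = {r: s for r, s in routers.items() if s not in services}
assert not missing, f"routers referencing undefined services: {missing}"
print("all routers resolve to a defined service")
```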

View File

@@ -1,54 +1,98 @@
# traefik.yml - static configuration (file provider)
checkNewVersion: true
sendAnonymousUsage: false
version: '3.8'
log:
level: INFO
networks:
traefik-public:
external: true
api:
dashboard: true
insecure: false # set to true only for quick local testing (not recommended for public)
volumes:
traefik_letsencrypt:
external: true
# single entryPoints section (merged)
entryPoints:
web:
address: ":80"
http:
redirections:
entryPoint:
to: websecure
scheme: https
# optional timeouts can live under transport as well (kept only on websecure below)
configs:
traefik_dynamic:
external: true
websecure:
address: ":443"
http:
tls:
certResolver: leresolver
transport:
respondingTimeouts:
# keep these large if you expect long uploads/downloads or long-lived requests
readTimeout: 600s
writeTimeout: 600s
idleTimeout: 600s
services:
traefik:
image: traefik:v3.6.4
ports:
- "80:80"
- "443:443"
- "8080:8080"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- traefik_letsencrypt:/letsencrypt
networks:
- traefik-public
configs:
- source: traefik_dynamic
target: /etc/traefik/dynamic.yml
environment:
# Cloudflare API Token (with DNS edit permissions for your domain)
- CF_DNS_API_TOKEN=vxrT1xXkioj3Iw3D-emU0I_FcaMb-PeYs_TLiOma
- CF_ZONE_API_TOKEN=vxrT1xXkioj3Iw3D-emU0I_FcaMb-PeYs_TLiOma
providers:
swarm:
endpoint: "unix:///var/run/docker.sock"
# Optional: keep the Pi-hole resolvers as primary DNS
dns:
- 192.168.1.196
- 192.168.1.245
- 1.1.1.1
certificatesResolvers:
leresolver:
acme:
email: "sterlenjohnson6@gmail.com"
storage: "/letsencrypt/acme.json"
# DNS-01, using DuckDNS provider
dnsChallenge:
provider: duckdns
delayBeforeCheck: 60s
# Usually unnecessary to specify "resolvers" unless you have special internal resolvers.
# If you DO need Traefik to use specific DNS servers for the challenge, make sure
# the container has network access to them and that they will answer public DNS queries.
resolvers:
- "192.168.1.196:53"
- "192.168.1.245:53"
- "192.168.1.62:53"
command:
# Entrypoints
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
# SWARM Provider
- "--providers.swarm=true"
- "--providers.swarm.network=traefik-public"
- "--providers.swarm.exposedbydefault=false"
# File Provider (Dynamic Config)
- "--providers.file.filename=/etc/traefik/dynamic.yml"
- "--providers.file.watch=true"
# Dashboard
- "--api.dashboard=true"
- "--api.insecure=false"
# HTTP -> HTTPS
- "--entrypoints.web.http.redirections.entrypoint.to=websecure"
- "--entrypoints.web.http.redirections.entrypoint.scheme=https"
# Let's Encrypt / ACME Cloudflare DNS Challenge
- "--certificatesresolvers.cfresolver.acme.email=sterlenjohnson6@gmail.com"
- "--certificatesresolvers.cfresolver.acme.storage=/letsencrypt/acme.json"
- "--certificatesresolvers.cfresolver.acme.dnschallenge=true"
- "--certificatesresolvers.cfresolver.acme.dnschallenge.provider=cloudflare"
# Optional: increase delay for propagation
- "--certificatesresolvers.cfresolver.acme.dnschallenge.propagation.delayBeforeChecks=60"
# Logging
- "--log.level=INFO"
deploy:
placement:
constraints:
- node.role == manager
labels:
# Dashboard Router
- "traefik.enable=true"
- "traefik.http.routers.traefik.rule=Host(`traefik.sterl.xyz`)"
- "traefik.http.routers.traefik.entrypoints=websecure"
- "traefik.http.routers.traefik.tls.certresolver=cfresolver"
- "traefik.http.services.traefik.loadbalancer.server.port=8080"
- "traefik.http.routers.traefik.service=api@internal"
whoami:
image: traefik/whoami
networks:
- traefik-public
deploy:
labels:
# Whoami Router
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`whoami.sterl.xyz`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
- "traefik.http.routers.whoami.tls.certresolver=cfresolver"
- "traefik.http.services.whoami.loadbalancer.server.port=80"