# Homelab n8n Monitoring Workflows

This directory contains intelligent n8n workflows for monitoring and integrating your homelab infrastructure using AI-powered analysis.

## 📋 Workflows

### 1. **Homelab Health Monitor** (`homelab-health-monitor.json`)
**Purpose:** Comprehensive health monitoring of all homelab services
**Schedule:** Every 15 minutes (or manual via webhook)
**Features:**
- Network connectivity checks (internet + internal DNS)
- Docker Swarm service status monitoring
- Service endpoint validation (Komodo, OpenWebUI, Paperless, Prometheus, LM Studio)
- AI-powered health analysis using LM Studio
- Health scoring (0-100) and automated alerting

**Webhook:** `POST https://n8n.sj98.duckdns.org/webhook/health-check`
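
The webhook makes it easy to run an ad-hoc check, for example right after a deployment. A minimal sketch (the response shape depends on how the workflow's final node is configured):

```bash
# Run an on-demand health check
curl -s -X POST https://n8n.sj98.duckdns.org/webhook/health-check
```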
### 2. **Homelab Log Analyzer** (`homelab-log-analyzer.json`)
**Purpose:** Automated AI analysis of Docker service logs
**Schedule:** Every 6 hours
**Features:**
- Collects logs from critical services (Traefik, n8n, OpenWebUI, Komodo, Prometheus)
- Parses ERROR, WARN, CRITICAL patterns
- AI analysis of log patterns and issues
- Generates actionable recommendations
- Alerts on high error counts

> **💡 For Manual Log Viewing:** Use **Dozzle** at your configured URL for real-time, interactive log viewing with a beautiful web interface. This workflow is for automated AI-powered analysis and alerting.
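
The collection step is roughly equivalent to the following shell one-liner, which can help when tuning the patterns; the service name is an example, so substitute your own stack/service names:

```bash
# Count ERROR/WARN/CRITICAL lines in the last 6 hours of one service's logs
docker service logs traefik_traefik --since 6h 2>&1 | grep -cE "ERROR|WARN|CRITICAL"
```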
### 3. **Homelab Integration Advisor** (`homelab-integration-advisor.json`)
**Purpose:** AI-powered service integration recommendations
**Schedule:** Daily at 9 AM (or manual via webhook)
**Features:**
- Discovers all running services and capabilities
- Identifies integration opportunities
- AI generates specific n8n workflow patterns
- Prioritizes by complexity and value
- Provides step-by-step implementation guidance

**Webhook:** `POST https://n8n.sj98.duckdns.org/webhook/integration-advisor`
## 🚀 Installation

### 1. Import Workflows
```bash
# Option A: Via n8n UI
#   1. Open n8n at https://n8n.sj98.duckdns.org
#   2. Click "Workflows" → "Import from File"
#   3. Select each JSON file from this directory

# Option B: Via API (if the API is enabled)
cd /workspace/homelab/services/n8n/workflows
curl -X POST https://n8n.sj98.duckdns.org/api/v1/workflows \
  -H "Content-Type: application/json" \
  -H "X-N8N-API-KEY: your-api-key" \
  -d @homelab-health-monitor.json
```
### 2. Configure AI Model
Edit each workflow and set your preferred LM Studio model:
- **Health Monitor:** Uses `deepseek-r1-distill-llama-8b` (reasoning)
- **Log Analyzer:** Uses `qwen2.5-coder-7b-instruct` (technical analysis)
- **Integration Advisor:** Uses `deepseek-r1-distill-llama-8b` (planning)

Available models on your LM Studio instance (.81:1234):
- `deepseek-r1-distill-llama-8b`
- `qwen2.5-coder-7b-instruct`
- `qwen/qwen3-coder-30b`
- `mistralai/codestral-22b-v0.1`
- `google/gemma-3-12b`
### 3. Activate Workflows
1. Open each workflow
2. Toggle the "Active" switch in the top right
3. Verify the schedule trigger is enabled
## 🔧 Configuration

### LM Studio Connection
The workflows connect to LM Studio via the `lm-studio` hostname (mapped to 192.168.1.81:1234 via `extra_hosts` in n8n-stack.yml).

**Test connection:**
```bash
docker exec <n8n-container-id> curl http://lm-studio:1234/v1/models
```
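
Because LM Studio exposes an OpenAI-compatible API, you can also sanity-check a full completion round trip from inside the n8n container. A minimal sketch using one of the models listed above (swap in whichever model each workflow is configured to use):

```bash
# Request a short completion through the OpenAI-compatible endpoint
docker exec $(docker ps -q -f name=n8n) curl -s http://lm-studio:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "deepseek-r1-distill-llama-8b", "messages": [{"role": "user", "content": "Reply with OK"}], "max_tokens": 10}'
```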
### Notifications (Optional)
To enable alerts, add these nodes to each workflow:
- **Email:** Use n8n's Email node with SMTP credentials
- **Discord:** Use a Webhook node with your Discord webhook URL
- **Slack:** Use the Slack node with OAuth credentials
- **Home Assistant:** Send to `http://homeassistant.local:8123/api/webhook/n8n-alert` (see the example below)
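
As a quick check before building the node, the Home Assistant webhook can be exercised by hand. This is a sketch: it assumes a Home Assistant automation is listening on the `n8n-alert` webhook ID, and the payload fields are arbitrary examples:

```bash
# Post a test alert to the Home Assistant webhook
curl -s -X POST http://homeassistant.local:8123/api/webhook/n8n-alert \
  -H "Content-Type: application/json" \
  -d '{"source": "n8n", "severity": "info", "message": "test alert"}'
```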
## 📊 Recommended Integration Patterns

Based on your homelab services, here are high-value integrations to implement:

### 1. **AI-Powered Document Processing**
**Services:** n8n → Paperless → OpenWebUI
**Pattern:** Auto-tag and summarize uploaded documents using AI
```
Trigger: Paperless webhook (new document)
→ Get document content
→ Send to LM Studio for tagging/summary
→ Update Paperless tags and notes
```
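The "get document content" step can be prototyped against the Paperless-ngx REST API before wiring up the workflow. A sketch, assuming token authentication and using a placeholder host, document ID, and token:

```bash
# Fetch a document's OCR text (the "content" field) to feed into the AI prompt
curl -s http://paperless.local:8000/api/documents/123/ \
  -H "Authorization: Token YOUR_PAPERLESS_API_TOKEN" | jq -r '.content'
```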
### 2. **Metric-Based Automation**
**Services:** Prometheus → n8n → Docker/Komodo
**Pattern:** Auto-restart services on high resource usage
```
Trigger: Prometheus AlertManager webhook
→ Parse alert (high CPU/memory)
→ Execute docker service update --force <service>
→ Send notification
```
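The parse-and-restart steps reduce to a couple of commands. Alertmanager webhooks deliver a JSON body with an `alerts` array; the `service` label below is an assumption about how your alert rules are labelled:

```bash
# Extract the affected service from a saved Alertmanager payload and force a rolling restart
service=$(jq -r '.alerts[0].labels.service' alert.json)
docker service update --force "$service"
```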
### 3. **Smart Search Integration**
**Services:** SearXNG → OpenWebUI
**Pattern:** Enhanced AI chat with web search capability
```
Trigger: OpenWebUI webhook or manual
→ Query SearXNG for context
→ Send results + query to LM Studio
→ Return AI response with citations
```
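The SearXNG query maps to a single HTTP Request node. SearXNG can return JSON when the `json` format is enabled in its settings; a sketch with a placeholder hostname:

```bash
# Fetch the top results as JSON context for the AI prompt
curl -s "http://searxng.local:8080/search?q=docker+swarm+health&format=json" \
  | jq '[.results[:5][] | {title, url, content}]'
```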
### 4. **Backup Automation**
**Services:** n8n → All Services → Storage
**Pattern:** Automated backup verification and reporting
```
Schedule: Daily at 2 AM
→ Trigger OMV backup scripts
→ Verify backup completion
→ Calculate backup sizes
→ AI analysis of backup health
→ Send report
```
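The verify/size steps can run in an Execute Command node; the backup directory below is a placeholder for wherever your OMV jobs write their archives:

```bash
# Alert if no backup newer than 24 hours exists, otherwise report total size
backup_dir=/srv/backups   # placeholder path
if find "$backup_dir" -type f -mtime -1 | grep -q .; then
  echo "OK: recent backup found, total size $(du -sh "$backup_dir" | cut -f1)"
else
  echo "ALERT: no backup newer than 24h in $backup_dir"
fi
```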
### 5. **Development Pipeline**
**Services:** Gitea → Komodo → n8n
**Pattern:** GitOps deployment automation
```
Trigger: Gitea webhook (push to main)
→ Parse commit info
→ Trigger Komodo deployment
→ Monitor deployment status
→ Run health checks
→ Send notification
```
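Parsing the commit info is simple because a Gitea push webhook delivers JSON containing `ref`, `repository`, and a `commits` array; a sketch against a saved payload:

```bash
# Print repo, ref, and the first listed commit message from a Gitea push payload
jq -r '"\(.repository.full_name) \(.ref): \(.commits[0].message)"' push.json
```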
## 🐛 Troubleshooting

### Connection to LM Studio Fails
```bash
# Check if extra_hosts is configured
docker service inspect n8n_n8n | grep -A 5 ExtraHosts

# Test from n8n container
docker exec $(docker ps -q -f name=n8n) curl http://lm-studio:1234/v1/models

# Verify LM Studio is running on .81
curl http://192.168.1.81:1234/v1/models
```

### Docker Commands Fail
```bash
# Verify Docker socket is mounted
docker service inspect n8n_n8n | grep -A 2 docker.sock

# Test from n8n container
docker exec $(docker ps -q -f name=n8n) docker ps
```
### Workflows Don't Execute
- Check n8n logs: `docker service logs n8n_n8n --tail 100`
- Verify the workflow is activated (toggle in UI)
- Check the schedule trigger settings
- Ensure n8n has sufficient resources (increase memory/CPU limits; see the example below)
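
A minimal sketch of raising the Swarm limits on the n8n service (the values are examples; tune them to your node's capacity):

```bash
# Raise memory and CPU limits for the n8n service in place
docker service update --limit-memory 1G --limit-cpu 1.5 n8n_n8n
```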
## 📜 Log Viewing

### Interactive Log Viewing with Dozzle
For **manual, real-time log viewing**, use **Dozzle** - it's already part of your homelab:

**Access:** Check your Traefik/Portainer configuration for the Dozzle URL

**Features:**
- Real-time log streaming with color coding
- Multi-container view
- Search and filter logs
- No configuration needed - automatically discovers containers
- Beautiful, responsive web UI

**Use Dozzle when you need to:**
- Investigate specific issues in real-time
- Follow logs during deployments
- Debug container startup problems
- Search for specific error messages
### Automated Log Analysis (This Workflow)
The **Homelab Log Analyzer** workflow complements Dozzle by:
- Running periodically (every 6 hours) to catch issues you might miss
- Using AI to identify patterns across multiple services
- Sending proactive alerts before issues escalate
- Providing trend analysis over time

**Both tools serve different purposes and work great together!**

---
## 📈 Next Steps

1. **Import and test** each workflow manually
2. **Configure notifications** (email/Discord/Slack)
3. **Review AI recommendations** from Integration Advisor
4. **Implement priority integrations** suggested by AI
5. **Monitor health scores** and adjust thresholds
6. **Create custom workflows** based on your specific needs
## 🔗 Useful Links

- **n8n Documentation:** https://docs.n8n.io
- **LM Studio API:** http://lm-studio:1234 (OpenAI-compatible)
- **Prometheus API:** http://prometheus.sj98.duckdns.org/api/v1
- **Dozzle Logs:** Your Dozzle URL (real-time log viewer)
- **Docker API:** Unix socket at `/var/run/docker.sock`
## 💡 Tips

- **Use Dozzle for interactive debugging**, workflows for automated monitoring
- Start with manual triggers before enabling schedules
- Use an AI model with an appropriate context window for your data
- Monitor n8n resource usage - increase limits if needed
- Keep workflows modular - easier to debug and maintain
- Save successful execution results for reference