Compare commits

...

16 Commits

Author SHA1 Message Date
595f9916ac minor qol 2026-02-18 10:17:30 +09:00
37a09ef66b silent alarms available 2026-02-17 12:45:21 +09:00
7ec68fec84 new alarm scheme 2026-02-17 12:39:14 +09:00
7cd683722b removed PLAN.md 2026-02-17 12:37:31 +09:00
e05773450a proper uv management 2026-02-17 12:18:13 +09:00
af7fb2beaf systemd management scripts 2026-02-17 12:13:39 +09:00
60d99a993f alarm image persist for 1 second longer 2026-02-16 23:42:16 +09:00
Mikkeli Matlock c602cfde1a pi gitignore 2026-02-16 23:35:09 +09:00
Mikkeli Matlock a836ee43ec pi gitignore 2026-02-16 23:33:18 +09:00
Mikkeli Matlock 8fc7ed1327 untracked alarm config 2026-02-16 23:32:32 +09:00
Mikkeli Matlock ce13fb23a8 gitignore 2026-02-16 22:37:38 +09:00
c2fbb2b69a new client connection logic
- esp32 requests for image when ready to receive
- server serves initial image on request
2026-02-16 21:56:28 +09:00
6e633c9367 pi status server update 2026-02-16 21:08:40 +09:00
71c79eb3ab docker services real 2026-02-16 20:47:44 +09:00
Mikkeli Matlock 9ace7be32b changed rx/tx to kByte/s 2026-02-16 16:40:28 +09:00
Mikkeli Matlock 227f66dbff new layout + 200px status images 2026-02-16 14:52:20 +09:00
13 changed files with 229 additions and 87 deletions

.gitignore vendored

@@ -3,3 +3,11 @@
__pycache__/
*.pyo
*.pyc
+.venv/
+# sacrificial
+uv.lock
+# configs
+config/
+!config/alarms.sample.json

PLAN.md

@@ -1,36 +0,0 @@
# Pi Servers -- Roadmap
## Docker Compose
Containerize the pi servers for easier deployment.
### Options
1. **Single service** -- `run_all.py` as the entrypoint, both servers in one container
2. **Split services** -- separate containers for `stats_server.py` and `contents_server.py`
Single service is simpler. Split services allow independent scaling and restarts.
### Configuration
- Volume mount `assets/` and `config/alarms.json` so they're editable without rebuilding
- Expose ports 8765 and 8766
- Network mode `host` or a bridge with known IPs for ESP32 discovery
- Restart policy: `unless-stopped`
## Repository Extraction
The `pi/` directory will become its own git repository.
### Steps
1. Extract `pi/` into a standalone repo with its own `README.md`, `requirements.txt`, and CI
2. Add it back to this project as a git submodule
3. The interface contract between the two repos is the WebSocket protocol -- JSON schemas and binary frame formats documented in `README.md`
### Benefits
- Independent versioning and release cycle
- Pi-side contributors don't need the ESP-IDF toolchain
- CI can test the Python servers in isolation
- Cleaner separation of concerns between embedded firmware and host services


@@ -5,20 +5,23 @@ WebSocket servers that feed system stats, alarm audio, and status images to the
## File Structure
```
pi/
-  run_all.py          # Launches both servers as child processes
-  stats_server.py     # Real system stats over WebSocket (port 8765)
-  contents_server.py  # Alarm audio + status images over WebSocket (port 8766)
-  mock_server.py      # Drop-in replacement for stats_server with random data
-  audio_handler.py    # WAV loading, PCM chunking, alarm streaming
-  image_handler.py    # PNG to 1-bit monochrome conversion, alpha compositing
-  alarm_scheduler.py  # Loads and validates alarm config, checks firing schedule
-  requirements.txt
-  config/
-    alarms.json       # Alarm schedule configuration
-  assets/
-    alarm/            # WAV files for alarm audio
-    img/              # Status images (idle.png, on_alarm.png)
+  run_all.py          # Launches both servers as child processes
+  stats_server.py     # Real system stats over WebSocket (port 8765)
+  contents_server.py  # Alarm audio + status images over WebSocket (port 8766)
+  mock_server.py      # Drop-in replacement for stats_server with random data
+  audio_handler.py    # WAV loading, PCM chunking, alarm streaming
+  image_handler.py    # PNG to 1-bit monochrome conversion, alpha compositing
+  alarm_scheduler.py  # Loads and validates alarm config, checks firing schedule
+  requirements.txt
+  config/
+    alarms.json       # Alarm schedule configuration
+  assets/
+    alarm/            # WAV files for alarm audio
+    img/              # Status images (idle.png, on_alarm.png, sleep.png)
+  scripts/
+    setup.sh          # Install deps + create and enable systemd service
+    edit.sh           # Edit alarm config and restart service
+    remove.sh         # Stop, disable, and remove systemd service
```
## Requirements
@@ -48,6 +51,16 @@ python contents_server.py --config path/to.json # port 8766, custom config
python mock_server.py # port 8765, random data (no psutil needed)
```
+### Running as a systemd service
+Use the helper scripts in `scripts/` to manage a `pi-dashboard` systemd service:
+```bash
+bash scripts/setup.sh   # install deps, create + enable service
+bash scripts/edit.sh    # edit alarm config, restart service
+bash scripts/remove.sh  # stop + remove service
+```
## Servers
### stats_server.py -- port 8765
@@ -56,8 +69,8 @@ Pushes a JSON object every 2 seconds with real system metrics from `psutil`:
- `cpu_pct`, `mem_pct`, `mem_used_mb`, `disk_pct`
- `cpu_temp` (reads `/sys/class/thermal/` as fallback)
-- `uptime_hrs`, `net_rx_kbps`, `net_tx_kbps`
-- `services` (mocked until systemd integration)
+- `uptime_hrs`, `net_rx_kbps`, `net_tx_kbps` (values are in kB/s despite the field names)
+- `services` — live Docker container statuses via `docker ps -a`, with a ternary status model (`running`, `warning`, `stopped`). Monitored containers: gitea, samba, pihole, qbittorrent, frpc (ny), pinepods, frpc (ssh), jellyfin.
- `local_time` fields for RTC sync (`y`, `mo`, `d`, `h`, `m`, `s`)
### contents_server.py -- port 8766
@@ -65,8 +78,8 @@ Pushes a JSON object every 2 seconds with real system metrics from `psutil`:
Serves alarm audio and status images. Protocol:
**Status image:**
-1. Text frame: `{"type":"status_image","width":120,"height":120}`
-2. Binary frame: 1-bit monochrome bitmap (1800 bytes)
+1. Text frame: `{"type":"status_image","width":200,"height":200}`
+2. Binary frame: 1-bit monochrome bitmap (5000 bytes)
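Reviewer note: the binary frame size follows directly from the text-frame header, so a client can validate the payload before decoding. A minimal sketch (not code from this repo; assumes the MSB-first, row-padded packing described in the README):

```python
import json

def expected_bitmap_bytes(header: str) -> int:
    """Size of the binary frame that follows a status_image text frame:
    one bit per pixel, each row padded up to a whole byte."""
    h = json.loads(header)
    row_bytes = (h["width"] + 7) // 8  # 200 px -> 25 bytes per row
    return row_bytes * h["height"]

print(expected_bitmap_bytes('{"type":"status_image","width":200,"height":200}'))  # 5000
print(expected_bitmap_bytes('{"type":"status_image","width":120,"height":120}'))  # 1800
```

This reproduces both the old (1800-byte) and new (5000-byte) frame sizes in the diff.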
**Alarm audio:**
1. Text frame: `{"type":"alarm_start","sample_rate":N,"channels":N,"bits":N}`
@@ -108,8 +121,8 @@ Example with two alarms:
| `alarm_time` | `string` | Yes | 4-digit HHMM, 24-hour. Fires on the matched minute. |
| `alarm_days` | `string[]` | No | 3-letter abbreviations: `Mon`–`Sun`. If omitted, fires every day. |
| `alarm_dates` | `string[]` | No | `MM/DD` strings. Ignored if `alarm_days` is also set. |
-| `alarm_audio` | `string` | No | WAV path, relative to `pi/`. Default: `assets/alarm/alarm_test.wav`. |
-| `alarm_image` | `string` | No | Status PNG path, relative to `pi/`. Default: `assets/img/on_alarm.png`. |
+| `alarm_audio` | `string` | No | WAV path, relative to project root. Silent if not set. "default" (case-insensitive) uses `assets/alarm/alarm.wav`. |
+| `alarm_image` | `string` | No | Status PNG path, relative to project root. Default: `assets/img/on_alarm.png`. |
If both `alarm_days` and `alarm_dates` are present, `alarm_days` takes priority.
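Reviewer note: the schema rules above (HHMM match, `alarm_days` over `alarm_dates`, fire-every-day fallback) can be sketched as a hypothetical matcher; `matches` below is illustrative, not the repo's `should_fire`:

```python
from datetime import datetime

DAY_ABBR = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def matches(entry: dict, now: datetime) -> bool:
    """Hypothetical re-implementation of the table's matching rules."""
    if now.strftime("%H%M") != entry["alarm_time"]:
        return False
    days = entry.get("alarm_days")
    if days is not None:  # alarm_days takes priority over alarm_dates
        return DAY_ABBR[now.weekday()] in days
    dates = entry.get("alarm_dates")
    if dates is not None:
        return now.strftime("%m/%d") in dates
    return True  # neither set: fires every day
```

For example, `{"alarm_time": "0800", "alarm_days": ["Sat", "Sun"]}` matches 08:00 on a Saturday but not on a Friday.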
@@ -118,12 +131,12 @@ If both `alarm_days` and `alarm_dates` are present, `alarm_days` takes priority.
### audio_handler.py
- `find_wav(path=None)` -- uses the given path if it exists, otherwise falls back to glob in `assets/alarm/`
-- `read_wav(path)` -- reads WAV, returns `(pcm_bytes, sample_rate, channels, bits)`
+- `read_wav(path)` -- reads WAV, normalizes audio to 0 dBFS, returns `(pcm_bytes, sample_rate, channels, bits)`
- `stream_alarm(ws, pcm, sr, ch, bits)` -- streams one alarm cycle over WebSocket
### image_handler.py
-- `load_status_image(path)` -- loads PNG, composites transparency onto white, converts to 1-bit 120x120 monochrome bitmap (black=1, MSB-first)
+- `load_status_image(path)` -- loads PNG, composites transparency onto white, converts to 1-bit 200x200 monochrome bitmap (black=1, MSB-first)
- `send_status_image(ws, img_bytes)` -- sends status image header + binary over WebSocket
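Reviewer note: the "black=1, MSB-first" packing can be illustrated without Pillow. `pack_1bit` is a hypothetical helper, assuming the bit order stated in the bullet above:

```python
def pack_1bit(pixels: list[int], width: int) -> bytes:
    """Pack rows of 0/1 pixel values (black=1) into MSB-first bytes."""
    out = bytearray()
    for row_start in range(0, len(pixels), width):
        row = pixels[row_start:row_start + width]
        for byte_start in range(0, width, 8):
            b = 0
            for bit, px in enumerate(row[byte_start:byte_start + 8]):
                if px:
                    b |= 0x80 >> bit  # MSB-first: leftmost pixel lands in bit 7
            out.append(b)
    return bytes(out)

print(pack_1bit([1, 1, 1, 1, 0, 0, 0, 0], 8).hex())  # f0
```

A full 200x200 frame packs to 25 bytes per row, 5000 bytes total, matching the protocol section.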
### alarm_scheduler.py


@@ -9,5 +9,10 @@
"alarm_time": "2330",
"alarm_audio": "assets/alarm/sleep.wav",
"alarm_image": "assets/img/sleep.png"
+  },
+  {
+    "alarm_time": "0800",
+    "alarm_days": ["Sat", "Sun"],
+    "alarm_image": "assets/img/on_alarm.png"
+  }
]


@@ -6,7 +6,7 @@ connected ESP32 dashboard client on port 8766.
Protocol:
Status image:
-1. Text frame: {"type":"status_image","width":120,"height":120}
+1. Text frame: {"type":"status_image","width":200,"height":200}
2. Binary frame: 1-bit monochrome bitmap
Alarm audio:
@@ -17,6 +17,7 @@ Protocol:
import argparse
import asyncio
import json
+import logging
from datetime import datetime
from pathlib import Path
@@ -47,12 +48,22 @@ def _resolve_path(relative: str) -> Path:
     return p

-def _prepare_alarm(entry: dict) -> dict:
+def _prepare_alarm(entry: dict, audio_cache: dict[Path, tuple]) -> dict:
     """Pre-resolve paths and load resources for a single alarm entry."""
-    audio_path = find_wav(_resolve_path(entry.get("alarm_audio", "assets/alarm/alarm_test.wav")))
-    alarm_img_path = _resolve_path(entry.get("alarm_image", "assets/img/on_alarm.png"))
-    pcm, sr, ch, bits = read_wav(audio_path)
-    img = load_status_image(alarm_img_path)
+    pcm = sr = ch = bits = None
+    raw_audio = entry.get("alarm_audio")
+    if raw_audio is not None:
+        audio_path = find_wav(_resolve_path(raw_audio))
+        if audio_path in audio_cache:
+            log.info("Reusing cached audio for %s", audio_path)
+            pcm, sr, ch, bits = audio_cache[audio_path]
+        else:
+            pcm, sr, ch, bits = read_wav(audio_path)
+            audio_cache[audio_path] = (pcm, sr, ch, bits)
     return {
         "config": entry,
         "pcm": pcm, "sr": sr, "ch": ch, "bits": bits,
@@ -68,17 +79,18 @@ async def handler(ws):
     configs = load_config(_config_path)
     img_idle = load_status_image(IMG_DIR / "idle.png")
+    current_img = img_idle
-    try:
-        await send_status_image(ws, img_idle)
+    audio_cache: dict[Path, tuple] = {}
+    alarms = [_prepare_alarm(entry, audio_cache) for entry in configs] if configs else []
-    if not configs:
+    async def alarm_ticker():
+        nonlocal current_img
+        if not alarms:
             log.info("No alarms configured — idling forever")
             await asyncio.Future()
             return
-    alarms = [_prepare_alarm(entry) for entry in configs]
         while True:
             for alarm in alarms:
                 if should_fire(alarm["config"]):
if should_fire(alarm["config"]):
@@ -88,13 +100,35 @@ async def handler(ws):
                     alarm["last_fired"] = current_minute
                     log.info("Alarm firing: %s at %s",
                              alarm["config"]["alarm_time"], current_minute)
-                    await send_status_image(ws, alarm["img"])
-                    await stream_alarm(ws, alarm["pcm"], alarm["sr"],
-                                       alarm["ch"], alarm["bits"])
-                    await send_status_image(ws, img_idle)
+                    current_img = alarm["img"]
+                    await send_status_image(ws, current_img)
+                    if alarm["pcm"] is not None:
+                        await stream_alarm(ws, alarm["pcm"], alarm["sr"],
+                                           alarm["ch"], alarm["bits"])
+                        # let the image persist a bit more
+                        await asyncio.sleep(1)
+                    else:
+                        # longer image persistence when no audio
+                        await asyncio.sleep(3)
+                    current_img = img_idle
+                    await send_status_image(ws, current_img)
             await asyncio.sleep(TICK_INTERVAL)
+    async def receiver():
+        async for msg in ws:
+            try:
+                data = json.loads(msg)
+            except (json.JSONDecodeError, TypeError):
+                continue
+            if data.get("type") == "request_image":
+                log.info("Client requested image — sending current (%d bytes)",
+                         len(current_img))
+                await send_status_image(ws, current_img)
+    try:
+        await asyncio.gather(alarm_ticker(), receiver())
     except websockets.exceptions.ConnectionClosed:
         log.info("Client disconnected: %s:%d", remote[0], remote[1])


@@ -9,12 +9,12 @@ from PIL import Image
log = logging.getLogger(__name__)
IMG_DIR = Path(__file__).parent / "assets" / "img"
-STATUS_IMG_SIZE = 120
+STATUS_IMG_SIZE = 200
MONOCHROME_THRESHOLD = 180
def load_status_image(path: Path) -> bytes:
-    """Load a PNG, convert to 1-bit 120x120 monochrome bitmap (MSB-first, black=1).
+    """Load a PNG, convert to 1-bit 200x200 monochrome bitmap (MSB-first, black=1).
Transparent pixels are composited onto white so they don't render as black.
"""

pyproject.toml (new file)

@@ -0,0 +1,10 @@
[project]
name = "pi-dashboard-server"
version = "0.1.0"
description = "WebSocket servers for the ESP32-S3 RLCD dashboard"
requires-python = ">=3.10"
dependencies = [
"websockets>=12.0",
"psutil>=5.9.0",
"Pillow>=10.0",
]

scripts/edit.sh (new executable file)

@@ -0,0 +1,13 @@
#!/usr/bin/env bash
# Open the alarm config in an editor, then restart the service.
set -euo pipefail
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
CONFIG="${PROJECT_DIR}/config/alarms.json"
${EDITOR:-nano} "${CONFIG}"
echo "==> Restarting pi-dashboard service..."
sudo systemctl restart pi-dashboard
echo "==> Done. Check status with: systemctl status pi-dashboard"

scripts/remove.sh (new executable file)

@@ -0,0 +1,20 @@
#!/usr/bin/env bash
# Stop, disable, and remove the pi-dashboard systemd service.
set -euo pipefail
SERVICE_NAME="pi-dashboard"
UNIT_FILE="/etc/systemd/system/${SERVICE_NAME}.service"
echo "==> Stopping ${SERVICE_NAME}..."
sudo systemctl stop "${SERVICE_NAME}" || true
echo "==> Disabling ${SERVICE_NAME}..."
sudo systemctl disable "${SERVICE_NAME}" || true
echo "==> Removing unit file..."
sudo rm -f "${UNIT_FILE}"
echo "==> Reloading systemd..."
sudo systemctl daemon-reload
echo "==> Done. Service removed."

scripts/setup.sh (new executable file)

@@ -0,0 +1,40 @@
#!/usr/bin/env bash
# Install dependencies, generate a systemd unit, and enable the pi-dashboard service.
set -euo pipefail
SERVICE_NAME="pi-dashboard"
PROJECT_DIR="$(cd "$(dirname "$0")/.." && pwd)"
UNIT_FILE="/etc/systemd/system/${SERVICE_NAME}.service"
RUN_USER="$(whoami)"
echo "==> Syncing Python dependencies..."
uv sync --project "${PROJECT_DIR}"
echo "==> Generating systemd unit file..."
cat > "/tmp/${SERVICE_NAME}.service" <<EOF
[Unit]
Description=Pi Dashboard WebSocket Servers
After=network-online.target docker.service
Wants=network-online.target docker.service
[Service]
Type=simple
User=${RUN_USER}
WorkingDirectory=${PROJECT_DIR}
ExecStart=$(command -v uv) run python run_all.py
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
echo "==> Installing unit file to ${UNIT_FILE}..."
sudo cp "/tmp/${SERVICE_NAME}.service" "${UNIT_FILE}"
rm "/tmp/${SERVICE_NAME}.service"
echo "==> Reloading systemd and enabling service..."
sudo systemctl daemon-reload
sudo systemctl enable --now "${SERVICE_NAME}"
echo "==> Done. Check status with: systemctl status ${SERVICE_NAME}"


@@ -7,7 +7,7 @@ same 2s push interval. Services remain mocked until systemd integration is added
import asyncio
import json
-import random
+import subprocess
import time
from datetime import datetime
from pathlib import Path
@@ -60,17 +60,52 @@ def _get_net_throughput() -> tuple[float, float]:
return rx_kbps, tx_kbps
+# only services that matter
+SERVICES_ALIASES = {
+    "gitea": "gitea",
+    "samba": "samba",
+    "pihole": "pihole",
+    "qbittorrent": "qbittorrent",
+    "frpc-primary": "frpc (ny)",
+    "pinepods": "pinepods",
+    "frpc-ssh": "frpc (ssh)",
+    "jellyfin": "jellyfin",
+}
+def _get_docker_services() -> list[dict]:
+    """Query Docker for real container statuses with ternary status model."""
+    try:
+        result = subprocess.run(
+            ["docker", "ps", "-a", "--format", "{{.Names}}\t{{.Status}}"],
+            capture_output=True, text=True, timeout=5,
+        )
+    except (subprocess.TimeoutExpired, FileNotFoundError, OSError):
+        return []
-def _mock_services() -> list[dict]:
-    """Mocked service status — same logic as mock_server.py."""
-    return [
-        {"name": "docker", "status": random.choice(["running", "running", "running", "stopped"])},
-        {"name": "pihole", "status": random.choice(["running", "running", "running", "stopped"])},
-        {"name": "nginx", "status": random.choice(["running", "running", "stopped"])},
-        {"name": "sshd", "status": "running"},
-        {"name": "ph1", "status": "running"},
-        {"name": "ph2", "status": "stopped"},
-    ]
+    if result.returncode != 0:
+        return []
+    services = []
+    for line in result.stdout.strip().splitlines():
+        parts = line.split("\t", 1)
+        if len(parts) != 2:
+            continue
+        name, raw_status = parts
+        if (name in SERVICES_ALIASES):
+            if raw_status.startswith("Up"):
+                if "unhealthy" in raw_status or "Restarting" in raw_status:
+                    status = "warning"
+                else:
+                    status = "running"
+            else:
+                status = "stopped"
+            services.append({"name": SERVICES_ALIASES[name], "status": status})
+    # Sort: warnings first, then stopped, then running (problems float to top)
+    order = {"warning": 0, "stopped": 1, "running": 2}
+    services.sort(key=lambda s: order.get(s["status"], 3))
+    return services
def _local_time_fields() -> dict:
@@ -98,9 +133,9 @@ def generate_stats() -> dict:
         "disk_pct": round(disk.percent, 1),
         "cpu_temp": _get_cpu_temp(),
         "uptime_hrs": round((time.time() - psutil.boot_time()) / 3600, 1),
-        "net_rx_kbps": rx_kbps,
-        "net_tx_kbps": tx_kbps,
-        "services": _mock_services(),
+        "net_rx_kbps": rx_kbps / 8,
+        "net_tx_kbps": tx_kbps / 8,  # kByte/s for humans
+        "services": _get_docker_services(),
         "timestamp": int(time.time()),
         "local_time": _local_time_fields(),
     }
}
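Reviewer note: the ternary status model and problems-first sort from `_get_docker_services` can be exercised in isolation. A sketch with made-up `docker ps` output lines (the container names and statuses below are illustrative, not real data):

```python
ORDER = {"warning": 0, "stopped": 1, "running": 2}

def classify(raw_status: str) -> str:
    # same ternary rules as the diff: Up -> running, Up+unhealthy -> warning,
    # anything else (Exited, Created, ...) -> stopped
    if raw_status.startswith("Up"):
        if "unhealthy" in raw_status or "Restarting" in raw_status:
            return "warning"
        return "running"
    return "stopped"

lines = [
    "gitea\tUp 3 days",
    "pihole\tExited (0) 2 hours ago",
    "jellyfin\tUp 5 hours (unhealthy)",
]
services = [{"name": name, "status": classify(status)}
            for name, status in (line.split("\t", 1) for line in lines)]
services.sort(key=lambda s: ORDER.get(s["status"], 3))
print([s["name"] for s in services])  # ['jellyfin', 'pihole', 'gitea']
```

The unhealthy container sorts first, so the ESP32 dashboard surfaces problems at the top of the list.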