Daily workspace updates: memory files, scripts, docs, uptime-kuma skill

- Memory files (daily notes, corrections, heartbeat state)
- New scripts (git-audit, daily-updates, service-health)
- forge-server migration docs
- uptime-kuma skill addition
- Claude usage tracking updates
James 2026-02-04 22:57:30 -05:00
parent 547e4fc9f3
commit ab80442bef
26 changed files with 1755 additions and 113 deletions

View File

@@ -4,6 +4,10 @@
"homeassistant": {
"version": "1.0.0",
"installedAt": 1769414031023
},
"uptime-kuma": {
"version": "1.0.0",
"installedAt": 1770116697002
}
}
}

View File

@@ -253,6 +253,20 @@ Use subagents liberally:
- Keep main context window clean for conversation
- For complex problems, throw more compute at it
## 🔒 Git & Backup Rules
**Every new project gets a Zurich remote.** No exceptions.
1. Create bare repo: `ssh root@zurich.inou.com "cd /home/git && git init --bare <name>.git && chown -R git:git <name>.git"`
2. Add remote: `git remote add origin git@zurich.inou.com:<name>.git`
3. Push immediately
**Hourly git audit** (`scripts/git-audit.sh` via cron at :30) checks all `~/dev/` repos for:
- Missing remotes → alert immediately
- Uncommitted changes → report
- Unpushed commits → report
Only anomalies are reported. Silence = healthy.
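The audit loop can be sketched roughly like this — a hedged reconstruction, the real `scripts/git-audit.sh` may differ in detail (the `NO REMOTE` wording and the per-repo layout under `~/dev/` are assumptions):

```bash
#!/usr/bin/env bash
# Sketch of the hourly audit: report only anomalies, stay silent when healthy.
audit_repo() {
  local repo="$1" problems=""
  # Missing remote -> alert immediately
  git -C "$repo" remote | grep -q . || problems="${problems}NO REMOTE; "
  # Dirty working tree -> report
  [ -n "$(git -C "$repo" status --porcelain)" ] && problems="${problems}uncommitted changes; "
  # Commits on local branches not on any remote-tracking branch -> report
  [ -n "$(git -C "$repo" log --branches --not --remotes --oneline 2>/dev/null)" ] \
    && problems="${problems}unpushed commits; "
  [ -n "$problems" ] && echo "$repo: $problems"
  return 0
}

audit_all() {
  local base="${1:-$HOME/dev}"
  for d in "$base"/*/; do
    [ -d "$d/.git" ] && audit_repo "$d"
  done
  return 0
}
```

Healthy repos produce no output, so an empty run means nothing needs attention.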
## 🔄 Continuous Improvement
**"It's not bad to make a mistake. It is bad not to learn from it."**

View File

@@ -20,9 +20,18 @@ At 10:30pm+ he's WORKING, not sleeping. Don't assume "late night = quiet time."
Check `memory/heartbeat-state.json` for last run times. Only run if >24h since last check.
### Update Check (daily)
Run `/home/johan/clawd/scripts/check-updates.sh` to check for Claude Code and inou MCP bundle updates.
Update `memory/heartbeat-state.json` with `lastUpdateCheck` timestamp after running.
### Update Summary (in briefings)
**Automated at 9:00 AM ET** via systemd timer (`daily-updates.timer`).
Updates OpenClaw, Claude Code, and OS packages. Only restarts gateway if OpenClaw changed.
Log: `memory/updates/YYYY-MM-DD.json`
**In every morning briefing:**
1. Read today's update log from `memory/updates/`
2. Summarize what was updated (versions, package names)
3. For OpenClaw/Claude Code updates: fetch release notes from GitHub and include key changes
4. Flag if reboot required (kernel updates)
**No manual check needed** — the timer handles it. I just read the log and report.
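Reading the log for the briefing could look like this — a sketch only: the field names `updated` and `reboot_required` are assumptions about the log schema, so check a real file under `memory/updates/` before relying on them:

```bash
summarize_updates() {
  # Path convention from above; field names below are assumed, not confirmed.
  local log="memory/updates/$(date +%F).json"
  [ -f "$log" ] || { echo "no update log for today"; return 0; }
  jq -r '(.updated[]? | "updated: \(.name) \(.version)"),
         (if .reboot_required then "reboot required" else empty end)' "$log"
}
```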
### Memory Review (daily)
Quick scan of today's conversations for durable facts worth remembering:
@@ -48,6 +57,17 @@ After generating any briefing (morning, afternoon, or ad-hoc):
This is NON-NEGOTIABLE. Johan expects briefings on the dashboard.
### Weekly Docker & HAOS Update (Sundays)
Check and update Docker containers on 192.168.1.253 and HAOS on 192.168.1.252:
1. **HAOS:** Check `update.home_assistant_operating_system_update` — install if available
2. **Docker (253):** For each service in `/home/johan/services/`:
- `docker compose pull` → `docker compose up -d` → `docker image prune -f`
3. Report what was updated in the weekly briefing
Services: immich, clickhouse, jellyfin, signal, qbittorrent-vpn
**qbittorrent-vpn: PULL ONLY, do NOT start.** Johan uses it on-demand.
SSH: `ssh johan@192.168.1.253`
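The Sunday pass condenses to something like this (run on 192.168.1.253; the per-service compose layout under `/home/johan/services/` is from the notes above, and the skip rule keeps qbittorrent-vpn pulled but stopped):

```bash
# Services that should actually be started after a pull.
should_start() { [ "$1" != "qbittorrent-vpn" ]; }

weekly_docker_update() {
  for svc in immich clickhouse jellyfin signal qbittorrent-vpn; do
    ( cd "/home/johan/services/$svc" || exit 1
      docker compose pull
      if should_start "$svc"; then
        docker compose up -d
      else
        echo "$svc: pull only — started on demand"
      fi )
  done
  # Reclaim space from superseded images once everything is recreated.
  docker image prune -f
}
```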
### Weekly Memory Synthesis (Sundays)
Deeper review:
1. Read through `memory/YYYY-MM-DD.md` files from the past week
@@ -174,8 +194,19 @@ No heartbeat polling needed.
## Message Center (check every heartbeat)
**Goal:** Process all new messages (email + WhatsApp) from MC. Also catch any stragglers missed by webhook.
### Inbox Cleanup (every heartbeat)
Webhook handles real-time, but messages can slip through (restarts, migration, webhook downtime).
Always check BOTH accounts for unprocessed mail:
```bash
curl -s "http://localhost:8025/messages?source=tj_jongsma_me" | jq 'length'
curl -s "http://localhost:8025/messages?source=johan_jongsma_me" | jq 'length'
```
**Don't filter by `seen`** — messages can be marked seen (fetched) but never actioned (orphaned by a crash/restart). Anything still returned by the listing endpoint is still in the inbox and needs triage.
If anything's sitting there, triage it per `memory/email-triage.md`. Don't wait for the webhook.
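The two checks above, folded into one loop (endpoint and account names are from this file; the `jq 'length'` count is the same trick as above):

```bash
# Count of messages still in the inbox for one account.
unprocessed_count() {
  curl -s "http://localhost:8025/messages?source=$1" | jq 'length'
}

# Flag either account that still has anything to triage.
check_inboxes() {
  for acct in tj_jongsma_me johan_jongsma_me; do
    n=$(unprocessed_count "$acct")
    if [ "${n:-0}" -gt 0 ]; then
      echo "$acct: $n unprocessed — triage per memory/email-triage.md"
    fi
  done
  return 0
}
```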
### New Messages Check
**Check:**
```bash
curl -s "http://localhost:8025/messages/new" | jq 'length'
```

View File

@@ -92,13 +92,16 @@ I do NOT ask for permission or approval. I use my judgment. I only escalate if s
## Infrastructure
### Server: forge (192.168.1.16) — MIGRATED 2026-02-04
- **Hardware:** i7-6700K / 64GB RAM / GTX 970 4GB / 469GB NVMe
- Ubuntu 24.04.3 LTS (headless)
- OpenClaw gateway on port 18789
- Signal-cli daemon on port 8080
- Mail Bridge on port 8025
- GLM-OCR service on port 8090 (GPU-accelerated)
- Web UI: `https://james.jongsma.me` (via Caddy)
- SMB share: `\\192.168.1.16\sophia` → `/home/johan/sophia/`
- Full details: `memory/forge-server.md`
### Mail System (2026-01-31)
- **Proton Bridge:** Headless on localhost:1143 (IMAP), localhost:1025 (SMTP)
@@ -149,8 +152,38 @@ I do NOT ask for permission or approval. I use my judgment. I only escalate if s
- Bedroom 1 has 3-button switch controlling cans via automations
- **Fixed 2026-01-26:** `automation.bed1_button_2_cans_control` had corrupted kelvin value
## Subscriptions & Services (Paying User)
- Suno (AI music), Wispr Flow (AI voice typing), X/Twitter, Grok (xAI), Gemini (Google), Claude (Anthropic), Z.ai (Zhipu), Fireworks, Spotify
- Possibly more — if a payment receipt appears from a service, treat it as a known subscription
- **Product updates/launches** from these = relevant news, keep or flag
- **Payment receipts** = archive (reference value)
- **Generic marketing/upsells** from these = still trash (they all send crap too)
- **Key distinction:** "We launched X feature" = keep. "Upgrade to Pro!" when already paying = trash.
- **Amazon:** Orders → Shopping folder. Product recalls, credits → keep. Everything else (promos, recs, shipping updates after tracking) → trash.
- **Archive sparingly** — Archive = things worth finding again. Most notifications have zero future value → trash.
## Preferences
### OCR
- **NO TESSERACT** — Johan does not trust it at all
- **GLM-OCR** (0.9B, Zhipu) — sole OCR engine going forward
- **Medical docs stay local** — dedicated TS140 + GTX 970, never hit an API
- **Fireworks watch:** Checking daily for hosted GLM-OCR (non-sensitive docs)
- **OCR Service LIVE** on forge: `http://192.168.3.138:8090/ocr` (see `memory/forge-server.md`)
### Forge = Home (migrated 2026-02-04)
- **forge IS my primary server** — now at 192.168.1.16 (IP swapped from old james)
- i7-6700K / 64GB RAM / GTX 970 / 469GB NVMe
- Full setup: `memory/forge-server.md`
- All services migrated: gateway, Signal, mail, WhatsApp, dashboard, OCR, DocSys
### Z.ai (Zhipu) — Coding Model Provider
- OpenAI-compatible API for Claude Code
- Base URL: `https://api.z.ai/api/coding/paas/v4`
- Models: GLM-4.7 (heavy coding), GLM-4.5-air (light/fast)
- Johan has developer account (lite tier)
- Use for: coding subagents, to save Anthropic tokens
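A hedged sketch of the env overrides once the key arrives — the variable names below (`ANTHROPIC_BASE_URL`, `ANTHROPIC_AUTH_TOKEN`, `ANTHROPIC_MODEL`, `ANTHROPIC_SMALL_FAST_MODEL`) are the commonly documented Claude Code ones, but verify against current docs before relying on them:

```bash
# Point Claude Code at Z.ai's OpenAI-compatible coding endpoint.
export ANTHROPIC_BASE_URL="https://api.z.ai/api/coding/paas/v4"
export ANTHROPIC_AUTH_TOKEN="${ZAI_API_KEY:-REPLACE_ME}"  # real key lives in pass/config
export ANTHROPIC_MODEL="glm-4.7"                          # heavy coding
export ANTHROPIC_SMALL_FAST_MODEL="glm-4.5-air"           # light/fast subagent work
```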
### Research
- **Use Grokipedia instead of Wikipedia** — Johan's preference for lookups & Lessons Learned

View File

@@ -90,6 +90,37 @@ Things like:
- Keep briefing history for reference
- Update Claude usage status: `scripts/claude-usage-check.sh` (auto-updates dashboard)
### Forge Server (GPU Compute)
- **Hostname:** forge
- **IP:** 192.168.3.138
- **CPU:** Intel i7-6700K @ 4.0GHz (4c/8t)
- **RAM:** 64GB DDR4
- **GPU:** NVIDIA GTX 970 4GB (Driver 580.126.09, CUDA 13.0)
- **Storage:** 469GB NVMe
- **OS:** Ubuntu 24.04.1 LTS (Server, headless)
- **Kernel:** 6.8.0-94-generic
- **Purpose:** OCR (GLM-OCR), ML inference, GPU compute
- **Ollama:** Installed (0.15.4), waiting for 0.15.5 for GLM-OCR model
- **Python:** /home/johan/ocr-env/ (venv with PyTorch 2.2 + CUDA 11.8)
- **SSH:** Key auth only (password auth disabled)
- **Firewall:** UFW active, SSH + LAN (192.168.0.0/22) allowed
- **Owner:** James ⚡ (full autonomy)
**OCR Service (GLM-OCR):**
- **URL:** http://192.168.3.138:8090
- **Service:** `systemctl --user status ocr-service` (on forge)
- **Source:** `/home/johan/ocr-service/server.py`
- **Model:** `/home/johan/models/glm-ocr` (zai-org/GLM-OCR, 2.47 GB)
- **VRAM usage:** 2.2 GB idle (model loaded), peaks ~2.8 GB during inference
- **Performance:** ~2s small images, ~25s full-page documents (auto-resized to 1280px max)
- **Endpoints:**
- `GET /health` — status + GPU memory
- `POST /ocr` — single image OCR (multipart: file + prompt + max_tokens)
- `POST /ocr/batch` — multi-image OCR
- **Auto-resize:** Images capped at 1280px longest edge (prevents OOM on GTX 970)
- **Usage from james:** `curl -X POST http://192.168.3.138:8090/ocr -F "file=@image.png"`
- **Patched env:** transformers 5.0.1.dev0 (from git) + monkey-patch for PyTorch 2.2 compat
### Home Network
- **Public IP:** 47.197.93.62 (not static, but rarely changes)
- **Location:** St. Petersburg, Florida

View File

@@ -0,0 +1,4 @@
{
"api_key": "fc7dfe563f224f3eb5c66f85d9ef9a60.VZXTQN0elRrO6qcr",
"base_url": "https://api.z.ai"
}

docs/update-plan.md Normal file
View File

@@ -0,0 +1,85 @@
# Update Plan: Claude Code & OpenClaw
*Created: 2026-02-04*
## Principles
1. **Never update blind** — check what changed before applying
2. **Always be able to rollback** — save current version before updating
3. **Verify after update** — gateway must start and respond before declaring success
4. **Don't update during active work** — schedule for low-activity windows
5. **James owns this** — no manual intervention from Johan unless something breaks badly
## Schedule
**When:** Daily at 5:30 AM ET (during Johan's second sleep block)
- Low-activity window, no active conversations
- If update fails, James has time to rollback before Johan wakes (~9-10 AM)
**Frequency:** Check daily, apply only when new versions exist.
## Update Process (automated script)
### Step 1: Check for updates (no changes yet)
```
- Read current versions (openclaw --version, claude --version)
- Check npm registry for latest versions
- If both current → exit (nothing to do)
- Log what's available
```
### Step 2: Snapshot current state
```
- Record current versions to rollback file
- Backup gateway config (~/.openclaw/openclaw.json)
- Verify gateway is healthy BEFORE updating (curl health endpoint)
```
### Step 3: Update OpenClaw (if new version)
```
- npm i -g openclaw@latest
- Run: openclaw doctor (migrations, config fixes)
- Restart gateway: systemctl --user restart openclaw-gateway
- Wait 10 seconds
- Health check: openclaw health / curl gateway
- If FAIL → rollback immediately (npm i -g openclaw@<old_version>)
```
### Step 4: Update Claude Code (if new version)
```
- npm i -g @anthropic-ai/claude-code@latest
- Verify: claude --version
- (No restart needed — Claude Code is invoked per-session)
```
### Step 5: Report
```
- Log results to memory/update-log.md
- Update dashboard status API
- If anything failed: create task for Johan
```
### Rollback
```
- npm i -g openclaw@<previous_version>
- openclaw doctor
- systemctl --user restart openclaw-gateway
- Verify health
```
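Steps 2–5 and the rollback path condense to something like the function below. This is a sketch, not `safe-update.sh` itself: the gateway health URL (port 18789 per MEMORY.md, `/health` path assumed) and the exact restart/doctor sequence should be checked against the real script.

```bash
update_openclaw() {
  local prev latest
  # Step 1: compare installed version against the npm registry.
  prev=$(openclaw --version 2>/dev/null) || { echo "cannot read current version" >&2; return 1; }
  latest=$(npm view openclaw version) || return 1
  [ "$prev" = "$latest" ] && { echo "openclaw up to date ($prev)"; return 0; }

  # Step 3: update, migrate, restart, then verify.
  npm i -g "openclaw@${latest}"
  openclaw doctor
  systemctl --user restart openclaw-gateway
  sleep 10

  if ! curl -sf "http://localhost:18789/health" >/dev/null; then
    # Rollback: reinstall the snapshot version and restart.
    echo "health check FAILED — rolling back to $prev" >&2
    npm i -g "openclaw@${prev}"
    openclaw doctor
    systemctl --user restart openclaw-gateway
    return 1
  fi
  echo "openclaw updated: $prev -> $latest"
}
```

The function is deliberately not invoked here; the cron entry would call it inside the larger script after the pre-update health check.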
## What the script does NOT do
- Update during active conversations
- Update if the gateway is unhealthy to begin with
- Continue if OpenClaw update fails (stops, rollback, alert)
- Update both at once if OpenClaw fails (Claude Code update skipped)
## Files
- **Script:** `~/clawd/scripts/safe-update.sh`
- **Rollback file:** `~/clawd/data/update-rollback.json`
- **Update log:** `~/clawd/memory/update-log.md`
- **Cron:** 5:30 AM ET daily
## Open Questions for Johan
1. **Auto-apply or approve?** Script can either apply automatically at 5:30 AM, or just notify and wait for approval. Recommendation: auto-apply with rollback.
2. **Channel:** Stay on `stable` or use `beta`? Currently on stable (default).
3. **Hold on major version bumps?** e.g., if OpenClaw goes from 2026.2.x to 2026.3.x, pause and ask first?

memory/2026-02-03.md Normal file
View File

@@ -0,0 +1,45 @@
# 2026-02-03 (Monday)
## GLM-OCR Watch (added 04:00 UTC)
- **Model:** GLM-OCR (0.9B params, SOTA document understanding)
- **Source:** https://x.com/Zai_org/status/2018520052941656385
- **Task:** Check Fireworks daily for availability
- **Why:** Document pipeline + inou OCR
- **Weights:** https://huggingface.co/THUDM/GLM-OCR (when available)
## Dedicated OCR Box (decision ~04:00 UTC)
- **Hardware:** Second TS140 + GTX 970 (4GB VRAM)
- **Purpose:** On-premise OCR for medical & sensitive documents
- **Model:** GLM-OCR (0.9B params) — sole OCR engine, NO Tesseract
- **Why local:** Medical docs (Sophia, insurance, etc.) never leave the house
- **Architecture:** scanner → james inbox → OCR box (CUDA) → structured text
- **Status:** Johan setting up the hardware
- **Also:** Checking Fireworks daily for hosted GLM-OCR (for non-sensitive docs)
- **Johan's preference:** Does NOT trust Tesseract — noted in MEMORY.md
## Work Queue (6am ET cron, 11:00 UTC)
**Task Review:**
- Azure Files Backup (high, in-progress) — BLOCKED on `az login` MFA
- inou.com indexing issue (medium, in-progress) — BLOCKED on caddy SSH access
**Work Done:**
- Cleaned dashboard: removed 11 completed tasks
- Trashed spam email (H8 Collection Valentine's promo)
- Archived 3 WhatsApp messages (Oscar x2 in Dutch, Tanya media)
- Created `scripts/service-health.sh` — comprehensive health check script
- Created `scripts/fix-inou-www-redirect.sh` — ready-to-apply caddy fix for Johan
- Applied security patches on Zurich VPS (Docker, libc, kernel)
- Rebooted Zurich VPS — now running kernel 6.8.0-90-generic (was 6.8.0-39)
- Added 3 new Uptime Kuma monitors (Zurich VPS, inou DNS, inou SSL)
- Installed uptime-kuma skill from ClawdHub
- All services healthy (Proton Bridge, Mail Bridge, Message Bridge)
- Claude usage: 68% weekly
- OpenClaw update available: 2026.1.30 → 2026.2.1 (not applied, needs Johan's approval)
- GLM-OCR: still not on Fireworks
- Claude Code: up to date (2.1.29)
- Running nuclei security scan on inou.com
### MC Triage (00:02 UTC / 7:02pm ET)
- **Pediatric Home Service shipping** (order #75175) — 4 boxes of Sophia's supplies shipped Feb 3. Archived.
- **Diana Geegan (Keller Williams)** — IMPORTANT real estate email re: selling 851 Brightwaters ($6.35M) and buying 801 Brightwaters. Net at close estimate: $5,944,200 (short of Johan's $6.2M goal by ~$170K). Diana offering to reduce her buy-side fee by ~$85K to help, bringing net to ~$6,029,200. Still ~$171K short. She's asking how Johan wants to proceed. Attachments saved to documents/inbox. **Needs Johan's decision.**

memory/2026-02-04.md Normal file
View File

@@ -0,0 +1,144 @@
# 2026-02-04 (Tuesday)
## Work Queue (8pm ET cron)
### Azure Files Backup — Major Progress
Worked the evening queue. Both James-owned tasks checked:
1. **Azure Backup** (high) — Implemented three missing pieces:
- **Postgres job queue** (`pkg/db/queue.go`) — Full SKIP LOCKED implementation for concurrent workers. Enqueue, claim, complete, fail, heartbeat, requeue, stale cleanup, purge, stats.
- **Filesystem object storage** (`pkg/storage/fs.go`) — Local dev backend for object storage. Atomic writes (temp+rename), recursive listing, disk usage. Also InMemoryClient for unit tests.
- **Wired up backup-worker** (`cmd/backup-worker/main.go`) — Previously a skeleton. Now fully connects to Postgres, initializes FS storage, creates chunk/metadata stores, registers all handlers, processes jobs. Includes stale job cleanup goroutine.
- Added `config.example.yaml`
- Added integration tests: ChunkStore+FS dedup, MetadataStore+FS round-trip
- Updated README with architecture docs and local dev workflow
- All 31 tests passing, `go vet` clean
- Commits: 0645037, 74f1b8a — pushed to zurich
2. **inou.com indexing** (medium) — Still blocked on SSH to caddy (Tailscale auth required). Fix script ready, needs Johan.
### System Health Check
- All services healthy (Proton Bridge, Mail Bridge, Message Bridge, Dashboard, Uptime Kuma)
- Disk: 8% used (65G/916G)
- No new messages, inbox empty
## Forge Server — GLM-OCR Service Live!
- GPU power fixed, NVIDIA driver working: GTX 970 @ 44°C idle
- **GLM-OCR deployed as HTTP service** on port 8090
- Model: zai-org/GLM-OCR (2.47 GB), loaded in VRAM at startup (2.21 GB)
- FastAPI + uvicorn, systemd user service (`ocr-service`)
- Auto-resize images to 1280px max (prevents OOM on 3.9GB GTX 970)
- Performance: ~2s small images, ~25s full-page docs
- **Real document test:** Parkshore Grill receipt — OCR'd perfectly (every line item, prices, card details, tip, signature)
- Environment: PyTorch 2.2.2+cu118, transformers 5.0.1.dev0 (patched for sm_52 compat)
- `loginctl enable-linger` enabled for persistent user services
- Document pipeline: james → `curl POST http://192.168.3.138:8090/ocr` → structured text
## 🏠 MIGRATION COMPLETE: james → forge
### What happened
- Johan gave full autonomy over forge (192.168.3.138), said "it is your new home"
- Pre-moved ALL data while Johan was with Sophia:
- ~/dev (1.4G), ~/clawd (133M), ~/sophia (9.2G), ~/documents (5.8M)
- ~/.clawdbot (420M) — agents, tools, signal-cli binary
- ~/.local/share/signal-cli — registration data
- ~/.local/share/protonmail (18G!) — full IMAP cache (gluon backend)
- ~/.config/protonmail — bridge config
- ~/.message-bridge (WhatsApp DB), ~/.message-center, ~/.config/bird
- ~/.password-store, GPG keys
- Installed on forge: Node 22, Go 1.23.6, Java 21, Claude Code 2.1.31
- Installed: OpenClaw 2026.2.2, Playwright Chromium, bird, clawdhub, gemini-cli
- Installed: Proton Mail Bridge, Samba, pass
- Rebuilt all 4 Go binaries natively (dashboard, message-center, message-bridge, docsys)
- Wrote comprehensive migration doc: `~/clawd/migration/MIGRATE-JAMES-TO-FORGE.md`
- Claude Code on forge did the "brain transplant" (clawdbot.json, systemd services)
### Post-migration status
- **IP swapped:** forge is now 192.168.1.16 (old james moved or offline)
- **All services running:** OpenClaw, Proton Bridge, Mail Bridge, Message Bridge, Dashboard, DocSys, OCR
- **WhatsApp:** Connected without QR re-link! DB transfer worked perfectly
- **Signal:** Needed manual restart of signal-cli after migration, then worked
- **OCR service:** Still running, GPU warm (2.2 GB VRAM, 42°C)
### Hardware upgrade (forge vs old james)
- CPU: i7-6700K 4c/8t 4.0GHz (was Xeon E3-1225v3 4c/4t 3.2GHz)
- RAM: 64GB (was 16GB) — 4x more
- GPU: GTX 970 4GB for local ML (old james had no GPU)
- Storage: 469GB NVMe (old was 916GB SSD — less space but faster)
## Z.ai (Zhipu) for Coding — In Progress
- Johan has Z.ai developer account (lite tier)
- Z.ai is OpenAI-compatible, can power Claude Code
- Base URL: `https://api.z.ai/api/coding/paas/v4`
- Models: GLM-4.7 (heavy), GLM-4.5-air (light)
- Claude Code settings: override ANTHROPIC_DEFAULT_*_MODEL env vars
- **Waiting for:** Johan to provide Z.ai API key
- **Purpose:** Route coding subagents through Z.ai to save Anthropic tokens
## Docker Updates on 192.168.1.253 (1:13 PM)
All 5 services pulled and recreated:
- **Immich** (server + ML): Updated, healthy
- **ClickHouse**: Updated, running
- **Jellyfin**: Updated (initial pull killed, retried successfully), health starting
- **Signal CLI REST API**: Updated, healthy
- **qBittorrent + Gluetun**: Updated, running
- **qb-port-updater**: Pre-existing issue — missing QBITTORRENT_USER env var (restart loop)
Old images pruned: 691.6MB reclaimed.
**HAOS**: Updated 16.3 → 17.0 ✅
## Email Triage (1:10 PM)
Processed ~18 messages from tj@ inbox:
- **Trashed (11):** Zillow ×3, Amazon delivery/shipping ×5, UPS ×2, Glamuse, IBKR, Starlink, SunPass, Schwab
- **Archived (5):** Amazon order (Chlorophyll), GrazeCart, Valley bank alert, Capital One credit $132.68, Sophia order docs
- **Delivery tracked:** Pediatric Home Service #75175 (4 boxes, Sophia supplies, shipped Feb 3)
- **Kept in inbox:** Diana Geegan ×4 (real estate), Sophia medical ×2 (pulse ox wraps prescription), Lannett securities litigation
## Email Triage (2:34 PM — cron)
Re-scanned inbox. Only 1 genuinely new message since 1:10 PM triage:
- **xAI API invoice** (johan_jongsma_me:12) — $0.06 for Jan 2026. Ingested PDF → `~/documents/inbox/`. Archived.
- Re-processed remaining 32 messages: all previously triaged (MC listing shows full IMAP, not just untriaged)
- Delivery tracker updated: Sophia supplies (#75175, in transit) + Amazon Chlorophyll (arriving Sunday)
## Email Triage (3:22 PM)
Processed 34 messages from both accounts.
**Kept in inbox (needs Johan):**
- Sophia pulse-ox wraps Rx expired — Dana at All About Peds needs new prescription from Dr. Lastra
- Diana Geegan (4 emails) — active real estate negotiation re: 851 Brightwaters sale ($6.35M) and 801 purchase. $6.2M net goal not achievable at current numbers.
- AlphaSights (Archie Adams) — paid consulting on ITAM, wants to connect for 1hr call
- Lannett securities litigation — class action 2014-2017
**Archived:**
- xAI invoice ($0.06 Jan 2026)
- Interactive Brokers Jan statement
- Capital One $132.68 credit (NM Online)
- Google security alerts (Linux sign-in — us)
- Immich v2.5.3 release → created task for Sunday update
- All About Peds order docs (#91399)
- Amazon order (Chlorophyll)
- AlphaSights follow-up (duplicate)
- Lannett litigation (after review)
- Diana net sheet original (superseded by CORRECTION)
- Older Sophia supply thread
**Delivery tracked:**
- Sophia supplies (Pediatric Home Service #75175, shipped Feb 3, 4 boxes)
**Trashed (15):**
- Glamuse lingerie spam, 3x Zillow alerts, 2x Amazon delivered, 2x Amazon shipped, 2x UPS, GrazeCart welcome, Valley bank withdrawal alert, Schwab eStatement, SunPass statement, Starlink $5 auto-pay
### Git Audit (21:30)
Uncommitted changes found:
- clawdnode-android: 3 files
- inou: 1 file
- james-dashboard: 12 files
- mail-agent: 2 files
- mail-bridge: 1 file
- moltmobile-android: 20 files
- clawd: 24 files
Not urgent — logged for morning briefing.

View File

@@ -1,9 +1,9 @@
{
"last_updated": "2026-02-05T03:00:03.898951Z",
"source": "api",
"session_percent": 6,
"session_resets": "2026-02-05T03:59:59.856689+00:00",
"weekly_percent": 87,
"weekly_resets": "2026-02-07T18:59:59.856739+00:00",
"sonnet_percent": 0
}

View File

@@ -79,3 +79,22 @@ When Johan pushes back, log the **principle**, not just the symptom.
**Fix:** Grep-mined the minified Control UI JS → found `get("session")` and `/chat` route patterns → correct URL format: `/chat?session=agent:<id>:main`
**Applies to:** Any integration with external systems, APIs, UIs — when docs are unclear or missing
**Test:** "Can I find the answer in the source code instead of guessing?"
### PRINCIPLE: If You Summarized It, You Had It
**Trigger:** Summarized Dana/FLA-JAC medical supply message, then couldn't find it when asked to reply. Asked "who is Dana?" 4 times.
**Why:** If I generated a summary, the original came through my systems. I have context. Stop asking for context I already have.
**Applies to:** Any time I'm asked to act on something I previously reported
**Test:** "Did I already tell Johan about this? Then I already have the context to act on it."
### PRINCIPLE: Actionable Emails Stay In Inbox
**Trigger:** Archived Dana/FLA-JAC email about Sophia's medical supplies. When asked to reply, couldn't find it — MC only sees INBOX.
**Why:** Archiving = losing reply capability. Sophia medical emails are always actionable. Any email needing follow-up should stay in inbox until resolved.
**Applies to:** All emails with pending action items, especially Sophia-related
**Test:** "Is there any follow-up needed on this? If yes, keep in inbox."
### PRINCIPLE: Exhaust Troubleshooting Before Declaring Blocked
**Trigger:** SSH to caddy failed with "Host key verification failed." Logged it as "access denied, blocked on Johan" and parked the task for 2 days. Fix was one `ssh-keyscan` command.
**Why:** "Host key verification failed" ≠ "access denied." I didn't try the obvious fix. I gave up at the first error and escalated to Johan instead of solving it myself. That's the opposite of resourceful.
**Applies to:** Any infrastructure task hitting an error — especially SSH, networking, auth failures
**Test:** "Have I actually tried to fix this, or am I just reporting the error? Could I solve this in 60 seconds if I actually tried?"
**Rule:** If still blocked after real troubleshooting → create a task for Johan (owner: "johan") with what's needed to unblock. Silent blockers = stalled work.

View File

@@ -177,9 +177,16 @@ This keeps the delivery schedule current without cluttering Shopping folder.
### → Archive (keep but out of inbox)
- Processed bills after payment
- Travel confirmations (past trips)
- Account notifications that might be useful later
- Payment receipts from subscriptions (reference value)
- Security alerts (password changes, new logins)
**Rule:** If it has reference value but needs no action → Archive
**Rule:** Archive is for things worth FINDING AGAIN. If Johan would never search for it → Trash, not Archive.
### → Trash (common false-archive candidates)
- **Amazon:** Everything except order confirmations and outliers (product recalls, credits). Promos, recommendations, "items you viewed", shipping updates (after updating deliveries) → all trash.
- **Retailers:** Marketing, sales, "new arrivals" → trash
- **Account notifications** with no future value → trash
- **Generic "your statement is ready"** → trash (he can check the app)
### → Keep in Inbox (flag for Johan)
- Action required

memory/forge-server.md Normal file
View File

@@ -0,0 +1,188 @@
# Forge Server — James's Future Home
*Last updated: 2026-02-04*
**This IS my primary home.** Migration completed 2026-02-04. IP swapped to 192.168.1.16.
---
## Hardware
| Component | Details |
|-----------|---------|
| **Machine** | Lenovo ThinkServer TS140 (second unit) |
| **CPU** | Intel Core i7-6700K @ 4.0GHz (4c/8t, HyperThreading) |
| **RAM** | 64GB DDR4 |
| **GPU** | NVIDIA GeForce GTX 970 4GB (compute 5.2, Maxwell) |
| **Storage** | 469GB NVMe (28G used, 417G free = 7%) |
| **Network** | Single NIC, enp10s0, 192.168.1.16/22 (was 192.168.3.138 pre-swap) |
## OS & Kernel
- **OS:** Ubuntu 24.04.3 LTS (Server, headless)
- **Kernel:** 6.8.0-94-generic
- **Timezone:** America/New_York
## Network
- **IP:** 192.168.1.16 (swapped from old james on 2026-02-04)
- **Subnet:** 192.168.0.0/22
- **Gateway:** (standard home network)
- **DNS:** systemd-resolved (127.0.0.53, 127.0.0.54)
## Access
- **SSH:** Key auth only (password auth disabled, root login disabled)
- **Authorized keys:**
- `james@server` — James (primary)
- `johan@ubuntu2404` — Johan
- `claude@macbook` — Johan's Mac
- **Sudo:** Passwordless (`johan ALL=(ALL) NOPASSWD:ALL`)
- **Linger:** Enabled (user services persist without active SSH)
## Security
- **Firewall (UFW):** Active
- Rule 1: SSH (22/tcp) from anywhere
- Rule 2: All traffic from LAN (192.168.0.0/22)
- Default: deny incoming, allow outgoing
- **Fail2ban:** Active, monitoring sshd
- **Unattended upgrades:** Enabled
- **Sysctl hardening:** rp_filter, syncookies enabled
- **Disabled services:** snapd, ModemManager
- **Still enabled (fix later):** cloud-init
## GPU Stack
- **Driver:** nvidia-headless-580 + nvidia-utils-580 (v580.126.09)
- **CUDA:** 13.0 (reported by nvidia-smi)
- **Persistence mode:** Enabled
- **VRAM:** 4096 MiB total, ~2.2 GB used by OCR model
- **Temp:** ~44-51°C idle
- **CRITICAL:** GTX 970 = compute capability 5.2 (Maxwell)
- PyTorch ≤ 2.2.x only (newer drops sm_52 support)
- Must use CUDA 11.8 wheels
## Python Environment
- **Path:** `/home/johan/ocr-env/`
- **Python:** 3.12.3
- **Key packages:**
- PyTorch 2.2.2+cu118
- torchvision 0.17.2+cu118
- transformers 5.0.1.dev0 (installed from git, has GLM-OCR support)
- accelerate 1.12.0
- FastAPI 0.128.0
- uvicorn 0.40.0
- Pillow (for image processing)
- **Monkey-patch:** `transformers/utils/generic.py` patched for `torch.is_autocast_enabled()` compat with PyTorch 2.2
## Services
### OCR Service (GLM-OCR)
- **Port:** 8090 (0.0.0.0)
- **Service:** `systemctl --user status ocr-service`
- **Unit file:** `~/.config/systemd/user/ocr-service.service`
- **Source:** `/home/johan/ocr-service/server.py`
- **Model:** `/home/johan/models/glm-ocr` (zai-org/GLM-OCR, 2.47 GB)
**Endpoints:**
- `GET /health` — status, GPU memory, model info
- `POST /ocr` — single image (multipart: file + prompt + max_tokens)
- `POST /ocr/batch` — multiple images
**Performance:**
- Model load: ~1.4s (stays warm in VRAM)
- Small images: ~2s
- Full-page documents: ~25s (auto-resized to 1280px max)
- VRAM: 2.2 GB idle, peaks ~2.8 GB during inference
**Usage from james:**
```bash
# Health check
curl http://192.168.3.138:8090/health
# OCR a single image
curl -X POST http://192.168.3.138:8090/ocr -F "file=@image.png"
# OCR with custom prompt
curl -X POST http://192.168.3.138:8090/ocr -F "file=@doc.png" -F "prompt=Extract all text:"
```
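A small wrapper makes the single-image call one command. The `.text` field name in the response is an assumption — check `server.py` for the actual JSON shape before depending on it:

```bash
# ocr <image-file>: POST to the forge OCR service and print extracted text.
ocr() {
  [ -f "$1" ] || { echo "ocr: no such file: $1" >&2; return 1; }
  curl -sf -X POST "http://192.168.3.138:8090/ocr" -F "file=@$1" \
    | jq -r '.text // empty'
}
```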
### Ollama
- **Port:** 11434 (localhost only)
- **Version:** 0.15.4
- **Status:** Installed, waiting for v0.15.5 for native GLM-OCR support
- **Note:** Not currently used — Python/transformers handles OCR directly
## Migration Plan: james → forge
### What moves:
- [ ] OpenClaw gateway (port 18789)
- [ ] Signal-cli daemon (port 8080)
- [ ] Proton Mail Bridge (ports 1143, 1025)
- [ ] Mail Bridge / Message Center (port 8025)
- [ ] Message Bridge / WhatsApp (port 8030)
- [ ] Dashboard (port 9200)
- [ ] Headless Chrome (port 9223)
- [ ] All workspace files (`~/clawd/`)
- [ ] Document management system
- [ ] Cron jobs and heartbeat config
- [ ] SSH keys and configs
### What stays on james (or TBD):
- Legacy configs / backups
- SMB shares (maybe move too?)
### Pre-migration checklist:
- [ ] Install Node.js 22 on forge
- [ ] Install OpenClaw on forge
- [ ] Set up Signal-cli on forge
- [ ] Set up Proton Mail Bridge on forge
- [ ] Set up message-bridge (WhatsApp) on forge
- [ ] Set up headless Chrome on forge
- [ ] Copy workspace (`~/clawd/`) to forge
- [ ] Copy documents system to forge
- [ ] Test all services on forge before switchover
- [ ] Update DNS/Caddy to point to forge IP
- [ ] Update TOOLS.md, MEMORY.md with new IPs
- [ ] Verify GPU OCR still works alongside gateway
### Advantages of forge over james:
- **CPU:** i7-6700K (4c/8t, 4.0GHz) vs Xeon E3-1225v3 (4c/4t, 3.2GHz) — faster + HT
- **RAM:** 64GB vs 16GB — massive headroom
- **GPU:** GTX 970 for local ML inference
- **Storage:** 469GB NVMe vs 916GB SSD — less space but faster
- **Network:** Same /22 subnet, same LAN access to everything
### Risks:
- Storage is smaller (469G vs 916G) — may need to be selective about what moves
- GPU driver + gateway on same box — monitor for resource conflicts
- Signal-cli needs to re-link or transfer DB
- WhatsApp bridge needs QR re-link
---
## Directory Layout
```
/home/johan/
├── ocr-env/ # Python venv (PyTorch + transformers)
├── ocr-service/ # FastAPI OCR server
│ └── server.py
├── models/
│ └── glm-ocr/ # GLM-OCR weights (2.47 GB)
├── .config/
│ └── systemd/user/
│ └── ocr-service.service
└── .ssh/
└── authorized_keys
```
## Key Constraints
1. **PyTorch version locked to 2.2.x** — GTX 970 sm_52 not supported in newer
2. **CUDA 11.8 wheels only** — matches PyTorch 2.2 requirement
3. **Max image dimension 1280px** — larger causes excessive VRAM/time on GTX 970
4. **transformers from git** — stock pip version doesn't have GLM-OCR model class
5. **Monkey-patch required** — `torch.is_autocast_enabled()` API changed in PyTorch 2.4

View File

@@ -1,48 +1,11 @@
{
"lastBriefing": 1738685392,
"lastTechScan": 1738685392,
"lastChecks": {
"updateCheck": 1770044728,
"lastTechScan": 1769950263,
"stockApiResearch": 1769734200,
"memoryReview": 1769958109,
"workQueue": 1769948854,
"weeklyMemorySynthesis": 1769958109
},
"notes": "2026-02-01 14:01 UTC: Weekly memory synthesis complete. Reviewed Jan 26-Feb 1 daily logs. Updated MEMORY.md with: doc management system, Azure unblocked status, security headers added, K2.5/browser learnings, Flutter web limitations. Promoted config color hex rule from corrections.",
"lastEmailTriage": 1770066886,
"triageLog": {
"2026-02-01T12:01": {
"trashed": [
"Ancestry referral promo",
"VEEP Nutrition spam"
],
"archived": [
"inou verification",
"Openprovider invoice",
"MS Security x2",
"LinkedIn x2",
"Tailscale marketing",
"Fireworks marketing",
"Fireworks receipt"
],
"flagged": [
"IAHP brain-changers (Sophia-relevant)"
],
"kept": [
"Cryo-Cell renewal (proton)",
"IAHP for Sophia"
]
},
"2026-02-02T21:14": {
"archived": [
"Cigna claim processed for Sophia (routine notification)"
]
}
},
"johanAccountCleanup": {
"started": "2026-02-01T21:20:00Z",
"initialCount": 1000,
"status": "in_progress",
"note": "Massive backlog discovered - 1000+ emails going back to 2023. Initial triage pass done, critical items flagged."
},
"whatsappLastCount": 1
}
"briefing": "2026-02-04T11:09:52-05:00",
"techScan": "2026-02-04T11:09:52-05:00",
"email": "2026-02-04T13:12:00-05:00",
"calendar": null,
"weather": "2026-02-04T11:09:52-05:00"
}
}

View File

@ -0,0 +1,199 @@
# New Server Migration Plan (2026-02-03)
## Target: New ThinkServer TS140 — Ubuntu 24.04
**Current IP:** 192.168.3.134 (temporary)
**Final IP:** 192.168.1.16 (keep same — all configs, Tailscale, Caddy, etc. already point here)
**User:** johan
**Sudo password:** Helder06
---
## Phase 1: Base System (SSH access needed)
### 1.1 First Login
- [ ] SSH in, update system
- [ ] Set hostname to `james`
- [ ] Install essentials: curl, git, jq, htop, tmux, build-essential, pass, gnupg
### 1.2 GUI — Minimal Xfce (match current)
Current setup: **Xubuntu desktop (Xfce4 + LightDM + X11)**
- [ ] `apt install xubuntu-desktop-minimal lightdm xorg`
- [ ] Set LightDM as display manager
- [ ] Configure autologin for johan (headless Chrome needs a session)
- [ ] Disable screensaver/power management
### 1.3 GTX 970 — Inference Only (NOT display)
- [ ] Install NVIDIA driver (nvidia-driver-535 or latest for GTX 970)
- [ ] Configure Xorg to use ONLY Intel iGPU for display
- [ ] Write /etc/X11/xorg.conf pinning display to Intel
- [ ] Install CUDA toolkit (for inference)
- [ ] Verify: `nvidia-smi` shows GPU, display runs on Intel
### 1.4 Hardening
- [ ] UFW firewall (allow SSH, deny rest, open services as needed)
- [ ] Fail2ban for SSH
- [ ] Disable root login via SSH
- [ ] SSH key-only auth (disable password auth)
- [ ] Unattended security updates
---
## Phase 2: Services
### 2.1 Node.js + OpenClaw
- [ ] Install Node 22.x (nodesource)
- [ ] npm install -g openclaw
- [ ] Copy config: ~/.clawdbot/ (entire directory)
- [ ] Copy workspace: ~/clawd/ (entire directory)
- [ ] Set up systemd user service for openclaw-gateway
### 2.2 Chrome + Chromium
- [ ] Install Google Chrome (for relay extension)
- [ ] Install Chromium (headless automation)
- [ ] Copy Chrome profile (~/.config/google-chrome/)
### 2.3 Signal CLI
- [ ] Install signal-cli
- [ ] Copy data: ~/.local/share/signal-cli/
- [ ] Set up daemon service on port 8080
### 2.4 Proton Mail Bridge
- [ ] Install protonmail-bridge (headless)
- [ ] Copy GPG keyring (~/.gnupg/)
- [ ] Copy pass store (~/.password-store/)
- [ ] Set up systemd service
### 2.5 Mail Bridge / Message Center
- [ ] Copy source: ~/dev/mail-bridge/
- [ ] Copy data: ~/.message-center/
- [ ] Set up systemd service on port 8025
### 2.6 Message Bridge (WhatsApp)
- [ ] Copy source: ~/dev/message-bridge/
- [ ] Copy data: ~/.message-bridge/
- [ ] Set up systemd service on port 8030
- [ ] May need re-linking (QR scan)
### 2.7 James Dashboard
- [ ] Copy source: ~/dev/james-dashboard/
- [ ] Set up systemd service on port 9200
### 2.8 Samba
- [ ] Install samba
- [ ] Create shares: sophia, inou-dev, johan, docscan, scan-inbox
- [ ] Create SMB users: johan, scanner
### 2.9 Tailscale
- [ ] Install tailscale
- [ ] `tailscale up` (will need auth)
- [ ] Should get same Tailscale IP (100.123.216.65) if old node is removed first
### 2.10 Document System
- [ ] Copy ~/documents/ tree
- [ ] Set up docsys service
---
## Phase 3: AI / Inference
### 3.1 GLM-OCR (0.9B)
- [ ] Install Python venv for inference
- [ ] Install PyTorch with CUDA support
- [ ] Install transformers, accelerate
- [ ] Download glm-ocr model (Zhipu GLM-Edge-V 0.9B or similar)
- [ ] Create inference API service
- [ ] Test with sample document
---
## Phase 4: Data Migration
### 4.1 Copy Everything
From current server (192.168.1.16) to new (192.168.3.134):
```bash
# Core workspace
rsync -avz ~/clawd/ newbox:~/clawd/
# OpenClaw config + state
rsync -avz ~/.clawdbot/ newbox:~/.clawdbot/
# Dev projects
rsync -avz ~/dev/ newbox:~/dev/
# Documents
rsync -avz ~/documents/ newbox:~/documents/
# Signal data
rsync -avz ~/.local/share/signal-cli/ newbox:~/.local/share/signal-cli/
# Chrome profile
rsync -avz ~/.config/google-chrome/ newbox:~/.config/google-chrome/
# GPG + pass
rsync -avz ~/.gnupg/ newbox:~/.gnupg/
rsync -avz ~/.password-store/ newbox:~/.password-store/
# Sophia docs
rsync -avz ~/sophia/ newbox:~/sophia/
# Message bridge data
rsync -avz ~/.message-bridge/ newbox:~/.message-bridge/
rsync -avz ~/.message-center/ newbox:~/.message-center/
# Systemd user services
rsync -avz ~/.config/systemd/user/*.service newbox:~/.config/systemd/user/
# SSH keys
rsync -avz ~/.ssh/ newbox:~/.ssh/
# NPM global packages list
npm list -g --depth=0 > /tmp/npm-global-packages.txt
```
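After the bulk copy, a checksum dry run (`rsync -acn`) is a cheap way to confirm a tree made it over intact. The same technique, demonstrated on two local temp dirs so it runs anywhere:

```shell
# Copy a tree with checksums, then dry-run again: any listed path means drift.
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/a.txt"
rsync -ac "$src/" "$dst/"
drift=$(rsync -acn --out-format='%n' "$src/" "$dst/" | grep -v '^\./$' || true)
if [ -z "$drift" ]; then result="in sync"; else result="drift: $drift"; fi
echo "$result"
rm -rf "$src" "$dst"
```

Against the real boxes the same check would be e.g. `rsync -acn --out-format='%n' ~/clawd/ newbox:~/clawd/`.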
### 4.2 IP Swap
1. Shut down old server
2. Change new server IP from 192.168.3.134 → 192.168.1.16
3. Everything (Caddy, Tailscale, bookmarks, configs) just works
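Step 2 of the swap is a netplan change on Ubuntu 24.04. A minimal sketch; the interface name (`enp3s0`), gateway, and DNS (`192.168.1.1`) are placeholders, so confirm with `ip addr` and `ip route` first:

```yaml
# /etc/netplan/01-static.yaml — then: sudo netplan apply
network:
  version: 2
  ethernets:
    enp3s0:                          # placeholder: use the real NIC name
      dhcp4: false
      addresses: [192.168.1.16/22]   # /22 per the current LAN layout
      routes:
        - to: default
          via: 192.168.1.1           # placeholder gateway
      nameservers:
        addresses: [192.168.1.1]     # placeholder DNS
```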
---
## SSH Key Setup
Johan needs to add his SSH public key to the new machine:
```bash
# On your Mac/workstation, copy your public key to the new server:
ssh-copy-id -i ~/.ssh/id_ed25519.pub johan@192.168.3.134
# Or manually:
cat ~/.ssh/id_ed25519.pub | ssh johan@192.168.3.134 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'
```
The current authorized keys are:
- `ssh-ed25519 ...N7f johan@ubuntu2404` (Johan's key)
- `ssh-ed25519 ...fD39 claude@macbook` (Claude Code key)
Both need to be on the new machine.
---
## Current Services Inventory
| Service | Port | Status |
|---------|------|--------|
| OpenClaw Gateway | 18789 | running |
| Signal CLI daemon | 8080 | running |
| Proton Mail Bridge | 1143/1025 | running |
| Mail Bridge (MC) | 8025 | running |
| Message Bridge (WA) | 8030 | running |
| James Dashboard | 9200 | running |
| DocSys | ? | running |
| Chrome (headed) | - | for relay |
| Chromium (headless) | 9223 | on-demand |
## Crontab
```
*/5 * * * * /home/johan/clawd/scripts/k2-watchdog.sh
```

View File

@ -0,0 +1,38 @@
{
"date": "2026-02-04",
"timestamp": "2026-02-04T12:44:35-05:00",
"openclaw": {
"before": "2026.2.2",
"latest": "2026.2.2-3",
"after": "2026.2.2",
"updated": true
},
"claude_code": {
"before": "2.1.31",
"latest": "2.1.31",
"updated": false
},
"os": {
"available": 3,
"packages": [
{
"name": "python-apt-common",
"from": "2.7.7ubuntu5.1",
"to": "2.7.7ubuntu5.2"
},
{
"name": "python3-apt",
"from": "2.7.7ubuntu5.1",
"to": "2.7.7ubuntu5.2"
},
{
"name": "sosreport",
"from": "4.5.6-0ubuntu4",
"to": "4.9.2-0ubuntu0~24.04.1"
}
],
"updated": true,
"reboot_required": false
},
"gateway_restarted": false
}

View File

@ -1,73 +1,60 @@
#!/bin/bash
# Check and update Claude Code and inou MCP bundle
# Check for available updates — report only, don't install
set -e
echo "=== Claude Code Update Check ==="
echo "=== Claude Code ==="
CURRENT=$(claude --version 2>/dev/null | head -1 || echo "not installed")
echo "Current: $CURRENT"
LATEST=$(npm show @anthropic-ai/claude-code version 2>/dev/null || echo "unknown")
echo "Latest: $LATEST"
if [ "$CURRENT" != "$LATEST (Claude Code)" ] && [ "$LATEST" != "unknown" ]; then
echo "Updating Claude Code..."
npm update -g @anthropic-ai/claude-code
echo "Updated to: $(claude --version)"
CURRENT_VER=$(echo "$CURRENT" | sed 's/ (Claude Code)//')
if [ "$CURRENT_VER" = "$LATEST" ] || [ "$LATEST" = "unknown" ]; then
echo "✅ Up to date: $CURRENT"
else
echo "Claude Code is up to date"
echo "⬆️ Update available: $CURRENT_VER$LATEST"
echo " Run: npm update -g @anthropic-ai/claude-code"
fi
echo ""
echo "=== inou MCP Bundle Check ==="
MCPB_PATH="/home/johan/clawd/inou.mcpb"
echo "=== OpenClaw ==="
OC_CURRENT=$(openclaw --version 2>/dev/null | head -1 || echo "not installed")
OC_LATEST=$(npm show openclaw version 2>/dev/null || echo "unknown")
OC_CURRENT_VER=$(echo "$OC_CURRENT" | grep -oP '[\d.]+' | head -1 || echo "$OC_CURRENT")
if [ "$OC_CURRENT_VER" = "$OC_LATEST" ] || [ "$OC_LATEST" = "unknown" ]; then
echo "✅ Up to date: $OC_CURRENT"
else
echo "⬆️ Update available: $OC_CURRENT_VER$OC_LATEST"
echo " Run: npm update -g openclaw"
fi
echo ""
echo "=== inou MCP Bundle ==="
MCPB_EXTRACT="/home/johan/clawd/inou-mcp"
# Get current version
if [ -f "$MCPB_EXTRACT/manifest.json" ]; then
CURRENT_VER=$(grep -o '"version": *"[^"]*"' "$MCPB_EXTRACT/manifest.json" | cut -d'"' -f4)
echo "Current: $CURRENT_VER"
else
CURRENT_VER="not installed"
echo "Current: not installed"
fi
# Check if download URL is available
MCPB_URL="https://inou.com/download/inou.mcpb"
HTTP_STATUS=$(curl -sI -o /dev/null -w "%{http_code}" "$MCPB_URL" 2>/dev/null || echo "000")
if [ "$HTTP_STATUS" != "200" ]; then
echo "Latest: (download not available - HTTP $HTTP_STATUS)"
echo "Skipping inou MCP bundle update check"
exit 0
fi
# Download latest
TMP_MCPB="/tmp/inou-new.mcpb"
curl -sL -o "$TMP_MCPB" "$MCPB_URL"
# Verify it's a valid zip
if ! python3 -c "import zipfile; zipfile.ZipFile('$TMP_MCPB')" 2>/dev/null; then
echo "Downloaded file is not a valid zip - skipping"
rm -f "$TMP_MCPB"
exit 0
fi
# Extract version from downloaded
TMP_DIR=$(mktemp -d)
python3 -c "import zipfile; zipfile.ZipFile('$TMP_MCPB').extractall('$TMP_DIR')"
NEW_VER=$(grep -o '"version": *"[^"]*"' "$TMP_DIR/manifest.json" | cut -d'"' -f4)
echo "Latest: $NEW_VER"
if [ "$CURRENT_VER" != "$NEW_VER" ]; then
echo "Updating inou MCP bundle..."
mv "$TMP_MCPB" "$MCPB_PATH"
rm -rf "$MCPB_EXTRACT"
mkdir -p "$MCPB_EXTRACT"
python3 -c "import zipfile; zipfile.ZipFile('$MCPB_PATH').extractall('$MCPB_EXTRACT')"
echo "Updated to: $NEW_VER"
echo "Latest: (download not available)"
else
echo "inou MCP bundle is up to date"
TMP_MCPB="/tmp/inou-check.mcpb"
TMP_DIR=$(mktemp -d)
curl -sL -o "$TMP_MCPB" "$MCPB_URL"
if python3 -c "import zipfile; zipfile.ZipFile('$TMP_MCPB').extractall('$TMP_DIR')" 2>/dev/null; then
NEW_VER=$(grep -o '"version": *"[^"]*"' "$TMP_DIR/manifest.json" | cut -d'"' -f4)
if [ "$CURRENT_VER" = "$NEW_VER" ]; then
echo "✅ Up to date: $CURRENT_VER"
else
echo "⬆️ Update available: $CURRENT_VER$NEW_VER"
fi
fi
rm -rf "$TMP_DIR" "$TMP_MCPB" 2>/dev/null || true
fi
rm -rf "$TMP_DIR" "$TMP_MCPB" 2>/dev/null || true

View File

@ -38,10 +38,11 @@ else
TYPE="warning"
fi
# Update dashboard
# Update dashboard - include check time (no parentheses, dashboard strips those)
CHECKED=$(date +"%l:%M %p" | xargs)
curl -s -X POST "$DASHBOARD_URL/api/status" \
-H "Content-Type: application/json" \
-d "{\"key\": \"claude_weekly\", \"value\": \"${WEEKLY}% used (${REMAINING}% left)\", \"type\": \"${TYPE}\"}" > /dev/null
-d "{\"key\": \"claude_weekly\", \"value\": \"${WEEKLY}% used · ${CHECKED}\", \"type\": \"${TYPE}\"}" > /dev/null
echo "📊 Claude: ${WEEKLY}% weekly used (${REMAINING}% left)"
echo "📊 Claude: ${WEEKLY}% weekly used (${REMAINING}% left) · checked ${CHECKED}"
fi

scripts/daily-updates.sh Executable file
View File

@ -0,0 +1,172 @@
#!/bin/bash
# Daily auto-update: OpenClaw, Claude Code, OS packages
# Runs at 9:00 AM ET via systemd timer
# Logs results to memory/updates/ for morning briefing
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
WORKSPACE="$(dirname "$SCRIPT_DIR")"
LOG_DIR="$WORKSPACE/memory/updates"
DATE=$(date +%Y-%m-%d)
LOG="$LOG_DIR/$DATE.json"
mkdir -p "$LOG_DIR"
# Initialize log
cat > "$LOG" <<'EOF'
{
"date": "DATE_PLACEHOLDER",
"timestamp": "TS_PLACEHOLDER",
"openclaw": {},
"claude_code": {},
"os": {},
"gateway_restarted": false
}
EOF
sed -i "s/DATE_PLACEHOLDER/$DATE/" "$LOG"
sed -i "s/TS_PLACEHOLDER/$(date -Iseconds)/" "$LOG"
update_json() {
local key="$1" value="$2"
python3 << PYEOF
import json
with open("$LOG") as f: d = json.load(f)
keys = "$key".split(".")
obj = d
for k in keys[:-1]: obj = obj[k]
raw = '''$value'''
# Try parsing as JSON first (handles strings, arrays, numbers, booleans)
try:
obj[keys[-1]] = json.loads(raw)
except (json.JSONDecodeError, ValueError):
obj[keys[-1]] = raw
with open("$LOG", "w") as f: json.dump(d, f, indent=2)
PYEOF
}
echo "=== Daily Update Check: $DATE ==="
# --- OpenClaw ---
echo ""
echo "--- OpenClaw ---"
OC_BEFORE=$(openclaw --version 2>/dev/null | head -1 || echo "unknown")
OC_LATEST=$(npm show openclaw version 2>/dev/null || echo "unknown")
echo "Current: $OC_BEFORE | Latest: $OC_LATEST"
update_json "openclaw.before" "\"$OC_BEFORE\""
update_json "openclaw.latest" "\"$OC_LATEST\""
if [ "$OC_BEFORE" != "$OC_LATEST" ] && [ "$OC_LATEST" != "unknown" ]; then
echo "Updating OpenClaw..."
if npm update -g openclaw 2>&1; then
OC_AFTER=$(openclaw --version 2>/dev/null | head -1 || echo "unknown")
update_json "openclaw.after" "\"$OC_AFTER\""
update_json "openclaw.updated" "true"
echo "Updated: $OC_BEFORE$OC_AFTER"
else
update_json "openclaw.updated" "false"
update_json "openclaw.error" "\"npm update failed\""
echo "Update failed"
fi
else
update_json "openclaw.updated" "false"
echo "Up to date"
fi
# --- Claude Code ---
echo ""
echo "--- Claude Code ---"
CC_BEFORE=$(claude --version 2>/dev/null | sed 's/ (Claude Code)//' || echo "unknown")
CC_LATEST=$(npm show @anthropic-ai/claude-code version 2>/dev/null || echo "unknown")
echo "Current: $CC_BEFORE | Latest: $CC_LATEST"
update_json "claude_code.before" "\"$CC_BEFORE\""
update_json "claude_code.latest" "\"$CC_LATEST\""
if [ "$CC_BEFORE" != "$CC_LATEST" ] && [ "$CC_LATEST" != "unknown" ]; then
echo "Updating Claude Code..."
if npm update -g @anthropic-ai/claude-code 2>&1; then
CC_AFTER=$(claude --version 2>/dev/null | sed 's/ (Claude Code)//' || echo "unknown")
update_json "claude_code.after" "\"$CC_AFTER\""
update_json "claude_code.updated" "true"
echo "Updated: $CC_BEFORE$CC_AFTER"
else
update_json "claude_code.updated" "false"
update_json "claude_code.error" "\"npm update failed\""
echo "Update failed"
fi
else
update_json "claude_code.updated" "false"
echo "Up to date"
fi
# --- OS Packages ---
echo ""
echo "--- OS Packages ---"
# Capture upgradable list before updating
sudo apt-get update -qq 2>/dev/null
UPGRADABLE=$(apt list --upgradable 2>/dev/null | grep -v "^Listing" || true)
PKG_COUNT=$(echo "$UPGRADABLE" | grep -c . || true)  # grep -c already prints 0; `|| echo "0"` would emit a second line
update_json "os.available" "$PKG_COUNT"
if [ "$PKG_COUNT" -gt 0 ]; then
echo "$PKG_COUNT packages upgradable"
# Capture package names and versions
PKG_LIST=$(echo "$UPGRADABLE" | head -50 | python3 -c "
import sys, json
pkgs = []
for line in sys.stdin:
line = line.strip()
if not line: continue
parts = line.split('/')
if len(parts) >= 2:
name = parts[0]
rest = '/'.join(parts[1:])
# Extract version info
ver_parts = rest.split(' ')
new_ver = ver_parts[1] if len(ver_parts) > 1 else 'unknown'
old_ver = ver_parts[-1].strip('[]') if '[' in rest else 'unknown'
pkgs.append({'name': name, 'from': old_ver, 'to': new_ver})
print(json.dumps(pkgs))
" 2>/dev/null || echo "[]")
update_json "os.packages" "$PKG_LIST"
echo "Upgrading..."
if sudo DEBIAN_FRONTEND=noninteractive apt-get upgrade -y -qq 2>&1 | tail -5; then
update_json "os.updated" "true"
echo "OS packages updated"
# Check if reboot required
if [ -f /var/run/reboot-required ]; then
update_json "os.reboot_required" "true"
echo "⚠️ Reboot required!"
else
update_json "os.reboot_required" "false"
fi
else
update_json "os.updated" "false"
update_json "os.error" "\"apt upgrade failed\""
fi
else
update_json "os.updated" "false"
update_json "os.packages" "[]"
echo "All packages up to date"
fi
# --- Gateway Restart (only if OpenClaw updated) ---
echo ""
OC_UPDATED=$(python3 -c "import json; print(json.load(open('$LOG'))['openclaw'].get('updated', False))")
if [ "$OC_UPDATED" = "True" ]; then
echo "OpenClaw was updated — restarting gateway..."
systemctl --user restart openclaw-gateway
update_json "gateway_restarted" "true"
echo "Gateway restarted"
else
echo "No gateway restart needed"
fi
echo ""
echo "=== Update complete. Log: $LOG ==="

View File

@ -0,0 +1,32 @@
#!/bin/bash
# Fix: www.inou.com should 301 redirect to inou.com
# Problem: www serves content (HTTP 200) instead of redirecting,
# causing GSC "Alternate page with proper canonical tag" warnings
#
# Run on caddy server (192.168.0.2):
# ssh root@caddy 'bash -s' < fix-inou-www-redirect.sh
#
# Or manually add this block to /etc/caddy/Caddyfile:
echo "Adding www redirect to Caddyfile..."
# Check if www redirect already exists
if grep -q 'www.inou.com' /etc/caddy/Caddyfile; then
echo "www.inou.com block already exists in Caddyfile"
exit 0
fi
# Add the redirect block
cat >> /etc/caddy/Caddyfile << 'CADDY'
# Redirect www to non-www (fixes GSC indexing issue)
www.inou.com {
redir https://inou.com{uri} permanent
}
CADDY
# Reload Caddy
systemctl reload caddy
echo "Done! Verify: curl -I https://www.inou.com"
echo "Expected: HTTP/2 301, Location: https://inou.com/"

scripts/git-audit.sh Executable file
View File

@ -0,0 +1,62 @@
#!/bin/bash
# Git audit: check all projects in ~/dev/ for unpushed changes
# Reports anomalies only (unpushed commits, uncommitted changes, missing remotes)
# Run hourly via cron
DEV_DIR="/home/johan/dev"
ANOMALIES=""
for dir in "$DEV_DIR"/*/; do
[ ! -d "$dir/.git" ] && continue
repo=$(basename "$dir")
cd "$dir"
# Check for remote
if ! git remote get-url origin &>/dev/null; then
ANOMALIES+="❌ $repo: NO REMOTE — needs git@zurich.inou.com:$repo.git\n"
continue
fi
# Check for uncommitted changes
DIRTY=$(git status --porcelain 2>/dev/null)
if [ -n "$DIRTY" ]; then
COUNT=$(echo "$DIRTY" | wc -l)
ANOMALIES+="⚠️ $repo: $COUNT uncommitted file(s)\n"
fi
# Check for unpushed commits (fetch first to be accurate, with timeout)
timeout 10 git fetch origin --quiet 2>/dev/null
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
if [ -n "$BRANCH" ]; then
AHEAD=$(git rev-list --count "origin/$BRANCH..HEAD" 2>/dev/null)
if [ -n "$AHEAD" ] && [ "$AHEAD" -gt 0 ]; then
ANOMALIES+="🔺 $repo: $AHEAD unpushed commit(s) on $BRANCH\n"
fi
fi
done
# Also check ~/clawd/ workspace
cd /home/johan/clawd
if [ -d .git ]; then
DIRTY=$(git status --porcelain 2>/dev/null)
if [ -n "$DIRTY" ]; then
COUNT=$(echo "$DIRTY" | wc -l)
ANOMALIES+="⚠️ clawd: $COUNT uncommitted file(s)\n"
fi
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
if [ -n "$BRANCH" ] && git remote get-url origin &>/dev/null; then
timeout 10 git fetch origin --quiet 2>/dev/null
AHEAD=$(git rev-list --count "origin/$BRANCH..HEAD" 2>/dev/null)
if [ -n "$AHEAD" ] && [ "$AHEAD" -gt 0 ]; then
ANOMALIES+="🔺 clawd: $AHEAD unpushed commit(s) on $BRANCH\n"
fi
fi
fi
if [ -n "$ANOMALIES" ]; then
echo -e "Git audit found issues:\n$ANOMALIES"
exit 1
else
exit 0
fi

View File

@ -0,0 +1,124 @@
#!/bin/bash
# Phase 1: Base system setup for new James server
# Run as: ssh johan@192.168.3.134 'bash -s' < scripts/new-server-phase1.sh
set -e
# Cache sudo credentials once up front. A pipeline stored in a string ("echo pass | sudo -S")
# does not re-parse as a pipeline on expansion, so $SUDO is plain sudo afterwards.
echo "Helder06" | sudo -S -v
SUDO="sudo"
echo "=== Phase 1: Base System Setup ==="
# 1. Essentials
echo ">>> Installing essentials..."
$SUDO apt-get install -y -q \
curl wget git jq htop tmux build-essential \
pass gnupg2 \
sshpass rsync \
unzip zip \
python3-pip python3-venv \
net-tools dnsutils \
ufw fail2ban \
samba \
ffmpeg \
trash-cli \
apt-transport-https \
ca-certificates \
software-properties-common 2>&1 | tail -3
# 2. Minimal Xfce GUI (for headed Chrome)
echo ">>> Installing minimal Xfce + LightDM..."
$SUDO apt-get install -y -q \
xorg \
xfce4 \
xfce4-terminal \
lightdm \
lightdm-gtk-greeter \
dbus-x11 2>&1 | tail -3
# Set LightDM as default display manager
echo "/usr/sbin/lightdm" | $SUDO tee /etc/X11/default-display-manager > /dev/null
# Configure autologin
$SUDO mkdir -p /etc/lightdm/lightdm.conf.d
cat << 'AUTOLOGIN' | $SUDO tee /etc/lightdm/lightdm.conf.d/50-autologin.conf > /dev/null
[Seat:*]
autologin-user=johan
autologin-user-timeout=0
user-session=xfce
AUTOLOGIN
echo ">>> Disabling screensaver/power management..."
# Will be configured in Xfce session; install xfce4-power-manager
$SUDO apt-get install -y -q xfce4-power-manager 2>&1 | tail -1
# 3. NVIDIA Driver + CUDA (GTX 970 for inference)
echo ">>> Installing NVIDIA driver..."
$SUDO apt-get install -y -q nvidia-driver-535 nvidia-cuda-toolkit 2>&1 | tail -5
# 4. Configure Xorg to use Intel for display, leave NVIDIA for compute
echo ">>> Configuring Xorg for Intel display..."
cat << 'XORGCONF' | $SUDO tee /etc/X11/xorg.conf > /dev/null
# Intel iGPU for display output, NVIDIA GTX 970 for compute only
Section "Device"
Identifier "Intel"
Driver "modesetting"
BusID "PCI:0:2:0"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Intel"
EndSection
Section "ServerLayout"
Identifier "Layout0"
Screen "Screen0"
EndSection
XORGCONF
# 5. Hardening
echo ">>> Hardening SSH..."
$SUDO sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
$SUDO sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
$SUDO sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config
$SUDO systemctl restart ssh  # Ubuntu's OpenSSH unit is ssh.service, not sshd
echo ">>> Configuring UFW firewall..."
$SUDO ufw default deny incoming
$SUDO ufw default allow outgoing
$SUDO ufw allow ssh
$SUDO ufw allow from 192.168.0.0/16 to any # LAN access for all services
$SUDO ufw --force enable
echo ">>> Configuring fail2ban..."
cat << 'F2B' | $SUDO tee /etc/fail2ban/jail.local > /dev/null
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 5
bantime = 3600
F2B
$SUDO systemctl enable fail2ban
$SUDO systemctl start fail2ban
echo ">>> Enabling unattended security updates..."
$SUDO apt-get install -y -q unattended-upgrades
$SUDO dpkg-reconfigure -plow unattended-upgrades 2>/dev/null || true
# 6. Enable lingering for user services
echo ">>> Enabling systemd linger for johan..."
$SUDO loginctl enable-linger johan
# 7. Node.js 22
echo ">>> Installing Node.js 22..."
curl -fsSL https://deb.nodesource.com/setup_22.x | $SUDO bash - 2>&1 | tail -3
$SUDO apt-get install -y -q nodejs 2>&1 | tail -3
# 8. NPM global directory (no sudo needed)
mkdir -p ~/.npm-global
npm config set prefix ~/.npm-global
grep -q 'npm-global' ~/.bashrc || echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc
echo "=== Phase 1 Complete ==="
echo "Reboot recommended for NVIDIA driver + GUI"

scripts/service-health.sh Executable file
View File

@ -0,0 +1,87 @@
#!/bin/bash
# Service Health Check — updates dashboard status
# Run manually or via heartbeat
DASHBOARD="http://localhost:9200"
ALL_OK=true
check_service() {
local name="$1"
local check="$2"
local result
result=$(eval "$check" 2>&1)
local rc=$?
if [ $rc -eq 0 ]; then
echo "$name"
else
echo "$name: $result"
ALL_OK=false
fi
}
check_http() {
local name="$1"
local url="$2"
local code
code=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 5 "$url" 2>&1)
if [[ "$code" =~ ^(200|301|302)$ ]]; then
echo "✅ $name (HTTP $code)"
else
echo "❌ $name (HTTP $code)"
ALL_OK=false
fi
}
echo "=== Service Health Check ($(date -u +%Y-%m-%dT%H:%M:%SZ)) ==="
# Systemd services
check_service "Proton Bridge" "systemctl --user is-active protonmail-bridge"
check_service "Mail Bridge" "systemctl --user is-active mail-bridge"
check_service "Message Bridge" "systemctl --user is-active message-bridge"
# HTTP endpoints
check_http "Mail Bridge API" "http://localhost:8025/health"
check_http "Dashboard" "$DASHBOARD/api/tasks"
check_http "Zurich VPS" "https://zurich.inou.com"
check_http "inou.com" "https://inou.com"
# Disk space
DISK_PCT=$(df -h / | awk 'NR==2 {print $5}' | tr -d '%')
if [ "$DISK_PCT" -gt 85 ]; then
echo "⚠️ Disk: ${DISK_PCT}% used"
ALL_OK=false
else
echo "✅ Disk: ${DISK_PCT}% used"
fi
# Memory
MEM_PCT=$(free | awk '/Mem:/ {printf "%.0f", $3/$2*100}')
if [ "$MEM_PCT" -gt 90 ]; then
echo "⚠️ Memory: ${MEM_PCT}% used"
ALL_OK=false
else
echo "✅ Memory: ${MEM_PCT}% used"
fi
# Load average
LOAD=$(cat /proc/loadavg | awk '{print $1}')
CORES=$(nproc)
LOAD_INT=${LOAD%.*}
if [ "${LOAD_INT:-0}" -gt "$CORES" ]; then
echo "⚠️ Load: $LOAD ($CORES cores)"
ALL_OK=false
else
echo "✅ Load: $LOAD ($CORES cores)"
fi
echo ""
if $ALL_OK; then
echo "Overall: ALL SYSTEMS HEALTHY ✅"
# Update dashboard
curl -s -X POST "$DASHBOARD/api/status" -H 'Content-Type: application/json' \
-d "{\"key\":\"services\",\"value\":\"All services healthy ✅ (checked $(date -u +%H:%M) UTC)\",\"type\":\"info\"}" > /dev/null
else
echo "Overall: ISSUES DETECTED ⚠️"
curl -s -X POST "$DASHBOARD/api/status" -H 'Content-Type: application/json' \
-d "{\"key\":\"services\",\"value\":\"Issues detected ⚠️ — check logs\",\"type\":\"warning\"}" > /dev/null
fi

View File

@ -0,0 +1,7 @@
{
"version": 1,
"registry": "https://clawhub.ai",
"slug": "uptime-kuma",
"installedVersion": "1.0.0",
"installedAt": 1770116696999
}

View File

@ -0,0 +1,89 @@
---
name: uptime-kuma
description: Interact with Uptime Kuma monitoring server. Use for checking monitor status, adding/removing monitors, pausing/resuming checks, viewing heartbeat history. Triggers on mentions of Uptime Kuma, server monitoring, uptime checks, or service health monitoring.
---
# Uptime Kuma Skill
Manage Uptime Kuma monitors via CLI wrapper around the Socket.IO API.
## Setup
Requires `uptime-kuma-api` Python package:
```bash
pip install uptime-kuma-api
```
Environment variables (set in shell or Clawdbot config):
- `UPTIME_KUMA_URL` - Server URL (e.g., `http://localhost:3001`)
- `UPTIME_KUMA_USERNAME` - Login username
- `UPTIME_KUMA_PASSWORD` - Login password
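For example, in the shell (all three values are placeholders):

```shell
export UPTIME_KUMA_URL="http://localhost:3001"
export UPTIME_KUMA_USERNAME="admin"       # placeholder
export UPTIME_KUMA_PASSWORD="changeme"    # placeholder
```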
## Usage
Script location: `scripts/kuma.py`
### Commands
```bash
# Overall status summary
python scripts/kuma.py status
# List all monitors
python scripts/kuma.py list
python scripts/kuma.py list --json
# Get monitor details
python scripts/kuma.py get <id>
# Add monitors
python scripts/kuma.py add --name "My Site" --type http --url https://example.com
python scripts/kuma.py add --name "Server Ping" --type ping --hostname 192.168.1.1
python scripts/kuma.py add --name "SSH Port" --type port --hostname server.local --port 22
# Pause/resume monitors
python scripts/kuma.py pause <id>
python scripts/kuma.py resume <id>
# Delete monitor
python scripts/kuma.py delete <id>
# View heartbeat history
python scripts/kuma.py heartbeats <id> --hours 24
# List notification channels
python scripts/kuma.py notifications
```
### Monitor Types
- `http` - HTTP/HTTPS endpoint
- `ping` - ICMP ping
- `port` - TCP port check
- `keyword` - HTTP + keyword search
- `dns` - DNS resolution
- `docker` - Docker container
- `push` - Push-based (passive)
- `mysql`, `postgres`, `mongodb`, `redis` - Database checks
- `mqtt` - MQTT broker
- `group` - Monitor group
### Common Workflows
**Check what's down:**
```bash
python scripts/kuma.py status
python scripts/kuma.py list # Look for 🔴
```
**Add HTTP monitor with 30s interval:**
```bash
python scripts/kuma.py add --name "API Health" --type http --url https://api.example.com/health --interval 30
```
**Maintenance mode (pause all):**
```bash
for id in $(python scripts/kuma.py list --json | jq -r '.[].id'); do
python scripts/kuma.py pause $id
done
```

View File

@ -0,0 +1,276 @@
#!/usr/bin/env python3
"""
Uptime Kuma CLI wrapper using uptime-kuma-api library.
Requires: pip install uptime-kuma-api
Environment variables:
UPTIME_KUMA_URL - Uptime Kuma server URL (e.g., http://localhost:3001)
UPTIME_KUMA_USERNAME - Username for authentication
UPTIME_KUMA_PASSWORD - Password for authentication
"""
import argparse
import json
import os
import sys
from typing import Optional
try:
from uptime_kuma_api import UptimeKumaApi, MonitorType
except ImportError:
print("Error: uptime-kuma-api not installed. Run: pip install uptime-kuma-api", file=sys.stderr)
sys.exit(1)
def get_env_or_exit(name: str) -> str:
"""Get environment variable or exit with error."""
value = os.environ.get(name)
if not value:
print(f"Error: {name} environment variable not set", file=sys.stderr)
sys.exit(1)
return value
def get_api() -> UptimeKumaApi:
"""Create and authenticate API connection."""
url = get_env_or_exit("UPTIME_KUMA_URL")
username = get_env_or_exit("UPTIME_KUMA_USERNAME")
password = get_env_or_exit("UPTIME_KUMA_PASSWORD")
api = UptimeKumaApi(url)
api.login(username, password)
return api
def cmd_list_monitors(args):
"""List all monitors."""
with get_api() as api:
monitors = api.get_monitors()
if args.json:
print(json.dumps(monitors, indent=2, default=str))
else:
for m in monitors:
status = "🟢" if m.get("active") else "🔴"
print(f"{status} [{m['id']}] {m['name']} ({m['type']})")
def cmd_get_monitor(args):
"""Get details of a specific monitor."""
with get_api() as api:
monitor = api.get_monitor(args.id)
print(json.dumps(monitor, indent=2, default=str))
def cmd_add_monitor(args):
"""Add a new monitor."""
monitor_types = {
"http": MonitorType.HTTP,
"https": MonitorType.HTTP,
"port": MonitorType.PORT,
"ping": MonitorType.PING,
"keyword": MonitorType.KEYWORD,
"dns": MonitorType.DNS,
"docker": MonitorType.DOCKER,
"push": MonitorType.PUSH,
"steam": MonitorType.STEAM,
"gamedig": MonitorType.GAMEDIG,
"mqtt": MonitorType.MQTT,
"sqlserver": MonitorType.SQLSERVER,
"postgres": MonitorType.POSTGRES,
"mysql": MonitorType.MYSQL,
"mongodb": MonitorType.MONGODB,
"radius": MonitorType.RADIUS,
"redis": MonitorType.REDIS,
"group": MonitorType.GROUP,
}
monitor_type = monitor_types.get(args.type.lower())
if not monitor_type:
print(f"Error: Unknown monitor type '{args.type}'. Valid types: {', '.join(monitor_types.keys())}", file=sys.stderr)
sys.exit(1)
kwargs = {
"type": monitor_type,
"name": args.name,
}
if args.url:
kwargs["url"] = args.url
if args.hostname:
kwargs["hostname"] = args.hostname
if args.port:
kwargs["port"] = args.port
if args.interval:
kwargs["interval"] = args.interval
if args.keyword:
kwargs["keyword"] = args.keyword
with get_api() as api:
result = api.add_monitor(**kwargs)
print(json.dumps(result, indent=2, default=str))
def cmd_delete_monitor(args):
"""Delete a monitor."""
with get_api() as api:
result = api.delete_monitor(args.id)
print(json.dumps(result, indent=2, default=str))
def cmd_pause_monitor(args):
"""Pause a monitor."""
with get_api() as api:
result = api.pause_monitor(args.id)
print(json.dumps(result, indent=2, default=str))
def cmd_resume_monitor(args):
"""Resume a monitor."""
with get_api() as api:
result = api.resume_monitor(args.id)
print(json.dumps(result, indent=2, default=str))
def cmd_status(args):
"""Get overall status summary."""
with get_api() as api:
monitors = api.get_monitors()
total = len(monitors)
active = sum(1 for m in monitors if m.get("active"))
paused = total - active
# Get heartbeats for status
up = 0
down = 0
pending = 0
for m in monitors:
if not m.get("active"):
continue
beats = api.get_monitor_beats(m["id"], 1)
if beats:
status = beats[0].get("status")
if status == 1:
up += 1
elif status == 0:
down += 1
else:
pending += 1
else:
pending += 1
if args.json:
print(json.dumps({
"total": total,
"active": active,
"paused": paused,
"up": up,
"down": down,
"pending": pending
}, indent=2))
else:
print(f"📊 Uptime Kuma Status")
print(f" Total monitors: {total}")
print(f" Active: {active} | Paused: {paused}")
print(f" 🟢 Up: {up} | 🔴 Down: {down} | ⏳ Pending: {pending}")
def cmd_heartbeats(args):
"""Get recent heartbeats for a monitor."""
with get_api() as api:
beats = api.get_monitor_beats(args.id, args.hours)
if args.json:
print(json.dumps(beats, indent=2, default=str))
else:
            for b in beats[-10:]:  # show only the 10 most recent beats
                # Status codes: 1 = up, 0 = down, anything else = pending
                status = {1: "🟢", 0: "🔴"}.get(b.get("status"), "⏳")
                ts = b.get("time", "?")  # avoid shadowing the `time` module name
                ping = b.get("ping", "?")
                print(f"{status} {ts} - {ping}ms")


def cmd_notifications(args):
"""List notification channels."""
with get_api() as api:
notifications = api.get_notifications()
if args.json:
print(json.dumps(notifications, indent=2, default=str))
else:
for n in notifications:
            active = "✓" if n.get("active") else "✗"
print(f"[{active}] [{n['id']}] {n['name']} ({n['type']})")


def main():
parser = argparse.ArgumentParser(description="Uptime Kuma CLI")
subparsers = parser.add_subparsers(dest="command", help="Commands")
# list
p_list = subparsers.add_parser("list", help="List all monitors")
p_list.add_argument("--json", action="store_true", help="Output as JSON")
p_list.set_defaults(func=cmd_list_monitors)
# get
p_get = subparsers.add_parser("get", help="Get monitor details")
p_get.add_argument("id", type=int, help="Monitor ID")
p_get.set_defaults(func=cmd_get_monitor)
# add
p_add = subparsers.add_parser("add", help="Add a new monitor")
p_add.add_argument("--name", required=True, help="Monitor name")
p_add.add_argument("--type", required=True, help="Monitor type (http, ping, port, etc.)")
p_add.add_argument("--url", help="URL to monitor (for HTTP)")
p_add.add_argument("--hostname", help="Hostname (for ping/port)")
p_add.add_argument("--port", type=int, help="Port number")
p_add.add_argument("--interval", type=int, default=60, help="Check interval in seconds")
p_add.add_argument("--keyword", help="Keyword to search (for keyword type)")
p_add.set_defaults(func=cmd_add_monitor)
# delete
p_del = subparsers.add_parser("delete", help="Delete a monitor")
p_del.add_argument("id", type=int, help="Monitor ID")
p_del.set_defaults(func=cmd_delete_monitor)
# pause
p_pause = subparsers.add_parser("pause", help="Pause a monitor")
p_pause.add_argument("id", type=int, help="Monitor ID")
p_pause.set_defaults(func=cmd_pause_monitor)
# resume
p_resume = subparsers.add_parser("resume", help="Resume a monitor")
p_resume.add_argument("id", type=int, help="Monitor ID")
p_resume.set_defaults(func=cmd_resume_monitor)
# status
p_status = subparsers.add_parser("status", help="Get overall status")
p_status.add_argument("--json", action="store_true", help="Output as JSON")
p_status.set_defaults(func=cmd_status)
# heartbeats
p_hb = subparsers.add_parser("heartbeats", help="Get heartbeats for a monitor")
p_hb.add_argument("id", type=int, help="Monitor ID")
p_hb.add_argument("--hours", type=int, default=24, help="Hours of history")
p_hb.add_argument("--json", action="store_true", help="Output as JSON")
p_hb.set_defaults(func=cmd_heartbeats)
# notifications
p_notif = subparsers.add_parser("notifications", help="List notification channels")
p_notif.add_argument("--json", action="store_true", help="Output as JSON")
p_notif.set_defaults(func=cmd_notifications)
args = parser.parse_args()
if not args.command:
parser.print_help()
sys.exit(1)
try:
args.func(args)
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
sys.exit(1)


if __name__ == "__main__":
main()
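
# --- Example invocations (illustrative only; the script filename, monitor ID,
# and URL below are hypothetical -- substitute your own values) ---
#   ./uptime-kuma.py list
#   ./uptime-kuma.py add --name "Homepage" --type http --url https://example.com
#   ./uptime-kuma.py status --json
#   ./uptime-kuma.py heartbeats 3 --hours 6
#   ./uptime-kuma.py pause 3
#   ./uptime-kuma.py resume 3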