diff --git a/.clawdhub/lock.json b/.clawdhub/lock.json index caf00ae..f6bf6cb 100644 --- a/.clawdhub/lock.json +++ b/.clawdhub/lock.json @@ -4,6 +4,10 @@ "homeassistant": { "version": "1.0.0", "installedAt": 1769414031023 + }, + "uptime-kuma": { + "version": "1.0.0", + "installedAt": 1770116697002 } } } diff --git a/AGENTS.md b/AGENTS.md index b431f57..d277b55 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -253,6 +253,20 @@ Use subagents liberally: - Keep main context window clean for conversation - For complex problems, throw more compute at it +## ๐Ÿ”’ Git & Backup Rules + +**Every new project gets a Zurich remote.** No exceptions. +1. Create bare repo: `ssh root@zurich.inou.com "cd /home/git && git init --bare .git && chown -R git:git .git"` +2. Add remote: `git remote add origin git@zurich.inou.com:.git` +3. Push immediately + +**Hourly git audit** (`scripts/git-audit.sh` via cron at :30) checks all `~/dev/` repos for: +- Missing remotes โ†’ alert immediately +- Uncommitted changes โ†’ report +- Unpushed commits โ†’ report + +Only anomalies are reported. Silence = healthy. + ## ๐Ÿ”„ Continuous Improvement **"It's not bad to make a mistake. It is bad to not learn from them."** diff --git a/HEARTBEAT.md b/HEARTBEAT.md index 782e1fb..991fb50 100644 --- a/HEARTBEAT.md +++ b/HEARTBEAT.md @@ -20,9 +20,18 @@ At 10:30pm+ he's WORKING, not sleeping. Don't assume "late night = quiet time." Check `memory/heartbeat-state.json` for last run times. Only run if >24h since last check. -### Update Check (daily) -Run `/home/johan/clawd/scripts/check-updates.sh` to check for Claude Code and inou MCP bundle updates. -Update `memory/heartbeat-state.json` with `lastUpdateCheck` timestamp after running. +### Update Summary (in briefings) +**Automated at 9:00 AM ET** via systemd timer (`daily-updates.timer`). +Updates OpenClaw, Claude Code, and OS packages. Only restarts gateway if OpenClaw changed. +Log: `memory/updates/YYYY-MM-DD.json` + +**In every morning briefing:** +1. 
Read today's update log from `memory/updates/` +2. Summarize what was updated (versions, package names) +3. For OpenClaw/Claude Code updates: fetch release notes from GitHub and include key changes +4. Flag if reboot required (kernel updates) + +**No manual check needed** โ€” the timer handles it. I just read the log and report. ### Memory Review (daily) Quick scan of today's conversations for durable facts worth remembering: @@ -48,6 +57,17 @@ After generating any briefing (morning, afternoon, or ad-hoc): This is NON-NEGOTIABLE. Johan expects briefings on the dashboard. +### Weekly Docker & HAOS Update (Sundays) +Check and update Docker containers on 192.168.1.253 and HAOS on 192.168.1.252: +1. **HAOS:** Check `update.home_assistant_operating_system_update` โ€” install if available +2. **Docker (253):** For each service in `/home/johan/services/`: + - `docker compose pull` โ†’ `docker compose up -d` โ†’ `docker image prune -f` +3. Report what was updated in the weekly briefing + +Services: immich, clickhouse, jellyfin, signal, qbittorrent-vpn +**qbittorrent-vpn: PULL ONLY, do NOT start.** Johan uses it on-demand. +SSH: `ssh johan@192.168.1.253` + ### Weekly Memory Synthesis (Sundays) Deeper review: 1. Read through `memory/YYYY-MM-DD.md` files from the past week @@ -174,8 +194,19 @@ No heartbeat polling needed. ## Message Center (check every heartbeat) -**Goal:** Process all new messages (email + WhatsApp) from MC. +**Goal:** Process all new messages (email + WhatsApp) from MC. Also catch any stragglers missed by webhook. +### Inbox Cleanup (every heartbeat) +Webhook handles real-time, but messages can slip through (restarts, migration, webhook downtime). 
+Always check BOTH accounts for unprocessed mail: +```bash +curl -s "http://localhost:8025/messages?source=tj_jongsma_me" | jq 'length' +curl -s "http://localhost:8025/messages?source=johan_jongsma_me" | jq 'length' +``` +**Don't filter by `seen`** โ€” messages can be marked seen (fetched) but never actioned (orphaned by a crash/restart). Anything still returned by the listing endpoint is still in the inbox and needs triage. +If anything's sitting there, triage it per `memory/email-triage.md`. Don't wait for the webhook. + +### New Messages Check **Check:** ```bash curl -s "http://localhost:8025/messages/new" | jq 'length' diff --git a/MEMORY.md b/MEMORY.md index 0902e49..944234b 100644 --- a/MEMORY.md +++ b/MEMORY.md @@ -92,13 +92,16 @@ I do NOT ask for permission or approval. I use my judgment. I only escalate if s ## Infrastructure -### Server: james (192.168.1.16) -- Ubuntu 24.04 LTS -- OpenClaw gateway running on port 18789 -- Signal-cli daemon on port 8080 (bound to 0.0.0.0 for LAN access) -- Mail Bridge (IMAP API) on port 8025 -- Web UI: `https://james.jongsma.me` (via Caddy on Pi, locked to LAN + public IP) +### Server: forge (192.168.1.16) โ€” MIGRATED 2026-02-04 +- **Hardware:** i7-6700K / 64GB RAM / GTX 970 4GB / 469GB NVMe +- Ubuntu 24.04.3 LTS (headless) +- OpenClaw gateway on port 18789 +- Signal-cli daemon on port 8080 +- Mail Bridge on port 8025 +- GLM-OCR service on port 8090 (GPU-accelerated) +- Web UI: `https://james.jongsma.me` (via Caddy) - SMB share: `\\192.168.1.16\sophia` โ†’ `/home/johan/sophia/` +- Full details: `memory/forge-server.md` ### Mail System (2026-01-31) - **Proton Bridge:** Headless on localhost:1143 (IMAP), localhost:1025 (SMTP) @@ -149,8 +152,38 @@ I do NOT ask for permission or approval. I use my judgment. 
I only escalate if s - Bedroom 1 has 3-button switch controlling cans via automations - **Fixed 2026-01-26:** `automation.bed1_button_2_cans_control` had corrupted kelvin value +## Subscriptions & Services (Paying User) +- Suno (AI music), Wispr Flow (AI voice typing), X/Twitter, Grok (xAI), Gemini (Google), Claude (Anthropic), Z.ai (Zhipu), Fireworks, Spotify +- Possibly more โ€” if a payment receipt appears from a service, treat it as a known subscription +- **Product updates/launches** from these = relevant news, keep or flag +- **Payment receipts** = archive (reference value) +- **Generic marketing/upsells** from these = still trash (they all send crap too) +- **Key distinction:** "We launched X feature" = keep. "Upgrade to Pro!" when already paying = trash. +- **Amazon:** Orders โ†’ Shopping folder. Product recalls, credits โ†’ keep. Everything else (promos, recs, shipping updates after tracking) โ†’ trash. +- **Archive sparingly** โ€” Archive = things worth finding again. Most notifications have zero future value โ†’ trash. 
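The keep/archive/trash rules above reduce to a pattern match on sender/subject. A minimal sketch in shell — the patterns and the `classify` helper are illustrative assumptions, not an actual script in this repo; real triage follows `memory/email-triage.md`:

```bash
# Hypothetical sketch of the subscription-email heuristic above.
# "We launched X" = keep, receipts = archive, upsells = trash.
classify() {
  local sender="$1" subject="$2"
  case "$subject" in
    *launched*|*[Ii]ntroducing*|*"new feature"*) echo keep ;;    # product updates
    *[Rr]eceipt*|*[Ii]nvoice*|*payment*)         echo archive ;; # reference value
    *"Upgrade to Pro"*|*"% off"*|*[Ss]ale*)      echo trash ;;   # marketing/upsells
    *)                                           echo review ;;  # needs a human look
  esac
}

classify "suno.com" "We launched v4 stem splitting"   # -> keep
```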
+ ## Preferences +### OCR +- **NO TESSERACT** โ€” Johan does not trust it at all +- **GLM-OCR** (0.9B, Zhipu) โ€” sole OCR engine going forward +- **Medical docs stay local** โ€” dedicated TS140 + GTX 970, never hit an API +- **Fireworks watch:** Checking daily for hosted GLM-OCR (non-sensitive docs) +- **OCR Service LIVE** on forge: `http://192.168.3.138:8090/ocr` (see `memory/forge-server.md`) + +### Forge = Home (migrated 2026-02-04) +- **forge IS my primary server** โ€” now at 192.168.1.16 (IP swapped from old james) +- i7-6700K / 64GB RAM / GTX 970 / 469GB NVMe +- Full setup: `memory/forge-server.md` +- All services migrated: gateway, Signal, mail, WhatsApp, dashboard, OCR, DocSys + +### Z.ai (Zhipu) โ€” Coding Model Provider +- OpenAI-compatible API for Claude Code +- Base URL: `https://api.z.ai/api/coding/paas/v4` +- Models: GLM-4.7 (heavy coding), GLM-4.5-air (light/fast) +- Johan has developer account (lite tier) +- Use for: coding subagents, to save Anthropic tokens + ### Research - **Use Grokipedia instead of Wikipedia** โ€” Johan's preference for lookups & Lessons Learned diff --git a/TOOLS.md b/TOOLS.md index f560eda..d4a605e 100644 --- a/TOOLS.md +++ b/TOOLS.md @@ -90,6 +90,37 @@ Things like: - Keep briefing history for reference - Update Claude usage status: `scripts/claude-usage-check.sh` (auto-updates dashboard) +### Forge Server (GPU Compute) +- **Hostname:** forge +- **IP:** 192.168.3.138 +- **CPU:** Intel i7-6700K @ 4.0GHz (4c/8t) +- **RAM:** 64GB DDR4 +- **GPU:** NVIDIA GTX 970 4GB (Driver 580.126.09, CUDA 13.0) +- **Storage:** 469GB NVMe +- **OS:** Ubuntu 24.04.1 LTS (Server, headless) +- **Kernel:** 6.8.0-94-generic +- **Purpose:** OCR (GLM-OCR), ML inference, GPU compute +- **Ollama:** Installed (0.15.4), waiting for 0.15.5 for GLM-OCR model +- **Python:** /home/johan/ocr-env/ (venv with PyTorch 2.2 + CUDA 11.8) +- **SSH:** Key auth only (password auth disabled) +- **Firewall:** UFW active, SSH + LAN (192.168.0.0/22) allowed +- **Owner:** 
James โšก (full autonomy) + +**OCR Service (GLM-OCR):** +- **URL:** http://192.168.3.138:8090 +- **Service:** `systemctl --user status ocr-service` (on forge) +- **Source:** `/home/johan/ocr-service/server.py` +- **Model:** `/home/johan/models/glm-ocr` (zai-org/GLM-OCR, 2.47 GB) +- **VRAM usage:** 2.2 GB idle (model loaded), peaks ~2.8 GB during inference +- **Performance:** ~2s small images, ~25s full-page documents (auto-resized to 1280px max) +- **Endpoints:** + - `GET /health` โ€” status + GPU memory + - `POST /ocr` โ€” single image OCR (multipart: file + prompt + max_tokens) + - `POST /ocr/batch` โ€” multi-image OCR +- **Auto-resize:** Images capped at 1280px longest edge (prevents OOM on GTX 970) +- **Usage from james:** `curl -X POST http://192.168.3.138:8090/ocr -F "file=@image.png"` +- **Patched env:** transformers 5.0.1.dev0 (from git) + monkey-patch for PyTorch 2.2 compat + ### Home Network - **Public IP:** 47.197.93.62 (not static, but rarely changes) - **Location:** St. Petersburg, Florida diff --git a/config/zai-credentials.json b/config/zai-credentials.json new file mode 100644 index 0000000..7c22a9d --- /dev/null +++ b/config/zai-credentials.json @@ -0,0 +1,4 @@ +{ + "api_key": "fc7dfe563f224f3eb5c66f85d9ef9a60.VZXTQN0elRrO6qcr", + "base_url": "https://api.z.ai" +} diff --git a/docs/update-plan.md b/docs/update-plan.md new file mode 100644 index 0000000..e112c62 --- /dev/null +++ b/docs/update-plan.md @@ -0,0 +1,85 @@ +# Update Plan: Claude Code & OpenClaw + +*Created: 2026-02-04* + +## Principles + +1. **Never update blind** โ€” check what changed before applying +2. **Always be able to rollback** โ€” save current version before updating +3. **Verify after update** โ€” gateway must start and respond before declaring success +4. **Don't update during active work** โ€” schedule for low-activity windows +5. 
**James owns this** โ€” no manual intervention from Johan unless something breaks badly + +## Schedule + +**When:** Daily at 5:30 AM ET (during Johan's second sleep block) +- Low-activity window, no active conversations +- If update fails, James has time to rollback before Johan wakes (~9-10 AM) + +**Frequency:** Check daily, apply only when new versions exist. + +## Update Process (automated script) + +### Step 1: Check for updates (no changes yet) +``` +- Read current versions (openclaw --version, claude --version) +- Check npm registry for latest versions +- If both current โ†’ exit (nothing to do) +- Log what's available +``` + +### Step 2: Snapshot current state +``` +- Record current versions to rollback file +- Backup gateway config (~/.openclaw/openclaw.json) +- Verify gateway is healthy BEFORE updating (curl health endpoint) +``` + +### Step 3: Update OpenClaw (if new version) +``` +- npm i -g openclaw@latest +- Run: openclaw doctor (migrations, config fixes) +- Restart gateway: systemctl --user restart openclaw-gateway +- Wait 10 seconds +- Health check: openclaw health / curl gateway +- If FAIL โ†’ rollback immediately (npm i -g openclaw@) +``` + +### Step 4: Update Claude Code (if new version) +``` +- npm i -g @anthropic-ai/claude-code@latest +- Verify: claude --version +- (No restart needed โ€” Claude Code is invoked per-session) +``` + +### Step 5: Report +``` +- Log results to memory/update-log.md +- Update dashboard status API +- If anything failed: create task for Johan +``` + +### Rollback +``` +- npm i -g openclaw@ +- openclaw doctor +- systemctl --user restart openclaw-gateway +- Verify health +``` + +## What the script does NOT do +- Update during active conversations +- Update if the gateway is unhealthy to begin with +- Continue if OpenClaw update fails (stops, rollback, alert) +- Update both at once if OpenClaw fails (Claude Code update skipped) + +## Files +- **Script:** `~/clawd/scripts/safe-update.sh` +- **Rollback file:** 
`~/clawd/data/update-rollback.json` +- **Update log:** `~/clawd/memory/update-log.md` +- **Cron:** 5:30 AM ET daily + +## Open Questions for Johan +1. **Auto-apply or approve?** Script can either apply automatically at 5:30 AM, or just notify and wait for approval. Recommendation: auto-apply with rollback. +2. **Channel:** Stay on `stable` or use `beta`? Currently on stable (default). +3. **Hold on major version bumps?** e.g., if OpenClaw goes from 2026.2.x to 2026.3.x, pause and ask first? diff --git a/memory/2026-02-03.md b/memory/2026-02-03.md new file mode 100644 index 0000000..590e80f --- /dev/null +++ b/memory/2026-02-03.md @@ -0,0 +1,45 @@ +# 2026-02-03 (Monday) + +## GLM-OCR Watch (added 04:00 UTC) +- **Model:** GLM-OCR (0.9B params, SOTA document understanding) +- **Source:** https://x.com/Zai_org/status/2018520052941656385 +- **Task:** Check Fireworks daily for availability +- **Why:** Document pipeline + inou OCR +- **Weights:** https://huggingface.co/THUDM/GLM-OCR (when available) + +## Dedicated OCR Box (decision ~04:00 UTC) +- **Hardware:** Second TS140 + GTX 970 (4GB VRAM) +- **Purpose:** On-premise OCR for medical & sensitive documents +- **Model:** GLM-OCR (0.9B params) โ€” sole OCR engine, NO Tesseract +- **Why local:** Medical docs (Sophia, insurance, etc.) 
never leave the house +- **Architecture:** scanner โ†’ james inbox โ†’ OCR box (CUDA) โ†’ structured text +- **Status:** Johan setting up the hardware +- **Also:** Checking Fireworks daily for hosted GLM-OCR (for non-sensitive docs) +- **Johan's preference:** Does NOT trust Tesseract โ€” noted in MEMORY.md + +## Work Queue (6am ET cron, 11:00 UTC) + +**Task Review:** +- Azure Files Backup (high, in-progress) โ€” BLOCKED on `az login` MFA +- inou.com indexing issue (medium, in-progress) โ€” BLOCKED on caddy SSH access + +**Work Done:** +- Cleaned dashboard: removed 11 completed tasks +- Trashed spam email (H8 Collection Valentine's promo) +- Archived 3 WhatsApp messages (Oscar x2 in Dutch, Tanya media) +- Created `scripts/service-health.sh` โ€” comprehensive health check script +- Created `scripts/fix-inou-www-redirect.sh` โ€” ready-to-apply caddy fix for Johan +- Applied security patches on Zurich VPS (Docker, libc, kernel) +- Rebooted Zurich VPS โ€” now running kernel 6.8.0-90-generic (was 6.8.0-39) +- Added 3 new Uptime Kuma monitors (Zurich VPS, inou DNS, inou SSL) +- Installed uptime-kuma skill from ClawdHub +- All services healthy (Proton Bridge, Mail Bridge, Message Bridge) +- Claude usage: 68% weekly +- OpenClaw update available: 2026.1.30 โ†’ 2026.2.1 (not applied, needs Johan's approval) +- GLM-OCR: still not on Fireworks +- Claude Code: up to date (2.1.29) +- Running nuclei security scan on inou.com + +### MC Triage (00:02 UTC / 7:02pm ET) +- **Pediatric Home Service shipping** (order #75175) โ€” 4 boxes of Sophia's supplies shipped Feb 3. Archived. +- **Diana Geegan (Keller Williams)** โ€” IMPORTANT real estate email re: selling 851 Brightwaters ($6.35M) and buying 801 Brightwaters. Net at close estimate: $5,944,200 (short of Johan's $6.2M goal by ~$170K). Diana offering to reduce her buy-side fee by ~$85K to help, bringing net to ~$6,029,200. Still ~$171K short. She's asking how Johan wants to proceed. Attachments saved to documents/inbox. 
**Needs Johan's decision.** diff --git a/memory/2026-02-04.md b/memory/2026-02-04.md new file mode 100644 index 0000000..fc93ff1 --- /dev/null +++ b/memory/2026-02-04.md @@ -0,0 +1,144 @@ +# 2026-02-04 (Tuesday) + +## Work Queue (8pm ET cron) + +### Azure Files Backup โ€” Major Progress +Worked the evening queue. Both James-owned tasks checked: + +1. **Azure Backup** (high) โ€” Implemented three missing pieces: + - **Postgres job queue** (`pkg/db/queue.go`) โ€” Full SKIP LOCKED implementation for concurrent workers. Enqueue, claim, complete, fail, heartbeat, requeue, stale cleanup, purge, stats. + - **Filesystem object storage** (`pkg/storage/fs.go`) โ€” Local dev backend for object storage. Atomic writes (temp+rename), recursive listing, disk usage. Also InMemoryClient for unit tests. + - **Wired up backup-worker** (`cmd/backup-worker/main.go`) โ€” Previously a skeleton. Now fully connects to Postgres, initializes FS storage, creates chunk/metadata stores, registers all handlers, processes jobs. Includes stale job cleanup goroutine. + - Added `config.example.yaml` + - Added integration tests: ChunkStore+FS dedup, MetadataStore+FS round-trip + - Updated README with architecture docs and local dev workflow + - All 31 tests passing, `go vet` clean + - Commits: 0645037, 74f1b8a โ€” pushed to zurich + +2. **inou.com indexing** (medium) โ€” Still blocked on SSH to caddy (Tailscale auth required). Fix script ready, needs Johan. + +### System Health Check +- All services healthy (Proton Bridge, Mail Bridge, Message Bridge, Dashboard, Uptime Kuma) +- Disk: 8% used (65G/916G) +- No new messages, inbox empty + +## Forge Server โ€” GLM-OCR Service Live! 
+ +- GPU power fixed, NVIDIA driver working: GTX 970 @ 44ยฐC idle +- **GLM-OCR deployed as HTTP service** on port 8090 + - Model: zai-org/GLM-OCR (2.47 GB), loaded in VRAM at startup (2.21 GB) + - FastAPI + uvicorn, systemd user service (`ocr-service`) + - Auto-resize images to 1280px max (prevents OOM on 3.9GB GTX 970) + - Performance: ~2s small images, ~25s full-page docs +- **Real document test:** Parkshore Grill receipt โ€” OCR'd perfectly (every line item, prices, card details, tip, signature) +- Environment: PyTorch 2.2.2+cu118, transformers 5.0.1.dev0 (patched for sm_52 compat) +- `loginctl enable-linger` enabled for persistent user services +- Document pipeline: james โ†’ `curl POST http://192.168.3.138:8090/ocr` โ†’ structured text + +## ๐Ÿ  MIGRATION COMPLETE: james โ†’ forge + +### What happened +- Johan gave full autonomy over forge (192.168.3.138), said "it is your new home" +- Pre-moved ALL data while Johan was with Sophia: + - ~/dev (1.4G), ~/clawd (133M), ~/sophia (9.2G), ~/documents (5.8M) + - ~/.clawdbot (420M) โ€” agents, tools, signal-cli binary + - ~/.local/share/signal-cli โ€” registration data + - ~/.local/share/protonmail (18G!) 
โ€” full IMAP cache (gluon backend) + - ~/.config/protonmail โ€” bridge config + - ~/.message-bridge (WhatsApp DB), ~/.message-center, ~/.config/bird + - ~/.password-store, GPG keys +- Installed on forge: Node 22, Go 1.23.6, Java 21, Claude Code 2.1.31 +- Installed: OpenClaw 2026.2.2, Playwright Chromium, bird, clawdhub, gemini-cli +- Installed: Proton Mail Bridge, Samba, pass +- Rebuilt all 4 Go binaries natively (dashboard, message-center, message-bridge, docsys) +- Wrote comprehensive migration doc: `~/clawd/migration/MIGRATE-JAMES-TO-FORGE.md` +- Claude Code on forge did the "brain transplant" (clawdbot.json, systemd services) + +### Post-migration status +- **IP swapped:** forge is now 192.168.1.16 (old james moved or offline) +- **All services running:** OpenClaw, Proton Bridge, Mail Bridge, Message Bridge, Dashboard, DocSys, OCR +- **WhatsApp:** Connected without QR re-link! DB transfer worked perfectly +- **Signal:** Needed manual restart of signal-cli after migration, then worked +- **OCR service:** Still running, GPU warm (2.2 GB VRAM, 42ยฐC) + +### Hardware upgrade (forge vs old james) +- CPU: i7-6700K 4c/8t 4.0GHz (was Xeon E3-1225v3 4c/4t 3.2GHz) +- RAM: 64GB (was 16GB) โ€” 4x more +- GPU: GTX 970 4GB for local ML (old james had no GPU) +- Storage: 469GB NVMe (old was 916GB SSD โ€” less space but faster) + +## Z.ai (Zhipu) for Coding โ€” In Progress +- Johan has Z.ai developer account (lite tier) +- Z.ai is OpenAI-compatible, can power Claude Code +- Base URL: `https://api.z.ai/api/coding/paas/v4` +- Models: GLM-4.7 (heavy), GLM-4.5-air (light) +- Claude Code settings: override ANTHROPIC_DEFAULT_*_MODEL env vars +- **Waiting for:** Johan to provide Z.ai API key +- **Purpose:** Route coding subagents through Z.ai to save Anthropic tokens + +## Docker Updates on 192.168.1.253 (1:13 PM) +All 5 services pulled and recreated: +- **Immich** (server + ML): Updated, healthy +- **ClickHouse**: Updated, running +- **Jellyfin**: Updated (initial pull killed, 
retried successfully), health starting +- **Signal CLI REST API**: Updated, healthy +- **qBittorrent + Gluetun**: Updated, running +- **qb-port-updater**: Pre-existing issue โ€” missing QBITTORRENT_USER env var (restart loop) + +Old images pruned: 691.6MB reclaimed. + +**HAOS**: Updated 16.3 โ†’ 17.0 โœ… + +## Email Triage (1:10 PM) +Processed ~18 messages from tj@ inbox: +- **Trashed (11):** Zillow ร—3, Amazon delivery/shipping ร—5, UPS ร—2, Glamuse, IBKR, Starlink, SunPass, Schwab +- **Archived (5):** Amazon order (Chlorophyll), GrazeCart, Valley bank alert, Capital One credit $132.68, Sophia order docs +- **Delivery tracked:** Pediatric Home Service #75175 (4 boxes, Sophia supplies, shipped Feb 3) +- **Kept in inbox:** Diana Geegan ร—4 (real estate), Sophia medical ร—2 (pulse ox wraps prescription), Lannett securities litigation + +## Email Triage (2:34 PM โ€” cron) +Re-scanned inbox. Only 1 genuinely new message since 1:10 PM triage: +- **xAI API invoice** (johan_jongsma_me:12) โ€” $0.06 for Jan 2026. Ingested PDF โ†’ `~/documents/inbox/`. Archived. +- Re-processed remaining 32 messages: all previously triaged (MC listing shows full IMAP, not just untriaged) +- Delivery tracker updated: Sophia supplies (#75175, in transit) + Amazon Chlorophyll (arriving Sunday) + +## Email Triage (3:22 PM) + +Processed 34 messages from both accounts. + +**Kept in inbox (needs Johan):** +- Sophia pulse-ox wraps Rx expired โ€” Dana at All About Peds needs new prescription from Dr. Lastra +- Diana Geegan (4 emails) โ€” active real estate negotiation re: 851 Brightwaters sale ($6.35M) and 801 purchase. $6.2M net goal not achievable at current numbers. 
+- AlphaSights (Archie Adams) โ€” paid consulting on ITAM, wants to connect for 1hr call +- Lannett securities litigation โ€” class action 2014-2017 + +**Archived:** +- xAI invoice ($0.06 Jan 2026) +- Interactive Brokers Jan statement +- Capital One $132.68 credit (NM Online) +- Google security alerts (Linux sign-in โ€” us) +- Immich v2.5.3 release โ†’ created task for Sunday update +- All About Peds order docs (#91399) +- Amazon order (Chlorophyll) +- AlphaSights follow-up (duplicate) +- Lannett litigation (after review) +- Diana net sheet original (superseded by CORRECTION) +- Older Sophia supply thread + +**Delivery tracked:** +- Sophia supplies (Pediatric Home Service #75175, shipped Feb 3, 4 boxes) + +**Trashed (15):** +- Glamuse lingerie spam, 3x Zillow alerts, 2x Amazon delivered, 2x Amazon shipped, 2x UPS, GrazeCart welcome, Valley bank withdrawal alert, Schwab eStatement, SunPass statement, Starlink $5 auto-pay + +### Git Audit (21:30) +Uncommitted changes found: +- clawdnode-android: 3 files +- inou: 1 file +- james-dashboard: 12 files +- mail-agent: 2 files +- mail-bridge: 1 file +- moltmobile-android: 20 files +- clawd: 24 files + +Not urgent โ€” logged for morning briefing. 
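The hourly audit that produced the report above is specified in AGENTS.md (`scripts/git-audit.sh`: missing remotes alert, uncommitted/unpushed report, silence = healthy). A rough sketch of the per-repo check — an assumption about the script's shape, not its actual contents:

```bash
#!/usr/bin/env bash
# Sketch of the git audit described in AGENTS.md (scripts/git-audit.sh).
# Prints only anomalies; no output means the repo is healthy.
audit_repo() {
  local repo="$1" name dirty ahead
  name=$(basename "$repo")
  # Missing remote -> alert immediately
  if [ -z "$(git -C "$repo" remote)" ]; then
    echo "ALERT: $name has no remote"
    return 0
  fi
  # Uncommitted changes -> report
  dirty=$(git -C "$repo" status --porcelain | wc -l)
  [ "$dirty" -gt 0 ] && echo "$name: $dirty uncommitted file(s)"
  # Unpushed commits relative to upstream -> report (0 if no upstream yet)
  ahead=$(git -C "$repo" rev-list --count '@{u}..HEAD' 2>/dev/null || echo 0)
  [ "$ahead" -gt 0 ] && echo "$name: $ahead unpushed commit(s)"
  return 0
}

for repo in "$HOME"/dev/*/; do
  [ -d "$repo/.git" ] || continue
  audit_repo "$repo"
done
```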
diff --git a/memory/claude-usage.json b/memory/claude-usage.json index f6c8dd5..1cefc1d 100644 --- a/memory/claude-usage.json +++ b/memory/claude-usage.json @@ -1,9 +1,9 @@ { - "last_updated": "2026-02-02T06:54:14.427554Z", + "last_updated": "2026-02-05T03:00:03.898951Z", "source": "api", - "session_percent": 26, - "session_resets": "2026-02-02T11:00:00.385202+00:00", - "weekly_percent": 46, - "weekly_resets": "2026-02-07T19:00:00.385225+00:00", + "session_percent": 6, + "session_resets": "2026-02-05T03:59:59.856689+00:00", + "weekly_percent": 87, + "weekly_resets": "2026-02-07T18:59:59.856739+00:00", "sonnet_percent": 0 } \ No newline at end of file diff --git a/memory/corrections.md b/memory/corrections.md index 4025d8b..de7924e 100644 --- a/memory/corrections.md +++ b/memory/corrections.md @@ -79,3 +79,22 @@ When Johan pushes back, log the **principle**, not just the symptom. **Fix:** Grep-mined the minified Control UI JS โ†’ found `get("session")` and `/chat` route patterns โ†’ correct URL format: `/chat?session=agent::main` **Applies to:** Any integration with external systems, APIs, UIs โ€” when docs are unclear or missing **Test:** "Can I find the answer in the source code instead of guessing?" + +### PRINCIPLE: If You Summarized It, You Had It +**Trigger:** Summarized Dana/FLA-JAC medical supply message, then couldn't find it when asked to reply. Asked "who is Dana?" 4 times. +**Why:** If I generated a summary, the original came through my systems. I have context. Stop asking for context I already have. +**Applies to:** Any time I'm asked to act on something I previously reported +**Test:** "Did I already tell Johan about this? Then I already have the context to act on it." + +### PRINCIPLE: Actionable Emails Stay In Inbox +**Trigger:** Archived Dana/FLA-JAC email about Sophia's medical supplies. When asked to reply, couldn't find it โ€” MC only sees INBOX. +**Why:** Archiving = losing reply capability. Sophia medical emails are always actionable. 
Any email needing follow-up should stay in inbox until resolved. +**Applies to:** All emails with pending action items, especially Sophia-related +**Test:** "Is there any follow-up needed on this? If yes, keep in inbox." + +### PRINCIPLE: Exhaust Troubleshooting Before Declaring Blocked +**Trigger:** SSH to caddy failed with "Host key verification failed." Logged it as "access denied, blocked on Johan" and parked the task for 2 days. Fix was one `ssh-keyscan` command. +**Why:** "Host key verification failed" โ‰  "access denied." I didn't try the obvious fix. I gave up at the first error and escalated to Johan instead of solving it myself. That's the opposite of resourceful. +**Applies to:** Any infrastructure task hitting an error โ€” especially SSH, networking, auth failures +**Test:** "Have I actually tried to fix this, or am I just reporting the error? Could I solve this in 60 seconds if I actually tried?" +**Rule:** If still blocked after real troubleshooting โ†’ create a task for Johan (owner: "johan") with what's needed to unblock. Silent blockers = stalled work. diff --git a/memory/email-triage.md b/memory/email-triage.md index 725b951..f05f120 100644 --- a/memory/email-triage.md +++ b/memory/email-triage.md @@ -177,9 +177,16 @@ This keeps the delivery schedule current without cluttering Shopping folder. ### โ†’ Archive (keep but out of inbox) - Processed bills after payment - Travel confirmations (past trips) -- Account notifications that might be useful later +- Payment receipts from subscriptions (reference value) +- Security alerts (password changes, new logins) -**Rule:** If it has reference value but needs no action โ†’ Archive +**Rule:** Archive is for things worth FINDING AGAIN. If Johan would never search for it โ†’ Trash, not Archive. + +### โ†’ Trash (common false-archive candidates) +- **Amazon:** Everything except order confirmations and outliers (product recalls, credits). 
Promos, recommendations, "items you viewed", shipping updates (after updating deliveries) โ†’ all trash. +- **Retailers:** Marketing, sales, "new arrivals" โ†’ trash +- **Account notifications** with no future value โ†’ trash +- **Generic "your statement is ready"** โ†’ trash (he can check the app) ### โ†’ Keep in Inbox (flag for Johan) - Action required diff --git a/memory/forge-server.md b/memory/forge-server.md new file mode 100644 index 0000000..6019ee6 --- /dev/null +++ b/memory/forge-server.md @@ -0,0 +1,188 @@ +# Forge Server โ€” James's Future Home + +*Last updated: 2026-02-04* + +**This IS my primary home.** Migration completed 2026-02-04. IP swapped to 192.168.1.16. + +--- + +## Hardware + +| Component | Details | +|-----------|---------| +| **Machine** | Lenovo ThinkServer TS140 (second unit) | +| **CPU** | Intel Core i7-6700K @ 4.0GHz (4c/8t, HyperThreading) | +| **RAM** | 64GB DDR4 | +| **GPU** | NVIDIA GeForce GTX 970 4GB (compute 5.2, Maxwell) | +| **Storage** | 469GB NVMe (28G used, 417G free = 7%) | +| **Network** | Single NIC, enp10s0, 192.168.3.138/22 | + +## OS & Kernel + +- **OS:** Ubuntu 24.04.3 LTS (Server, headless) +- **Kernel:** 6.8.0-94-generic +- **Timezone:** America/New_York + +## Network + +- **IP:** 192.168.1.16 (swapped from old james on 2026-02-04) +- **Subnet:** 192.168.0.0/22 +- **Gateway:** (standard home network) +- **DNS:** systemd-resolved (127.0.0.53, 127.0.0.54) + +## Access + +- **SSH:** Key auth only (password auth disabled, root login disabled) +- **Authorized keys:** + - `james@server` โ€” James (primary) + - `johan@ubuntu2404` โ€” Johan + - `claude@macbook` โ€” Johan's Mac +- **Sudo:** Passwordless (`johan ALL=(ALL) NOPASSWD:ALL`) +- **Linger:** Enabled (user services persist without active SSH) + +## Security + +- **Firewall (UFW):** Active + - Rule 1: SSH (22/tcp) from anywhere + - Rule 2: All traffic from LAN (192.168.0.0/22) + - Default: deny incoming, allow outgoing +- **Fail2ban:** Active, monitoring sshd +- 
**Unattended upgrades:** Enabled +- **Sysctl hardening:** rp_filter, syncookies enabled +- **Disabled services:** snapd, ModemManager +- **Still enabled (fix later):** cloud-init + +## GPU Stack + +- **Driver:** nvidia-headless-580 + nvidia-utils-580 (v580.126.09) +- **CUDA:** 13.0 (reported by nvidia-smi) +- **Persistence mode:** Enabled +- **VRAM:** 4096 MiB total, ~2.2 GB used by OCR model +- **Temp:** ~44-51ยฐC idle +- **CRITICAL:** GTX 970 = compute capability 5.2 (Maxwell) + - PyTorch โ‰ค 2.2.x only (newer drops sm_52 support) + - Must use CUDA 11.8 wheels + +## Python Environment + +- **Path:** `/home/johan/ocr-env/` +- **Python:** 3.12.3 +- **Key packages:** + - PyTorch 2.2.2+cu118 + - torchvision 0.17.2+cu118 + - transformers 5.0.1.dev0 (installed from git, has GLM-OCR support) + - accelerate 1.12.0 + - FastAPI 0.128.0 + - uvicorn 0.40.0 + - Pillow (for image processing) +- **Monkey-patch:** `transformers/utils/generic.py` patched for `torch.is_autocast_enabled()` compat with PyTorch 2.2 + +## Services + +### OCR Service (GLM-OCR) +- **Port:** 8090 (0.0.0.0) +- **Service:** `systemctl --user status ocr-service` +- **Unit file:** `~/.config/systemd/user/ocr-service.service` +- **Source:** `/home/johan/ocr-service/server.py` +- **Model:** `/home/johan/models/glm-ocr` (zai-org/GLM-OCR, 2.47 GB) + +**Endpoints:** +- `GET /health` โ€” status, GPU memory, model info +- `POST /ocr` โ€” single image (multipart: file + prompt + max_tokens) +- `POST /ocr/batch` โ€” multiple images + +**Performance:** +- Model load: ~1.4s (stays warm in VRAM) +- Small images: ~2s +- Full-page documents: ~25s (auto-resized to 1280px max) +- VRAM: 2.2 GB idle, peaks ~2.8 GB during inference + +**Usage from james:** +```bash +# Health check +curl http://192.168.3.138:8090/health + +# OCR a single image +curl -X POST http://192.168.3.138:8090/ocr -F "file=@image.png" + +# OCR with custom prompt +curl -X POST http://192.168.3.138:8090/ocr -F "file=@doc.png" -F "prompt=Extract all text:" 
+``` + +### Ollama +- **Port:** 11434 (localhost only) +- **Version:** 0.15.4 +- **Status:** Installed, waiting for v0.15.5 for native GLM-OCR support +- **Note:** Not currently used โ€” Python/transformers handles OCR directly + +## Migration Plan: james โ†’ forge + +### What moves: +- [ ] OpenClaw gateway (port 18789) +- [ ] Signal-cli daemon (port 8080) +- [ ] Proton Mail Bridge (ports 1143, 1025) +- [ ] Mail Bridge / Message Center (port 8025) +- [ ] Message Bridge / WhatsApp (port 8030) +- [ ] Dashboard (port 9200) +- [ ] Headless Chrome (port 9223) +- [ ] All workspace files (`~/clawd/`) +- [ ] Document management system +- [ ] Cron jobs and heartbeat config +- [ ] SSH keys and configs + +### What stays on james (or TBD): +- Legacy configs / backups +- SMB shares (maybe move too?) + +### Pre-migration checklist: +- [ ] Install Node.js 22 on forge +- [ ] Install OpenClaw on forge +- [ ] Set up Signal-cli on forge +- [ ] Set up Proton Mail Bridge on forge +- [ ] Set up message-bridge (WhatsApp) on forge +- [ ] Set up headless Chrome on forge +- [ ] Copy workspace (`~/clawd/`) to forge +- [ ] Copy documents system to forge +- [ ] Test all services on forge before switchover +- [ ] Update DNS/Caddy to point to forge IP +- [ ] Update TOOLS.md, MEMORY.md with new IPs +- [ ] Verify GPU OCR still works alongside gateway + +### Advantages of forge over james: +- **CPU:** i7-6700K (4c/8t, 4.0GHz) vs Xeon E3-1225v3 (4c/4t, 3.2GHz) โ€” faster + HT +- **RAM:** 64GB vs 16GB โ€” massive headroom +- **GPU:** GTX 970 for local ML inference +- **Storage:** 469GB NVMe vs 916GB SSD โ€” less space but faster +- **Network:** Same /22 subnet, same LAN access to everything + +### Risks: +- Storage is smaller (469G vs 916G) โ€” may need to be selective about what moves +- GPU driver + gateway on same box โ€” monitor for resource conflicts +- Signal-cli needs to re-link or transfer DB +- WhatsApp bridge needs QR re-link + +--- + +## Directory Layout + +``` +/home/johan/ +โ”œโ”€โ”€ 
ocr-env/ # Python venv (PyTorch + transformers) +โ”œโ”€โ”€ ocr-service/ # FastAPI OCR server +โ”‚ โ””โ”€โ”€ server.py +โ”œโ”€โ”€ models/ +โ”‚ โ””โ”€โ”€ glm-ocr/ # GLM-OCR weights (2.47 GB) +โ”œโ”€โ”€ .config/ +โ”‚ โ””โ”€โ”€ systemd/user/ +โ”‚ โ””โ”€โ”€ ocr-service.service +โ””โ”€โ”€ .ssh/ + โ””โ”€โ”€ authorized_keys +``` + +## Key Constraints + +1. **PyTorch version locked to 2.2.x** โ€” GTX 970 sm_52 not supported in newer +2. **CUDA 11.8 wheels only** โ€” matches PyTorch 2.2 requirement +3. **Max image dimension 1280px** โ€” larger causes excessive VRAM/time on GTX 970 +4. **transformers from git** โ€” stock pip version doesn't have GLM-OCR model class +5. **Monkey-patch required** โ€” `torch.is_autocast_enabled()` API changed in PyTorch 2.4 diff --git a/memory/heartbeat-state.json b/memory/heartbeat-state.json index 43f15f8..a323ad7 100644 --- a/memory/heartbeat-state.json +++ b/memory/heartbeat-state.json @@ -1,48 +1,11 @@ { + "lastBriefing": 1738685392, + "lastTechScan": 1738685392, "lastChecks": { - "updateCheck": 1770044728, - "lastTechScan": 1769950263, - "stockApiResearch": 1769734200, - "memoryReview": 1769958109, - "workQueue": 1769948854, - "weeklyMemorySynthesis": 1769958109 - }, - "notes": "2026-02-01 14:01 UTC: Weekly memory synthesis complete. Reviewed Jan 26-Feb 1 daily logs. Updated MEMORY.md with: doc management system, Azure unblocked status, security headers added, K2.5/browser learnings, Flutter web limitations. 
Promoted config color hex rule from corrections.", - "lastEmailTriage": 1770066886, - "triageLog": { - "2026-02-01T12:01": { - "trashed": [ - "Ancestry referral promo", - "VEEP Nutrition spam" - ], - "archived": [ - "inou verification", - "Openprovider invoice", - "MS Security x2", - "LinkedIn x2", - "Tailscale marketing", - "Fireworks marketing", - "Fireworks receipt" - ], - "flagged": [ - "IAHP brain-changers (Sophia-relevant)" - ], - "kept": [ - "Cryo-Cell renewal (proton)", - "IAHP for Sophia" - ] - }, - "2026-02-02T21:14": { - "archived": [ - "Cigna claim processed for Sophia (routine notification)" - ] - } - }, - "johanAccountCleanup": { - "started": "2026-02-01T21:20:00Z", - "initialCount": 1000, - "status": "in_progress", - "note": "Massive backlog discovered - 1000+ emails going back to 2023. Initial triage pass done, critical items flagged." - }, - "whatsappLastCount": 1 -} \ No newline at end of file + "briefing": "2026-02-04T11:09:52-05:00", + "techScan": "2026-02-04T11:09:52-05:00", + "email": "2026-02-04T13:12:00-05:00", + "calendar": null, + "weather": "2026-02-04T11:09:52-05:00" + } +} diff --git a/memory/new-server-migration.md b/memory/new-server-migration.md new file mode 100644 index 0000000..107b29b --- /dev/null +++ b/memory/new-server-migration.md @@ -0,0 +1,199 @@ +# New Server Migration Plan (2026-02-03) + +## Target: New ThinkServer TS140 โ€” Ubuntu 24.04 + +**Current IP:** 192.168.3.134 (temporary) +**Final IP:** 192.168.1.16 (keep same โ€” all configs, Tailscale, Caddy, etc. 
already point here) +**User:** johan +**Sudo password:** Helder06 + +--- + +## Phase 1: Base System (SSH access needed) + +### 1.1 First Login +- [ ] SSH in, update system +- [ ] Set hostname to `james` +- [ ] Install essentials: curl, git, jq, htop, tmux, build-essential, pass, gnupg + +### 1.2 GUI โ€” Minimal Xfce (match current) +Current setup: **Xubuntu desktop (Xfce4 + LightDM + X11)** +- [ ] `apt install xubuntu-desktop-minimal lightdm xorg` +- [ ] Set LightDM as display manager +- [ ] Configure autologin for johan (headless Chrome needs a session) +- [ ] Disable screensaver/power management + +### 1.3 GTX 970 โ€” Inference Only (NOT display) +- [ ] Install NVIDIA driver (nvidia-driver-535 or latest for GTX 970) +- [ ] Configure Xorg to use ONLY Intel iGPU for display +- [ ] Write /etc/X11/xorg.conf pinning display to Intel +- [ ] Install CUDA toolkit (for inference) +- [ ] Verify: `nvidia-smi` shows GPU, display runs on Intel + +### 1.4 Hardening +- [ ] UFW firewall (allow SSH, deny rest, open services as needed) +- [ ] Fail2ban for SSH +- [ ] Disable root login via SSH +- [ ] SSH key-only auth (disable password auth) +- [ ] Unattended security updates + +--- + +## Phase 2: Services + +### 2.1 Node.js + OpenClaw +- [ ] Install Node 22.x (nodesource) +- [ ] npm install -g openclaw +- [ ] Copy config: ~/.clawdbot/ (entire directory) +- [ ] Copy workspace: ~/clawd/ (entire directory) +- [ ] Set up systemd user service for openclaw-gateway + +### 2.2 Chrome + Chromium +- [ ] Install Google Chrome (for relay extension) +- [ ] Install Chromium (headless automation) +- [ ] Copy Chrome profile (~/.config/google-chrome/) + +### 2.3 Signal CLI +- [ ] Install signal-cli +- [ ] Copy data: ~/.local/share/signal-cli/ +- [ ] Set up daemon service on port 8080 + +### 2.4 Proton Mail Bridge +- [ ] Install protonmail-bridge (headless) +- [ ] Copy GPG keyring (~/.gnupg/) +- [ ] Copy pass store (~/.password-store/) +- [ ] Set up systemd service + +### 2.5 Mail Bridge / Message 
Center +- [ ] Copy source: ~/dev/mail-bridge/ +- [ ] Copy data: ~/.message-center/ +- [ ] Set up systemd service on port 8025 + +### 2.6 Message Bridge (WhatsApp) +- [ ] Copy source: ~/dev/message-bridge/ +- [ ] Copy data: ~/.message-bridge/ +- [ ] Set up systemd service on port 8030 +- [ ] May need re-linking (QR scan) + +### 2.7 James Dashboard +- [ ] Copy source: ~/dev/james-dashboard/ +- [ ] Set up systemd service on port 9200 + +### 2.8 Samba +- [ ] Install samba +- [ ] Create shares: sophia, inou-dev, johan, docscan, scan-inbox +- [ ] Create SMB users: johan, scanner + +### 2.9 Tailscale +- [ ] Install tailscale +- [ ] `tailscale up` (will need auth) +- [ ] Should get same Tailscale IP (100.123.216.65) if old node is removed first + +### 2.10 Document System +- [ ] Copy ~/documents/ tree +- [ ] Set up docsys service + +--- + +## Phase 3: AI / Inference + +### 3.1 GLM-OCR (0.9B) +- [ ] Install Python venv for inference +- [ ] Install PyTorch with CUDA support +- [ ] Install transformers, accelerate +- [ ] Download glm-ocr model (Zhipu GLM-Edge-V 0.9B or similar) +- [ ] Create inference API service +- [ ] Test with sample document + +--- + +## Phase 4: Data Migration + +### 4.1 Copy Everything +From current server (192.168.1.16) to new (192.168.3.134): + +```bash +# Core workspace +rsync -avz ~/clawd/ newbox:~/clawd/ + +# OpenClaw config + state +rsync -avz ~/.clawdbot/ newbox:~/.clawdbot/ + +# Dev projects +rsync -avz ~/dev/ newbox:~/dev/ + +# Documents +rsync -avz ~/documents/ newbox:~/documents/ + +# Signal data +rsync -avz ~/.local/share/signal-cli/ newbox:~/.local/share/signal-cli/ + +# Chrome profile +rsync -avz ~/.config/google-chrome/ newbox:~/.config/google-chrome/ + +# GPG + pass +rsync -avz ~/.gnupg/ newbox:~/.gnupg/ +rsync -avz ~/.password-store/ newbox:~/.password-store/ + +# Sophia docs +rsync -avz ~/sophia/ newbox:~/sophia/ + +# Message bridge data +rsync -avz ~/.message-bridge/ newbox:~/.message-bridge/ +rsync -avz ~/.message-center/ 
newbox:~/.message-center/ + +# Systemd user services +rsync -avz ~/.config/systemd/user/*.service newbox:~/.config/systemd/user/ + +# SSH keys +rsync -avz ~/.ssh/ newbox:~/.ssh/ + +# NPM global packages list +npm list -g --depth=0 > /tmp/npm-global-packages.txt +``` + +### 4.2 IP Swap +1. Shut down old server +2. Change new server IP from 192.168.3.134 โ†’ 192.168.1.16 +3. Everything (Caddy, Tailscale, bookmarks, configs) just works + +--- + +## SSH Key Setup + +Johan needs to add his SSH public key to the new machine: + +```bash +# On your Mac/workstation, copy your public key to the new server: +ssh-copy-id -i ~/.ssh/id_ed25519.pub johan@192.168.3.134 + +# Or manually: +cat ~/.ssh/id_ed25519.pub | ssh johan@192.168.3.134 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys' +``` + +The current authorized keys are: +- `ssh-ed25519 ...N7f johan@ubuntu2404` (Johan's key) +- `ssh-ed25519 ...fD39 claude@macbook` (Claude Code key) + +Both need to be on the new machine. + +--- + +## Current Services Inventory + +| Service | Port | Status | +|---------|------|--------| +| OpenClaw Gateway | 18789 | running | +| Signal CLI daemon | 8080 | running | +| Proton Mail Bridge | 1143/1025 | running | +| Mail Bridge (MC) | 8025 | running | +| Message Bridge (WA) | 8030 | running | +| James Dashboard | 9200 | running | +| DocSys | ? 
| running | +| Chrome (headed) | - | for relay | +| Chromium (headless) | 9223 | on-demand | + +## Crontab +``` +*/5 * * * * /home/johan/clawd/scripts/k2-watchdog.sh +``` diff --git a/memory/updates/2026-02-04.json b/memory/updates/2026-02-04.json new file mode 100644 index 0000000..cb07158 --- /dev/null +++ b/memory/updates/2026-02-04.json @@ -0,0 +1,38 @@ +{ + "date": "2026-02-04", + "timestamp": "2026-02-04T12:44:35-05:00", + "openclaw": { + "before": "2026.2.2", + "latest": "2026.2.2-3", + "after": "2026.2.2", + "updated": true + }, + "claude_code": { + "before": "2.1.31", + "latest": "2.1.31", + "updated": false + }, + "os": { + "available": 3, + "packages": [ + { + "name": "python-apt-common", + "from": "2.7.7ubuntu5.1", + "to": "2.7.7ubuntu5.2" + }, + { + "name": "python3-apt", + "from": "2.7.7ubuntu5.1", + "to": "2.7.7ubuntu5.2" + }, + { + "name": "sosreport", + "from": "4.5.6-0ubuntu4", + "to": "4.9.2-0ubuntu0~24.04.1" + } + ], + "updated": true, + "reboot_required": false + }, + "gateway_restarted": false +} \ No newline at end of file diff --git a/scripts/check-updates.sh b/scripts/check-updates.sh index 3a132a5..7b3095c 100755 --- a/scripts/check-updates.sh +++ b/scripts/check-updates.sh @@ -1,73 +1,60 @@ #!/bin/bash -# Check and update Claude Code and inou MCP bundle +# Check for available updates โ€” report only, don't install set -e -echo "=== Claude Code Update Check ===" +echo "=== Claude Code ===" CURRENT=$(claude --version 2>/dev/null | head -1 || echo "not installed") -echo "Current: $CURRENT" - LATEST=$(npm show @anthropic-ai/claude-code version 2>/dev/null || echo "unknown") -echo "Latest: $LATEST" -if [ "$CURRENT" != "$LATEST (Claude Code)" ] && [ "$LATEST" != "unknown" ]; then - echo "Updating Claude Code..." 
- npm update -g @anthropic-ai/claude-code - echo "Updated to: $(claude --version)" +CURRENT_VER=$(echo "$CURRENT" | sed 's/ (Claude Code)//') +if [ "$CURRENT_VER" = "$LATEST" ] || [ "$LATEST" = "unknown" ]; then + echo "โœ… Up to date: $CURRENT" else - echo "Claude Code is up to date" + echo "โฌ†๏ธ Update available: $CURRENT_VER โ†’ $LATEST" + echo " Run: npm update -g @anthropic-ai/claude-code" fi echo "" -echo "=== inou MCP Bundle Check ===" -MCPB_PATH="/home/johan/clawd/inou.mcpb" +echo "=== OpenClaw ===" +OC_CURRENT=$(openclaw --version 2>/dev/null | head -1 || echo "not installed") +OC_LATEST=$(npm show openclaw version 2>/dev/null || echo "unknown") + +OC_CURRENT_VER=$(echo "$OC_CURRENT" | grep -oP '[\d.]+' | head -1 || echo "$OC_CURRENT") +if [ "$OC_CURRENT_VER" = "$OC_LATEST" ] || [ "$OC_LATEST" = "unknown" ]; then + echo "โœ… Up to date: $OC_CURRENT" +else + echo "โฌ†๏ธ Update available: $OC_CURRENT_VER โ†’ $OC_LATEST" + echo " Run: npm update -g openclaw" +fi + +echo "" +echo "=== inou MCP Bundle ===" MCPB_EXTRACT="/home/johan/clawd/inou-mcp" -# Get current version if [ -f "$MCPB_EXTRACT/manifest.json" ]; then CURRENT_VER=$(grep -o '"version": *"[^"]*"' "$MCPB_EXTRACT/manifest.json" | cut -d'"' -f4) echo "Current: $CURRENT_VER" else - CURRENT_VER="not installed" echo "Current: not installed" fi -# Check if download URL is available MCPB_URL="https://inou.com/download/inou.mcpb" HTTP_STATUS=$(curl -sI -o /dev/null -w "%{http_code}" "$MCPB_URL" 2>/dev/null || echo "000") if [ "$HTTP_STATUS" != "200" ]; then - echo "Latest: (download not available - HTTP $HTTP_STATUS)" - echo "Skipping inou MCP bundle update check" - exit 0 -fi - -# Download latest -TMP_MCPB="/tmp/inou-new.mcpb" -curl -sL -o "$TMP_MCPB" "$MCPB_URL" - -# Verify it's a valid zip -if ! 
python3 -c "import zipfile; zipfile.ZipFile('$TMP_MCPB')" 2>/dev/null; then - echo "Downloaded file is not a valid zip - skipping" - rm -f "$TMP_MCPB" - exit 0 -fi - -# Extract version from downloaded -TMP_DIR=$(mktemp -d) -python3 -c "import zipfile; zipfile.ZipFile('$TMP_MCPB').extractall('$TMP_DIR')" -NEW_VER=$(grep -o '"version": *"[^"]*"' "$TMP_DIR/manifest.json" | cut -d'"' -f4) -echo "Latest: $NEW_VER" - -if [ "$CURRENT_VER" != "$NEW_VER" ]; then - echo "Updating inou MCP bundle..." - mv "$TMP_MCPB" "$MCPB_PATH" - rm -rf "$MCPB_EXTRACT" - mkdir -p "$MCPB_EXTRACT" - python3 -c "import zipfile; zipfile.ZipFile('$MCPB_PATH').extractall('$MCPB_EXTRACT')" - echo "Updated to: $NEW_VER" + echo "Latest: (download not available)" else - echo "inou MCP bundle is up to date" + TMP_MCPB="/tmp/inou-check.mcpb" + TMP_DIR=$(mktemp -d) + curl -sL -o "$TMP_MCPB" "$MCPB_URL" + if python3 -c "import zipfile; zipfile.ZipFile('$TMP_MCPB').extractall('$TMP_DIR')" 2>/dev/null; then + NEW_VER=$(grep -o '"version": *"[^"]*"' "$TMP_DIR/manifest.json" | cut -d'"' -f4) + if [ "$CURRENT_VER" = "$NEW_VER" ]; then + echo "โœ… Up to date: $CURRENT_VER" + else + echo "โฌ†๏ธ Update available: $CURRENT_VER โ†’ $NEW_VER" + fi + fi + rm -rf "$TMP_DIR" "$TMP_MCPB" 2>/dev/null || true fi - -rm -rf "$TMP_DIR" "$TMP_MCPB" 2>/dev/null || true diff --git a/scripts/claude-usage-check.sh b/scripts/claude-usage-check.sh index 80575e6..4ceddcc 100755 --- a/scripts/claude-usage-check.sh +++ b/scripts/claude-usage-check.sh @@ -38,10 +38,11 @@ else TYPE="warning" fi - # Update dashboard + # Update dashboard - include check time (no parentheses, dashboard strips those) + CHECKED=$(date +"%l:%M %p" | xargs) curl -s -X POST "$DASHBOARD_URL/api/status" \ -H "Content-Type: application/json" \ - -d "{\"key\": \"claude_weekly\", \"value\": \"${WEEKLY}% used (${REMAINING}% left)\", \"type\": \"${TYPE}\"}" > /dev/null + -d "{\"key\": \"claude_weekly\", \"value\": \"${WEEKLY}% used ยท ${CHECKED}\", \"type\": 
\"${TYPE}\"}" > /dev/null

-  echo "📊 Claude: ${WEEKLY}% weekly used (${REMAINING}% left)"
+  echo "📊 Claude: ${WEEKLY}% weekly used (${REMAINING}% left) · checked ${CHECKED}"
 fi
diff --git a/scripts/daily-updates.sh b/scripts/daily-updates.sh
new file mode 100755
index 0000000..be95a2d
--- /dev/null
+++ b/scripts/daily-updates.sh
@@ -0,0 +1,172 @@
+#!/bin/bash
+# Daily auto-update: OpenClaw, Claude Code, OS packages
+# Runs at 9:00 AM ET via systemd timer
+# Logs results to memory/updates/ for morning briefing
+
+set -euo pipefail
+
+SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
+WORKSPACE="$(dirname "$SCRIPT_DIR")"
+LOG_DIR="$WORKSPACE/memory/updates"
+DATE=$(date +%Y-%m-%d)
+LOG="$LOG_DIR/$DATE.json"
+
+mkdir -p "$LOG_DIR"
+
+# Initialize log
+cat > "$LOG" <<'EOF'
+{
+  "date": "DATE_PLACEHOLDER",
+  "timestamp": "TS_PLACEHOLDER",
+  "openclaw": {},
+  "claude_code": {},
+  "os": {},
+  "gateway_restarted": false
+}
+EOF
+sed -i "s/DATE_PLACEHOLDER/$DATE/" "$LOG"
+sed -i "s/TS_PLACEHOLDER/$(date -Iseconds)/" "$LOG"
+
+update_json() {
+  local key="$1" value="$2"
+  python3 << PYEOF
+import json
+with open("$LOG") as f: d = json.load(f)
+keys = "$key".split(".")
+obj = d
+for k in keys[:-1]: obj = obj[k]
+raw = '''$value'''
+# Try parsing as JSON first (handles strings, arrays, numbers, booleans)
+try:
+    obj[keys[-1]] = json.loads(raw)
+except (json.JSONDecodeError, ValueError):
+    obj[keys[-1]] = raw
+with open("$LOG", "w") as f: json.dump(d, f, indent=2)
+PYEOF
+}
+
+echo "=== Daily Update Check: $DATE ==="
+
+# --- OpenClaw ---
+echo ""
+echo "--- OpenClaw ---"
+# Compare the npm-installed version against the registry, not the CLI banner:
+# `openclaw --version` drops dist suffixes like "-3", so suffixed releases
+# (e.g. 2026.2.2-3) looked like a pending update every single day.
+OC_BEFORE=$((npm ls -g openclaw --depth=0 2>/dev/null || true) | grep -oE '[0-9]+\.[0-9]+\.[0-9][0-9a-zA-Z.-]*' | head -1 || echo "unknown")
+OC_LATEST=$(npm show openclaw version 2>/dev/null || echo "unknown")
+echo "Current: $OC_BEFORE | Latest: $OC_LATEST"
+
+update_json "openclaw.before" "\"$OC_BEFORE\""
+update_json "openclaw.latest" "\"$OC_LATEST\""
+
+if [ "$OC_BEFORE" != "$OC_LATEST" ] && [ "$OC_LATEST" != "unknown" ] && [ "$OC_BEFORE" != "unknown" ]; then
+  echo "Updating OpenClaw..."
+  # Install the registry's latest explicitly: semver treats "2026.2.2-3" as a
+  # prerelease of 2026.2.2, which `npm update -g` skips — so it logged
+  # "updated" without ever changing the installed version.
+  if npm install -g "openclaw@$OC_LATEST" 2>&1; then
+    OC_AFTER=$((npm ls -g openclaw --depth=0 2>/dev/null || true) | grep -oE '[0-9]+\.[0-9]+\.[0-9][0-9a-zA-Z.-]*' | head -1 || echo "unknown")
+    update_json "openclaw.after" "\"$OC_AFTER\""
+    update_json "openclaw.updated" "true"
+    echo "Updated: $OC_BEFORE → $OC_AFTER"
+  else
+    update_json "openclaw.updated" "false"
+    update_json "openclaw.error" "\"npm install failed\""
+    echo "Update failed"
+  fi
+else
+  update_json "openclaw.updated" "false"
+  echo "Up to date"
+fi
+
+# --- Claude Code ---
+echo ""
+echo "--- Claude Code ---"
+CC_BEFORE=$(claude --version 2>/dev/null | sed 's/ (Claude Code)//' || echo "unknown")
+CC_LATEST=$(npm show @anthropic-ai/claude-code version 2>/dev/null || echo "unknown")
+echo "Current: $CC_BEFORE | Latest: $CC_LATEST"
+
+update_json "claude_code.before" "\"$CC_BEFORE\""
+update_json "claude_code.latest" "\"$CC_LATEST\""
+
+if [ "$CC_BEFORE" != "$CC_LATEST" ] && [ "$CC_LATEST" != "unknown" ]; then
+  echo "Updating Claude Code..."
+  if npm update -g @anthropic-ai/claude-code 2>&1; then
+    CC_AFTER=$(claude --version 2>/dev/null | sed 's/ (Claude Code)//' || echo "unknown")
+    update_json "claude_code.after" "\"$CC_AFTER\""
+    update_json "claude_code.updated" "true"
+    echo "Updated: $CC_BEFORE → $CC_AFTER"
+  else
+    update_json "claude_code.updated" "false"
+    update_json "claude_code.error" "\"npm update failed\""
+    echo "Update failed"
+  fi
+else
+  update_json "claude_code.updated" "false"
+  echo "Up to date"
+fi
+
+# --- OS Packages ---
+echo ""
+echo "--- OS Packages ---"
+# Capture upgradable list before updating
+sudo apt-get update -qq 2>/dev/null || true
+
+UPGRADABLE=$(apt list --upgradable 2>/dev/null | grep -v "^Listing" || true)
+# grep -c prints the count itself (including 0); the old `|| echo "0"` emitted
+# a second "0" on empty input, which broke the -gt test below.
+PKG_COUNT=$(echo "$UPGRADABLE" | grep -c . || true)
+
+update_json "os.available" "$PKG_COUNT"
+
+if [ "$PKG_COUNT" -gt 0 ]; then
+  echo "$PKG_COUNT packages upgradable"
+  # Capture package names and versions
+  PKG_LIST=$(echo "$UPGRADABLE" | head -50 | python3 -c "
+import sys, json
+pkgs = []
+for line in sys.stdin:
+    line = line.strip()
+    if not line: continue
+    parts = line.split('/')
+    if len(parts) >= 2:
+        name = parts[0]
+        rest = '/'.join(parts[1:])
+        # Extract version info
+        ver_parts = rest.split(' ')
+        new_ver = ver_parts[1] if len(ver_parts) > 1 else 'unknown'
+        old_ver = ver_parts[-1].strip('[]') if '[' in rest else 'unknown'
+        pkgs.append({'name': name, 'from': old_ver, 'to': new_ver})
+print(json.dumps(pkgs))
+" 2>/dev/null || echo "[]")
+  update_json "os.packages" "$PKG_LIST"
+
+  echo "Upgrading..."
+  if sudo DEBIAN_FRONTEND=noninteractive apt-get upgrade -y -qq 2>&1 | tail -5; then
+    update_json "os.updated" "true"
+    echo "OS packages updated"
+
+    # Check if reboot required
+    if [ -f /var/run/reboot-required ]; then
+      update_json "os.reboot_required" "true"
+      echo "⚠️ Reboot required!"
+    else
+      update_json "os.reboot_required" "false"
+    fi
+  else
+    update_json "os.updated" "false"
+    update_json "os.error" "\"apt upgrade failed\""
+  fi
+else
+  update_json "os.updated" "false"
+  update_json "os.packages" "[]"
+  echo "All packages up to date"
+fi
+
+# --- Gateway Restart (only if OpenClaw updated) ---
+echo ""
+OC_UPDATED=$(python3 -c "import json; print(json.load(open('$LOG'))['openclaw'].get('updated', False))")
+if [ "$OC_UPDATED" = "True" ]; then
+  echo "OpenClaw was updated — restarting gateway..."
+  systemctl --user restart openclaw-gateway
+  update_json "gateway_restarted" "true"
+  echo "Gateway restarted"
+else
+  echo "No gateway restart needed"
+fi
+
+echo ""
+echo "=== Update complete. 
Log: $LOG ===" diff --git a/scripts/fix-inou-www-redirect.sh b/scripts/fix-inou-www-redirect.sh new file mode 100755 index 0000000..f304cdf --- /dev/null +++ b/scripts/fix-inou-www-redirect.sh @@ -0,0 +1,32 @@ +#!/bin/bash +# Fix: www.inou.com should 301 redirect to inou.com +# Problem: www serves content (HTTP 200) instead of redirecting, +# causing GSC "Alternate page with proper canonical tag" warnings +# +# Run on caddy server (192.168.0.2): +# ssh root@caddy 'bash -s' < fix-inou-www-redirect.sh +# +# Or manually add this block to /etc/caddy/Caddyfile: + +echo "Adding www redirect to Caddyfile..." + +# Check if www redirect already exists +if grep -q 'www.inou.com' /etc/caddy/Caddyfile; then + echo "www.inou.com block already exists in Caddyfile" + exit 0 +fi + +# Add the redirect block +cat >> /etc/caddy/Caddyfile << 'CADDY' + +# Redirect www to non-www (fixes GSC indexing issue) +www.inou.com { + redir https://inou.com{uri} permanent +} +CADDY + +# Reload Caddy +systemctl reload caddy + +echo "Done! Verify: curl -I https://www.inou.com" +echo "Expected: HTTP/2 301, Location: https://inou.com/" diff --git a/scripts/git-audit.sh b/scripts/git-audit.sh new file mode 100755 index 0000000..ae5e2dc --- /dev/null +++ b/scripts/git-audit.sh @@ -0,0 +1,62 @@ +#!/bin/bash +# Git audit: check all projects in ~/dev/ for unpushed changes +# Reports anomalies only (unpushed commits, uncommitted changes, missing remotes) +# Run hourly via cron + +DEV_DIR="/home/johan/dev" +ANOMALIES="" + +for dir in "$DEV_DIR"/*/; do + [ ! -d "$dir/.git" ] && continue + + repo=$(basename "$dir") + cd "$dir" + + # Check for remote + if ! 
git remote get-url origin &>/dev/null; then
+        ANOMALIES+="❌ $repo: NO REMOTE — needs git@zurich.inou.com:$repo.git\n"
+        continue
+    fi
+
+    # Check for uncommitted changes
+    DIRTY=$(git status --porcelain 2>/dev/null)
+    if [ -n "$DIRTY" ]; then
+        COUNT=$(echo "$DIRTY" | wc -l)
+        ANOMALIES+="⚠️ $repo: $COUNT uncommitted file(s)\n"
+    fi
+
+    # Check for unpushed commits (fetch first to be accurate, with timeout)
+    timeout 10 git fetch origin --quiet 2>/dev/null
+    BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
+    if [ -n "$BRANCH" ]; then
+        AHEAD=$(git rev-list --count "origin/$BRANCH..HEAD" 2>/dev/null)
+        if [ -n "$AHEAD" ] && [ "$AHEAD" -gt 0 ]; then
+            ANOMALIES+="🔺 $repo: $AHEAD unpushed commit(s) on $BRANCH\n"
+        fi
+    fi
+done
+
+# Also check ~/clawd/ workspace
+cd /home/johan/clawd
+if [ -d .git ]; then
+    DIRTY=$(git status --porcelain 2>/dev/null)
+    if [ -n "$DIRTY" ]; then
+        COUNT=$(echo "$DIRTY" | wc -l)
+        ANOMALIES+="⚠️ clawd: $COUNT uncommitted file(s)\n"
+    fi
+    BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null)
+    if [ -n "$BRANCH" ] && git remote get-url origin &>/dev/null; then
+        timeout 10 git fetch origin --quiet 2>/dev/null
+        AHEAD=$(git rev-list --count "origin/$BRANCH..HEAD" 2>/dev/null)
+        if [ -n "$AHEAD" ] && [ "$AHEAD" -gt 0 ]; then
+            ANOMALIES+="🔺 clawd: $AHEAD unpushed commit(s) on $BRANCH\n"
+        fi
+    fi
+fi
+
+if [ -n "$ANOMALIES" ]; then
+    echo -e "Git audit found issues:\n$ANOMALIES"
+    exit 1
+else
+    exit 0
+fi
diff --git a/scripts/new-server-phase1.sh b/scripts/new-server-phase1.sh
new file mode 100644
index 0000000..53355fb
--- /dev/null
+++ b/scripts/new-server-phase1.sh
@@ -0,0 +1,124 @@
+#!/bin/bash
+# Phase 1: Base system setup for new James server
+# Run as: ssh johan@192.168.3.134 'bash -s' < scripts/new-server-phase1.sh
+set -e
+
+# Cache sudo credentials once up front, then use non-interactive sudo for the
+# rest of the run. (The previous SUDO="echo Helder06 | sudo -S" pattern was
+# broken: expanded unquoted, the "|" is a literal argument to echo, so none of
+# the commands actually ran as root.) Re-run `sudo -v` between long steps if
+# the credential cache expires.
+echo 'Helder06' | sudo -S -v
+SUDO="sudo -n"
+
+echo "=== Phase 1: Base System Setup ==="
+
+# 1. Essentials
+echo ">>> Installing essentials..."
+$SUDO apt-get install -y -q \ + curl wget git jq htop tmux build-essential \ + pass gnupg2 \ + sshpass rsync \ + unzip zip \ + python3-pip python3-venv \ + net-tools dnsutils \ + ufw fail2ban \ + samba \ + ffmpeg \ + trash-cli \ + apt-transport-https \ + ca-certificates \ + software-properties-common 2>&1 | tail -3 + +# 2. Minimal Xfce GUI (for headed Chrome) +echo ">>> Installing minimal Xfce + LightDM..." +$SUDO apt-get install -y -q \ + xorg \ + xfce4 \ + xfce4-terminal \ + lightdm \ + lightdm-gtk-greeter \ + dbus-x11 2>&1 | tail -3 + +# Set LightDM as default display manager +echo "/usr/sbin/lightdm" | $SUDO tee /etc/X11/default-display-manager > /dev/null + +# Configure autologin +$SUDO mkdir -p /etc/lightdm/lightdm.conf.d +cat << 'AUTOLOGIN' | $SUDO tee /etc/lightdm/lightdm.conf.d/50-autologin.conf > /dev/null +[Seat:*] +autologin-user=johan +autologin-user-timeout=0 +user-session=xfce +AUTOLOGIN + +echo ">>> Disabling screensaver/power management..." +# Will be configured in Xfce session; install xfce4-power-manager +$SUDO apt-get install -y -q xfce4-power-manager 2>&1 | tail -1 + +# 3. NVIDIA Driver + CUDA (GTX 970 for inference) +echo ">>> Installing NVIDIA driver..." +$SUDO apt-get install -y -q nvidia-driver-535 nvidia-cuda-toolkit 2>&1 | tail -5 + +# 4. Configure Xorg to use Intel for display, leave NVIDIA for compute +echo ">>> Configuring Xorg for Intel display..." +cat << 'XORGCONF' | $SUDO tee /etc/X11/xorg.conf > /dev/null +# Intel iGPU for display output, NVIDIA GTX 970 for compute only +Section "Device" + Identifier "Intel" + Driver "modesetting" + BusID "PCI:0:2:0" +EndSection + +Section "Screen" + Identifier "Screen0" + Device "Intel" +EndSection + +Section "ServerLayout" + Identifier "Layout0" + Screen "Screen0" +EndSection +XORGCONF + +# 5. Hardening +echo ">>> Hardening SSH..." 
+$SUDO sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config +$SUDO sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config +$SUDO sed -i 's/^#\?PubkeyAuthentication.*/PubkeyAuthentication yes/' /etc/ssh/sshd_config +$SUDO systemctl restart sshd + +echo ">>> Configuring UFW firewall..." +$SUDO ufw default deny incoming +$SUDO ufw default allow outgoing +$SUDO ufw allow ssh +$SUDO ufw allow from 192.168.0.0/16 to any # LAN access for all services +$SUDO ufw --force enable + +echo ">>> Configuring fail2ban..." +cat << 'F2B' | $SUDO tee /etc/fail2ban/jail.local > /dev/null +[sshd] +enabled = true +port = ssh +filter = sshd +logpath = /var/log/auth.log +maxretry = 5 +bantime = 3600 +F2B +$SUDO systemctl enable fail2ban +$SUDO systemctl start fail2ban + +echo ">>> Enabling unattended security updates..." +$SUDO apt-get install -y -q unattended-upgrades +$SUDO dpkg-reconfigure -plow unattended-upgrades 2>/dev/null || true + +# 6. Enable lingering for user services +echo ">>> Enabling systemd linger for johan..." +$SUDO loginctl enable-linger johan + +# 7. Node.js 22 +echo ">>> Installing Node.js 22..." +curl -fsSL https://deb.nodesource.com/setup_22.x | $SUDO bash - 2>&1 | tail -3 +$SUDO apt-get install -y -q nodejs 2>&1 | tail -3 + +# 8. 
NPM global directory (no sudo needed) +mkdir -p ~/.npm-global +npm config set prefix ~/.npm-global +grep -q 'npm-global' ~/.bashrc || echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.bashrc + +echo "=== Phase 1 Complete ===" +echo "Reboot recommended for NVIDIA driver + GUI" diff --git a/scripts/service-health.sh b/scripts/service-health.sh new file mode 100755 index 0000000..b88874c --- /dev/null +++ b/scripts/service-health.sh @@ -0,0 +1,87 @@ +#!/bin/bash +# Service Health Check โ€” updates dashboard status +# Run manually or via heartbeat + +DASHBOARD="http://localhost:9200" +ALL_OK=true + +check_service() { + local name="$1" + local check="$2" + local result + result=$(eval "$check" 2>&1) + local rc=$? + if [ $rc -eq 0 ]; then + echo "โœ… $name" + else + echo "โŒ $name: $result" + ALL_OK=false + fi +} + +check_http() { + local name="$1" + local url="$2" + local code + code=$(curl -s -o /dev/null -w "%{http_code}" --connect-timeout 5 "$url" 2>&1) + if [[ "$code" =~ ^(200|301|302)$ ]]; then + echo "โœ… $name (HTTP $code)" + else + echo "โŒ $name (HTTP $code)" + ALL_OK=false + fi +} + +echo "=== Service Health Check ($(date -u +%Y-%m-%dT%H:%M:%SZ)) ===" + +# Systemd services +check_service "Proton Bridge" "systemctl --user is-active protonmail-bridge" +check_service "Mail Bridge" "systemctl --user is-active mail-bridge" +check_service "Message Bridge" "systemctl --user is-active message-bridge" + +# HTTP endpoints +check_http "Mail Bridge API" "http://localhost:8025/health" +check_http "Dashboard" "$DASHBOARD/api/tasks" +check_http "Zurich VPS" "https://zurich.inou.com" +check_http "inou.com" "https://inou.com" + +# Disk space +DISK_PCT=$(df -h / | awk 'NR==2 {print $5}' | tr -d '%') +if [ "$DISK_PCT" -gt 85 ]; then + echo "โš ๏ธ Disk: ${DISK_PCT}% used" + ALL_OK=false +else + echo "โœ… Disk: ${DISK_PCT}% used" +fi + +# Memory +MEM_PCT=$(free | awk '/Mem:/ {printf "%.0f", $3/$2*100}') +if [ "$MEM_PCT" -gt 90 ]; then + echo "โš ๏ธ Memory: ${MEM_PCT}% used" + 
ALL_OK=false +else + echo "โœ… Memory: ${MEM_PCT}% used" +fi + +# Load average +LOAD=$(cat /proc/loadavg | awk '{print $1}') +CORES=$(nproc) +LOAD_INT=${LOAD%.*} +if [ "${LOAD_INT:-0}" -gt "$CORES" ]; then + echo "โš ๏ธ Load: $LOAD ($CORES cores)" + ALL_OK=false +else + echo "โœ… Load: $LOAD ($CORES cores)" +fi + +echo "" +if $ALL_OK; then + echo "Overall: ALL SYSTEMS HEALTHY โœ…" + # Update dashboard + curl -s -X POST "$DASHBOARD/api/status" -H 'Content-Type: application/json' \ + -d "{\"key\":\"services\",\"value\":\"All services healthy โœ… (checked $(date -u +%H:%M) UTC)\",\"type\":\"info\"}" > /dev/null +else + echo "Overall: ISSUES DETECTED โš ๏ธ" + curl -s -X POST "$DASHBOARD/api/status" -H 'Content-Type: application/json' \ + -d "{\"key\":\"services\",\"value\":\"Issues detected โš ๏ธ โ€” check logs\",\"type\":\"warning\"}" > /dev/null +fi diff --git a/skills/uptime-kuma/.clawdhub/origin.json b/skills/uptime-kuma/.clawdhub/origin.json new file mode 100644 index 0000000..c92d87e --- /dev/null +++ b/skills/uptime-kuma/.clawdhub/origin.json @@ -0,0 +1,7 @@ +{ + "version": 1, + "registry": "https://clawhub.ai", + "slug": "uptime-kuma", + "installedVersion": "1.0.0", + "installedAt": 1770116696999 +} diff --git a/skills/uptime-kuma/SKILL.md b/skills/uptime-kuma/SKILL.md new file mode 100644 index 0000000..2edc47e --- /dev/null +++ b/skills/uptime-kuma/SKILL.md @@ -0,0 +1,89 @@ +--- +name: uptime-kuma +description: Interact with Uptime Kuma monitoring server. Use for checking monitor status, adding/removing monitors, pausing/resuming checks, viewing heartbeat history. Triggers on mentions of Uptime Kuma, server monitoring, uptime checks, or service health monitoring. +--- + +# Uptime Kuma Skill + +Manage Uptime Kuma monitors via CLI wrapper around the Socket.IO API. 
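For quick reference, the one-line rendering the wrapper's `list` command prints (🟢 active, ⚫ paused) boils down to a small pure helper. The dict fields (`id`, `name`, `type`, `active`) match what `uptime-kuma-api`'s `get_monitors()` returns; the sample monitors below are made up:

```python
def format_monitor(m: dict) -> str:
    """Render one monitor the way `kuma.py list` prints it."""
    status = "🟢" if m.get("active") else "⚫"
    return f"{status} [{m['id']}] {m['name']} ({m['type']})"

# Hypothetical sample data, shaped like get_monitors() output
for m in [
    {"id": 1, "name": "inou.com", "type": "http", "active": True},
    {"id": 2, "name": "Zurich VPS", "type": "ping", "active": False},
]:
    print(format_monitor(m))
```

A monitor with no `active` key renders as paused, since `m.get("active")` is falsy.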
+
+## Setup
+
+Requires `uptime-kuma-api` Python package:
+```bash
+pip install uptime-kuma-api
+```
+
+Environment variables (set in shell or Clawdbot config):
+- `UPTIME_KUMA_URL` - Server URL (e.g., `http://localhost:3001`)
+- `UPTIME_KUMA_USERNAME` - Login username
+- `UPTIME_KUMA_PASSWORD` - Login password
+
+## Usage
+
+Script location: `scripts/kuma.py`
+
+### Commands
+
+```bash
+# Overall status summary
+python scripts/kuma.py status
+
+# List all monitors
+python scripts/kuma.py list
+python scripts/kuma.py list --json
+
+# Get monitor details
+python scripts/kuma.py get <id>
+
+# Add monitors
+python scripts/kuma.py add --name "My Site" --type http --url https://example.com
+python scripts/kuma.py add --name "Server Ping" --type ping --hostname 192.168.1.1
+python scripts/kuma.py add --name "SSH Port" --type port --hostname server.local --port 22
+
+# Pause/resume monitors
+python scripts/kuma.py pause <id>
+python scripts/kuma.py resume <id>
+
+# Delete monitor
+python scripts/kuma.py delete <id>
+
+# View heartbeat history
+python scripts/kuma.py heartbeats <id> --hours 24
+
+# List notification channels
+python scripts/kuma.py notifications
+```
+
+### Monitor Types
+
+- `http` - HTTP/HTTPS endpoint
+- `ping` - ICMP ping
+- `port` - TCP port check
+- `keyword` - HTTP + keyword search
+- `dns` - DNS resolution
+- `docker` - Docker container
+- `push` - Push-based (passive)
+- `mysql`, `postgres`, `mongodb`, `redis` - Database checks
+- `mqtt` - MQTT broker
+- `group` - Monitor group
+
+### Common Workflows
+
+**Check what's down:**
+```bash
+python scripts/kuma.py status
+python scripts/kuma.py list   # Look for 🔴
+```
+
+**Add HTTP monitor with 30s interval:**
+```bash
+python scripts/kuma.py add --name "API Health" --type http --url https://api.example.com/health --interval 30
+```
+
+**Maintenance mode (pause all):**
+```bash
+for id in $(python scripts/kuma.py list --json | jq -r '.[].id'); do
+  python scripts/kuma.py pause $id
+done
+```
+}
diff --git 
a/skills/uptime-kuma/scripts/kuma.py b/skills/uptime-kuma/scripts/kuma.py new file mode 100644 index 0000000..a1dfae9 --- /dev/null +++ b/skills/uptime-kuma/scripts/kuma.py @@ -0,0 +1,276 @@ +#!/usr/bin/env python3 +""" +Uptime Kuma CLI wrapper using uptime-kuma-api library. +Requires: pip install uptime-kuma-api + +Environment variables: + UPTIME_KUMA_URL - Uptime Kuma server URL (e.g., http://localhost:3001) + UPTIME_KUMA_USERNAME - Username for authentication + UPTIME_KUMA_PASSWORD - Password for authentication +""" + +import argparse +import json +import os +import sys +from typing import Optional + +try: + from uptime_kuma_api import UptimeKumaApi, MonitorType +except ImportError: + print("Error: uptime-kuma-api not installed. Run: pip install uptime-kuma-api", file=sys.stderr) + sys.exit(1) + + +def get_env_or_exit(name: str) -> str: + """Get environment variable or exit with error.""" + value = os.environ.get(name) + if not value: + print(f"Error: {name} environment variable not set", file=sys.stderr) + sys.exit(1) + return value + + +def get_api() -> UptimeKumaApi: + """Create and authenticate API connection.""" + url = get_env_or_exit("UPTIME_KUMA_URL") + username = get_env_or_exit("UPTIME_KUMA_USERNAME") + password = get_env_or_exit("UPTIME_KUMA_PASSWORD") + + api = UptimeKumaApi(url) + api.login(username, password) + return api + + +def cmd_list_monitors(args): + """List all monitors.""" + with get_api() as api: + monitors = api.get_monitors() + if args.json: + print(json.dumps(monitors, indent=2, default=str)) + else: + for m in monitors: + status = "๐ŸŸข" if m.get("active") else "โšซ" + print(f"{status} [{m['id']}] {m['name']} ({m['type']})") + + +def cmd_get_monitor(args): + """Get details of a specific monitor.""" + with get_api() as api: + monitor = api.get_monitor(args.id) + print(json.dumps(monitor, indent=2, default=str)) + + +def cmd_add_monitor(args): + """Add a new monitor.""" + monitor_types = { + "http": MonitorType.HTTP, + "https": 
MonitorType.HTTP, + "port": MonitorType.PORT, + "ping": MonitorType.PING, + "keyword": MonitorType.KEYWORD, + "dns": MonitorType.DNS, + "docker": MonitorType.DOCKER, + "push": MonitorType.PUSH, + "steam": MonitorType.STEAM, + "gamedig": MonitorType.GAMEDIG, + "mqtt": MonitorType.MQTT, + "sqlserver": MonitorType.SQLSERVER, + "postgres": MonitorType.POSTGRES, + "mysql": MonitorType.MYSQL, + "mongodb": MonitorType.MONGODB, + "radius": MonitorType.RADIUS, + "redis": MonitorType.REDIS, + "group": MonitorType.GROUP, + } + + monitor_type = monitor_types.get(args.type.lower()) + if not monitor_type: + print(f"Error: Unknown monitor type '{args.type}'. Valid types: {', '.join(monitor_types.keys())}", file=sys.stderr) + sys.exit(1) + + kwargs = { + "type": monitor_type, + "name": args.name, + } + + if args.url: + kwargs["url"] = args.url + if args.hostname: + kwargs["hostname"] = args.hostname + if args.port: + kwargs["port"] = args.port + if args.interval: + kwargs["interval"] = args.interval + if args.keyword: + kwargs["keyword"] = args.keyword + + with get_api() as api: + result = api.add_monitor(**kwargs) + print(json.dumps(result, indent=2, default=str)) + + +def cmd_delete_monitor(args): + """Delete a monitor.""" + with get_api() as api: + result = api.delete_monitor(args.id) + print(json.dumps(result, indent=2, default=str)) + + +def cmd_pause_monitor(args): + """Pause a monitor.""" + with get_api() as api: + result = api.pause_monitor(args.id) + print(json.dumps(result, indent=2, default=str)) + + +def cmd_resume_monitor(args): + """Resume a monitor.""" + with get_api() as api: + result = api.resume_monitor(args.id) + print(json.dumps(result, indent=2, default=str)) + + +def cmd_status(args): + """Get overall status summary.""" + with get_api() as api: + monitors = api.get_monitors() + + total = len(monitors) + active = sum(1 for m in monitors if m.get("active")) + paused = total - active + + # Get heartbeats for status + up = 0 + down = 0 + pending = 0 + + for m in 
monitors: + if not m.get("active"): + continue + beats = api.get_monitor_beats(m["id"], 1) + if beats: + status = beats[0].get("status") + if status == 1: + up += 1 + elif status == 0: + down += 1 + else: + pending += 1 + else: + pending += 1 + + if args.json: + print(json.dumps({ + "total": total, + "active": active, + "paused": paused, + "up": up, + "down": down, + "pending": pending + }, indent=2)) + else: + print(f"๐Ÿ“Š Uptime Kuma Status") + print(f" Total monitors: {total}") + print(f" Active: {active} | Paused: {paused}") + print(f" ๐ŸŸข Up: {up} | ๐Ÿ”ด Down: {down} | โณ Pending: {pending}") + + +def cmd_heartbeats(args): + """Get recent heartbeats for a monitor.""" + with get_api() as api: + beats = api.get_monitor_beats(args.id, args.hours) + if args.json: + print(json.dumps(beats, indent=2, default=str)) + else: + for b in beats[-10:]: # Show last 10 + status = "๐ŸŸข" if b.get("status") == 1 else "๐Ÿ”ด" + time = b.get("time", "?") + ping = b.get("ping", "?") + print(f"{status} {time} - {ping}ms") + + +def cmd_notifications(args): + """List notification channels.""" + with get_api() as api: + notifications = api.get_notifications() + if args.json: + print(json.dumps(notifications, indent=2, default=str)) + else: + for n in notifications: + active = "โœ“" if n.get("active") else "โœ—" + print(f"[{active}] [{n['id']}] {n['name']} ({n['type']})") + + +def main(): + parser = argparse.ArgumentParser(description="Uptime Kuma CLI") + subparsers = parser.add_subparsers(dest="command", help="Commands") + + # list + p_list = subparsers.add_parser("list", help="List all monitors") + p_list.add_argument("--json", action="store_true", help="Output as JSON") + p_list.set_defaults(func=cmd_list_monitors) + + # get + p_get = subparsers.add_parser("get", help="Get monitor details") + p_get.add_argument("id", type=int, help="Monitor ID") + p_get.set_defaults(func=cmd_get_monitor) + + # add + p_add = subparsers.add_parser("add", help="Add a new monitor") + 
p_add.add_argument("--name", required=True, help="Monitor name") + p_add.add_argument("--type", required=True, help="Monitor type (http, ping, port, etc.)") + p_add.add_argument("--url", help="URL to monitor (for HTTP)") + p_add.add_argument("--hostname", help="Hostname (for ping/port)") + p_add.add_argument("--port", type=int, help="Port number") + p_add.add_argument("--interval", type=int, default=60, help="Check interval in seconds") + p_add.add_argument("--keyword", help="Keyword to search (for keyword type)") + p_add.set_defaults(func=cmd_add_monitor) + + # delete + p_del = subparsers.add_parser("delete", help="Delete a monitor") + p_del.add_argument("id", type=int, help="Monitor ID") + p_del.set_defaults(func=cmd_delete_monitor) + + # pause + p_pause = subparsers.add_parser("pause", help="Pause a monitor") + p_pause.add_argument("id", type=int, help="Monitor ID") + p_pause.set_defaults(func=cmd_pause_monitor) + + # resume + p_resume = subparsers.add_parser("resume", help="Resume a monitor") + p_resume.add_argument("id", type=int, help="Monitor ID") + p_resume.set_defaults(func=cmd_resume_monitor) + + # status + p_status = subparsers.add_parser("status", help="Get overall status") + p_status.add_argument("--json", action="store_true", help="Output as JSON") + p_status.set_defaults(func=cmd_status) + + # heartbeats + p_hb = subparsers.add_parser("heartbeats", help="Get heartbeats for a monitor") + p_hb.add_argument("id", type=int, help="Monitor ID") + p_hb.add_argument("--hours", type=int, default=24, help="Hours of history") + p_hb.add_argument("--json", action="store_true", help="Output as JSON") + p_hb.set_defaults(func=cmd_heartbeats) + + # notifications + p_notif = subparsers.add_parser("notifications", help="List notification channels") + p_notif.add_argument("--json", action="store_true", help="Output as JSON") + p_notif.set_defaults(func=cmd_notifications) + + args = parser.parse_args() + + if not args.command: + parser.print_help() + sys.exit(1) + + 
try: + args.func(args) + except Exception as e: + print(f"Error: {e}", file=sys.stderr) + sys.exit(1) + + +if __name__ == "__main__": + main()
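
For reference, the roll-up logic in `cmd_status` can be exercised without a live server. A minimal standalone sketch (the `summarize_status` helper and the sample monitor/beat dicts are illustrative; they mirror the shapes returned by `uptime-kuma-api` but are not part of the script):

```python
# Sketch of the cmd_status roll-up: for each *active* monitor, classify its
# most recent heartbeat (status 1 = up, 0 = down, anything else or no beat
# at all = pending). Paused monitors are skipped entirely.

def summarize_status(monitors, latest_beats):
    """monitors: list of dicts with 'id' and 'active'.
    latest_beats: dict mapping monitor id -> most recent beat dict, or None."""
    counts = {"up": 0, "down": 0, "pending": 0}
    for m in monitors:
        if not m.get("active"):
            continue  # paused monitors don't count toward up/down/pending
        beat = latest_beats.get(m["id"])
        if beat is None:
            counts["pending"] += 1
        elif beat.get("status") == 1:
            counts["up"] += 1
        elif beat.get("status") == 0:
            counts["down"] += 1
        else:
            counts["pending"] += 1
    return counts

if __name__ == "__main__":
    monitors = [
        {"id": 1, "active": True},
        {"id": 2, "active": True},
        {"id": 3, "active": False},  # paused: skipped
        {"id": 4, "active": True},   # no beats yet: pending
    ]
    latest_beats = {1: {"status": 1}, 2: {"status": 0}, 4: None}
    print(summarize_status(monitors, latest_beats))  # {'up': 1, 'down': 1, 'pending': 1}
```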