# TOOLS.md - Local Notes
Skills define how tools work. This file is for your specifics — the stuff that's unique to your setup.
## James Server Hardware (forge — 192.168.1.16)
- CPU: Intel i7-6700K @ 4.0GHz (4c/8t)
- RAM: 64GB DDR4
- GPU: NVIDIA GTX 970 4GB
- Storage: 477GB NVMe (Samsung 950 PRO 512GB)
- OS: Ubuntu 24.04.1 LTS headless + minimal GUI for headed Chrome
- Hostname: forge
See memory/infrastructure.md for full infrastructure map.
## What Goes Here
Things like:
- Camera names and locations
- SSH hosts and aliases
- Preferred voices for TTS
- Speaker/room names
- Device nicknames
- Anything environment-specific
## Examples
### Cameras
- living-room → Main area, 180° wide angle
- front-door → Entrance, motion-triggered
### SSH
- home-server → 192.168.1.100, user: admin
### TTS
- Preferred voice: "Nova" (warm, slightly British)
- Default speaker: Kitchen HomePod
## James Dashboard
- URL: http://100.123.216.65:9200 (Tailscale) or http://localhost:9200
- Purpose: Visual status board for tasks, briefings, history
Tasks API:
- `GET /api/tasks` — list all tasks
- `POST /api/tasks` — add task
- `PATCH /api/tasks/:id` — update task
- `DELETE /api/tasks/:id` — remove task
- Fields: title, text, priority, status, owner, domain, notes
- Priority: high, medium, low
- Status: pending, in-progress, done
- Owner: "johan" or "james" (who owns the task)
- Domain: Kaseya, inou, Sophia, ClawdNode, Infrastructure, Personal, etc.
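Putting the fields and endpoints above together, a minimal sketch of adding a task — assuming the API accepts a JSON body (content type is an assumption; endpoint, fields, and allowed values are from the notes):

```shell
# Hedged sketch: create a high-priority task on the dashboard.
# The task values themselves are illustrative placeholders.
curl -X POST http://localhost:9200/api/tasks \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Renew SSL cert",
    "priority": "high",
    "status": "pending",
    "owner": "james",
    "domain": "Infrastructure"
  }'
```

Off-box, swap `localhost` for the Tailscale URL (http://100.123.216.65:9200).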
News API:
- `GET /api/news` — list news (newest first, max 20)
- `POST /api/news` — add news item
- `DELETE /api/news` — clear all news
- `DELETE /api/news/:id` — remove specific item
- Fields: title, body, type, source (optional), url (optional)
- Type: info, success, warning, error
Briefings API:
- `GET /api/briefings` — list briefings (newest first, last 30)
- `POST /api/briefings` — add briefing
- `GET /api/briefings/:id` — get single briefing
- `DELETE /api/briefings/:id` — remove briefing
- Fields: title, date, weather, markets, news, tasks, summary
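For the morning-brief workflow, a hedged sketch of posting a briefing — fields come from the notes, the JSON body shape and sample values are assumptions:

```shell
# Hedged sketch: post a morning briefing to the dashboard.
# All field values below are illustrative placeholders.
curl -X POST http://localhost:9200/api/briefings \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Morning Brief",
    "date": "2026-02-16",
    "weather": "Sunny, 74F",
    "markets": "Flat overnight",
    "news": "Nothing urgent",
    "tasks": "2 open, 1 high priority",
    "summary": "Quiet day expected."
  }'
```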
Deliveries API:
- `GET /api/deliveries` — list active deliveries (excludes delivered)
- `GET /api/deliveries?all=true` — list all including delivered
- `POST /api/deliveries` — add delivery (prefer upsert instead)
- `PUT /api/deliveries/upsert` — smart upsert: matches by tracking_number or description+retailer, updates existing or creates new. Always use this for shipping email triage.
- `GET /api/deliveries/:id` — get single delivery
- `PATCH /api/deliveries/:id` — update delivery
- `DELETE /api/deliveries/:id` — remove delivery
- Fields: description, carrier, retailer, tracking_number, tracking_url, expected_date, status, notes
- Status values: shipped, in_transit, out_for_delivery, delayed, delivered
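Since the notes say to always use upsert for shipping-email triage, a hedged sketch — endpoint, match keys, and field names are from the notes; the JSON body shape and sample values are assumptions:

```shell
# Hedged sketch: upsert a delivery (matches on tracking_number, or on
# description+retailer when there's no tracking number yet).
# The delivery values are illustrative placeholders.
curl -X PUT http://localhost:9200/api/deliveries/upsert \
  -H "Content-Type: application/json" \
  -d '{
    "description": "USB-C dock",
    "retailer": "Amazon",
    "carrier": "UPS",
    "tracking_number": "1Z999AA10123456784",
    "status": "in_transit"
  }'
```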
Status API:
- `GET /api/status` — list all status items
- `POST /api/status` — set status `{"key": "...", "value": "...", "type": "info|warning|error"}`
- `GET /api/status/:key` — get single status
- `DELETE /api/status/:key` — remove status
- Use for: Claude usage, system health, key metrics displayed at top of dashboard
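A hedged sketch of setting a status tile using the payload shape shown above (the key and value are illustrative placeholders):

```shell
# Hedged sketch: set a status item shown at the top of the dashboard.
curl -X POST http://localhost:9200/api/status \
  -H "Content-Type: application/json" \
  -d '{"key": "claude-usage", "value": "42% of weekly cap", "type": "info"}'
```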
Workflow:
- Morning brief → POST to /api/briefings, send Signal link to dashboard
- Track tasks with owner field (mine vs Johan's)
- Keep briefing history for reference
- Update Claude usage status: `scripts/claude-usage-check.sh` (auto-updates dashboard)
## Forge = James' Home (192.168.1.16)
- forge IS the James server now. See hardware section above.
- GPU: GTX 970 4GB (Driver 580.126.09, CUDA 13.0)
- Ollama: Installed (0.15.4)
- Python: /home/johan/ocr-env/ (venv with PyTorch 2.2 + CUDA 11.8)
- SSH: Key auth only (password auth disabled)
- Firewall: UFW active, SSH + LAN allowed
- Owner: James ⚡ (full autonomy)
OCR Service (GLM-OCR):
- URL: http://192.168.3.138:8090
- Service: `systemctl --user status ocr-service` (on forge)
- Source: `/home/johan/ocr-service/server.py`
- Model: `/home/johan/models/glm-ocr` (zai-org/GLM-OCR, 2.47 GB)
- VRAM usage: 2.2 GB idle (model loaded), peaks ~2.8 GB during inference
- Performance: ~2s small images, ~25s full-page documents (auto-resized to 1280px max)
- Endpoints:
  - `GET /health` — status + GPU memory
  - `POST /ocr` — single image OCR (multipart: file + prompt + max_tokens)
  - `POST /ocr/batch` — multi-image OCR
- Auto-resize: Images capped at 1280px longest edge (prevents OOM on GTX 970)
- Usage from james: `curl -X POST http://192.168.3.138:8090/ocr -F "file=@image.png"`
- Patched env: transformers 5.0.1.dev0 (from git) + monkey-patch for PyTorch 2.2 compat
## Home Network
- Public IP: 47.197.93.62 (not static, but rarely changes)
- Location: St. Petersburg, Florida
- Caddy (reverse proxy): 192.168.0.2 / Tailscale: 100.84.42.55 (caddy)
- SSH: `tailscale ssh root@caddy` or `ssh root@caddy` (key installed)
- Config: `/etc/caddy/Caddyfile`
## James Server (Hetzner)
- LAN IP: 192.168.1.16
- Gateway UI: http://192.168.1.16:18789/
- Agents:
- Main (James): http://192.168.1.16:18789/
- Mail Agent: http://192.168.1.16:18789/agents/mailagent
## Uptime Kuma
- URL: http://zurich.inou.com:3001
- User: james
- Password: WW8ipJfY27ELf7nnouaKLCL6
## Openprovider (Domain Registrar)
- URL: https://cp.openprovider.eu
- User: johan.jongsma@iasobackup.com
- Password: !!Helder06
## Test Devices
- ThinkPhone 1 (Motorola/Lenovo) — Johan's Android test device
## Git Server (Zurich)
- Host: zurich.inou.com
- User: git
- URL format: `git@zurich.inou.com:<repo>.git`
- Repos: azure-backup, clawdnode-android, inou-mobile, mail-agent
- Auth: SSH keys (claude@macbook, james@server, johan@ubuntu2404 authorized)
- Create new repo: `ssh git@zurich.inou.com "git init --bare <name>.git"`
## myCigna (Health Insurance Portal)
- URL: https://my.cigna.com
- Username: tjjongsma
- Password: QFL&ARHXGW4R
- 2FA: Email to tj@jongsma.me (I can grab the code from MC) or SMS to ***-2475
- Account holder: Tatyana
- Covered: Tatyana, Johan, Michael, Sophia
- Access method: Real Chrome on Xvfb:99, CDP on port 9224 (headless Playwright gets WAF-blocked)
## SSH Hosts
- hostkey50304 → 82.22.36.202 / zurich.inou.com (CH/Switzerland VPS)
- Location: Zürich, likely Equinix ZH (Josefstrasse 225)
- Upstream: Cogent Communications
- Specs: 4 vCore, 6GB RAM, 120GB SSD
- OS: Fresh install (2025-01-27)
- User: root
- Purpose: Security infrastructure (geographic diversity for monitoring, SOC2 compliance). NOT for hosting inou.com.
- Home Assistant → 192.168.1.252
- User: root
- Port: 22
- ⚠️ STRICT RULES:
- NO changes without Johan's explicit permission
- NEVER change lights during night hours
- NEVER play audio on speakers/tablets during night hours
- Night = Sophia's sleep/care time — disruptions are dangerous
## Browser Profiles
| Profile | Type | Port | Use Case |
|---|---|---|---|
| chrome | Relay (your actual Chrome) | 18792 | X.com, paranoid sites, full auth |
| fast | Headless Chromium | 9223 | General automation, tolerant sites |
| clawd | Headless (managed) | 18800 | Default, no auth |
### Chrome Relay (profile="chrome") — Best for authenticated sites
Attaches to your actual Chrome browser via extension. No session copying, no detection issues.
Setup (one-time):
- Extension installed at `~/.clawdbot/browser/chrome-extension`
- Load in Chrome: `chrome://extensions` → Developer mode → Load unpacked
- Pin the extension to toolbar
Usage:
- Navigate to site in Chrome, make sure you're logged in
- Click the Clawdbot extension icon (badge shows ON)
- I use `browser(profile="chrome")` to control that tab
When to use Chrome Relay:
- X.com (very aggressive bot detection)
- Sites requiring 2FA/login you've already completed
- Anything that blocks headless browsers
### Headless Chromium (profile="fast") — For general automation
Runs headless with synced cookies from Chrome. Some paranoid sites (X.com) detect and block it.
Setup script: `~/clawd/scripts/browser-setup.sh`

```shell
scripts/browser-setup.sh start-login     # Open Chrome, login to sites
scripts/browser-setup.sh sync            # Close Chrome first! Then sync
scripts/browser-setup.sh start-headless  # Start headless on port 9223
scripts/browser-setup.sh status          # Check what's running
scripts/browser-setup.sh stop            # Stop all
```
⚠️ Important: Close Chrome before running sync — copying a live profile invalidates sessions!
Usage: `browser(action="...", profile="fast")`
### Browsing Tips
- Use `web_fetch` for simple page reads (faster, cheaper than full browser)
- Use `browser` when you need JavaScript rendering, interactions, or auth
- For large pages, use `maxChars` on snapshots to avoid context bloat
- Keep `targetId` from snapshot responses for stable tab references
## bird (X/Twitter CLI)
- Wrapper: `~/clawd/scripts/bird` (sets auth tokens automatically)
- Config: `~/.config/bird/config.json5` (tokens stored but not read properly — use wrapper)
- Usage: `~/clawd/scripts/bird read <tweet-id>` or `bird read <url>`
- Commands: `bird search`, `bird home`, `bird mentions`, `bird news`, `bird user-tweets @handle`
- For X.com access — use bird instead of browser (faster, no bot detection issues)
- Auth: Using @johanjongsma account
## Proton Mail Bridge (Headless)
- Service: `systemctl --user status protonmail-bridge`
- Account: tj@jongsma.me (Tanya & Johan Jongsma)
- IMAP: 127.0.0.1:1143 (STARTTLS)
- SMTP: 127.0.0.1:1025 (STARTTLS)
- Bridge Password: BlcMCKtNDfqv0cq1LmGR9g
- Keychain: `pass` (no gnome-keyring needed)
- Config: `~/.config/protonmail/bridge-v3/prefs.json` → `{"preferred_keychain": "pass"}`
## Message Bridge (WhatsApp backend) - whatsmeow
- Service: `systemctl --user status message-bridge`
- Port: 8030
- Source: `/home/johan/dev/message-bridge/`
- Data: `~/.message-bridge/`
- Linked Number: +1 727 225 2475 (Johan's ThinkPhone)
- Note: MC wraps this — use MC API for WhatsApp, not this directly
Direct API (for debugging):
- `GET /status` — connection status
- `GET /messages` — list messages
- `GET /qr?format=png` — QR code for linking (if disconnected)
- `POST /send` — send message
## Message Center (MC) - Unified Messaging API
- Service: `systemctl --user status mail-bridge`
- Port: 8025
- Source: `/home/johan/dev/mail-bridge/` (branch: mc-unified)
- Data: `~/.message-center/` (cursors.json, orchestration.db)
- Health: `curl http://localhost:8025/health`
Connectors:
- `tj_jongsma_me` — IMAP (tj@jongsma.me via Proton Bridge)
- `johan_jongsma_me` — IMAP (johan@jongsma.me via Proton Bridge)
- `whatsapp` — HTTP wrapper for message-bridge on port 8030
Unified API:
- `GET /messages/new` — unseen messages from all sources
- `GET /messages?since=24h` — replay window
- `GET /messages/{id}` — single message
- `POST /messages/ack` — advance consumer cursor
Actions:
- `POST /messages/{id}/archive` — mark seen/archive
- `POST /messages/{id}/delete` — delete message
- `POST /messages/{id}/reply` — send reply `{"body": "..."}`
- `POST /messages/{id}/to-docs` — forward attachments to ~/documents/inbox/
Orchestration DB: ~/.message-center/orchestration.db
- Tracks: first_seen, last_action, acked_by per message
- Actions persist across restarts
Webhook: POSTs {"event": "new"} to http://localhost:18789/hooks/messages
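The unified API above supports a fetch-handle-ack consumer loop. A hedged sketch — the endpoints are from the notes, but the response being a JSON array with `id` fields, the ack body shape, and the jq filters are all assumptions:

```shell
# Hedged sketch of an MC consumer loop: fetch unseen messages,
# archive each, then advance the cursor.
# ASSUMED: /messages/new returns a JSON array of {"id": ...} objects,
# and /messages/ack accepts {"ids": [...]}. Verify against the real API.
new=$(curl -s http://localhost:8025/messages/new)
for id in $(echo "$new" | jq -r '.[].id'); do
  curl -s -X POST "http://localhost:8025/messages/$id/archive"
done
# Advance the consumer cursor past everything just handled
curl -s -X POST http://localhost:8025/messages/ack \
  -H "Content-Type: application/json" \
  -d "{\"ids\": $(echo "$new" | jq '[.[].id]')}"
```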
## Commands
- `/screenshot` → Pull latest screenshot from Mac desktop (uses screenshot skill)
## Uptime Kuma (Zurich)
- URL: https://kuma.inou.com (DNS → zurich 82.22.36.202, Caddy reverse proxy → localhost:3001)
- User: james / JamesKuma2026!
- Env vars for kuma.py:
  - `UPTIME_KUMA_URL=http://localhost:13001` (via SSH tunnel: `ssh -L 13001:127.0.0.1:3001 root@zurich.inou.com`)
  - `UPTIME_KUMA_USERNAME=james`
  - `UPTIME_KUMA_PASSWORD=JamesKuma2026!`
- Python venv: `/home/johan/clawd/skills/uptime-kuma/.venv/`
- Notification channels:
- [1] Clawdbot Signal Alert (webhook → OC) — for MC alerts
- [2] Johan Direct (ntfy) — for OC/network alerts
- Monitors:
- [1-5] inou.com (HTTP, API, Zurich VPS, DNS, SSL Cert)
- [6] Forge — OpenClaw (push, token: r1G9JcTYCg) → ntfy
- [7] Forge — Message Center (push, token: rLdedldMLP) → OC webhook
- [8] Home Network (Public) (ping 47.197.93.62) → ntfy
## ntfy (Push Notifications — self-hosted on Zurich)
- URL: https://ntfy.inou.com (Caddy → localhost:2586)
- User: james / JamesNtfy2026!
- Topic: `forge-alerts` (anonymous read allowed for the ntfy apps; auth required to publish)
- Auth: `Authorization: Bearer tk_k120jegay3lugeqbr9fmpuxdqmzx5` (james admin account)
- Markdown: Supported — use `-H "Markdown: yes"` header
- Headers: Title, Priority (1-5), Tags (comma-separated emoji shortcodes)
- Use for: Alerts when OC/Signal is down, urgent pings, status reports
- Johan subscribes to https://ntfy.inou.com/forge-alerts in the ntfy iOS app, and on Android (ntfy app, Feb 15 2026)
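A hedged sketch of publishing an alert, using the token, topic, and headers from the notes (the message text is an illustrative placeholder):

```shell
# Hedged sketch: push an alert to the self-hosted ntfy topic.
curl https://ntfy.inou.com/forge-alerts \
  -H "Authorization: Bearer tk_k120jegay3lugeqbr9fmpuxdqmzx5" \
  -H "Title: MC health check failed" \
  -H "Priority: 4" \
  -H "Tags: warning" \
  -H "Markdown: yes" \
  -d "Message Center is **down** on forge. Investigate when possible."
```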
## Health Push Script
- Script: `/home/johan/scripts/health-push.sh`
- Cron: `* * * * *` (every minute)
- Logic: Checks MC health + OC health locally, pushes to Kuma only if healthy
- Alert routing:
- MC down → James (OC webhook) — James can investigate
- OC down → Johan direct (ntfy) — James IS the thing that's down
- Home network down → Johan direct (ntfy) — everything at home is down
## OpenProvider API (Domain Registrar)
- API: `https://api.openprovider.eu/v1beta/`
- Creds: `/home/johan/.config/openprovider.env` — `OP_USERNAME`, `OP_PASSWORD`
- Auth: `POST /auth/login` → bearer token
- NS change: `PUT /domains/{id}` with `{"name_servers":[...]}`
- List domains: GET `/domains`
- I have full access — don't ask Johan to change NS manually!
## Why Separate?
Skills are shared. Your setup is yours. Keeping them apart means you can update skills without losing your notes, and share skills without leaking your infrastructure.
Add whatever helps you do your job. This is your cheat sheet.
## Govee H5122 Buttons
- Button 1: `event.gv5122775b_button_1` (MAC: D2:2D:83:86:77:5B)
  - Automation: `automation.govee_button_mbed_pendants_toggle` → toggles `switch.mbed_pendants`
- Button 2: `event.gv51222839_button_1` (MAC: D2:2D:80:C6:28:39) — Office
  - Automation: `automation.govee_button_2_suction_machine` → toggles suction machine
- Pairing: Hold 5 sec (LED flashes), press once to broadcast, add from HA UI quickly
- Note: BLE proxy must be enabled on nearby Athom sensor (Office1 b372f4 has it now)
## Office Tablet (office1.tbl)
- Media player: `media_player.lenovo_tab_m8_7`
- TTS notify: `notify.lenovo_tab_m8_text_to_speech_7`
- Overlay notify: `notify.lenovo_tab_m8_overlay_message_7`
- Screen: `light.office_tbl_screen`
- Fully Kiosk media_player: `media_player.office_tbl`
- Use for: James voice output testing, announcements