# HEARTBEAT.md

## Johan's Schedule (US Eastern)

**Sleep blocks — DO NOT PING:**
- ~7:30pm – 10:15pm (first sleep)
- ~5:15am – 9/10am (second sleep, weekdays)
- ~7:00am – 11am+ (second sleep, weekends & holidays — sleeps in, no work)

**Available/Awake:**
- ~10am – 7:30pm (daytime)
- ~10:30pm – 5:00am weekdays / 7:00am weekends & holidays (night shift, caring for Sophia so Babuska can rest)

At 10:30pm+ he's WORKING, not sleeping. Don't assume "late night = quiet time."

**Default assumption:** If not in a sleep block, Johan is working or available. When in doubt, ping.

---

## Daily Tasks (run once per day)

Check `memory/heartbeat-state.json` for last run times. Only run if >24h since last check.

### Update Summary (in briefings)

**Automated at 9:00 AM ET** via systemd timer (`daily-updates.timer`). Updates OpenClaw, Claude Code, and OS packages. Only restarts the gateway if OpenClaw changed. Log: `memory/updates/YYYY-MM-DD.json`

**In every morning briefing:**
1. Read today's update log from `memory/updates/`
2. Summarize what was updated (versions, package names)
3. For OpenClaw/Claude Code updates: fetch release notes from GitHub and include key changes
4. Flag if reboot required (kernel updates)

**No manual check needed** — the timer handles it. I just read the log and report.

### Spacebot Monitor (weekly — revisit 2026-03-03)

Watch for new Spacebot releases that may fix the worker dispatch issue:
- Check latest release: `curl -s https://api.github.com/repos/spacedriveapp/spacebot/releases/latest | python3 -c "import json,sys; r=json.load(sys.stdin); print(r['tag_name'], r['published_at'][:10], r['body'][:200])"`
- Current version running: **v0.1.15**
- **Watch for:** fix to worker dispatch (channel calls reply() and stops without spawning workers)
- **PR #193** open at https://github.com/spacedriveapp/spacebot/pull/193 — check if merged
- If significant release → note in briefing. Don't pull/update Andrew until Johan says so.
- Johan revisiting Andrew seriously on **2026-03-03**

### inou Daily Suggestion (daily, once per day)

**Goal:** Keep inou moving forward even on days Johan doesn't mention it.

**Track:** `lastInouSuggestion` in `memory/heartbeat-state.json`. Skip if run in last 20h.
**Timing:** Morning or early afternoon — not at night.
**Mode:** Building only. We are NOT ready to promote yet. No marketing suggestions.

Spawn a subagent to think through ONE concrete suggestion:
```
sessions_spawn(task="inou daily suggestion: review memory/inou-context.md and recent inou work, then propose ONE specific thing Johan could do today to move inou forward (building/product focus only, not marketing). Post to dashboard news and send via Discord DM to Johan (user 666836243262210068). Keep it to 2-3 sentences max.", label="inou-nudge")
```

**Good suggestions:** a feature gap to close, a bug to fix, a UX improvement, an integration to build, a technical decision to make, a missing data source to add, a doctor/specialist workflow to improve.
**Bad suggestions:** anything about promoting, tweeting, getting users, press, or going public.

---

### Cron Job Failure Check (every heartbeat)

Check whether systemd timers/services have failed since the last run:
```bash
systemctl --user list-units --state=failed --no-legend 2>/dev/null
```

Key timers to watch: `daily-updates.timer`, `k2-watchdog.timer`

If any service shows `Result=exit-code` or `ActiveState=failed`:
1. Check logs: `journalctl --user -u <service> -n 50 --no-pager`
2. If it's `daily-updates.service` failing: check `memory/updates/YYYY-MM-DD.json` for what ran (nightly maintenance may have covered it), then fix the script
3. **Alert via ntfy (forge-alerts)** — don't wait for Johan to notice

```bash
# Quick check
systemctl --user show daily-updates.service --property=Result,ActiveState | grep -v "^$"
```

---

### Disk Usage Check (every heartbeat)

```bash
df -h / | awk 'NR==2 {print $5, $4}'
```

- **< 70%:** Fine, no action
- **70–85%:** Note in next briefing
- **> 85%:** Alert via ntfy (forge-alerts) immediately
- **> 90%:** URGENT — ping Johan

Current baseline: ~54% used (236G / 469G) as of 2026-03-15. Top consumers to check if filling fast: ClickHouse (`/var/lib/docker`), Jellyfin media, doc inbox.

---

### Memory Review (daily)

Quick scan of today's conversations for durable facts worth remembering:
- New people, relationships, preferences
- Decisions made, lessons learned
- Project milestones, status changes
- Anything Johan said "remember this" about

If found: update MEMORY.md with dated entries. Keep it lean. Track in `memory/heartbeat-state.json` with `lastMemoryReview` timestamp.

### Weather (in briefings)

Florida = "every day is the same." Don't report normal weather.
- **Format:** Just temp range + icon (e.g., "☀️ 68-82°F")
- **Only alert on:** Hurricanes, tropical storms, unusual temps, severe weather
- If nothing exceptional: one-liner or skip entirely

### Briefings Dashboard (ALWAYS!)

After generating any briefing (morning, afternoon, or ad-hoc):
1. **POST to dashboard:** `curl -X POST http://localhost:9200/api/briefings -H 'Content-Type: application/json' -d '{"title":"...","date":"YYYY-MM-DD","weather":"...","markets":"...","news":"...","tasks":"...","summary":"..."}'`
2. **Send to Johan via Discord DM** (NOT Signal — retired 2026-03-01): `message(action="send", channel="discord", target="johanjongsma", message="...")` with dashboard link http://100.123.216.65:9200
3. **Verify it worked:** `curl -s http://localhost:9200/api/briefings | jq '.briefings | length'`

This is NON-NEGOTIABLE. Johan expects briefings on the dashboard.
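The POST step above assumes a fixed payload shape for `/api/briefings`. A minimal sketch of assembling that body before posting — `make_briefing` is a hypothetical helper, not an existing script; the field names come from the curl example, and the point is to refuse to send a half-empty briefing:

```python
import json
from datetime import date

# Fields the dashboard payload carries, per the curl example above.
REQUIRED = ("title", "date", "weather", "markets", "news", "tasks", "summary")

def make_briefing(**fields) -> str:
    """Build the JSON body for POST /api/briefings.

    Fills today's date if missing and raises instead of emitting a
    payload with empty fields (a half-empty briefing on the dashboard
    is worse than a late one).
    """
    fields.setdefault("date", date.today().isoformat())
    missing = [k for k in REQUIRED if not fields.get(k)]
    if missing:
        raise ValueError(f"briefing missing fields: {missing}")
    return json.dumps(fields)
```

Pipe the returned string into the `curl -d` body (or an equivalent HTTP call), then run the verify step as usual.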
### Briefing Quality Rules

- **Tasks:** Pull ONLY from `GET /api/tasks` on the dashboard. Do NOT invent or recall tasks from memory.
- **Alerts:** Only include if fresh (< 24h). A 2-day-old bank alert is not news.
- **Server status:** Before flagging anything as "down", verify it. If you're running on forge and sending this briefing, forge is NOT down.
- **Abandoned projects:** Flutter Styleguide, Azure Backup (trial) — do NOT include. If unsure whether a project is active, skip it.
- **Domain renewals:** All domains auto-renew at Openprovider. Do NOT flag expirations unless renewal actually fails.

### Weekly Docker & HAOS Update (Sundays)

Check and update Docker containers on 192.168.1.253 and HAOS on 192.168.1.252:
1. **HAOS:** Check `update.home_assistant_operating_system_update` — install if available
2. **Docker (253):** For each service in `/home/johan/services/`:
   - `docker compose pull` → `docker compose up -d` → `docker image prune -f`
3. Report what was updated in the weekly briefing

Services: immich, clickhouse, jellyfin, qbittorrent-vpn
**qbittorrent-vpn: PULL ONLY, do NOT start.** Johan uses it on-demand.

SSH: `ssh johan@192.168.1.253`

### Weekly Memory Synthesis (Sundays)

Deeper review:
1. Read through `memory/YYYY-MM-DD.md` files from the past week
2. Read `memory/corrections.md` — look for patterns in pushback
3. Distill patterns, insights, recurring themes into MEMORY.md
4. Promote recurring corrections to AGENTS.md as rules
5. Flag anything stale in MEMORY.md (ask Johan or remove)

Schedule: Sunday mornings (cron handles this separately)

### Corrections Process (ongoing)

When Johan pushes back:
1. **Log immediately** to `memory/corrections.md`
2. **Capture the principle**, not just the symptom
3. **Ask:** "Why was this wrong? Where else does this apply?"
4. Format:

```
### PRINCIPLE: [Name it]
**Trigger:** What happened
**Why:** Why it matters
**Applies to:** Other situations
**Test:** Question to ask before acting
```

### Tech News Scan (daily, morning)

Search X/Twitter for news Johan cares about:

**Tech/AI:**
- Storage optimizations, compression, deduplication
- Vector embeddings improvements (better/smaller/faster)
- Backup technology advances
- Data reduction techniques
- New AI models (faster, smaller, cheaper, open source)
- **WebMCP** (Chrome's Web Model Context Protocol) — spec updates, browser adoption, site implementations, healthcare/enterprise use cases

**Clawdbot / AI Assistants:**
- @moltbot (Mr. Lobster 🦞, Clawdbot creator) — releases, features, issues
- @AlexFinn — Moltbot power user, AI insights, Creator Buddy
- Clawdbot GitHub releases/issues: https://github.com/clawdbot/clawdbot
- ClawdHub skills/add-ons: https://clawdhub.com
- Claude Code updates
- New/interesting skills, plugins, integrations
- **Major platform integrations** (Cloudflare, Vercel, etc. adding Moltbot support)

**Politics/General:**
- What did Trump do today (executive orders, major moves)
- What's making noise / trending topics

**Infrastructure/Markets (only if news):**
- Data center power (grid constraints, pricing, outages, nuclear/renewables for DCs)
- Memory pricing (DRAM, NAND, HBM — only significant moves)
- Big stock moves (MSFT, NVDA, etc. — 5%+ swings)

**NABL (N-able) — Check 2x daily:**
- Johan's ex-employer, doing poorly
- Watch for: PE takeover/going-private rumors, acquisition news, major price moves
- Any "NABL acquired", "N-able buyout", "take private" headlines

**Breaking Financial News (search for high-engagement posts):**
- Multi-asset moves (stocks + metals + crypto moving together) — crashes OR rallies
- Big swings in either direction (up 8% is as interesting as down 8%)
- Gold, Silver, S&P500, NASDAQ, BTC major moves
- **Always find the WHY** — "MSFT -10% despite strong earnings" or "NVDA +15% on new chip announcement"
- Search terms: "crash", "surge", "plunge", "rally", "trillion", "markets"
- Look for 500K+ view posts — that's where the real news breaks
- Example: @BullTheoryio-style macro breakdowns with charts

**X Accounts to Check (every briefing):**
- @openclaw — OpenClaw/Clawdbot official account (releases, features, community highlights)
- @steipete — Peter Steinberger, OpenClaw creator
- @AlexFinn — Moltbot power user, Creator Buddy founder, AI insights
- @OpenAI — model releases, product announcements
- @MiniMax_AI — MiniMax official (M2.5 and other models)
- @Kimi_Moonshot — Moonshot AI / Kimi (Chinese frontier lab)
- @ZhipuAI — Zhipu AI / z.ai (GLM models)
- @GeminiApp — Google Gemini updates
- @realDonaldTrump — What's Trump posting/doing
- @RapidResponse47 — Trump admin rapid response political news
- @trumpdailyposts — Trump daily activity digest
- ~~@moltbot~~ — gone as of Feb 22; ask Johan if he knows the new handle

**OpenClaw Issues (check weekly):**
- Session reset deleting transcripts from memory search index (our patch: include `.deleted` files)
- `dangerouslyDisableDeviceAuth` clearing operator scopes (our patch: preserve scopes)
- Watch releases + GitHub issues for official fixes → remove patches when fixed upstream

**Netherlands:**
- Dutch news (usually quiet, but flag anything significant)
- Tax on unrealized profits (vermogensbelasting box 3) — this is a big deal

**Post findings to dashboard:**
```bash
curl -X POST http://localhost:9200/api/news -H 'Content-Type: application/json' \
  -d '{"title":"...","body":"...","type":"info|success|warning|error","source":"..."}'
```

Only ping Johan on Discord if truly urgent (big position moves, breaking news).

Update `memory/heartbeat-state.json` with `lastTechScan` timestamp after running.

**Johan's Open Positions:**
- **Short S (SentinelOne)** — track daily, alert on big moves either direction

---

## Intra-Day X Watch (spawn subagent — never run inline)

**Goal:** Catch high-signal posts from key accounts the same day, not the next morning.
**Frequency:** Every 3–4 hours during awake hours. Skip during sleep blocks.
**State:** Track `lastIntraDayXScan` in `memory/heartbeat-state.json`. Skip if checked < 2h ago.

### ⚠️ ALWAYS SPAWN A SUBAGENT — never run inline

X scanning = multiple bird calls + web searches = context pollution. Offload it.

### De-duplication (MANDATORY)

**1. Last 24h only:** Only surface posts with timestamps within the last 24 hours. Discard anything older, regardless of how interesting it is.

**2. Check what was already posted:** Before posting anything to the dashboard or pinging Johan, fetch recent dashboard news:
```bash
curl -s http://localhost:9200/api/news | python3 -c "import json,sys; items=json.load(sys.stdin).get('news',[]); [print(i['title']) for i in items[:20]]"
```
If a story with a similar title or topic was already posted today, **skip it**. Don't post NemoClaw twice. Don't post the same OC release three times.

**3. Save what you surfaced:** After the scan, write a short summary of what was posted to `memory/x-watch-last.md` (overwrite each time):
```
# Last X Watch:
- NemoClaw announced at GTC (steipete, NVIDIA)
- MiniMax M2.7 benchmarks circulating
```
The next subagent reads this file at the start and skips anything already covered.
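The dedup steps above boil down to one check per candidate story. A sketch under assumptions — `already_covered` and its word-overlap threshold are illustrative, not an existing helper; it treats the last-scan lines plus recent dashboard titles as the blacklist:

```python
def already_covered(candidate, last_scan_lines, dashboard_titles):
    """Return True if this story was already surfaced.

    Compares the candidate against lines from memory/x-watch-last.md and
    recent dashboard news titles using overlap of significant words
    (assumed heuristic: half the candidate's words, minimum two).
    """
    def words(s):
        # Keep words longer than 3 chars, stripped of punctuation.
        return {w.strip(".,:;!?()").lower() for w in s.split() if len(w) > 3}

    cand = words(candidate)
    if not cand:
        return True  # nothing substantive to post
    for prior in list(last_scan_lines) + list(dashboard_titles):
        if len(cand & words(prior)) >= max(2, len(cand) // 2):
            return True
    return False
```

Run it once per candidate before posting; anything that comes back `True` is dropped silently, matching the "don't post NemoClaw twice" rule.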
**Hard dedup rule:** If a topic/story appears in `x-watch-last.md`, **do not surface it again** — not as an update, not as a confirmation, not as "more details emerged." The only exception is a concrete new fact that changes the picture (e.g., a release date, a product launch, a number). "steipete is still building NemoClaw" is not a new fact. Treat the last scan file as a blacklist.

### How to start each scan

1. Read `memory/x-watch-last.md` — know what was already covered
2. Fetch dashboard news (last 20 items) — know what's already posted
3. Run bird scans, filter to last 24h only
4. Only post what's genuinely new

### Accounts to scan every run

Check recent posts (last ~24h) from each:
- **@Cloudflare** — MCP, Workers, AI integrations, platform announcements
- **@openclaw** — releases, features, community highlights
- **@steipete** — Peter Steinberger, OpenClaw creator
- **@AlexFinn** — AI insights, Creator Buddy, Moltbot power use
- **@OpenAI** — model releases, product announcements
- **@MiniMax_AI** — MiniMax official (M2.5 and other models)
- **@Kimi_Moonshot** — Moonshot AI / Kimi (Chinese frontier lab)
- **@ZhipuAI** — Zhipu AI / z.ai (GLM models, Johan has dev account)
- **@GeminiApp** — Google Gemini updates
- **@realDonaldTrump** — executive orders, breaking political moves
- **@RapidResponse47** — Trump admin rapid response / political news
- **@trumpdailyposts** — Trump daily activity digest
- ~~@moltbot~~ — account not found as of Feb 22, 2026; handle may have changed

Use: `bird user-tweets @handle` → filter for posts newer than the last scan timestamp.

### What's worth surfacing

**Always ping Johan:**
- Major platform announcing MCP support (Cloudflare, Vercel, GitHub, etc.)
- New AI model releases (open-weight, frontier, pricing changes)
- Breaking political move with broad economic impact
- NABL acquisition/PE news
- @openclaw or @steipete announcing a release

**Post to dashboard, no ping:**
- Interesting tech commentary worth reading
- Minor updates, incremental news
- OpenClaw community highlights

**Drop silently:**
- Retweets of old news
- Promotional content, event promos, partnership announcements
- Engagement bait, opinion threads, personal posts
- Anything from @OpenAI/@MiniMax_AI/@Kimi_Moonshot/@ZhipuAI/@GeminiApp that isn't a model release, pricing change, or major product launch — these accounts post constantly, only hard news counts

### Subagent reports back with

- Any items surfaced (title + URL) — new ones only, not repeats
- "Nothing new since last scan" if quiet
- **Do NOT list accounts with no news** — not even in a "dropped" or "nothing from X" section. Only mention accounts that had something worth surfacing.
- Always write `memory/x-watch-last.md` even if nothing new — update the timestamp so the next scan knows when it last ran.

---

## K2.5 Watchdog (check every heartbeat)

**CRITICAL:** K2.5 can get stuck in infinite loops, burning thousands of dollars in tokens.

Check `~/.clawdbot/agents/k2-browser/sessions/` for signs of runaway:
1. **Session file > 100 lines** → WARNING, monitor closely
2. **Session file > 200 lines** → KILL immediately
3. **Session file growing rapidly** (modified in last 2 min + >50 lines) → KILL

**Kill procedure:**
```bash
/home/johan/clawd/scripts/k2-emergency-stop.sh
```
Or manually:
```bash
rm -rf ~/.clawdbot/agents/k2-browser/sessions/*.jsonl
systemctl --user restart clawdbot-gateway
```

**Alert Johan** if you kill K2 — he needs to know tokens were burned.

**Cron backup:** `scripts/k2-watchdog.sh` runs every 5 minutes as a safety net.

---

## WhatsApp Monitor

**Handled by webhook** — message-bridge POSTs to `/hooks/message` on new messages. No heartbeat polling needed.
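The K2.5 Watchdog thresholds above can be expressed as a single per-file classifier. A sketch, assuming `k2_session_action` is a hypothetical helper (the real safety net is `k2-watchdog.sh`); the 100/200-line and 2-minute numbers mirror the rules as written:

```python
import os
import time

def k2_session_action(path, now=None):
    """Classify one K2.5 session file: OK, WARN, or KILL.

    WARN  at >100 lines; KILL at >200 lines, or when the file has
    >50 lines and was modified in the last 2 minutes (growing rapidly).
    """
    now = now or time.time()
    with open(path, "rb") as f:
        lines = sum(1 for _ in f)
    recently_modified = (now - os.path.getmtime(path)) < 120  # 2 min window
    if lines > 200:
        return "KILL"
    if lines > 50 and recently_modified:
        return "KILL"  # growing rapidly
    if lines > 100:
        return "WARN"
    return "OK"
```

Any `KILL` result would then trigger the kill procedure (`k2-emergency-stop.sh`) and an alert to Johan, as described above.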
---

## Message Center (check every heartbeat)

**Goal:** Process all new messages (email + WhatsApp) from MC. Also catch any stragglers missed by the webhook.

### Inbox Cleanup (every heartbeat)

Webhook handles real-time, but messages can slip through (restarts, migration, webhook downtime). Check for un-actioned messages (listing now auto-excludes archived/deleted):
```bash
curl -s "http://localhost:8025/messages?source=tj_jongsma_me" | jq 'length'
curl -s "http://localhost:8025/messages?source=johan_jongsma_me" | jq 'length'
```
These counts reflect only messages with NO action taken. If anything's sitting there, triage it per `memory/email-triage.md`. Use `?all=true` to see everything including actioned messages (for debugging only).

### New Messages Check

**Check:**
```bash
curl -s "http://localhost:8025/messages/new" | jq 'length'
```

**If new messages exist:**
1. Fetch full list: `GET /messages/new`
2. Triage each per `memory/email-triage.md` rules
3. Take action: `/messages/{id}/archive`, `/delete`, `/reply`, or `/to-docs`
4. Actions persist in orchestration DB — won't reappear

**Replay if needed:**
```bash
curl -s "http://localhost:8025/messages?since=24h" | jq 'length'
```

**If falling behind (>5 new):**
- Do NOT alert Johan
- Process backlog silently
- Log patterns to improve triage rules

**Only ping Johan for:**
- Anything Sophia-related
- Requires his decision/action (not just FYI)
- Can't fix it myself (stuck, need access, unclear rules)

**Silent processing** — just triage and fix, don't ping for routine messages.

---

## Document Inbox (check every heartbeat)

**Inbox:** `~/documents/inbox/`

If files exist, for each document:
1. **Read** - OCR/extract full text
2. **Triage** - classify type (tax, expense, bill, medical, etc.)
3. **Store PDF** - move to `~/documents/store/[hash].pdf`
4. **Create record** - markdown in `~/documents/records/[category]/`
   - Summary, metadata, full text, link to PDF
5. **Embed** - generate vector embedding, store in `~/documents/index/embeddings.db`
6. **Index** - update `~/documents/index/master.json`
7. **Export** - if expense/invoice, append to `~/documents/exports/expenses.csv`

**Silent processing** — don't ping for every doc, just do the work. Only ping if: IRS correspondence, something urgent, or can't categorize.

---

## Claude.ai Usage Monitor (include in every briefing)

**Goal:** Track weekly usage limits to avoid running out mid-week.

**Data file:** `memory/claude-usage.json`
**Fetch script:** `scripts/claude-usage-fetch.py --save`
**Cookies config:** `config/claude-cookies.json`

**In every briefing:**
1. Run `scripts/claude-usage-fetch.py --save` to get fresh data
2. Include in briefing as one line: `📊 Claude: {weekly_percent}% used (resets {weekly_resets})`

**Pace = (weekly_percent / time_elapsed_percent) × 100**
e.g. 60% used at 50% of the week = pace 120% (burning too fast). 60% used at 80% of the week = pace 75% (fine).

Week runs Thu 10PM → Thu 10PM ET (Anthropic changed the reset window — previously Sat 2PM).

**Alert rules — read carefully:**
- **Pace ≤ 100%:** NOT an alert. Tracking correctly. Mention in briefing, nothing more.
- **Pace > 100% (burning faster than the week allows):** Send ntfy alert (forge-alerts). No Fully tablet.
- **Sudden jump ≥ 4% in ≤ 4h:** Send ntfy alert immediately (forge-alerts). No Fully tablet.
- **NEVER post Claude usage to the Fully tablet (port 9202).** It's not urgent enough for that surface. **The James dashboard (port 9200) status bar** shows current usage — that's enough passive visibility.

**If fetch fails (Cloudflare challenge = expired cookies):**
- **ALERT JOHAN IMMEDIATELY via ntfy (forge-alerts)** — don't go silent!
- Message: "⚠️ Claude usage fetch failed - cookies expired. Need fresh cookies from Chrome."
- Instructions: F12 → Application → Cookies → claude.ai → copy sessionKey + cf_clearance
- Update `config/claude-cookies.json`

**K2.5 Backup ready at:** `config-backups/k2.5-emergency-switch.md`
Switch command: `/model fireworks/accounts/fireworks/models/kimi-k2p5`
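The pace formula from the usage monitor can be sketched as a small helper. `usage_pace` is illustrative (not part of `claude-usage-fetch.py`); it assumes the week is the fixed 168-hour Thu-to-Thu window and applies the "alert only when pace > 100%" rule:

```python
def usage_pace(weekly_percent, hours_into_week, week_hours=7 * 24):
    """Pace = (weekly_percent / time_elapsed_percent) * 100.

    Returns (pace, should_alert). Alert fires only when burning faster
    than the week allows (pace > 100%); pace <= 100% is just a briefing
    mention, per the alert rules above.
    """
    elapsed_percent = 100.0 * hours_into_week / week_hours
    if elapsed_percent == 0:
        return 0.0, False  # week just reset, nothing to pace yet
    pace = 100.0 * weekly_percent / elapsed_percent
    return pace, pace > 100
```

For example, 60% used at the halfway point of the week gives pace 120% (alert), while 60% used at the 80% mark gives pace 75% (fine), matching the worked examples above.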