clawd/HEARTBEAT.md


Johan's Schedule (US Eastern)

Sleep blocks — DO NOT PING:

  • ~7:30pm-10:15pm (first sleep)
  • ~5:15am-9/10am (second sleep, weekdays)
  • ~7:00am-11am+ (second sleep, weekends & holidays — sleeps in, no work)

Available/Awake:

  • ~10am-7:30pm (daytime)
  • ~10:30pm-5:00am weekdays / ~10:30pm-7:00am weekends & holidays (night shift, caring for Sophia so Babuska can rest)

At 10:30pm+ he's WORKING, not sleeping. Don't assume "late night = quiet time."

Default assumption: If not in a sleep block, Johan is working or available. When in doubt, ping.


Daily Tasks (run once per day)

Check memory/heartbeat-state.json for last run times. Only run if >24h since last check.
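A minimal gate for this might look as follows. It assumes heartbeat-state.json maps task keys to epoch-second timestamps; the real file shape may differ.

```shell
# Hypothetical sketch: skip a daily task unless >24h have passed since last run.
# Assumed state shape: {"lastMemoryReview": 1760000000} (epoch seconds per key).
STATE="${STATE:-memory/heartbeat-state.json}"

should_run() {  # usage: should_run lastMemoryReview
  local last now
  last=$(jq -r --arg k "$1" '.[$k] // 0' "$STATE")
  now=$(date +%s)
  [ $(( now - last )) -ge 86400 ]   # true (exit 0) if >=24h elapsed
}

if [ -f "$STATE" ] && should_run lastMemoryReview; then
  echo "run memory review"
fi
```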

Update Summary (in briefings)

Automated at 9:00 AM ET via systemd timer (daily-updates.timer). Updates OpenClaw, Claude Code, and OS packages. Only restarts gateway if OpenClaw changed. Log: memory/updates/YYYY-MM-DD.json

In every morning briefing:

  1. Read today's update log from memory/updates/
  2. Summarize what was updated (versions, package names)
  3. For OpenClaw/Claude Code updates: fetch release notes from GitHub and include key changes
  4. Flag if reboot required (kernel updates)

No manual check needed — the timer handles it. I just read the log and report.
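The read-and-summarize step could be sketched like this. The log's JSON shape here is an assumption (an `updated` array plus a `rebootRequired` flag), not a documented format, and `summarize_updates` is my name.

```shell
# Hypothetical summarizer for memory/updates/YYYY-MM-DD.json.
# Assumed shape: {"updated":[{"name":"...","from":"...","to":"..."}],
#                "rebootRequired": false}
summarize_updates() {  # usage: summarize_updates memory/updates/2026-02-22.json
  jq -r '.updated[] | "\(.name): \(.from) -> \(.to)"' "$1"
  if jq -e '.rebootRequired == true' "$1" >/dev/null; then
    echo "⚠️ reboot required (kernel update)"
  fi
}
```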

inou Daily Suggestion (daily, once per day)

Goal: Keep inou moving forward even on days Johan doesn't mention it.
Track: lastInouSuggestion in memory/heartbeat-state.json. Skip if run in last 20h.
Timing: Morning or early afternoon — not at night.
Mode: Building only. We are NOT ready to promote yet. No marketing suggestions.

Spawn a subagent to think through ONE concrete suggestion:

sessions_spawn(task="inou daily suggestion: review memory/inou-context.md and recent inou work, then propose ONE specific thing Johan could do today to move inou forward (building/product focus only, not marketing). Post to dashboard news and send Signal. Keep it to 2-3 sentences max.", label="inou-nudge")

Good suggestions: a feature gap to close, a bug to fix, a UX improvement, an integration to build, a technical decision to make, a missing data source to add, a doctor/specialist workflow to improve. Bad suggestions: anything about promoting, tweeting, getting users, press, or going public.


Memory Review (daily)

Quick scan of today's conversations for durable facts worth remembering:

  • New people, relationships, preferences
  • Decisions made, lessons learned
  • Project milestones, status changes
  • Anything Johan said "remember this" about

If found: update MEMORY.md with dated entries. Keep it lean. Track in memory/heartbeat-state.json with lastMemoryReview timestamp.

Weather (in briefings)

Florida = "every day is the same." Don't report normal weather.

  • Format: Just temp range + icon (e.g., "☀️ 68-82°F")
  • Only alert on: Hurricanes, tropical storms, unusual temps, severe weather
  • If nothing exceptional: one-liner or skip entirely

Briefings Dashboard (ALWAYS!)

After generating any briefing (morning, afternoon, or ad-hoc):

  1. POST to dashboard: curl -X POST http://localhost:9200/api/briefings -H 'Content-Type: application/json' -d '{"title":"...","date":"YYYY-MM-DD","weather":"...","markets":"...","news":"...","tasks":"...","summary":"..."}'
  2. Include link in Signal message: http://100.123.216.65:9200
  3. Verify it worked: curl -s http://localhost:9200/api/briefings | jq '.briefings | length'

This is NON-NEGOTIABLE. Johan expects briefings on the dashboard.

Weekly Docker & HAOS Update (Sundays)

Check and update Docker containers on 192.168.1.253 and HAOS on 192.168.1.252:

  1. HAOS: Check update.home_assistant_operating_system_update — install if available
  2. Docker (253): For each service in /home/johan/services/:
    • docker compose pull → docker compose up -d → docker image prune -f
  3. Report what was updated in the weekly briefing

Services: immich, clickhouse, jellyfin, signal, qbittorrent-vpn.
qbittorrent-vpn: PULL ONLY, do NOT start. Johan uses it on-demand.
SSH: ssh johan@192.168.1.253
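The per-service loop could be sketched like this. It's a sketch, not the actual script: `update_services` is my name, and it assumes one compose project per subdirectory of /home/johan/services.

```shell
# Hypothetical Sunday update loop for the services dir on .253 (run over SSH).
# qbittorrent-vpn is pulled but never started, per the note above.
update_services() {
  local root="${1:-/home/johan/services}" dir svc
  for dir in "$root"/*/; do
    svc=$(basename "$dir")
    ( cd "$dir" && docker compose pull )
    if [ "$svc" != "qbittorrent-vpn" ]; then
      ( cd "$dir" && docker compose up -d )
    fi
  done
  docker image prune -f
}
```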

Weekly Memory Synthesis (Sundays)

Deeper review:

  1. Read through memory/YYYY-MM-DD.md files from the past week
  2. Read memory/corrections.md — look for patterns in pushback
  3. Distill patterns, insights, recurring themes into MEMORY.md
  4. Promote recurring corrections to AGENTS.md as rules
  5. Flag anything stale in MEMORY.md (ask Johan or remove)

Schedule: Sunday mornings (cron handles this separately)

Corrections Process (ongoing)

When Johan pushes back:

  1. Log immediately to memory/corrections.md
  2. Capture the principle, not just the symptom
  3. Ask: "Why was this wrong? Where else does this apply?"
  4. Format:
    ### PRINCIPLE: [Name it]
    **Trigger:** What happened
    **Why:** Why it matters
    **Applies to:** Other situations
    **Test:** Question to ask before acting
    

Tech News Scan (daily, morning)

Search X/Twitter for news Johan cares about:

Tech/AI:

  • Storage optimizations, compression, deduplication
  • Vector embeddings improvements (better/smaller/faster)
  • Backup technology advances
  • Data reduction techniques
  • New AI models (faster, smaller, cheaper, open source)
  • WebMCP (Chrome's Web Model Context Protocol) — spec updates, browser adoption, site implementations, healthcare/enterprise use cases

Clawdbot / AI Assistants:

  • @moltbot (Mr. Lobster 🦞, Clawdbot creator) - releases, features, issues
  • @AlexFinn - Moltbot power user, AI insights, Creator Buddy
  • Clawdbot GitHub releases/issues: https://github.com/clawdbot/clawdbot
  • ClawdHub skills/add-ons: https://clawdhub.com
  • Claude Code updates
  • New/interesting skills, plugins, integrations
  • Major platform integrations (Cloudflare, Vercel, etc. adding Moltbot support)

Politics/General:

  • What did Trump do today (executive orders, major moves)
  • What's making noise / trending topics

Infrastructure/Markets (only if news):

  • Data center power (grid constraints, pricing, outages, nuclear/renewables for DCs)
  • Memory pricing (DRAM, NAND, HBM — only significant moves)
  • Big stock moves (MSFT, NVDA, etc. — 5%+ swings)

NABL (N-able) — Check 2x daily:

  • Johan's ex-employer, doing poorly
  • Watch for: PE takeover/going private rumors, acquisition news, major price moves
  • Any "NABL acquired", "N-able buyout", "take private" headlines

Breaking Financial News (search for high-engagement posts):

  • Multi-asset moves (stocks + metals + crypto moving together) — crashes OR rallies
  • Big swings in either direction (up 8% is as interesting as down 8%)
  • Gold, Silver, S&P500, NASDAQ, BTC major moves
  • Always find the WHY — "MSFT -10% despite strong earnings" or "NVDA +15% on new chip announcement"
  • Search terms: "crash", "surge", "plunge", "rally", "trillion", "markets"
  • Look for posts with 500K+ views — that's where the real news breaks
  • Example: @BullTheoryio style macro breakdowns with charts

X Accounts to Check (every briefing):

  • @openclaw — OpenClaw/Clawdbot official account (releases, features, community highlights)
  • @steipete — Peter Steinberger, OpenClaw creator
  • @AlexFinn — Moltbot power user, Creator Buddy founder, AI insights
  • @OpenAI — model releases, product announcements
  • @MiniMax_AI — MiniMax official (M2.5 and other models)
  • @Kimi_Moonshot — Moonshot AI / Kimi (Chinese frontier lab)
  • @ZhipuAI — Zhipu AI / z.ai (GLM models)
  • @Gemini — Google Gemini updates
  • @realDonaldTrump — What's Trump posting/doing
  • @RapidResponse47 — Trump admin rapid response political news
  • @moltbot — gone as of Feb 22; ask Johan if he knows the new handle

OpenClaw Issues (check weekly):

  • Session reset deleting transcripts from memory search index (our patch: include .deleted files)
  • dangerouslyDisableDeviceAuth clearing operator scopes (our patch: preserve scopes)
  • Watch releases + GitHub issues for official fixes → remove patches when fixed upstream

Netherlands:

  • Dutch news (usually quiet, but flag anything significant)
  • Tax on unrealized profits (vermogensbelasting box 3) — this is a big deal

Post findings to dashboard:

curl -X POST http://localhost:9200/api/news -H 'Content-Type: application/json' \
  -d '{"title":"...","body":"...","type":"info|success|warning|error","source":"..."}'

Only ping Johan on Signal if truly urgent (big position moves, breaking news). Update memory/heartbeat-state.json with lastTechScan timestamp after running.

Johan's Open Positions:

  • Short S (SentinelOne) — track daily, alert on big moves either direction

Intra-Day X Watch (spawn subagent — never run inline)

Goal: Catch high-signal posts from key accounts the same day, not the next morning.
Frequency: Every 3-4 hours during awake hours. Skip during sleep blocks.
State: Track lastIntraDayXScan in memory/heartbeat-state.json. Skip if checked < 2h ago.

⚠️ ALWAYS SPAWN A SUBAGENT — never run inline

sessions_spawn(task="Intra-day X scan: ...", label="x-watch")

X scanning = multiple bird calls + web searches = context pollution. Offload it.

Accounts to scan every run

Check recent posts (last ~4h) from each:

  • @Cloudflare — MCP, Workers, AI integrations, platform announcements
  • @openclaw — releases, features, community highlights
  • @steipete — Peter Steinberger, OpenClaw creator
  • @AlexFinn — AI insights, Creator Buddy, Moltbot power use
  • @OpenAI — model releases, product announcements
  • @MiniMax_AI — MiniMax official (M2.5 and other models)
  • @Kimi_Moonshot — Moonshot AI / Kimi (Chinese frontier lab)
  • @ZhipuAI — Zhipu AI / z.ai (GLM models, Johan has dev account)
  • @Gemini — Google Gemini updates
  • @realDonaldTrump — executive orders, breaking political moves
  • @RapidResponse47 — Trump admin rapid response / political news
  • @moltbot — account not found as of Feb 22, 2026; handle may have changed

Use: bird user-tweets @handle → filter for posts newer than last scan timestamp.

What's worth surfacing

Always ping Johan:

  • Major platform announcing MCP support (Cloudflare, Vercel, GitHub, etc.)
  • New AI model releases (open-weight, frontier, pricing changes)
  • Breaking political move with broad economic impact
  • NABL acquisition/PE news
  • @openclaw or @steipete announcing a release

Post to dashboard, no ping:

  • Interesting tech commentary worth reading
  • Minor updates, incremental news
  • OpenClaw community highlights

Drop silently:

  • Retweets of old news
  • Promotional content, event promos, partnership announcements
  • Engagement bait, opinion threads, personal posts
  • Anything from @OpenAI/@MiniMax_AI/@Kimi_Moonshot/@ZhipuAI/@Gemini that isn't a model release, pricing change, or major product launch — these accounts post constantly, only hard news counts

Subagent reports back with

  • Count of new posts checked
  • Any items surfaced (title + URL)
  • "Nothing significant" if quiet

K2.5 Watchdog (check every heartbeat)

CRITICAL: K2.5 can get stuck in infinite loops, burning thousands of dollars in tokens.

Check ~/.clawdbot/agents/k2-browser/sessions/ for signs of runaway:

  1. Session file > 100 lines → WARNING, monitor closely
  2. Session file > 200 lines → KILL immediately
  3. Session file growing rapidly (modified in last 2 min + >50 lines) → KILL
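The three rules could be condensed into a single check like this (a sketch; `k2_status` is my name, and it assumes GNU `stat` and `date`):

```shell
# Hypothetical classifier for a K2.5 session file: prints OK, WARN, or KILL.
k2_status() {  # usage: k2_status ~/.clawdbot/agents/k2-browser/sessions/foo.jsonl
  local f="$1" lines age
  lines=$(wc -l < "$f")
  age=$(( $(date +%s) - $(stat -c %Y "$f") ))            # seconds since last write
  if [ "$lines" -gt 200 ]; then echo KILL                # rule 2: hard cap
  elif [ "$age" -lt 120 ] && [ "$lines" -gt 50 ]; then echo KILL   # rule 3: growing fast
  elif [ "$lines" -gt 100 ]; then echo WARN              # rule 1: monitor closely
  else echo OK
  fi
}
```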

Kill procedure:

/home/johan/clawd/scripts/k2-emergency-stop.sh

Or manually:

rm -rf ~/.clawdbot/agents/k2-browser/sessions/*.jsonl
systemctl --user restart clawdbot-gateway

Alert Johan if you kill K2 — he needs to know tokens were burned.

Cron backup: scripts/k2-watchdog.sh runs every 5 minutes as a safety net.


Signal Reply Watch (temporary — email migration)

Sent email migration instructions to kids on 2026-02-18 ~2AM. Watch for replies:

  • Misha +17272381189
  • Roos +31646563377
  • Jacques +31624403744

If any of them reply via Signal, ping Johan immediately regardless of time (he's on night shift). Remove this section once all three have confirmed setup or it's been >7 days.


WhatsApp Monitor

Handled by webhook — message-bridge POSTs to /hooks/message on new messages. No heartbeat polling needed.


Message Center (check every heartbeat)

Goal: Process all new messages (email + WhatsApp) from MC. Also catch any stragglers missed by webhook.

Inbox Cleanup (every heartbeat)

Webhook handles real-time, but messages can slip through (restarts, migration, webhook downtime). Check for un-actioned messages (listing now auto-excludes archived/deleted):

curl -s "http://localhost:8025/messages?source=tj_jongsma_me" | jq 'length'
curl -s "http://localhost:8025/messages?source=johan_jongsma_me" | jq 'length'

These counts reflect only messages with NO action taken. If anything's sitting there, triage it per memory/email-triage.md. Use ?all=true to see everything including actioned messages (for debugging only).

New Messages Check

Check:

curl -s "http://localhost:8025/messages/new" | jq 'length'

If new messages exist:

  1. Fetch full list: GET /messages/new
  2. Triage each per memory/email-triage.md rules
  3. Take action: /messages/{id}/archive, /delete, /reply, or /to-docs
  4. Actions persist in orchestration DB — won't reappear

Replay if needed:

curl -s "http://localhost:8025/messages?since=24h" | jq 'length'

If falling behind (>5 new):

  • Do NOT alert Johan
  • Process backlog silently
  • Log patterns to improve triage rules
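The >5 threshold can be computed straight from the /messages/new payload, assumed here to be a JSON array (`falling_behind` is my name):

```shell
# Hypothetical backlog check on the /messages/new response body.
falling_behind() {  # usage: falling_behind "$(curl -s http://localhost:8025/messages/new)"
  [ "$(jq 'length' <<< "$1")" -gt 5 ]
}

# falling_behind "$(curl -s http://localhost:8025/messages/new)" && echo "process backlog silently"
```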

Only ping Johan for:

  • Anything Sophia-related
  • Requires his decision/action (not just FYI)
  • Can't fix it myself (stuck, need access, unclear rules)

Silent processing — just triage and fix, don't ping for routine messages.


Document Inbox (check every heartbeat)

Inbox: ~/documents/inbox/

If files exist, for each document:

  1. Read - OCR/extract full text
  2. Triage - classify type (tax, expense, bill, medical, etc.)
  3. Store PDF - move to ~/documents/store/[hash].pdf
  4. Create record - markdown in ~/documents/records/[category]/
    • Summary, metadata, full text, link to PDF
  5. Embed - generate vector embedding, store in ~/documents/index/embeddings.db
  6. Index - update ~/documents/index/master.json
  7. Export - if expense/invoice, append to ~/documents/exports/expenses.csv
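Step 3 (content-addressed storage) might look like the following sketch; `store_pdf` is hypothetical, and sha256 is assumed as the hash.

```shell
# Hypothetical step-3 helper: hash the PDF, move it into the store under the hash.
store_pdf() {  # usage: store_pdf ~/documents/inbox/scan.pdf — prints stored path
  local src="$1" store="${2:-$HOME/documents/store}" hash
  hash=$(sha256sum "$src" | cut -d' ' -f1)
  mkdir -p "$store"
  mv "$src" "$store/$hash.pdf"
  echo "$store/$hash.pdf"
}
```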

Silent processing — don't ping for every doc, just do the work. Only ping if: IRS correspondence, something urgent, or can't categorize.


Claude.ai Usage Monitor (include in every briefing)

Goal: Track weekly usage limits to avoid running out mid-week.

Data file: memory/claude-usage.json
Fetch script: scripts/claude-usage-fetch.py --save
Cookies config: config/claude-cookies.json

In every briefing:

  1. Run scripts/claude-usage-fetch.py --save to get fresh data
  2. Include in briefing as one line: 📊 Claude: {weekly_percent}% used (resets {weekly_resets})

Pace = (weekly_percent / time_elapsed_percent) × 100
e.g. 60% used at 50% of the week = pace 120% (burning too fast); 60% used at 80% of the week = pace 75% (fine).
Week runs Sat 2PM → Sat 2PM ET. Sat 7AM-2PM excluded (dead zone, Johan asleep).
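The pace formula as a small helper (awk for the float math; numbers match the worked examples above):

```shell
# Pace = (weekly_percent / time_elapsed_percent) × 100, rounded to whole percent.
pace() {  # usage: pace <used%> <elapsed%>
  awk -v used="$1" -v elapsed="$2" 'BEGIN { printf "%.0f\n", used / elapsed * 100 }'
}

pace 60 50   # → 120 (burning too fast)
pace 60 80   # → 75 (fine)
```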

Alert rules — read carefully:

  • Pace ≤ 100%: NOT an alert. Tracking correctly. Mention in briefing, nothing more.
  • Pace > 100% (burning faster than week allows): Signal Johan. No Fully tablet.
  • Sudden jump ≥ 4% in ≤ 4h: Signal Johan immediately. No Fully tablet.
  • NEVER post Claude usage to the Fully tablet (port 9202). It's not urgent enough for that surface.

The James dashboard (port 9200) status bar shows current usage — that's enough passive visibility.

If fetch fails (Cloudflare challenge = expired cookies):

  • ALERT JOHAN IMMEDIATELY via Signal — don't go silent!
  • Message: "⚠️ Claude usage fetch failed - cookies expired. Need fresh cookies from Chrome."
  • Instructions: F12 → Application → Cookies → claude.ai → copy sessionKey + cf_clearance
  • Update config/claude-cookies.json

K2.5 Backup ready at: config-backups/k2.5-emergency-switch.md Switch command: /model fireworks/accounts/fireworks/models/kimi-k2p5