HEARTBEAT.md

Johan's Schedule (US Eastern)

Sleep blocks — DO NOT PING:

  • ~7:30pm-10:15pm (first sleep)
  • ~5:15am-9/10am (second sleep, weekdays)
  • ~7:00am-11am+ (second sleep, weekends & holidays — sleeps in, no work)

Available/Awake:

  • ~10am-7:30pm (daytime)
  • ~10:30pm-5:00am weekdays / ~10:30pm-7:00am weekends & holidays (night shift, caring for Sophia so Babuska can rest)

At 10:30pm+ he's WORKING, not sleeping. Don't assume "late night = quiet time."

Default assumption: If not in a sleep block, Johan is working or available. When in doubt, ping.


Daily Tasks (run once per day)

Check memory/heartbeat-state.json for last run times. Only run if >24h since last check.
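The ">24h since last check" gate can be sketched in shell, assuming heartbeat-state.json is a flat JSON object of ISO-8601 timestamps (key names like lastMemoryReview come from this doc; requires jq and GNU date):

```shell
# Succeed (exit 0) if the task has never run or last ran more than 24h ago.
task_due() {  # task_due <key> <state-file>
  local last
  last=$(jq -r --arg k "$1" '.[$k] // empty' "$2" 2>/dev/null)
  [ -z "$last" ] && return 0    # never ran (or no state file): due
  [ $(( $(date -u +%s) - $(date -u -d "$last" +%s) )) -gt 86400 ]
}

# task_due lastMemoryReview memory/heartbeat-state.json && echo "run it"
```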

Update Summary (in briefings)

Automated at 9:00 AM ET via systemd timer (daily-updates.timer). Updates OpenClaw, Claude Code, and OS packages. Only restarts gateway if OpenClaw changed. Log: memory/updates/YYYY-MM-DD.json

In every morning briefing:

  1. Read today's update log from memory/updates/
  2. Summarize what was updated (versions, package names)
  3. For OpenClaw/Claude Code updates: fetch release notes from GitHub and include key changes
  4. Flag if reboot required (kernel updates)

No manual check needed — the timer handles it. I just read the log and report.
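Reading the log for the briefing might look like this — the log's schema (a "packages" array and a "reboot_required" flag) is an assumption, so adjust the jq paths to whatever daily-updates actually writes:

```shell
# Summarize today's update log for the morning briefing (assumed schema).
summarize_updates() {  # summarize_updates [log-file]
  local log="${1:-memory/updates/$(date +%F).json}"
  [ -f "$log" ] || { echo "no update log for today"; return 0; }
  jq -r '.packages[]? | "\(.name): \(.old) -> \(.new)"' "$log"
  if jq -e '.reboot_required == true' "$log" >/dev/null; then
    echo "⚠️ reboot required (kernel update)"
  fi
}
summarize_updates
```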

Memory Review (daily)

Quick scan of today's conversations for durable facts worth remembering:

  • New people, relationships, preferences
  • Decisions made, lessons learned
  • Project milestones, status changes
  • Anything Johan said "remember this" about

If found: update MEMORY.md with dated entries. Keep it lean. Track in memory/heartbeat-state.json with lastMemoryReview timestamp.
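Stamping the state file after a review, as a sketch (same flat-object schema assumption as above):

```shell
# Write an ISO-8601 "done" timestamp under the given key, atomically via mktemp.
mark_done() {  # mark_done <key> <state-file>
  local tmp
  tmp=$(mktemp)
  jq --arg k "$1" --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
     '.[$k] = $ts' "$2" > "$tmp" && mv "$tmp" "$2"
}

# mark_done lastMemoryReview memory/heartbeat-state.json
```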

Weather (in briefings)

Florida = "every day is the same." Don't report normal weather.

  • Format: Just temp range + icon (e.g., "☀️ 68-82°F")
  • Only alert on: Hurricanes, tropical storms, unusual temps, severe weather
  • If nothing exceptional: one-liner or skip entirely

Briefings Dashboard (ALWAYS!)

After generating any briefing (morning, afternoon, or ad-hoc):

  1. POST to dashboard: curl -X POST http://localhost:9200/api/briefings -H 'Content-Type: application/json' -d '{"title":"...","date":"YYYY-MM-DD","weather":"...","markets":"...","news":"...","tasks":"...","summary":"..."}'
  2. Include link in Signal message: http://100.123.216.65:9200
  3. Verify it worked: curl -s http://localhost:9200/api/briefings | jq '.briefings | length'

This is NON-NEGOTIABLE. Johan expects briefings on the dashboard.

Weekly Docker & HAOS Update (Sundays)

Check and update Docker containers on 192.168.1.253 and HAOS on 192.168.1.252:

  1. HAOS: Check update.home_assistant_operating_system_update — install if available
  2. Docker (253): For each service in /home/johan/services/:
    • docker compose pull, then docker compose up -d, then docker image prune -f
  3. Report what was updated in the weekly briefing

Services: immich, clickhouse, jellyfin, signal, qbittorrent-vpn

qbittorrent-vpn: PULL ONLY, do NOT start — Johan uses it on-demand.

SSH: ssh johan@192.168.1.253
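The Sunday Docker pass as a sketch, to be run on the 253 box; the pull-only exception for qbittorrent-vpn is encoded as a guard:

```shell
# Pull every compose stack under the services dir; start all except qbittorrent-vpn.
weekly_docker_update() {
  local services_dir="${1:-/home/johan/services}" d name
  for d in "$services_dir"/*/; do
    [ -d "$d" ] || continue
    name=$(basename "$d")
    ( cd "$d" || exit 1
      docker compose pull
      # qbittorrent-vpn is pull-only — Johan starts it on demand
      [ "$name" = "qbittorrent-vpn" ] || docker compose up -d )
  done
  docker image prune -f
}

# weekly_docker_update   # run via: ssh johan@192.168.1.253
```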

Weekly Memory Synthesis (Sundays)

Deeper review:

  1. Read through memory/YYYY-MM-DD.md files from the past week
  2. Read memory/corrections.md — look for patterns in pushback
  3. Distill patterns, insights, recurring themes into MEMORY.md
  4. Promote recurring corrections to AGENTS.md as rules
  5. Flag anything stale in MEMORY.md (ask Johan or remove)

Schedule: Sunday mornings (cron handles this separately)

Corrections Process (ongoing)

When Johan pushes back:

  1. Log immediately to memory/corrections.md
  2. Capture the principle, not just the symptom
  3. Ask: "Why was this wrong? Where else does this apply?"
  4. Format:
    ### PRINCIPLE: [Name it]
    **Trigger:** What happened
    **Why:** Why it matters
    **Applies to:** Other situations
    **Test:** Question to ask before acting
    

Tech News Scan (daily, morning)

Search X/Twitter for news Johan cares about:

Tech/AI:

  • Storage optimizations, compression, deduplication
  • Vector embeddings improvements (better/smaller/faster)
  • Backup technology advances
  • Data reduction techniques
  • New AI models (faster, smaller, cheaper, open source)
  • WebMCP (Chrome's Web Model Context Protocol) — spec updates, browser adoption, site implementations, healthcare/enterprise use cases

Clawdbot / AI Assistants:

  • @moltbot (Mr. Lobster 🦞, Clawdbot creator) - releases, features, issues
  • @AlexFinn - Moltbot power user, AI insights, Creator Buddy
  • Clawdbot GitHub releases/issues: https://github.com/clawdbot/clawdbot
  • ClawdHub skills/add-ons: https://clawdhub.com
  • Claude Code updates
  • New/interesting skills, plugins, integrations
  • Major platform integrations (Cloudflare, Vercel, etc. adding Moltbot support)

Politics/General:

  • What did Trump do today (executive orders, major moves)
  • What's making noise / trending topics

Infrastructure/Markets (only if news):

  • Data center power (grid constraints, pricing, outages, nuclear/renewables for DCs)
  • Memory pricing (DRAM, NAND, HBM — only significant moves)
  • Big stock moves (MSFT, NVDA, etc. — 5%+ swings)

NABL (N-able) — Check 2x daily:

  • Johan's ex-employer, doing poorly
  • Watch for: PE takeover/going private rumors, acquisition news, major price moves
  • Any "NABL acquired", "N-able buyout", "take private" headlines

Breaking Financial News (search for high-engagement posts):

  • Multi-asset moves (stocks + metals + crypto moving together) — crashes OR rallies
  • Big swings in either direction (up 8% is as interesting as down 8%)
  • Gold, Silver, S&P500, NASDAQ, BTC major moves
  • Always find the WHY — "MSFT -10% despite strong earnings" or "NVDA +15% on new chip announcement"
  • Search terms: "crash", "surge", "plunge", "rally", "trillion", "markets"
  • Look for 500K+ views posts — that's where the real news breaks
  • Example: @BullTheoryio style macro breakdowns with charts

X Accounts to Check (every briefing):

  • @openclaw — OpenClaw/Clawdbot official account (releases, features, community highlights)
  • @AlexFinn — Moltbot power user, Creator Buddy founder, AI insights
  • @moltbot — Mr. Lobster, Clawdbot/Moltbot creator
  • @realDonaldTrump — What's Trump posting/doing

Netherlands:

  • Dutch news (usually quiet, but flag anything significant)
  • Tax on unrealized profits (vermogensbelasting box 3) — this is a big deal

Post findings to dashboard:

curl -X POST http://localhost:9200/api/news -H 'Content-Type: application/json' \
  -d '{"title":"...","body":"...","type":"info|success|warning|error","source":"..."}'

Only ping Johan on Signal if truly urgent (big position moves, breaking news). Update memory/heartbeat-state.json with lastTechScan timestamp after running.

Johan's Open Positions:

  • Short S (SentinelOne) — track daily, alert on big moves either direction

K2.5 Watchdog (check every heartbeat)

CRITICAL: K2.5 can get stuck in infinite loops, burning thousands of dollars in tokens.

Check ~/.clawdbot/agents/k2-browser/sessions/ for signs of runaway:

  1. Session file > 100 lines → WARNING, monitor closely
  2. Session file > 200 lines → KILL immediately
  3. Session file growing rapidly (modified in last 2 min + >50 lines) → KILL
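The three runaway rules above, as a sketch. Rapid growth is checked before the plain warning so a fast-growing file gets killed rather than merely watched:

```shell
# Flag K2.5 session files: KILL at >200 lines or >50 lines + modified in last
# 2 minutes; WARN at >100 lines.
check_k2() {
  local dir="${1:-$HOME/.clawdbot/agents/k2-browser/sessions}" f lines
  for f in "$dir"/*.jsonl; do
    [ -f "$f" ] || continue
    lines=$(wc -l < "$f")
    if [ "$lines" -gt 200 ]; then
      echo "KILL: $f ($lines lines)"
    elif [ "$lines" -gt 50 ] && [ -n "$(find "$f" -mmin -2)" ]; then
      echo "KILL: $f (growing fast, $lines lines)"
    elif [ "$lines" -gt 100 ]; then
      echo "WARN: $f ($lines lines)"
    fi
  done
}
check_k2
```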

Kill procedure:

/home/johan/clawd/scripts/k2-emergency-stop.sh

Or manually:

rm -rf ~/.clawdbot/agents/k2-browser/sessions/*.jsonl
systemctl --user restart clawdbot-gateway

Alert Johan if you kill K2 — he needs to know tokens were burned.

Cron backup: scripts/k2-watchdog.sh runs every 5 minutes as a safety net.


WhatsApp Monitor

Handled by webhook — message-bridge POSTs to /hooks/message on new messages. No heartbeat polling needed.


Message Center (check every heartbeat)

Goal: Process all new messages (email + WhatsApp) from MC. Also catch any stragglers missed by webhook.

Inbox Cleanup (every heartbeat)

Webhook handles real-time, but messages can slip through (restarts, migration, webhook downtime). Check for un-actioned messages (listing now auto-excludes archived/deleted):

curl -s "http://localhost:8025/messages?source=tj_jongsma_me" | jq 'length'
curl -s "http://localhost:8025/messages?source=johan_jongsma_me" | jq 'length'

These counts reflect only messages with NO action taken. If anything's sitting there, triage it per memory/email-triage.md. Use ?all=true to see everything including actioned messages (for debugging only).

New Messages Check

Check:

curl -s "http://localhost:8025/messages/new" | jq 'length'

If new messages exist:

  1. Fetch full list: GET /messages/new
  2. Triage each per memory/email-triage.md rules
  3. Take action: /messages/{id}/archive, /delete, /reply, or /to-docs
  4. Actions persist in orchestration DB — won't reappear
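The triage loop, sketched. Assumptions: /messages/new returns a JSON array of objects with an "id" field, the action endpoints take POST, and decide_action is a hypothetical hook standing in for the memory/email-triage.md rules:

```shell
# Read a JSON array of messages on stdin, pick an action per message, act on it.
MC="${MC:-http://localhost:8025}"
triage_ids() {
  jq -r '.[].id' | while read -r id; do
    action=$(decide_action "$id")   # archive | delete | reply | to-docs
    curl -s -X POST "$MC/messages/$id/$action" >/dev/null
  done
}

# curl -s "$MC/messages/new" | triage_ids
```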

Replay if needed:

curl -s "http://localhost:8025/messages?since=24h" | jq 'length'

If falling behind (>5 new):

  • Do NOT alert Johan
  • Process backlog silently
  • Log patterns to improve triage rules

Only ping Johan for:

  • Anything Sophia-related
  • Requires his decision/action (not just FYI)
  • Can't fix it myself (stuck, need access, unclear rules)

Silent processing — just triage and fix, don't ping for routine messages.


Document Inbox (check every heartbeat)

Inbox: ~/documents/inbox/

If files exist, for each document:

  1. Read - OCR/extract full text
  2. Triage - classify type (tax, expense, bill, medical, etc.)
  3. Store PDF - move to ~/documents/store/[hash].pdf
  4. Create record - markdown in ~/documents/records/[category]/
    • Summary, metadata, full text, link to PDF
  5. Embed - generate vector embedding, store in ~/documents/index/embeddings.db
  6. Index - update ~/documents/index/master.json
  7. Export - if expense/invoice, append to ~/documents/exports/expenses.csv
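Steps 3 and 7 can be sketched as a shell helper (OCR, classification, embedding, and indexing happen in the agent itself and are elided here; the CSV columns are an assumption):

```shell
# Move a PDF into the content-addressed store; append expenses to the CSV export.
ingest_pdf() {  # ingest_pdf <file> <type> [amount] — prints the stored path
  local f="$1" kind="$2" amount="${3:-}" hash dest docs="${DOCS:-$HOME/documents}"
  hash=$(sha256sum "$f" | cut -d' ' -f1)
  dest="$docs/store/$hash.pdf"
  mkdir -p "$docs/store" "$docs/exports"
  mv "$f" "$dest"
  if [ "$kind" = "expense" ] && [ -n "$amount" ]; then
    echo "$(date +%F),$hash,$amount" >> "$docs/exports/expenses.csv"
  fi
  echo "$dest"
}
```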

Silent processing — don't ping for every doc, just do the work. Only ping if: IRS correspondence, something urgent, or can't categorize.


Claude.ai Usage Monitor (include in every briefing)

Goal: Track weekly usage limits to avoid running out mid-week.

Data file: memory/claude-usage.json
Fetch script: scripts/claude-usage-fetch.py --save
Cookies config: config/claude-cookies.json

In every briefing:

  1. Run scripts/claude-usage-fetch.py --save to get fresh data
  2. Include in briefing:
📊 Claude Usage: {weekly_percent}% weekly (resets {weekly_resets})

Thresholds:

  • > 70%: Add warning to briefing
  • > 85%: Alert Johan immediately + remind about K2.5 backup
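The thresholds as a sketch, assuming claude-usage.json carries a numeric weekly_percent field (name inferred from the briefing template above):

```shell
# Map the weekly usage percentage to ok / warn (>70%) / alert (>85%).
usage_level() {  # usage_level [usage-file]
  local pct
  pct=$(jq -r '.weekly_percent // 0' "${1:-memory/claude-usage.json}" 2>/dev/null)
  pct=${pct%.*}                     # integer part for shell arithmetic
  if   [ "${pct:-0}" -gt 85 ]; then echo alert
  elif [ "${pct:-0}" -gt 70 ]; then echo warn
  else echo ok; fi
}
```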

If fetch fails (Cloudflare challenge = expired cookies):

  • ALERT JOHAN IMMEDIATELY via Signal — don't go silent!
  • Message: "⚠️ Claude usage fetch failed - cookies expired. Need fresh cookies from Chrome."
  • Instructions: F12 → Application → Cookies → claude.ai → copy sessionKey + cf_clearance
  • Update config/claude-cookies.json

K2.5 Backup ready at: config-backups/k2.5-emergency-switch.md
Switch command: /model fireworks/accounts/fireworks/models/kimi-k2p5