# HEARTBEAT.md
## Johan's Schedule (US Eastern)

**Sleep blocks — DO NOT PING:**
- ~7:30pm – 10:15pm (first sleep)
- ~5:15am – 9/10am (second sleep)

**Available/Awake:**
- ~10am – 7:30pm (daytime)
- ~10:30pm – 5:00am weekdays / 7:00am weekends (night shift, caring for Sophia)

At 10:30pm+ he is WORKING, not sleeping. Don't assume "late night = quiet time."

Default assumption: if he's not in a sleep block, Johan is working or available. When in doubt, ping.
## Daily Tasks (run once per day)

Check `memory/heartbeat-state.json` for last-run times. Only run a task if more than 24h have passed since its last check.
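The >24h gate can be sketched as a small shell helper. This is a sketch, not the actual implementation: the function name `due_since` is hypothetical, and it assumes the state file stores epoch-second timestamps per task key (e.g. `{"lastMemoryReview": 1700000000}`).

```shell
# Hypothetical helper: decide whether a daily task is due, given a state file
# that stores epoch-second timestamps under per-task keys.
due_since() {  # usage: due_since <state-file> <key>  -> prints "due" or "skip"
  last=$(grep -o "\"$2\"[: ]*[0-9]*" "$1" 2>/dev/null | grep -o '[0-9]*$')
  now=$(date +%s)
  # Missing key counts as "never ran", i.e. due.
  if [ $(( now - ${last:-0} )) -gt 86400 ]; then
    echo due
  else
    echo skip
  fi
}
```

jq would be the cleaner parser if it's available; the grep fallback just avoids a hard dependency for a one-key lookup.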
### Update Summary (in briefings)

Automated at 9:00 AM ET via a systemd timer (`daily-updates.timer`).
Updates OpenClaw, Claude Code, and OS packages. Only restarts the gateway if OpenClaw changed.

Log: `memory/updates/YYYY-MM-DD.json`

In every morning briefing:

- Read today's update log from `memory/updates/`
- Summarize what was updated (versions, package names)
- For OpenClaw/Claude Code updates: fetch release notes from GitHub and include key changes
- Flag if a reboot is required (kernel updates)

No manual check needed — the timer handles it. I just read the log and report.
### Memory Review (daily)
Quick scan of today's conversations for durable facts worth remembering:
- New people, relationships, preferences
- Decisions made, lessons learned
- Project milestones, status changes
- Anything Johan said "remember this" about
If found: update MEMORY.md with dated entries. Keep it lean.
Track in `memory/heartbeat-state.json` with a `lastMemoryReview` timestamp.
### Weather (in briefings)
Florida = "every day is the same." Don't report normal weather.
- Format: Just temp range + icon (e.g., "☀️ 68-82°F")
- Only alert on: Hurricanes, tropical storms, unusual temps, severe weather
- If nothing exceptional: one-liner or skip entirely
## Briefings Dashboard (ALWAYS!)

After generating any briefing (morning, afternoon, or ad-hoc):

- POST it to the dashboard:

```
curl -X POST http://localhost:9200/api/briefings \
  -H 'Content-Type: application/json' \
  -d '{"title":"...","date":"YYYY-MM-DD","weather":"...","markets":"...","news":"...","tasks":"...","summary":"..."}'
```

- Include the link in the Signal message: http://100.123.216.65:9200
- Verify it worked:

```
curl -s http://localhost:9200/api/briefings | jq '.briefings | length'
```

This is NON-NEGOTIABLE. Johan expects briefings on the dashboard.
## Weekly Docker & HAOS Update (Sundays)

Check and update the Docker containers on 192.168.1.253 and HAOS on 192.168.1.252:

- HAOS: check `update.home_assistant_operating_system_update` — install if available
- Docker (253): for each service in `/home/johan/services/`: `docker compose pull` → `docker compose up -d` → `docker image prune -f`
- Report what was updated in the weekly briefing

Services: immich, clickhouse, jellyfin, signal, qbittorrent-vpn

qbittorrent-vpn: PULL ONLY, do NOT start it. Johan uses it on-demand.

SSH: `ssh johan@192.168.1.253`
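A minimal sketch of the Sunday loop, assuming each service's compose file lives in `/home/johan/services/<name>` on .253. The helper names (`should_start`, `update_service`, `weekly_update`) are hypothetical; the host, paths, and the qbittorrent-vpn pull-only rule come from this file.

```shell
HOST=johan@192.168.1.253
SERVICES="immich clickhouse jellyfin signal qbittorrent-vpn"

# qbittorrent-vpn is pull-only: never start it automatically.
should_start() { [ "$1" != "qbittorrent-vpn" ]; }

update_service() {
  svc=$1
  ssh "$HOST" "cd /home/johan/services/$svc && docker compose pull"
  if should_start "$svc"; then
    ssh "$HOST" "cd /home/johan/services/$svc && docker compose up -d"
  fi
}

weekly_update() {
  for svc in $SERVICES; do update_service "$svc"; done
  ssh "$HOST" "docker image prune -f"   # prune once, after all pulls
}
```

Pruning once at the end (rather than per-service) avoids deleting layers a later service's pull could have reused.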
## Weekly Memory Synthesis (Sundays)

Deeper review:

- Read through the `memory/YYYY-MM-DD.md` files from the past week
- Read `memory/corrections.md` — look for patterns in the pushback
- Distill patterns, insights, and recurring themes into MEMORY.md
- Promote recurring corrections to AGENTS.md as rules
- Flag anything stale in MEMORY.md (ask Johan or remove it)

Schedule: Sunday mornings (cron handles this separately)
## Corrections Process (ongoing)

When Johan pushes back:

- Log it immediately to `memory/corrections.md`
- Capture the principle, not just the symptom
- Ask: "Why was this wrong? Where else does this apply?"

Format:

```
### PRINCIPLE: [Name it]
**Trigger:** What happened
**Why:** Why it matters
**Applies to:** Other situations
**Test:** Question to ask before acting
```
## Tech News Scan (daily, morning)
Search X/Twitter for news Johan cares about:
Tech/AI:
- Storage optimizations, compression, deduplication
- Vector embeddings improvements (better/smaller/faster)
- Backup technology advances
- Data reduction techniques
- New AI models (faster, smaller, cheaper, open source)
Clawdbot / AI Assistants:
- @moltbot (Mr. Lobster 🦞, Clawdbot creator) - releases, features, issues
- @AlexFinn - Moltbot power user, AI insights, Creator Buddy
- Clawdbot GitHub releases/issues: https://github.com/clawdbot/clawdbot
- ClawdHub skills/add-ons: https://clawdhub.com
- Claude Code updates
- New/interesting skills, plugins, integrations
- Major platform integrations (Cloudflare, Vercel, etc. adding Moltbot support)
Politics/General:
- What did Trump do today (executive orders, major moves)
- What's making noise / trending topics
Infrastructure/Markets (only if news):
- Data center power (grid constraints, pricing, outages, nuclear/renewables for DCs)
- Memory pricing (DRAM, NAND, HBM — only significant moves)
- Big stock moves (MSFT, NVDA, etc. — 5%+ swings)
NABL (N-able) — Check 2x daily:
- Johan's ex-employer, doing poorly
- Watch for: PE takeover/going private rumors, acquisition news, major price moves
- Any "NABL acquired", "N-able buyout", "take private" headlines
Breaking Financial News (search for high-engagement posts):
- Multi-asset moves (stocks + metals + crypto moving together) — crashes OR rallies
- Big swings in either direction (up 8% is as interesting as down 8%)
- Gold, Silver, S&P500, NASDAQ, BTC major moves
- Always find the WHY — "MSFT -10% despite strong earnings" or "NVDA +15% on new chip announcement"
- Search terms: "crash", "surge", "plunge", "rally", "trillion", "markets"
- Look for 500K+ views posts — that's where the real news breaks
- Example: @BullTheoryio style macro breakdowns with charts
X Accounts to Check (every briefing):
- @openclaw — OpenClaw/Clawdbot official account (releases, features, community highlights)
- @AlexFinn — Moltbot power user, Creator Buddy founder, AI insights
- @moltbot — Mr. Lobster, Clawdbot/Moltbot creator
- @realDonaldTrump — What's Trump posting/doing
Netherlands:
- Dutch news (usually quiet, but flag anything significant)
- Tax on unrealized profits (vermogensbelasting box 3) — this is a big deal
Post findings to the dashboard:

```
curl -X POST http://localhost:9200/api/news -H 'Content-Type: application/json' \
  -d '{"title":"...","body":"...","type":"info|success|warning|error","source":"..."}'
```
Only ping Johan on Signal if truly urgent (big position moves, breaking news).
Update `memory/heartbeat-state.json` with a `lastTechScan` timestamp after running.
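Writing the timestamp back can be sketched with jq. The helper name `mark_ran` is hypothetical, and the sketch assumes the state file is a flat JSON object of epoch-second timestamps.

```shell
# Hypothetical helper: record the current epoch time under a key such as
# lastTechScan, replacing the state file atomically via a temp file.
mark_ran() {  # usage: mark_ran <state-file> <key>
  tmpf=$(mktemp)
  jq --arg k "$2" --argjson now "$(date +%s)" '.[$k] = $now' "$1" > "$tmpf" \
    && mv "$tmpf" "$1"
}
```

Writing to a temp file first means a crash mid-write can't leave a truncated state file behind.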
Johan's Open Positions:
- Short S (SentinelOne) — track daily, alert on big moves either direction
## K2.5 Watchdog (check every heartbeat)
CRITICAL: K2.5 can get stuck in infinite loops, burning thousands of dollars in tokens.

Check `~/.clawdbot/agents/k2-browser/sessions/` for signs of a runaway:

- Session file > 100 lines → WARNING, monitor closely
- Session file > 200 lines → KILL immediately
- Session file growing rapidly (modified in the last 2 min and >50 lines) → KILL
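The three thresholds above can be collapsed into one classifier. A sketch under stated assumptions: the function name is hypothetical, and `stat -c %Y` assumes GNU coreutils.

```shell
# Classify a K2.5 session file per the watchdog thresholds.
k2_session_status() {  # usage: k2_session_status <session-file>
  lines=$(wc -l < "$1")
  age=$(( $(date +%s) - $(stat -c %Y "$1") ))   # seconds since last write
  if [ "$lines" -gt 200 ]; then
    echo KILL          # runaway: kill immediately
  elif [ "$lines" -gt 50 ] && [ "$age" -lt 120 ]; then
    echo KILL          # growing rapidly: kill
  elif [ "$lines" -gt 100 ]; then
    echo WARNING       # large but quiet: monitor closely
  else
    echo OK
  fi
}
```

Note the branch order matters: a 150-line file that was written in the last two minutes is "growing rapidly" (KILL), not merely "large" (WARNING).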
Kill procedure:

```
/home/johan/clawd/scripts/k2-emergency-stop.sh
```

Or manually:

```
rm -rf ~/.clawdbot/agents/k2-browser/sessions/*.jsonl
systemctl --user restart clawdbot-gateway
```
Alert Johan if you kill K2 — he needs to know tokens were burned.
Cron backup: scripts/k2-watchdog.sh runs every 5 minutes as a safety net.
## WhatsApp Monitor
Handled by webhook — message-bridge POSTs to /hooks/message on new messages.
No heartbeat polling needed.
## Message Center (check every heartbeat)

Goal: process all new messages (email + WhatsApp) from MC, and catch any stragglers the webhook missed.
### Inbox Cleanup (every heartbeat)

The webhook handles real-time delivery, but messages can slip through (restarts, migration, webhook downtime). Always check BOTH accounts for unprocessed mail:

```
curl -s "http://localhost:8025/messages?source=tj_jongsma_me" | jq 'length'
curl -s "http://localhost:8025/messages?source=johan_jongsma_me" | jq 'length'
```

Don't filter by seen — messages can be marked seen (fetched) but never actioned (orphaned by a crash or restart). Anything still returned by the listing endpoint is still in the inbox and needs triage.

If anything is sitting there, triage it per `memory/email-triage.md`. Don't wait for the webhook.
### New Messages Check

Check:

```
curl -s "http://localhost:8025/messages/new" | jq 'length'
```

If new messages exist:

- Fetch the full list: `GET /messages/new`
- Triage each per the rules in `memory/email-triage.md`
- Take action: `/messages/{id}/archive`, `/delete`, `/reply`, or `/to-docs`
- Actions persist in the orchestration DB — they won't reappear

Replay if needed:

```
curl -s "http://localhost:8025/messages?since=24h" | jq 'length'
```
If falling behind (>5 new):
- Do NOT alert Johan
- Process backlog silently
- Log patterns to improve triage rules
Only ping Johan for:
- Anything Sophia-related
- Requires his decision/action (not just FYI)
- Can't fix it myself (stuck, need access, unclear rules)
Silent processing — just triage and fix, don't ping for routine messages.
## Document Inbox (check every heartbeat)

Inbox: `~/documents/inbox/`

If files exist, for each document:

- Read: OCR/extract the full text
- Triage: classify the type (tax, expense, bill, medical, etc.)
- Store PDF: move to `~/documents/store/[hash].pdf`
- Create record: markdown in `~/documents/records/[category]/` with summary, metadata, full text, and a link to the PDF
- Embed: generate a vector embedding, store it in `~/documents/index/embeddings.db`
- Index: update `~/documents/index/master.json`
- Export: if it's an expense/invoice, append to `~/documents/exports/expenses.csv`
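The Store-PDF step amounts to content-addressing: naming the stored file by its hash means re-scanned duplicates collapse to one copy. A sketch with a hypothetical helper name (`store_pdf`); the store layout comes from this file, the choice of sha256 is an assumption.

```shell
# Hypothetical helper: move an inbox file into the store under its sha256 hash.
store_pdf() {  # usage: store_pdf <inbox-file> <store-dir>  -> prints stored path
  hash=$(sha256sum "$1" | cut -d' ' -f1)
  dest="$2/$hash.pdf"
  mkdir -p "$2"
  mv "$1" "$dest"      # identical content always lands at the same path
  echo "$dest"
}
```

The record markdown can then link to the hash path, so renaming or re-downloading the original never breaks the link.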
Silent processing — don't ping for every doc, just do the work. Only ping for IRS correspondence, anything urgent, or a document you can't categorize.
## Claude.ai Usage Monitor (include in every briefing)

Goal: track weekly usage limits to avoid running out mid-week.

- Data file: `memory/claude-usage.json`
- Fetch script: `scripts/claude-usage-fetch.py --save`
- Cookies config: `config/claude-cookies.json`

In every briefing:

- Run `scripts/claude-usage-fetch.py --save` to get fresh data
- Include in the briefing:

```
📊 Claude Usage: {weekly_percent}% weekly (resets {weekly_resets})
```
Thresholds:

- >70%: add a warning to the briefing
- >85%: alert Johan immediately + remind him about the K2.5 backup
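The thresholds reduce to a tiny classifier, sketched here with a hypothetical function name; the input is assumed to be the integer `weekly_percent` that `claude-usage-fetch.py` saves.

```shell
# Map a weekly-usage percentage to an action level.
usage_alert_level() {  # usage: usage_alert_level <percent>
  if [ "$1" -gt 85 ]; then
    echo alert        # ping Johan now, mention the K2.5 backup
  elif [ "$1" -gt 70 ]; then
    echo warn         # add a warning line to the briefing
  else
    echo ok
  fi
}
```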
If the fetch fails (a Cloudflare challenge means the cookies expired):

- ALERT JOHAN IMMEDIATELY via Signal — don't go silent!
- Message: "⚠️ Claude usage fetch failed - cookies expired. Need fresh cookies from Chrome."
- Instructions: F12 → Application → Cookies → claude.ai → copy sessionKey + cf_clearance
- Update `config/claude-cookies.json`

K2.5 backup ready at: `config-backups/k2.5-emergency-switch.md`

Switch command: `/model fireworks/accounts/fireworks/models/kimi-k2p5`