feat(refactor): ready for manual QA after main sync (#274)

* fix: preserve gateway token query in websocket URLs

* fix: classify secure-context device identity handshake errors

* fix: normalize trailing dot in host allowlist checks

* fix: support proxied gateway websocket paths and tailnet host normalization

* fix: auto-connect startup to primary configured gateway

* fix: keep gateway tokens server-side only

* fix: allow authenticated viewers to resolve gateway connect credentials

* fix: identify mission control as operator gateway client

* fix: redirect remote http sessions to https for gateway auth

* fix: support URL-style gateway hosts in health probes

* fix: resolve primary gateway credentials from detected openclaw runtime

* fix: hide duplicate gateway connection summary when managed

* refactor: remove super admin and workspaces panels from UI navigation

* fix: treat configured gateways as full-mode capability

* refactor: move promo banner copy into subtle footer note

* fix: stabilize gateway websocket connect protocol detection

* test: cover https forwarded proto for gateway websocket url

* fix: load canonical agent files and memory in detail panel

* fix: resolve agent files from openclaw workspace conventions

* fix: persist websocket client across route remounts

* feat: allow deleting agents with optional workspace removal

* feat: refresh mission control branding and favicon assets

* feat: complete github parity sync implementation

* chore: remove e2e temp artifacts from repo

* feat: add embedded /chat panel with shared chat workspace

* feat: unify sessions navigation into chat panel

* feat: show local Claude/Codex sessions in chat list

* feat: enable local session continuation and chat tagging

* fix: correct local codex session recency detection

* fix: refresh local session age and anchor chat composer

* refactor: make chat provider-session-first by mode

* fix: add local provider and MC health rows in overview

* feat: finalize tenant-scoped workspaces and full e2e coverage

* feat: improve session workbench controls and smoke coverage

* refactor: extract SaaS code to separate pro repo

- Add registerAuthResolver() hook to auth.ts
- Add registerMigrations() hook to migrations.ts
- Remove saas config block, SaaS modules, Pro API routes
- Keep adapters, super-admin routes, migration 032

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add framework adapters and self-update mechanism

- Framework adapter layer for multi-agent registration (autogen, crewai, langgraph, claude-sdk, openclaw, generic)
- Self-update API endpoint (admin-only git pull + install + build)
- Update banner UI component showing available versions with dismiss

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: update README stats, remove stale Super Admin refs, improve self-update

- Panel count 28→32, API routes 66→95, migrations 21→30
- Remove Super Admin from UI-facing docs (APIs remain)
- Document framework adapters and self-update mechanism
- Mark workspace isolation, adapters, projects as completed in roadmap
- Self-update now uses tag-based checkout instead of branch pull
- Plugin hook comments: "Pro" → "extensions"

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add skills hub with registry integration, bidirectional sync, and local agent discovery

- Bidirectional disk↔DB skill sync via scheduler (60s interval, SHA-256 change detection, disk-wins conflict resolution)
- ClawdHub + skills.sh registry proxy with search, install, and security scanning (9 rules: prompt injection, credential leaks, data exfiltration, obfuscated content)
- Local agent discovery from ~/.agents/, ~/.codex/agents/, ~/.claude/agents/ with bidirectional sync
- DB-backed skills API with filesystem fallback, admin-only install, rate limiting
- Skills panel: installed/registry tabs, security badges, friendly source labels, OpenClaw gateway support
- Agent panel: local sync button, source badges (local/gateway)
- Migrations 033 (skills table) and 034 (agents source/hash/workspace columns)
- Full test coverage: 24 unit tests, 34 E2E tests (286 total suite green)
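
The disk-wins resolution above can be sketched with two content hashes plus the hash stored at the last sync (helper names hypothetical, not the actual scheduler code):

```typescript
import { createHash } from "node:crypto";

// SHA-256 of a skill file's content; stored alongside the DB row so the
// scheduler can tell which side changed since the last sync.
function contentHash(content: string): string {
  return createHash("sha256").update(content, "utf8").digest("hex");
}

// Disk-wins conflict resolution: if the disk copy changed (or both
// sides changed), the disk copy overwrites the DB row.
function resolveSync(
  diskContent: string,
  dbContent: string,
  storedHash: string, // hash recorded at the previous successful sync
): "disk-to-db" | "db-to-disk" | "in-sync" {
  const disk = contentHash(diskContent);
  const db = contentHash(dbContent);
  if (disk === db) return "in-sync";
  if (disk !== storedHash) return "disk-to-db"; // disk changed → disk wins
  return "db-to-disk";                          // only the DB copy changed
}
```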

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add per-agent rate limiting and agent self-registration

- Per-agent rate limiter keyed on x-agent-name header (falls back to IP)
- Agent heartbeat: 30 req/min per agent, task polling: 20 req/min per agent
- Rate limit response headers (Retry-After, X-RateLimit-*) for agent backoff
- POST /api/agents/register: self-service registration with viewer-level auth
- Idempotent registration (re-registering updates last_seen, returns existing)
- Name validation, role whitelist, capabilities/framework in config
- Self-registration rate-limited to 5/min per IP
- 9 E2E tests for self-registration (295 total suite green)
- README: updated API route count (97), test counts, new endpoints
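
A minimal fixed-window version of the per-agent limiter, assuming the header/IP keying described above (key shape and limits illustrative, not the exact route-handler code):

```typescript
type Bucket = { count: number; windowStart: number };

class AgentRateLimiter {
  private buckets = new Map<string, Bucket>();
  constructor(private limit: number, private windowMs: number) {}

  // Key on the x-agent-name header when present, else the client IP.
  keyFor(headers: Record<string, string | undefined>, ip: string): string {
    return headers["x-agent-name"] ?? `ip:${ip}`;
  }

  // retryAfterMs > 0 tells the agent how long to back off
  // (the value surfaced via Retry-After / X-RateLimit-* headers).
  check(key: string, now = Date.now()): { allowed: boolean; retryAfterMs: number } {
    const b = this.buckets.get(key);
    if (!b || now - b.windowStart >= this.windowMs) {
      this.buckets.set(key, { count: 1, windowStart: now });
      return { allowed: true, retryAfterMs: 0 };
    }
    if (b.count < this.limit) {
      b.count++;
      return { allowed: true, retryAfterMs: 0 };
    }
    return { allowed: false, retryAfterMs: b.windowStart + this.windowMs - now };
  }
}
```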

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: enhance agent cost panel, OAuth approval UI, and framework adapter gateway

- Agent Cost Panel: add task cost attribution drill-down, cost share
  percentages, bar chart comparison, 5th summary card for task-attributed
  costs, 30s auto-refresh, and tabbed expanded view (tasks/models)
- OAuth Approval UI: replace window.prompt() with inline role selector
  and note input, add avatar display, show animated pending count badge,
  add dedicated "awaiting approval" state on login page
- Framework Adapter Gateway: wire GenericAdapter.getAssignments() to
  query task queue, add POST /api/adapters route for framework-agnostic
  agent actions (register, heartbeat, report, assignments, disconnect)
- Clean up dead api-keys import and DB-backed key resolution from
  auth.ts (moved to Pro repo)
- Resolve README merge conflicts, update route count to 98

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: complete free-tier functionality — adapters, workspace CRUD, agent sync, and UI polish

- Implement all 5 framework adapter stubs (claude-sdk, crewai, langgraph, autogen, openclaw)
  with shared queryPendingAssignments() helper to eliminate SQL duplication
- Add recurring gateway_agent_sync scheduler task (was startup-only)
- Add workspace CRUD API: POST/PUT/DELETE /api/workspaces with tenant isolation
- Add local agent discovery for flat .md files (Claude Code agent format with YAML frontmatter)
- Add per-agent cost breakdown API (GET /api/tokens/by-agent)
- Add API key rotation endpoint (GET/POST /api/tokens/rotate)
- Add Google OAuth disconnect endpoint
- Polish login page with inline Google Sign-In button
- Enhance settings panel with Security tab (API key management, OAuth)
- Enhance agent cost panel with per-agent DB view
- Add Awesome OpenClaw as third skill registry source
- Add integration connectivity test fallback

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: include session-message component (required by chat-workspace)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: gateway dot color should reflect live connection state

When the WebSocket is connected, show a green dot regardless of the
stored probe status. Prevents a misleading red dot appearing next to a
green CONNECTED badge.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: agent creation progress modal and openclaw CLI flag

- Remove invalid --name flag from openclaw agents add CLI invocation
- Add multi-step progress UI to CreateAgentModal showing DB insert,
  gateway write, and workspace provisioning steps with animated status
- Progress view replaces review content during creation with spinner,
  checkmark, and error states per step
- Auto-close on success after 1.5s, retry/close buttons on error
- Squad panel: add status-based card edge colors and glow styles

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: task dispatch — scheduler polls assigned tasks and runs agents via openclaw CLI

Adds a task_dispatch scheduler job that picks up tasks in 'assigned' status,
executes them via `openclaw agent --local --json`, and moves them to 'review'
with the agent's response as resolution + comment.

* feat: link dispatched tasks to agent sessions — view session from task detail

- task-dispatch: extract sessionId from openclaw JSON response, store in task metadata
- task detail modal: "View Session" button navigates to /chat with the agent's session transcript
- shows pulsing "Live" indicator when task is in_progress
- agent squad panel: show quality_review and done counts in task stats

* feat: automated Aegis quality review — scheduler polls review tasks and approves/rejects

- aegis_review scheduler job picks up tasks in 'review' status every 60s
- runs openclaw agent to evaluate task resolution quality
- approved → done, rejected → in_progress with feedback as comment
- quality-review API: rejected reviews now push task back to in_progress
- approved reviews work for any reviewer (not just aegis)

* feat: natural language recurring tasks + Claude Code task bridge

Add NL schedule parser (zero deps) for creating recurring tasks via
"every morning at 9am" style input. Template-clone pattern spawns dated
child tasks on cron schedule with Aegis quality gates per instance.
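
A toy subset of the parser's phrase-to-cron mapping (the real parser covers far more phrasings; default hours here are assumptions):

```typescript
// Maps phrases like "every morning at 9am" to a 5-field cron expression.
// Zero dependencies, mirroring the parser's constraint.
function parseSchedule(input: string): string | null {
  const text = input.toLowerCase().trim();
  const m = text.match(
    /^every (morning|day|evening)(?: at (\d{1,2})(?::(\d{2}))?\s*(am|pm)?)?$/,
  );
  if (!m) return null;
  const [, period, hourStr, minStr, ampm] = m;
  // Illustrative defaults when no time is given: morning/day 9, evening 18.
  let hour = hourStr ? parseInt(hourStr, 10) : period === "evening" ? 18 : 9;
  if (ampm === "pm" && hour < 12) hour += 12;
  if (ampm === "am" && hour === 12) hour = 0;
  const minute = minStr ? parseInt(minStr, 10) : 0;
  return `${minute} ${hour} * * *`; // minute hour day month weekday
}
```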

Read-only bridge surfaces Claude Code team tasks and configs from
~/.claude/tasks/ and ~/.claude/teams/ on the MC dashboard.

New files: schedule-parser.ts, recurring-tasks.ts, claude-tasks.ts,
API routes for /claude-tasks and /schedule-parse.
Modified: scheduler.ts (recurring_task_spawn), migrations.ts (036),
task-board-panel.tsx (schedule UI + badges + CC section),
cron-management-panel.tsx (CC teams section).

* docs: update README with recurring tasks and Claude Code task bridge

Add sections for natural language recurring tasks, Claude Code task
bridge, new API endpoints, architecture tree entries, and roadmap items.

* fix: agent card redesign, gateway badge tooltip, and ws:// for localhost gateways

- Compact agent cards: remove 4 colored stat boxes, show inline task stats,
  display model name from config, remove session info box, remove Busy button
- Gateway ConnectionBadge: rich hover tooltip with host, latency, WS/SSE status
- Fix gateway connect over Tailscale/HTTPS: use ws:// for localhost gateway
  hosts since they have no TLS (browsers allow mixed content for localhost)
- Extract agent-card-helpers.ts with formatModelName, buildTaskStatParts,
  extractWsHost for testability
- Add 16 tests for agent card helpers, update 12 gateway-url tests
- Sanitize test fixtures to remove personal Tailscale hostnames

* fix: gateway auto-connect via Tailscale Serve and informative mode badge

- Detect Tailscale Serve mode from OpenClaw config and build
  wss://<dashboard-host>/gw URL for remote browser connections
- Replace static mode badge with ModeBadge showing live WS status,
  latency, and rich hover tooltip (host, WS/SSE, retries)
- Fall back to host rewrite for non-Tailscale remote access

* feat: discover OS-level gateways and show in Gateway Manager

- Add GET /api/gateways/discover — scans /home/*/.openclaw/openclaw.json
  for gateway configs and checks if they're listening (via ss)
- Show discovered gateways in Gateway Manager with user, port, bind mode,
  active status, and Tailscale mode badge
- One-click Register button to add discovered gateways to the DB
- Refine Tailscale Serve detection in connect route with config caching

* fix: GitHub sync panel loading hang and gateway discovery via systemd

- GitHub panel: use Promise.allSettled + AbortSignal.timeout(8000) to
  prevent indefinite loading spinner when any API call hangs
- Show helpful "not configured" notice when GITHUB_TOKEN is missing
- Always render Sync History and Linked Tasks sections with empty states
- Gateway discover: rewrite to use systemctl + ss for port detection
  instead of reading other users' config files (permission-denied)
- Gateway panel: filter discovered gateways that are already registered

* feat: complete audit trail action type coverage with grouped filters

Add labels, colors, and icons for 22 new action types (agents, workspaces,
system, config, auth). Replace flat filter dropdown with optgroup categories.
Extend formatDetail() for settings, backups, heartbeats, cleanup, and export.

* refactor: consolidate spawn into task board and editable sub-agent config

- Move spawn form into collapsible section in task board header
- Make sub-agent config editable in agent detail ConfigTab
- Remove /spawn as standalone page and nav item
- Use violet color for sub-agents to distinguish from agents (blue)

* refactor: remove agent comms panel from agents page

* refactor: redesign agent detail modal — minimal header, compact overview, model selector

- Compact modal header with inline status badge and underline tabs
- Delete actions moved to hover dropdown (trash icon)
- Overview tab: two-column layout with key fields + message panel
- Added model selector to agent overview (editable, saved via PUT /api/agents)
- Status controls as compact pill buttons instead of bulky cards
- Heartbeat shown as inline compact bar instead of full card
- Task stats as horizontal row instead of grid

* feat: add memory knowledge graph visualization for gateway mode

Add interactive ReactFlow-based node graph showing OpenClaw per-agent
memory topology. New /api/memory/graph endpoint reads SQLite memory
databases and returns chunk/file statistics per agent. Graph tab in
Memory Browser (gateway mode only) shows agent hub nodes sized by
chunk count with drill-down to file-level views.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* docs: update README for spawn consolidation, modal redesign, test count

* fix: harden agent-comms session threads and runtime tool visibility

* refactor: remove orphaned agent-spawn-panel after spawn/task unification

Spawn functionality now lives inline in task-board-panel and sub-agent
config is in agent-detail-tabs. This file had no imports anywhere.

* fix: exclude /brand/ assets from auth middleware matcher

The login page logo was broken because requests to /brand/mc-logo-128.png
were intercepted by the auth gate and redirected to /login.

* fix: redesign cron calendar — aggregate by job, add detail table

Calendar was broken: each cron occurrence was rendered individually,
causing 135+ jobs * N daily runs = thousands of entries per day cell
(all showing 00:00). Now:

- Week/month cells show unique jobs per day with run counts
- Agent color coding for visual distinction
- Human-readable frequency labels (every 5m, hourly at :00, etc.)
- Selected day panel shows job summaries not raw occurrences
- Agenda view capped at 500 entries (was unlimited)
- Job list replaced with compact sortable table
- Job detail panel redesigned: config, command, timing, logs in 2-col

* fix: clean agent payload in task comments, deduplicate sidebar Agents

Comments:
- Display-side: parse OpenClaw JSON payloads, extract text, strip ANSI
  codes, show model/tokens/duration as compact badge
- API-side: normalize agent result JSON on ingestion — store clean text
  with optional metadata footer instead of raw payload dump

Sidebar:
- Move Agent Costs and Memory as children under core Agents item
- Remove duplicate "Agents" group from Observe section

* fix: make Agents nav item clickable + split parent/chevron, move Memory standalone

- Clicking "Agents" label navigates to agents page and auto-expands children
- Chevron is a separate toggle button for expand/collapse
- Active indicator shown when on agents page
- Memory moved to standalone item below Skills in core nav
- Agent Costs remains as child under Agents

* refactor: move Office panel to Observe nav category

* feat: Obsidian-style memory graph + browser panel redesign

- Replace React Flow with d3-force canvas renderer for memory graph
- Force-directed physics: node repulsion, link attraction, organic settling
- Canvas rendering for 500+ node performance at 60fps
- Drag/pin nodes, hover highlighting, zoom/pan, search glow
- Redesign memory browser panel with Obsidian-inspired layout:
  - Slim 240px collapsible sidebar file tree
  - Top bar with Files/Graph view switcher
  - Dense mono-font file items with text-char icons
  - Graph view as default landing
  - Improved markdown renderer with code block support
- Add d3-force dependency

* fix: memory panel sidebar scroll overflow + graph canvas blank on mount

- Add min-h-0 to sidebar flex container so file tree scrolls within bounds
- Guard ResizeObserver callback against 0×0 dimensions to prevent invisible canvas
- Add requestAnimationFrame re-measure to catch post-paint layout
- Set minHeight floor on canvas container to prevent flex collapse
- Propagate flex height through page → panel → graph component chain

* fix: memory panel viewport height — use calc(100vh) instead of h-full

h-full doesn't resolve to a usable height inside a scrollable parent.
Use an explicit viewport calc matching the pattern from chat-page-panel.

* fix: memory panel overflow — add overflow-hidden to prevent page scroll

* fix: agent detail crash when model is object instead of string

React error #31 — agent.model can be { primary: "..." } object,
not a string. Handle both shapes in the display fallback.

* fix: agent squad panel crash on unknown agent status

statusCardStyles only covered offline/idle/busy/error but agents can
have other statuses (active, online). Fall back to default style.

* fix: handle double-nested model.primary in agent config

The testdev agent had config.model.primary stored as an object
{ primary: "anthropic/..." } instead of a plain string. Defend
against this at all render and initialization sites.

* test: add E2E tests for onboarding, security-scan, diagnostics, and injection guard endpoints

- onboarding-api.spec.ts: 11 tests covering GET/POST auth, step completion, skip, reset, full lifecycle
- security-scan-api.spec.ts: 5 tests covering auth, response shape, score range, categories
- diagnostics-api.spec.ts: 7 tests covering auth, all response sections and field types
- injection-guard-endpoints.spec.ts: 7 tests verifying 422 blocking on workflows, spawn, agent message, chat forward
- auth-guards.spec.ts: added 3 new endpoints to protected GET list

* feat: phases 1-8 — Docker hardening, diagnostics, installer, skills, security, onboarding, injection guard

- Docker: hardened compose override, non-root user, read-only fs, health checks
- Diagnostics API: system info, security checks, database stats, gateway probe
- Installer: interactive install.sh, generate-env.sh, station-doctor.sh
- Skills: mission-control-installer and mission-control-manage OpenClaw skills
- Security: proxy hardening (HSTS, CSP, host allowlist), cookie improvements,
  security-scan API with 5 categories, security-audit.sh script
- Onboarding: wizard UI components, onboarding API with step tracking
- Injection guard: prompt/command injection scanning on workflows, spawn,
  agent messages, and chat forwarding endpoints (42 unit tests)
- Status API: enhanced with agent/session/gateway diagnostics
- Settings: onboarding integration in settings panel
- Docs: security hardening guide, landing page handoff, hardening guide

* fix: remove duplicate GW badges and optimize header for smaller screens

- Remove ConnectionBadge and MobileConnectionDot (ModeBadge covers all sizes)
- Lower stats breakpoint from 2xl to xl (visible at ≥1280px)
- Move DigitalClock into stats group
- Remove redundant Chat button (accessible from sidebar)

* feat: agent-optimized onboarding wizard with live capability detection

Rewrite wizard step content for human + agent dual audience:
- Welcome: live status chips (sessions, gateway, agents), mode-adaptive capability cards
- Credentials: explain impact on both dashboard access and agent self-registration
- Agent Setup: comprehensive feature explainer with descriptions per mode
- Security: agent-security framing with category tags before auto-scan
- Get Started: highlighted primary CTA, detailed feature descriptions, self-register tip

Add SystemCapabilities fetch on mount (parallel /api/status + /api/agents).
Show step titles below progress dots. Rename API step titles.

* feat: Google Workspace integration + TUI-style agent comms feed

Add Google Workspace CLI as a productivity integration with gws binary
detection. Rewrite agent-comms-panel to a TUI-style feed with
FeedCategory taxonomy (chat/tools/trace/system/safety) matching the
OpenClaw CLI. Extract inline loading spinners into shared Loader
component. Add autoScan and copy-fix to security scan card. Bump to
v2.0.0.

* fix: cross-codebase audit — SSRF hardening, race conditions, spawn security, memory leaks

- Expand SSRF blocklist with IPv4 CIDR matching for private ranges (10/8, 172.16/12, 192.168/16, 169.254/16, 127/8) while allowlisting user-configured gateway hosts
- Add 1MB file size limit on agent workspace file reads to prevent OOM
- Reorder agent config write-back: DB first (transactional), then gateway file
- Wrap gateway health DB updates in a single transaction
- Add 60s TTL to Tailscale Serve detection cache
- Expand injection guard to scan spawn label field
- Narrow spawn compatibility fallback to only retry on tools/profile schema errors
- Add audit logging for spawn operations
- Cap WebSocket ping timestamp map at 10 entries to prevent memory leak
- Apply rate limiting to GET /api/spawn history endpoint
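
The CIDR matching for the listed private ranges can be sketched as follows (the real check also consults the user-configured gateway allowlist, omitted here):

```typescript
const PRIVATE_CIDRS = [
  "10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16", "169.254.0.0/16", "127.0.0.0/8",
];

// Dotted-quad → unsigned 32-bit integer.
function ipv4ToInt(ip: string): number {
  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((p) => Number.isNaN(p) || p < 0 || p > 255)) {
    throw new Error(`not an IPv4 address: ${ip}`);
  }
  return ((parts[0] << 24) | (parts[1] << 16) | (parts[2] << 8) | parts[3]) >>> 0;
}

function inCidr(ip: string, cidr: string): boolean {
  const [base, bitsStr] = cidr.split("/");
  const bits = parseInt(bitsStr, 10);
  const mask = bits === 0 ? 0 : (~0 << (32 - bits)) >>> 0;
  return (ipv4ToInt(ip) & mask) === (ipv4ToInt(base) & mask);
}

function isPrivateIPv4(ip: string): boolean {
  return PRIVATE_CIDRS.some((c) => inCidr(ip, c));
}
```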

* fix: cap unbounded store arrays, add fetch cleanup, add missing rate limits

- Cap spawnRequests (500), notifications (500), tokenUsage (2000) in Zustand store to prevent memory leaks in long-running browser sessions
- Add cancellation flag to sidebar fetch to prevent state updates after unmount
- Add error handling to notifications panel markRead/markAllRead operations
- Add missing mutationLimiter to PUT/DELETE /api/workflows endpoints
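
The capping pattern, as a generic helper (helper name hypothetical; newest-first ordering is an assumption about the store's convention):

```typescript
// Prepend the newest entry and drop the oldest beyond the cap, keeping
// long-running browser sessions from accumulating unbounded arrays.
function appendCapped<T>(items: T[], next: T, cap: number): T[] {
  const out = [next, ...items];
  return out.length > cap ? out.slice(0, cap) : out;
}
```

In a Zustand setter this would look like `set((s) => ({ notifications: appendCapped(s.notifications, n, 500) }))`.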

* fix: multi-tenancy isolation — scope search, export, SSE, webhooks, agent files

- Scope search endpoint: messages and webhooks filtered by workspace_id, audit search restricted to admin role, pipelines filtered by workspace_id
- Scope export endpoint: pipeline_runs export filtered by workspace_id, audit export annotated as intentionally instance-global (admin-only)
- Filter SSE events by workspace_id to prevent cross-workspace data leakage
- Add SSRF blocklist to webhook URL validation (private IPs, localhost, cloud metadata)
- Add 1MB file size limit to agent workspace file writes

* fix: auth timing attack, session revocation, validation bounds, cleanup scoping

- Prevent timing-based username enumeration by always running verifyPassword
  against a dummy hash when user not found or ineligible
- Revoke all sessions on password change and issue fresh session cookie
- Add loginLimiter rate limiting to Google OAuth POST endpoint
- Tighten Zod schemas: bound timestamps, cap arrays at 50, limit string
  items, constrain numeric fields (hours, timeout, template_id)
- Scope cleanup retention deletes by workspace_id (activities, notifications,
  pipeline_runs); audit_log remains instance-global by design
- Clamp config retention values to [0, 3650] days and gateway port to
  [1, 65535] with NaN fallback to defaults
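
The dummy-hash pattern in miniature, with stand-in findUser/verifyPassword (the real code uses the project's stored password hashing scheme):

```typescript
// Always run the (expensive) password verification, even when the user
// is unknown or ineligible, so response timing doesn't reveal whether a
// username exists.
function authenticate(
  findUser: (name: string) => { passwordHash: string } | null,
  verifyPassword: (password: string, hash: string) => boolean,
  dummyHash: string, // pre-computed hash of a throwaway value
  username: string,
  password: string,
): boolean {
  const user = findUser(username);
  const ok = verifyPassword(password, user?.passwordHash ?? dummyHash);
  return user !== null && ok; // dummy verification result is never trusted
}
```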

* feat: plugin capabilities system + Hyperbrowser integration

- Add plugin registry (src/lib/plugins.ts) with registries for
  integrations, categories, nav items, panels, and tool providers
- Add Hyperbrowser as built-in integration with API key test handler
- Wire plugin hooks into integrations route, content router, nav rail,
  and agent template tool groups
- Add plugin loader stub and example plugin file

* feat: Ars Contexta-inspired memory knowledge system

Add wiki-link connections, schema enforcement, processing pipeline,
MOC generation, health diagnostics, and context injection to the
memory subsystem. Includes 4 new API routes (/api/memory/links,
/health, /context, /process), a shared utility library, enhanced
memory browser panel with graph/health/pipeline views, 15 unit
tests, and 14 E2E tests.

* feat: composable dashboard widgets, new panels, boot sequence loader

Refactor monolithic dashboard into composable widget grid with 10
extracted widgets (metric cards, task flow, event stream, gateway
health, etc.). Add channels, debug, exec-approval, and nodes panels
with corresponding API routes. Improve boot loader with stepped
init sequence. Enhance token dashboard, websocket reconnect, and
message bubble rendering.

* feat: security audit panel, agent eval framework, optimization endpoint

- Add security event logging (auth failures, rate limits, injection attempts, secret exposures)
- Add secret scanner with regex patterns for AWS keys, GitHub tokens, Stripe keys, JWTs, private keys, DB URIs
- Add MCP call auditing with tool-use frequency and success/failure tracking
- Add agent trust scoring with weighted recalculation
- Add four-layer agent eval stack (output, trace, component, drift detection)
- Add agent optimization engine (token efficiency, tool patterns, fleet benchmarks)
- Add hook profiles (minimal/standard/strict) for security strictness control
- Add security audit panel with posture gauge, timeline, trust scores, MCP audit charts
- Add API endpoints: /api/security-audit, /api/agents/optimize, /api/agents/evals
- Wire security events into auth, rate-limit, injection-guard, agent messages
- Add 3 DB migrations (security_events, agent_trust_scores, mcp_call_log, eval tables, session costs)
- Add unit tests (60 tests) and e2e tests for all new endpoints

* docs: update README and security hardening guide for security audit, evals, optimization

- Update panel count (32), API route count (101), migration count (39), test count (282)
- Add security audit panel, agent eval framework, optimization endpoint to features list
- Add architecture tree entries for new libraries
- Add Security & Evals API reference section
- Add feature descriptions for security audit, eval framework, agent optimization
- Update SECURITY-HARDENING.md with security event system, hook profiles, eval framework docs
- Renumber hardening sections to accommodate new content

* docs: add onboarding wizard to README features and API reference

* feat: OpenClaw auto-update detection with version banner

Add /api/openclaw/version endpoint that checks installed OpenClaw version
against latest GitHub release, with 1-hour ISR cache. Cyan-themed banner
displays when an update is available, with copy-to-clipboard for the
update command and per-version dismiss persistence via localStorage.

* fix: add dark class to SSR html element to prevent white login flash

The html element was rendered without any class in SSR, causing the
:root (white) CSS variables to apply until client-side scripts added
the dark class. Setting className="dark" as the server default ensures
dark theme renders immediately. The FOUC script and next-themes will
adjust for light theme users on hydration.

* fix: login button stays disabled with browser autofill

Browser autofill populates input values without firing React onChange,
so the controlled state stays empty and the disabled check keeps the
button disabled. Remove the username/password emptiness check from
disabled — HTML required attributes already prevent empty submission.

* fix: login fails with browser autofill due to empty React state

When browser autofills credentials, React onChange never fires so state
stays empty. Read actual DOM input values on form submit as fallback.
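
The fallback reads roughly like this, with the form reduced to plain values (names hypothetical; in the component the DOM values come from input refs):

```typescript
// Prefer the controlled React state, but fall back to the live DOM
// values that autofill may have written without firing onChange.
function credentialsOnSubmit(
  state: { username: string; password: string },
  dom: { username: string; password: string },
): { username: string; password: string } {
  return {
    username: state.username || dom.username,
    password: state.password || dom.password,
  };
}
```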

* fix: login redirect fails due to router.push race with cookie

Replace router.push('/') + router.refresh() with window.location.href
to force a full page reload after login. The soft navigation could
race with the RSC payload cache, causing /api/auth/me to fire before
the session cookie was available.

* fix: CSP nonce blocks inline scripts, breaking theme and login

The CSP had both 'unsafe-inline' and a nonce in script-src. Per the
CSP spec, browsers ignore 'unsafe-inline' when a nonce is present.
Since no scripts actually use the nonce attribute, all inline scripts
(FOUC prevention, next-themes) were blocked — causing white flash
and broken client-side behavior. Remove the unused nonce so
'unsafe-inline' is respected.
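
A minimal before/after of the header shape the fix targets (directive values illustrative, not the project's exact policy):

```http
# Before: nonce present, so browsers ignore 'unsafe-inline' and block
# every inline script that lacks the nonce attribute
Content-Security-Policy: script-src 'self' 'unsafe-inline' 'nonce-abc123'

# After: no nonce, so 'unsafe-inline' takes effect again
Content-Security-Policy: script-src 'self' 'unsafe-inline'
```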

* feat: OpenClaw update-now button triggers server-side update

Add POST /api/openclaw/update endpoint that runs `openclaw update
--channel stable` with 5-minute timeout, audit logs the version
change, and returns the new version. Banner now shows Update Now
button with updating/success/error states alongside the existing
Copy Command and View Release actions.

* feat: security scan auto-fix, gateway session chat, boot loader improvements

- Add POST /api/security-scan/fix endpoint with per-issue and fix-all support
  - Accepts optional { ids: ["check_id"] } to fix specific issues
  - Handles env permissions, host allowlist, HSTS, cookies, API key,
    OpenClaw config (auth, bind, elevated, DM, exec), world-writable files
  - Audit logs all fixes
- Add "Fix All Issues" button and per-check "Fix" buttons to security scan card
  - Auto-re-scans after fixes complete
- Add GET /api/sessions/transcript/gateway for fetching gateway session messages
  - Proxies to OpenClaw gateway HTTP API with format normalization
- Enable chat input for gateway sessions (forwards via chat messages API)
- Move boot loader state to Zustand store (bootComplete) so it only shows
  after login, not on every panel navigation
- Add sessionKey and agent fields to Conversation session metadata

* feat: add OpenClaw security hardening checks to security scan

New checks aligned with `openclaw security audit`:
- Control UI device auth (dangerouslyDisableDeviceAuth)
- Control UI insecure auth (allowInsecureAuth)
- Filesystem workspace isolation (tools.fs.workspaceOnly)
- Dangerous tool groups deny list
- Log redaction (logging.redactSensitive)
- Agent sandbox mode (agents.defaults.sandbox.mode)
- Safe bins interpreter profiling

Auto-fix support for control_ui_device_auth, control_ui_insecure_auth,
fs_workspace_only, and log_redaction.

* fix: read gateway session transcripts from JSONL files on disk

The gateway doesn't expose an HTTP API for session messages.
OpenClaw stores transcripts as JSONL files at:
  {STATE_DIR}/agents/{agent}/sessions/{sessionId}.jsonl

Rewrote the endpoint to:
1. Extract agent name from session key
2. Look up sessionId from agent's sessions.json
3. Read and parse the JSONL transcript file directly
4. Extract type:"message" entries with Claude API content format
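
Steps 3–4 reduce to a small JSONL filter (field names assumed from the Claude API message format; not the endpoint's exact code):

```typescript
type TranscriptMessage = { role: string; text: string };

// Parse a JSONL session transcript, keeping only type:"message" entries
// and flattening Claude-API-style content blocks to plain text.
function parseTranscript(jsonl: string): TranscriptMessage[] {
  const out: TranscriptMessage[] = [];
  for (const line of jsonl.split("\n")) {
    if (!line.trim()) continue;
    let entry: any;
    try { entry = JSON.parse(line); } catch { continue; } // skip partial lines
    if (entry.type !== "message") continue;
    const content = entry.message?.content;
    const text = Array.isArray(content)
      ? content.filter((b: any) => b.type === "text").map((b: any) => b.text).join("")
      : String(content ?? "");
    out.push({ role: entry.message?.role ?? "assistant", text });
  }
  return out;
}
```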

* feat: merge agent costs and token dashboard into unified Cost Tracker panel

- New CostTrackerPanel with 4 tabs: Overview, Agents, Sessions, Tasks
- Combines data from both /api/tokens and /api/tokens/by-agent endpoints
- Flat sidebar entry under OBSERVE (replaces Tokens), no dropdown
- Old routes (tokens, agent-costs) still resolve to new panel
- Removed agent-costs child from Agents nav item

* fix: use correct openclaw CLI command for agent deletion

`openclaw agents remove` doesn't exist — the correct command is
`openclaw agents delete <id> --force`.

* fix: click-based delete dropdown and progress loaders for agent CRUD

Replace CSS group-hover dropdown with click toggle + click-outside
listener so the delete menu stays open. Add spinner loaders for
save and delete operations.

* fix: channels auth, chat send, task UX, skills defaults, memory graph

- Channels: add Bearer token auth headers to gateway API calls
- Chat: add missing `from` field in gateway session message send
- Tasks: remove setLoading(true) on refresh to prevent full-page skeleton
- Skills: pre-select openclaw source when in gateway mode
- Memory graph: add delayed resize retries for flex layout settling
- Memory graph API: drop SUM(LENGTH(text)) for faster query (17 DBs)

* fix: memory graph auto-fit to view and faster API query

- Add fitToView() that computes bounding box and auto-zooms to fit
  all nodes after 60 simulation ticks
- Drop SUM(LENGTH(text)) from graph API for faster queries across
  17 SQLite databases (523 MB total)
- Add delayed resize retries for flex layout settling
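
A fit-to-view computation of the kind described can be sketched like this. The function name, transform convention, and default padding are assumptions, not the component's actual API:

```typescript
// Given node positions and a viewport, compute the zoom and translation
// that frame all nodes with some padding around the bounding box.
interface GraphNode {
  x: number;
  y: number;
}

function fitToView(nodes: GraphNode[], width: number, height: number, padding = 40) {
  const xs = nodes.map((n) => n.x);
  const ys = nodes.map((n) => n.y);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  // Guard against a zero-size bounding box (single node).
  const spanX = Math.max(maxX - minX, 1);
  const spanY = Math.max(maxY - minY, 1);
  const zoom = Math.min((width - 2 * padding) / spanX, (height - 2 * padding) / spanY);
  // Translate so the bounding-box centre lands on the viewport centre.
  return {
    zoom,
    tx: width / 2 - zoom * (minX + spanX / 2),
    ty: height / 2 - zoom * (minY + spanY / 2),
  };
}
```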

* fix: resolve security audit crash and sidebar scroll jump

- Transform authEvents object to array before rendering to prevent
  .map() crash on security audit panel
- Remove pathname from boot effect deps to prevent sidebar scroll
  reset on panel navigation

* fix: always merge session-derived token data instead of fallback-only

The token data pipeline treated gateway session data as a last-resort
fallback, only used when both DB and JSON file were empty. Stale e2e
test records in the JSON file prevented real session data from ever
appearing. Now all three sources (DB, file, sessions) are always merged
and deduplicated.
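
The merge-instead-of-fallback logic can be sketched as below. The record shape and the precedence order (DB, then file, then sessions) are assumptions for illustration:

```typescript
// Illustrative record shape — the real pipeline uses richer rows.
interface TokenRecord {
  sessionId: string;
  tokens: number;
  source: "db" | "file" | "sessions";
}

// Always merge all three sources and deduplicate by session id, rather than
// treating session-derived data as a last-resort fallback.
function mergeTokenData(
  db: TokenRecord[],
  file: TokenRecord[],
  sessions: TokenRecord[],
): TokenRecord[] {
  const byId = new Map<string, TokenRecord>();
  for (const rec of [...db, ...file, ...sessions]) {
    // First writer wins, so earlier sources take precedence for a given id.
    if (!byId.has(rec.sessionId)) byId.set(rec.sessionId, rec);
  }
  return [...byId.values()];
}
```

With this shape, stale records in one source can no longer mask live data from another; they are simply deduplicated away.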

* refactor: merge Activity Feed and Agent History into unified Activity panel

Consolidates two panels that shared the same /api/activities data source.
The merged panel shows a flat feed when "All" is selected and switches
to a day-grouped timeline with agent sidebar when a specific agent is
picked. Removes the History nav entry and AgentHistoryPanel import.

* fix: agent comms feed stream, clickable session/agent chips, target selector

- Add agent_% pattern to comms SQL predicate so chat messages appear in feed
- Make SessionChip and agent bar chips clickable to select message target
- Replace hardcoded "Admin -> Coordinator" label with dismissible target chip
- Route messages to selected target with correct conversation_id and sessionKey

* fix: gateway dispatch, inline session feed, and header z-index

- Switch task dispatch and Aegis reviews from `openclaw agent` CLI to
  gateway two-step pattern (call agent → agent.wait) matching the
  proven chat route invocation path
- Include previous Aegis rejection feedback in re-dispatch prompts
- Add tags to task dispatch prompts
- Replace navigate-away "View Session" button with inline Session tab
  in task detail modal, with auto-refresh, live indicator, and
  SessionMessage rendering
- Fix header z-index so theme dropdown renders above page content

* fix: normalize all security audit API fields to match frontend types

Transform agentTrust, secretAlerts, toolAudit, rateLimits,
injectionAttempts, and timeline from nested API objects into the flat
arrays the UI components expect, preventing .map() crashes.

* feat: animated OpenClaw + Claude converging logos on loading screen

Replace static PNG logo with inline SVG OpenClaw and Claude marks that
animate inward from opposite sides, converging at center with a glow
burst. Includes prefers-reduced-motion support.

* feat: onboarding wizard shows both modes with mode-themed colors

- Add modeColors() helper returning amber (local) or cyan (gateway) classes
- StepWelcome: two side-by-side mode cards with active/inactive styling
- StepGateway: both feature columns always visible, inactive dimmed + locked
- All steps: progress bar, dots, and buttons themed to detected mode
- Thread isGateway prop to StepCredentials and StepSecurity
- Remove unused CapabilityCard and FeatureItem components

* fix: channels panel gateway status, response transform, and boot improvements

- Transform gateway's rich channel data model into flat ChannelAccount[] the frontend expects
- Fall back to /api/health check when /api/channels/status fails, avoiding false "disconnected"
- Use Zustand WebSocket connection state as fallback for gateway status in channels panel
- Show context-aware empty state messages (connected vs disconnected)
- Preload workspace data (agents, sessions, projects) during boot sequence
- Add anti-self-XSS console warning on boot
- Forward explicit sessionKey in chat message dispatch

* fix: move useMissionControl hook before early returns in channels panel

Hook was called after conditional returns (loading/error), violating
Rules of Hooks and causing React error #310.

* fix: loading screen uses real OpenClaw logo, converge→MC mark animation

- Replace placeholder talon SVG with actual OpenClaw favicon (green gradient claw)
- Add MissionControlMark SVG (network graph matching app icon)
- Animation: OpenClaw + Claude converge → pair fades out → MC mark emerges
- Progress steps fade in after logo animation, completed steps collapse away
- Add reduced-motion fallback for new animations

* fix: use real OpenClaw lobster logo and MC brand mark on loading screen

- Replace SVG approximations with actual brand assets (img tags)
- OpenClaw: lobster character from x.com/openclaw profile
- Mission Control: network graph mark from /brand/mc-logo-128.png
- Animation: OpenClaw + Claude converge → fade out → MC mark emerges

* feat: aggregate all gateway session transcripts into agent-feed

Add /api/sessions/transcript/aggregate endpoint that fans out to all
active session JSONL files on disk and returns a merged chronological
event stream. Agent-comms panel now merges transcript events as a third
data source alongside WS logs and DB comms, with deduplication and
category classification (tools, trace, chat, system).

Extract shared JSONL parsing logic into src/lib/transcript-parser.ts
to avoid duplication between gateway and aggregate routes.

* fix: replace collapsing step list with single active label on loading screen

Eliminates layout shifts by replacing the collapsing step list with a
fixed-height single active step label that crossfades between steps.
Progress section now appears at 2.4s delay to let the brand mark land.

* feat: agent-feed send error details, memory activity events, chat height cap

- Parse injection (422) and auth (403) errors from chat send endpoint
  with specific user-facing messages instead of generic "Failed to send"
- Log memory file save/create/delete operations to activities table
- Support comma-separated type filter on activities endpoint (IN query)
- Poll memory activity events at 30s cadence and merge into agent-feed
- Cap feed stream container at max-h-[500px] with existing scroll
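
Building a parameterised `IN (...)` clause for a comma-separated `type` filter might look like this sketch. Column and parameter names are assumptions:

```typescript
// Turn "?type=a,b,c" into a parameterised WHERE clause, keeping user input
// out of the SQL string itself.
function buildTypeFilter(typeParam: string): { sql: string; params: string[] } {
  const types = typeParam.split(",").map((t) => t.trim()).filter(Boolean);
  if (types.length === 0) return { sql: "", params: [] };
  const placeholders = types.map(() => "?").join(", ");
  return { sql: `WHERE type IN (${placeholders})`, params: types };
}
```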

* feat: replace memory graph Canvas 2D + d3-force with reagraph WebGL

- Rewrite memory-graph.tsx to use reagraph GraphCanvas with Obsidian-style
  dark theme (glow effects, connected-node highlighting, force-directed layout)
- Fix parent layout overflow-auto → overflow-hidden to prevent height collapse
- Add next/dynamic SSR-safe import for WebGL/Three.js compatibility
- Remove d3-force dependency, add reagraph

* fix: use infrastructure scan for security audit posture score

The security audit page scored ~95 based only on event history (no
incidents = high score). Now it runs the full infrastructure scan
(credentials, network, OpenClaw, runtime, OS) and blends it 70/30
with event history, matching what the onboarding security scan shows.

- Extract scan logic to shared src/lib/security-scan.ts
- Simplify /api/security-scan route to use shared lib
- Add scan data + expandable categories to security audit panel
- Blended score: 70% infrastructure config, 30% event history

* fix: add worker-src CSP directive and persist panel data across tab switches

Add worker-src 'self' blob: to CSP so reagraph WebGL force layout workers
are not blocked. Move agents, skills, and memory graph data from component
local state into zustand store so data survives tab switches without refetch.

* fix: add worker-src to proxy middleware CSP (mirrors next.config.js)

The middleware in proxy.ts was overwriting the next.config.js CSP header
without the worker-src directive, blocking reagraph blob: workers.

* fix: add blob: to script-src CSP for worker importScripts

worker-src allows worker creation but importScripts() inside workers
falls back to script-src, which also needs blob: for reagraph's
workerize-transferable chain.

* fix: allow cdn.jsdelivr.net in connect-src for reagraph font loading

troika-three-text (used by reagraph for WebGL text) fetches unicode font
resolver data from cdn.jsdelivr.net at runtime.
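
Taken together, the four CSP commits above amount to directives along these lines. This is a sketch only; the production header in next.config.js and the proxy middleware carries many more entries:

```typescript
// Sketch of the CSP directives implied by the reagraph fixes above.
const csp = [
  "worker-src 'self' blob:",                      // allow reagraph WebGL layout workers
  "script-src 'self' blob:",                      // importScripts() inside blob: workers
  "connect-src 'self' https://cdn.jsdelivr.net",  // troika-three-text font resolver data
].join("; ");
```

Note that `worker-src` alone is not enough: script loading inside a worker falls back to `script-src`, which is why `blob:` appears in both directives.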

* feat: obsidian-style memory graph with hover tooltips and breadcrumb nav

- Full-bleed graph canvas with Catppuccin Mocha color palette
- Floating glass-morphism overlays: breadcrumb nav, stats, legend
- Hover tooltip shows file path, chunk count, and text size
- Click hub to drill in, click breadcrumb or hub again to go back
- Color legend for file categories (sessions, memory, knowledge, etc.)

* fix: auto-fit memory graph into view after layout settles

Calls fitNodesInView at 800ms and 2000ms after nodes change to ensure
the graph is visible without needing a manual zoom.

* feat: prefetch memory graph and skills data on app boot

Add memory graph and skills API fetches to the existing Promise.all
block so data is warm before the user navigates to those panels.

* feat: rewrite onboarding copy with mothership/docking narrative

Reframe all 6 onboarding steps to use consistent station/docking
metaphor: Mission Control is the mothership, agents dock here to
gain capabilities. Replaces generic SaaS copy with agent-centric
language (docking credentials, solo/fleet station, skills hangar).

Copy-only changes — no structural, layout, or API changes.

* feat: add essential/full interface mode toggle

- Add interfaceMode to store (essential | full)
- Filter nav-rail items by essential flag in essential mode
- Persist preference via general.interface_mode setting
- Add toggle button to sidebar footer and settings panel
- Redirect to overview if current panel hidden when switching
- Add changelog toggle to openclaw update banner

* fix: scope comms panel auto-scroll to its own container

scrollIntoView was bubbling up to the page-level <main>, causing the
overview page to scroll to the bottom on load. Use scrollTo on the
feed container ref instead.

* refactor: replace ad-hoc spinners with shared Loader component

Standardize loading states across 12 panel files to use the shared
Loader component (panel variant for full-panel states, inline variant
for section-level states) instead of individual animate-spin spinners.

* refactor: move interface mode toggle into user dropdown menu

Remove standalone toggle button from sidebar footer and integrate it
as a segmented control inside the ContextSwitcher dropdown. Add
Settings and Activity quick-nav links to the dropdown.

* feat: add gateway state backup via `openclaw backup create`

Add ?target=gateway variant to POST /api/backup that runs
`openclaw backup create` (60s timeout) and logs an openclaw.backup
audit event. The existing MC SQLite backup remains the default.

Surface both backup actions in Settings panel under a new Backups
row so admins can trigger either backup type with one click.

* refactor: use shared Loader in log viewer panel

Replace ad-hoc spinner with the shared Loader component,
consistent with the rest of the codebase.

* fix: write gateway backup to MC backup dir to avoid path conflict

openclaw backup create writes archives to CWD by default, which is
the state dir — causing a "must not be written inside a source path"
error. Use --output to write to the MC backup directory instead.

* fix: handle openclaw backup non-zero exit with successful output

openclaw backup create may exit non-zero in some environments even
when the archive is successfully created. Check for "Created" in
the combined output before treating it as a failure.
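
The tolerant invocation described by the two backup fixes above can be sketched as follows. The `--output` flag and the `"Created"` marker follow the commit text; the exact CLI output format is an assumption:

```typescript
import { execFile } from "node:child_process";

// The exit code alone is unreliable in some environments, so treat the run
// as successful when the combined output confirms an archive was created.
function backupSucceeded(exitErr: Error | null, combinedOutput: string): boolean {
  return exitErr === null || combinedOutput.includes("Created");
}

// Run the gateway backup with an explicit --output directory so the archive
// is not written inside the openclaw state (source) path.
function runGatewayBackup(outputDir: string): Promise<{ ok: boolean; output: string }> {
  return new Promise((resolve) => {
    execFile(
      "openclaw",
      ["backup", "create", "--output", outputDir],
      { timeout: 60_000 },
      (err, stdout, stderr) => {
        const output = `${stdout}${stderr}`;
        resolve({ ok: backupSucceeded(err, output), output });
      },
    );
  });
}
```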

* feat: add global exec approval overlay modal + refactor panel

- Add fixed overlay modal (ExecApprovalOverlay) that shows pending
  exec approvals regardless of active panel, matching OpenClaw
  reference UI pattern
- Decisions sent via WebSocket RPC (exec.approval.resolve) with
  HTTP fallback
- Handle both exec.approval and exec.approval.requested/resolved
  event variants for gateway compatibility
- Refactor ExecApprovalPanel to read from Zustand store (populated
  by WebSocket) instead of its own HTTP polling loop
- Add cwd, host, resolvedPath fields to ExecApprovalRequest type

* feat: streamline onboarding wizard to 3 steps + add persistent checklist

- Reduce wizard from 6 steps to 3 (welcome, interface mode, credentials)
- Remove agent setup, security scan, and next steps from blocking modal
- Add "Station Online" completion animation with progress bar
- Add persistent onboarding checklist widget to dashboard (6 items, 3 pre-checked)
- Checklist auto-detects completion from store data and auto-dismisses
- Add "Replay Onboarding" button in settings panel
- Improve empty states in agent, cost tracker, and task board panels

* feat: enhance security scanner with severity scoring, new checks, and agent endpoint

- Add CheckSeverity and FixSafety types with severity-weighted scoring
- Add ~20 new platform-specific checks (Linux, macOS, Windows hardening)
- Add cachedExec and tryExecBatch helpers for batching OS checks
- Consolidate auth_pass_set/auth_pass_strong into single auth_pass ID
- Add POST /api/security-scan/agent endpoint with scan/fix/scan-and-fix
  actions, fix scope control, dry-run mode, and structured response
- Add fixSafety field to fix route responses
- Add 10 new secret patterns (Slack/Discord webhooks, OpenAI, Anthropic,
  Twilio, SendGrid, Mailgun, GCP, Azure, SSH key content)
- Add 3 new injection guard rules (SSRF, template injection, SQL injection)
- Show severity badges and fix safety warnings in onboarding and audit UI
- Sort failing checks by severity in audit panel
- Add tests for new fields, patterns, injection rules, and agent endpoint

* feat: close OpenClaw UI gap analysis — schema config, channels, chat, devices, sessions, cron, usage, agents, exec approval, websocket, logs

Phase 1 (Critical):
- Config editor: schema-driven form with typed fields, section sidebar, search, hot-apply, hash concurrency
- Channels: per-platform cards (WhatsApp QR, Telegram bot, Discord guilds, Slack workspace, Nostr profile editor)
- Chat: file attachments (picker/drag-drop/paste), abort generation, focus mode, scroll indicator, RTL detection, compaction/fallback toasts
- Devices: approve/reject pending pairing, token rotation/revocation with confirmation
- WebSocket: event sequence tracking with gap detection, caps negotiation, 1.7x backoff (15s cap), protocol error codes
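
The reconnect backoff named in the WebSocket item can be sketched as a one-liner. The 1.7x factor and 15 s cap come from the commit text; the 1 s base delay is an assumption:

```typescript
// Exponential reconnect backoff: multiply by 1.7 per attempt, cap at 15 s.
function reconnectDelayMs(attempt: number, baseMs = 1000): number {
  return Math.min(baseMs * Math.pow(1.7, attempt), 15_000);
}
```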

Phase 2 (Important):
- Usage dashboard: filter chips, client-side CSV export, cache tokens, timezone selector, cost-by-provider chart
- Sessions: thinking/verbose/reasoning level controls, editable labels, deletion, time window filtering
- Agents: 5 new tabs (files browser+editor, tools allow/deny, channels, cron, model fallback chain)
- Cron: clone job, force/due run modes, run history browser, schedule/enabled filters, field validation, stagger
- Exec approval: per-agent command allowlist editor with glob pattern matching preview
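
Glob matching of the kind the exec-approval allowlist editor previews could look like this sketch, where `*` matches any run of characters and `?` matches one. The real matcher's semantics (anchoring, segment handling) are assumptions:

```typescript
// Compile a simple glob into an anchored regular expression.
function globToRegExp(glob: string): RegExp {
  const escaped = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*/g, ".*")
    .replace(/\?/g, ".");
  return new RegExp(`^${escaped}$`);
}

// A command is allowed when any pattern in the allowlist matches it.
function isAllowed(command: string, allowlist: string[]): boolean {
  return allowlist.some((pattern) => globToRegExp(pattern).test(command));
}
```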

Phase 3 (Nice-to-have):
- Log viewer: export .log/JSON, buffer truncation indicator, log file path display

* test: add unit + E2E tests for gap analysis features

Unit tests (vitest, 132 new tests):
- websocket-utils: error codes, backoff calculation, sequence gaps (23 tests)
- config-schema-utils: schema normalization, field type inference, tags (29 tests)
- chat-utils: RTL detection, attachment validation, file size formatting (18 tests)
- cron-utils: schedule description, expression validation, clone names (26 tests)
- token-utils: provider detection, CSV generation, timezone offsets (17 tests)
- exec-approval-utils: glob pattern matching, multi-pattern search (19 tests)

E2E tests (playwright, 53 new tests):
- gateway-config: GET/PUT config, schema, hash concurrency, auth (5 tests)
- channels-api: list, probe, action validation, auth (4 tests)
- device-management: nodes, approve/reject/rotate/revoke validation (11 tests)
- session-controls: thinking/verbose/reasoning/label/delete, auth (12 tests)
- cron-operations: list, clone, trigger modes, history, auth (13 tests)
- exec-approval-allowlist: CRUD, round-trip, hash concurrency, auth (8 tests)

Refactor: extracted pure logic into standalone utility modules for testability

* feat: integrate Hermes task/cron system and memory into MC observability

- Add read-only cron job scanner (hermes-tasks.ts) with 30s throttled cache
- Add read-only memory scanner (hermes-memory.ts) for MEMORY.md/USER.md
- Add /api/hermes/tasks and /api/hermes/memory API routes
- Enrich /api/hermes GET with cronJobCount and memoryEntries
- Add HermesCronSection to task board (collapsible, purple accent)
- Add Hermes memory tab to memory browser with capacity bars
- Add cron/memory stat badges to settings panel Hermes section
- Enrich dashboard Hermes card subtitle with cron count

* feat: add Hermes observability, branded loader, and conversation UX improvements

* feat: register MC as default dashboard and add gateway onboarding step

Auto-writes gateway.controlUi.dashboardUrl and allowedOrigins to
openclaw.json on capabilities check. Adds a dynamic "Gateway Link"
step to the onboarding wizard when a gateway is detected.

* fix: disable device auth when registering MC as dashboard

MC authenticates via gateway token — device pairing is unnecessary
and causes "pairing required" WebSocket errors. Auto-set
dangerouslyDisableDeviceAuth when writing dashboardUrl.

* fix: remove invalid dashboardUrl write that crashes gateway

The gateway validates its config strictly — unknown keys like
`dashboardUrl` cause startup failures. registerMcAsDashboard() now
only manages valid keys: allowedOrigins and dangerouslyDisableDeviceAuth.

Updated onboarding wizard text to reflect origin registration
instead of dashboard URL configuration.

* fix: auto-detect Tailscale Serve /gw route instead of relying on gateway config

The previous approach read gateway.tailscale.mode from openclaw.json, but
setting mode to "off" (to stop gateway from auto-managing routes) also broke
MC's WebSocket URL resolution. Now checks `tailscale serve status --json`
directly for a /gw handler, with the config check as a legacy fallback.

* fix: retry gateway websocket without stale device identity

* feat: use real codex and hermes session logos

* feat: update hermes session logo

* fix: make standalone deploys include static assets

* fix: wait for standalone server and bind explicit host

* fix: harden nextjs image and typed client boundaries

* refactor: reduce nextjs image and jsx lint debt

* refactor: fix react compiler and channels typing

* fix(refactor): onboarding/walkthrough hardening (#272)

* docs(plan): add onboarding walkthrough hardening plan

* fix(onboarding): harden wizard step flow and keyboard navigation

feat(loader): use real brand logo assets for Claude/OpenClaw/Codex/Hermes

* fix(onboarding): harden API state transitions and reset semantics

* test(onboarding): align e2e API spec with current step model

* test(e2e): isolate openclaw harness gateway port and fail fast on startup errors (#273)

* fix: restore agent key auth and actor attribution regressions

* fix: replay onboarding once per fresh browser session

* Add support section with donation links to README

Added a section to encourage support for the project, including donation links.

* fix: restore memory panel and onboarding scanner

* fix: accept gateway config hash in validation schema

* fix: refine onboarding overlay and boot loader

* fix: harden gateway config updates and nav latency

* fix: add openclaw doctor warning and fix banner

* fix: fallback to openclaw cli for channel status

* fix: align channels and chat with gateway rpc

* fix: speed memory tree and scroll doctor details

* fix: clarify security scan autofix results

* fix: correct runtime data directory resolution

* fix: isolate deploy builds from runtime sqlite

* fix: migrate sqlite data with backup

* fix: trust dynamic self hostnames in proxy

* fix: preserve active hosts in security autofix

* fix: pin mission control node runtime

* fix: make doctor warnings template-safe

* fix: isolate build database overrides

* fix: scope doctor warnings to active state dir

* fix: auto-resolve doctor session drift

* fix: harden standalone deploy restart

* fix: detect standalone listener on jarv

* fix: preserve e2e env after security autofix

* fix: scope onboarding to users and sessions

* docs: clarify node 22 support

* fix: load hermes local session transcripts

* fix: fully remove deleted agents from openclaw state

* fix: normalize openclaw model config writes

* fix: isolate e2e runtime state

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Author: nyk, 2026-03-11 19:09:24 +07:00 (committed by GitHub)
Parent: d92e01d64f
Commit: 3c96623e0f
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database)
362 changed files with 54604 additions and 6850 deletions


@@ -67,7 +67,8 @@ NEXT_PUBLIC_GATEWAY_HOST=
 NEXT_PUBLIC_GATEWAY_PORT=18789
 NEXT_PUBLIC_GATEWAY_PROTOCOL=
 NEXT_PUBLIC_GATEWAY_URL=
-# NEXT_PUBLIC_GATEWAY_TOKEN= # Optional, set if gateway requires auth token
+# Do not expose gateway tokens via NEXT_PUBLIC_* variables.
+# Keep gateway auth secrets server-side only (OPENCLAW_GATEWAY_TOKEN / GATEWAY_TOKEN).
 # Gateway client id used in websocket handshake (role=operator UI client).
 NEXT_PUBLIC_GATEWAY_CLIENT_ID=openclaw-control-ui


@@ -24,7 +24,7 @@ jobs:
       - name: Setup Node
         uses: actions/setup-node@v4
         with:
-          node-version: 20
+          node-version-file: '.nvmrc'
           cache: 'pnpm'
       - name: Install dependencies

.gitignore (vendored, 6 changes)

@@ -35,6 +35,12 @@ aegis/
 # Playwright
 test-results/
 playwright-report/
+.tmp/
+.playwright-mcp/
+# Local QA screenshots
+/e2e-debug-*.png
+/e2e-channels-*.png
 # Claude Code context files
 CLAUDE.md

.node-version (new file)

@@ -0,0 +1 @@
+22

.nvmrc (new file)

@@ -0,0 +1 @@
+22


@@ -1,4 +1,4 @@
-FROM node:20-slim AS base
+FROM node:22.22.0-slim AS base
 RUN corepack enable && corepack prepare pnpm@latest --activate
 WORKDIR /app
@@ -20,7 +20,14 @@ COPY --from=deps /app/node_modules ./node_modules
 COPY . .
 RUN pnpm build
-FROM node:20-slim AS runtime
+FROM node:22.22.0-slim AS runtime
+ARG MC_VERSION=dev
+LABEL org.opencontainers.image.source="https://github.com/openclaw/mission-control"
+LABEL org.opencontainers.image.description="Mission Control - operations dashboard"
+LABEL org.opencontainers.image.licenses="MIT"
+LABEL org.opencontainers.image.version="${MC_VERSION}"
 WORKDIR /app
 ENV NODE_ENV=production
 RUN addgroup --system --gid 1001 nodejs && adduser --system --uid 1001 nextjs
@@ -30,11 +37,11 @@ COPY --from=build /app/.next/static ./.next/static
 COPY --from=build /app/src/lib/schema.sql ./src/lib/schema.sql
 # Create data directory with correct ownership for SQLite
 RUN mkdir -p .data && chown nextjs:nodejs .data
-RUN apt-get update && apt-get install -y curl --no-install-recommends && rm -rf /var/lib/apt/lists/*
+RUN echo 'const http=require("http");const r=http.get("http://localhost:"+(process.env.PORT||3000)+"/api/status?action=health",s=>{process.exit(s.statusCode===200?0:1)});r.on("error",()=>process.exit(1));r.setTimeout(4000,()=>{r.destroy();process.exit(1)})' > /app/healthcheck.js
 USER nextjs
 ENV PORT=3000
 EXPOSE 3000
 ENV HOSTNAME=0.0.0.0
 HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
-  CMD curl -f http://localhost:${PORT:-3000}/login || exit 1
+  CMD ["node", "/app/healthcheck.js"]
 CMD ["node", "server.js"]

README.md (204 changes)

@@ -24,20 +24,47 @@ Manage agent fleets, track tasks, monitor costs, and orchestrate workflows — a
 Running AI agents at scale means juggling sessions, tasks, costs, and reliability across multiple models and channels. Mission Control gives you:
 
-- **28 panels** — Tasks, agents, logs, tokens, memory, cron, alerts, webhooks, pipelines, and more
+- **32 panels** — Tasks, agents, skills, logs, tokens, memory, security, cron, alerts, webhooks, pipelines, and more
 - **Real-time everything** — WebSocket + SSE push updates, smart polling that pauses when you're away
 - **Zero external dependencies** — SQLite database, single `pnpm start` to run, no Redis/Postgres/Docker required
 - **Role-based access** — Viewer, operator, and admin roles with session + API key auth
-- **Quality gates** — Built-in review system that blocks task completion without sign-off
+- **Quality gates** — Built-in Aegis review system that blocks task completion without sign-off
 - **Recurring tasks** — Natural language scheduling ("every morning at 9am") with cron-based template spawning
 - **Claude Code bridge** — Read-only integration surfaces Claude Code team tasks and configs on the dashboard
+- **Skills Hub** — Browse, install, and security-scan agent skills from ClawdHub and skills.sh registries
+- **Multi-gateway** — Connect to multiple agent gateways simultaneously (OpenClaw, and more coming soon)
 
 ## Quick Start
 
-> **Requires [pnpm](https://pnpm.io/installation)** — Mission Control uses pnpm for dependency management. Install it with `npm install -g pnpm` or `corepack enable`.
+### One-Command Install (Docker)
+
+```bash
+git clone https://github.com/builderz-labs/mission-control.git
+cd mission-control
+bash install.sh --docker
+```
+
+The installer auto-generates secure credentials, starts the container, and runs an OpenClaw fleet health check. Open `http://localhost:3000` and log in with the printed credentials.
+
+### One-Command Install (Local)
+
+```bash
+git clone https://github.com/builderz-labs/mission-control.git
+cd mission-control
+bash install.sh --local
+```
+
+Requires Node.js 22.x (LTS) and pnpm (auto-installed via corepack if missing).
+
+### Manual Setup
+
+> **Requires [pnpm](https://pnpm.io/installation)** and **Node.js 22.x (LTS)**.
+> Mission Control is validated against Node 22 across local dev, CI, Docker, and standalone deploys. Use `nvm use 22` (or your version manager equivalent) before installing or starting the app.
 
 ```bash
 git clone https://github.com/builderz-labs/mission-control.git
 cd mission-control
+nvm use 22
 pnpm install
 cp .env.example .env # edit with your values
 pnpm dev # http://localhost:3000
@@ -46,6 +73,25 @@ pnpm dev # http://localhost:3000
 
 Initial login is seeded from `AUTH_USER` / `AUTH_PASS` on first run.
 If `AUTH_PASS` contains `#`, quote it (e.g. `AUTH_PASS="my#password"`) or use `AUTH_PASS_B64`.
 
+### Docker Hardening (Production)
+
+For production deployments, use the hardened compose overlay:
+
+```bash
+docker compose -f docker-compose.yml -f docker-compose.hardened.yml up -d
+```
+
+This adds read-only filesystem, capability dropping, log rotation, HSTS, and network isolation. See [Security Hardening](docs/SECURITY-HARDENING.md) for the full checklist.
+
+### Station Doctor
+
+Run diagnostics on your installation:
+
+```bash
+bash scripts/station-doctor.sh
+bash scripts/security-audit.sh
+```
+
 ## Project Status
 
 ### What Works
@@ -65,7 +111,22 @@ If `AUTH_PASS` contains `#`, quote it (e.g. `AUTH_PASS="my#password"`) or use `A
 - Ed25519 device identity for secure gateway handshake
 - Agent SOUL system with workspace file sync and templates
 - Agent inter-agent messaging and comms
-- Update available banner with GitHub release check
+- Skills Hub with ClawdHub and skills.sh registry integration (search, install, security scan)
+- Bidirectional skill sync — disk ↔ DB with SHA-256 change detection
+- Local agent discovery from `~/.agents/`, `~/.codex/agents/`, `~/.claude/agents/`
+- Natural language recurring tasks — schedule parser converts "every 2 hours" to cron, spawns dated child tasks
+- Claude Code task bridge — read-only scanner surfaces team tasks and configs from `~/.claude/tasks/` and `~/.claude/teams/`
+- Skill security scanner (prompt injection, credential leaks, data exfiltration, obfuscated content)
+- Update available banner with GitHub release check and one-click self-update
+- Framework adapter layer for multi-agent registration (OpenClaw, CrewAI, LangGraph, AutoGen, Claude SDK, generic)
+- Multi-project task organization with per-project ticket prefixes
+- Per-agent rate limiting with `x-agent-name` identity-based quotas
+- Agent self-registration endpoint for autonomous agent onboarding
+- Security audit panel with posture scoring, secret detection, trust scoring, and MCP call auditing
+- Four-layer agent eval framework (output, trace, component, drift detection)
+- Agent optimization endpoint with token efficiency, tool patterns, and fleet benchmarks
+- Hook profiles (minimal/standard/strict) for tunable security strictness
+- Guided onboarding wizard with credential setup, agent discovery, and security scan
 
 ### Known Limitations
@@ -81,10 +142,10 @@ If `AUTH_PASS` contains `#`, quote it (e.g. `AUTH_PASS="my#password"`) or use `A
 ## Features
 
 ### Agent Management
-Monitor agent status, spawn new sessions, view heartbeats, and manage the full agent lifecycle from registration to retirement.
+Monitor agent status, configure models, view heartbeats, and manage the full agent lifecycle from registration to retirement. Agent detail modal with compact overview, inline model selector, and editable sub-agent configuration.
 
 ### Task Board
-Kanban board with six columns (inbox → backlog → todo → in-progress → review → done), drag-and-drop, priority levels, assignments, and threaded comments.
+Kanban board with six columns (inbox → assigned → in progress → review → quality review → done), drag-and-drop, priority levels, assignments, threaded comments, and inline sub-agent spawning.
 
 ### Real-time Monitoring
 Live activity feed, session inspector, and log viewer with filtering. WebSocket connection to OpenClaw gateway for instant event delivery.
@@ -93,7 +154,10 @@ Live activity feed, session inspector, and log viewer with filtering. WebSocket
 Token usage dashboard with per-model breakdowns, trend charts, and cost analysis powered by Recharts.
 
 ### Background Automation
-Scheduled tasks for database backups, stale record cleanup, and agent heartbeat monitoring. Configurable via UI or API.
+Scheduled tasks for database backups, stale record cleanup, agent heartbeat monitoring, and recurring task spawning. Configurable via UI or API.
+
+### Natural Language Recurring Tasks
+Create recurring tasks with natural language like "every morning at 9am" or "every 2 hours". The built-in schedule parser (zero dependencies) converts expressions to cron and stores them in task metadata. A template-clone pattern keeps the original task as a template and spawns dated child tasks (e.g., "Daily Report - Mar 07") on schedule. Each spawned task gets its own Aegis quality gate.
 
 ### Direct CLI Integration
 Connect Claude Code, Codex, or any CLI tool directly to Mission Control without requiring a gateway. Register connections, send heartbeats with inline token reporting, and auto-register agents.
@ -101,28 +165,52 @@ Connect Claude Code, Codex, or any CLI tool directly to Mission Control without
### Claude Code Session Tracking
Automatically discovers and tracks local Claude Code sessions by scanning `~/.claude/projects/`. Extracts token usage, model info, message counts, cost estimates, and active status from JSONL transcripts. Scans every 60 seconds via the background scheduler.
### Claude Code Task Bridge
Read-only integration that surfaces Claude Code team tasks and team configs on the Mission Control dashboard. Scans `~/.claude/tasks/<team>/<N>.json` for structured task data (subject, status, owner, blockers) and `~/.claude/teams/<name>/config.json` for team metadata (members, lead agent, model assignments). Visible in both the Task Board (collapsible section) and Cron Management (teams overview) panels.
### GitHub Issues Sync
Inbound sync from GitHub repositories with label and assignee mapping. Synced issues appear on the task board alongside agent-created tasks.
### Skills Hub
Browse, install, and manage agent skills from local directories and external registries (ClawdHub, skills.sh). Bidirectional sync detects manual additions on disk and pushes UI edits back to `SKILL.md` files. Built-in security scanner checks for prompt injection, credential leaks, data exfiltration, obfuscated content, and dangerous shell commands before installation. Supports 5 skill roots: `~/.agents/skills`, `~/.codex/skills`, project-local `.agents/skills` and `.codex/skills`, and `~/.openclaw/skills` for gateway mode.
### Local Agent Discovery
Automatically discovers agent definitions from `~/.agents/`, `~/.codex/agents/`, and `~/.claude/agents/` directories. Detection looks for marker files (AGENT.md, soul.md, identity.md, config.json). Discovered agents sync bidirectionally — edit in the UI and changes write back to disk.
### Agent SOUL System
Define agent personality, capabilities, and behavioral guidelines via SOUL markdown files. Edit in the UI or directly in workspace `soul.md` files — changes sync bidirectionally between disk and database.
### Agent Messaging
Session-threaded inter-agent communication via the comms API (`a2a:*`, `coord:*`, `session:*`) with coordinator inbox support and runtime tool-call visibility in the `agent-comms` feed.
### Onboarding Wizard
Guided first-run setup wizard that walks new users through five steps: Welcome (system capabilities detection), Credentials (verify AUTH_PASS and API_KEY strength), Agent Setup (gateway connection or local Claude Code discovery), Security Scan (automated configuration audit with pass/fail checks), and Get Started (quick links to key panels). Automatically appears on first login and can be re-launched from Settings. Progress is persisted per-user so you can resume where you left off.
### Security Audit & Agent Trust
Dedicated security audit panel with real-time posture scoring (0-100), secret detection across agent messages, MCP tool call auditing, injection attempt tracking, and per-agent trust scores. Hook profiles (minimal/standard/strict) let operators tune security strictness per deployment. Auth failures, rate limit hits, and injection attempts are logged automatically as security events.
### Agent Eval Framework
Four-layer evaluation stack for agent quality: output evals (task completion scoring against golden datasets), trace evals (convergence scoring — >3.0 indicates looping), component evals (tool reliability with p50/p95/p99 latency from MCP call logs), and drift detection (10% threshold vs 4-week rolling baseline). Manage golden datasets and trigger eval runs via API or UI.
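The drift rule above can be sketched as follows (the numbers are made up; the real framework derives the baseline from four weeks of eval history):

```bash
# Hedged sketch of the 10% drift threshold — baseline/current are illustrative.
baseline=4.0
current=4.6
drift=$(awk -v b="$baseline" -v c="$current" 'BEGIN { d = (c - b) / b; if (d < 0) d = -d; print d }')
status=$(awk -v d="$drift" 'BEGIN { print (d > 0.10) ? "drift" : "ok" }')
echo "relative drift: $drift ($status)"
```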
### Agent Optimization
API endpoint agents can call for self-improvement recommendations. Analyzes token efficiency (tokens/task vs fleet average), tool usage patterns (success/failure rates, redundant calls), and generates prioritized recommendations. Fleet benchmarks provide percentile rankings across all agents.
### Integrations
Outbound webhooks with delivery history, configurable alert rules with cooldowns, and multi-gateway connection management. Optional 1Password CLI integration for secret management.
### Workspace Management
Workspaces (tenant instances) are managed via the `/api/super/*` API endpoints. Admins can:
- **Create** new client instances (slug, display name, Linux user, gateway port, plan tier)
- **Monitor** provisioning jobs and their step-by-step progress
- **Decommission** tenants with optional cleanup of state directories and Linux users
Each workspace gets its own isolated environment with a dedicated OpenClaw gateway, state directory, and workspace root.
### Update Checker
Automatic GitHub release check notifies you when a new version is available, displayed as a banner in the dashboard. Admins can trigger a one-click update directly from the banner — the server runs `git pull`, `pnpm install`, and `pnpm build`, then prompts for a restart. Dirty working trees are rejected, and all updates are logged to the audit trail.
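A minimal sketch of the dirty-tree guard (the commands are those named above; the actual server-side handler may differ):

```bash
# Refuse to update when `git status --porcelain` reports any local changes.
safe_update() {
  if [ -n "$(git -C "$1" status --porcelain)" ]; then
    echo "dirty"   # real flow: reject the update and surface an error
  else
    echo "clean"   # real flow: git pull && pnpm install && pnpm build
  fi
}

repo=$(mktemp -d)
git -C "$repo" init -q
echo x > "$repo/file.txt"   # untracked file => dirty working tree
result=$(safe_update "$repo")
echo "$result"
```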
### Framework Adapters
Built-in adapter layer for multi-agent registration across frameworks. Supported adapters: OpenClaw, CrewAI, LangGraph, AutoGen, Claude SDK, and a generic fallback. Each adapter normalizes agent registration, heartbeats, and task reporting to a common interface.
## Architecture
```
mission-control/
│ ├── app/
│ │ ├── page.tsx # SPA shell — routes all panels
│ │ ├── login/page.tsx # Login page
│ │ └── api/ # 101 REST API routes
│ ├── components/
│ │ ├── layout/ # NavRail, HeaderBar, LiveFeed
│ │ ├── dashboard/ # Overview dashboard
│ │ ├── panels/ # 32 feature panels
│ │ └── chat/ # Agent chat UI
│ ├── lib/
│ │ ├── auth.ts # Session + API key auth, RBAC
│ │ ├── db.ts # SQLite (better-sqlite3, WAL mode)
│ │ ├── claude-sessions.ts # Local Claude Code session scanner
│ │ ├── claude-tasks.ts # Claude Code team task/config scanner
│ │ ├── schedule-parser.ts # Natural language → cron expression parser
│ │ ├── recurring-tasks.ts # Recurring task template spawner
│ │ ├── migrations.ts # 39 schema migrations
│ │ ├── scheduler.ts # Background task scheduler
│ │ ├── webhooks.ts # Outbound webhook delivery
│ │ ├── websocket.ts # Gateway WebSocket client
│ │ ├── device-identity.ts # Ed25519 device identity for gateway auth
│ │ ├── agent-sync.ts # OpenClaw config → MC database sync
│ │ ├── skill-sync.ts # Bidirectional disk ↔ DB skill sync
│ │ ├── skill-registry.ts # ClawdHub + skills.sh registry client & security scanner
│ │ ├── local-agent-sync.ts # Local agent discovery from ~/.agents, ~/.codex, ~/.claude
│ │ ├── secret-scanner.ts # Regex-based secret detection (AWS, GitHub, Stripe, JWT, PEM, DB URIs)
│ │ ├── security-events.ts # Security event logger + agent trust scoring
│ │ ├── mcp-audit.ts # MCP tool call auditing
│ │ ├── agent-evals.ts # Four-layer agent eval framework
│ │ ├── agent-optimizer.ts # Agent optimization engine
│ │ ├── hook-profiles.ts # Security strictness profiles (minimal/standard/strict)
│ │ └── adapters/ # Framework adapters (openclaw, crewai, langgraph, autogen, claude-sdk, generic)
│ └── store/index.ts # Zustand state management
└── .data/ # Runtime data (SQLite DB, token logs)
```
| Layer | Technology |
|-------|------------|
| Real-time | WebSocket + Server-Sent Events |
| Auth | scrypt hashing, session tokens, RBAC |
| Validation | Zod 4 |
| Testing | Vitest (282 unit) + Playwright (295 E2E) |
## Authentication
All endpoints require authentication unless noted. Full reference below.

| Method | Path | Role | Description |
|--------|------|------|-------------|
| `POST` | `/api/agents` | operator | Register/update agent |
| `GET` | `/api/agents/[id]` | viewer | Agent details |
| `GET` | `/api/agents/[id]/attribution` | viewer | Self-scope attribution/audit/cost report (`?privileged=1` admin override) |
| `POST` | `/api/agents/sync` | operator | Sync agents from openclaw.json or local disk (`?source=local`) |
| `POST` | `/api/agents/register` | viewer | Agent self-registration (idempotent, rate-limited) |
| `GET/POST` | `/api/adapters` | viewer/operator | List adapters / Framework-agnostic agent action dispatch |
| `GET/PUT` | `/api/agents/[id]/soul` | operator | Agent SOUL content (reads from workspace, writes to both) |
| `GET/POST` | `/api/agents/comms` | operator | Agent inter-agent communication |
| `POST` | `/api/agents/message` | operator | Send message to agent |
- `hours`: integer window `1..720` (default `24`)
- `section`: comma-separated subset of `identity,audit,mutations,cost` (default all)
<details>
<summary><strong>Security & Evals</strong></summary>
| Method | Path | Role | Description |
|--------|------|------|-------------|
| `GET` | `/api/security-audit` | admin | Security posture, events, trust scores, MCP audit (`?timeframe=day`) |
| `GET` | `/api/security-scan` | admin | Static security configuration scan |
| `GET` | `/api/agents/optimize` | operator | Agent optimization recommendations (`?agent=&hours=24`) |
| `GET` | `/api/agents/evals` | operator | Agent eval results (`?agent=`, `?action=history&weeks=4`) |
| `POST` | `/api/agents/evals` | operator | Trigger eval run (`action: 'run'`) or manage golden datasets (`action: 'golden-set'`) |
</details>
<details>
<summary><strong>Monitoring</strong></summary>
| Method | Path | Role | Description |
|--------|------|------|-------------|
| `GET/PUT` | `/api/settings` | admin | App settings |
| `GET/PUT` | `/api/gateway-config` | admin | OpenClaw gateway config |
| `GET/POST` | `/api/cron` | admin | Cron management |
| `GET/POST` | `/api/onboarding` | viewer | Onboarding wizard state and step progression |
</details>
</details>
<details>
<summary><strong>Workspace/Tenant Management</strong></summary>
| Method | Path | Role | Description |
|--------|------|------|-------------|
</details>
<details>
<summary><strong>Skills</strong></summary>
| Method | Path | Role | Description |
|--------|------|------|-------------|
| `GET` | `/api/skills` | viewer | List skills (DB-backed with filesystem fallback) |
| `GET` | `/api/skills?mode=content&source=…&name=…` | viewer | Read SKILL.md content with inline security report |
| `GET` | `/api/skills?mode=check&source=…&name=…` | viewer | On-demand security scan |
| `POST` | `/api/skills` | operator | Create skill |
| `PUT` | `/api/skills` | operator | Update skill content |
| `DELETE` | `/api/skills` | operator | Delete skill |
| `GET` | `/api/skills/registry?source=…&q=…` | viewer | Search external registry (ClawdHub, skills.sh) |
| `POST` | `/api/skills/registry` | admin | Install skill from registry |
| `PUT` | `/api/skills/registry` | viewer | Security-check content without installing |
</details>
<details>
<summary><strong>Direct CLI</strong></summary>
| Method | Path | Role | Description |
|--------|------|------|-------------|
| `GET` | `/api/claude/sessions` | viewer | List discovered sessions (filter: `?active=1`, `?project=`) |
| `POST` | `/api/claude/sessions` | operator | Trigger manual session scan |
| `GET` | `/api/claude-tasks` | viewer | List Claude Code team tasks and configs (`?force=true` to bypass cache) |
| `GET` | `/api/schedule-parse` | viewer | Parse natural language schedule (`?input=every+2+hours`) |
</details>
### Workspace Creation Flow
To add a new workspace/client instance, use the `/api/super/tenants` endpoint or the Workspaces panel (if enabled):
1. Provide tenant/workspace fields (`slug`, `display_name`, optional ports/gateway owner).
2. The system queues a bootstrap provisioning job.
3. Approve/run the provisioning job via `/api/super/provision-jobs/[id]/action`.
### Projects and Ticket Prefixes
See [open issues](https://github.com/builderz-labs/mission-control/issues) for planned features.
**Up next:**
- [x] Workspace isolation for multi-team usage ([#75](https://github.com/builderz-labs/mission-control/issues/75))
- [x] Framework adapter layer — multi-agent registration across OpenClaw, CrewAI, LangGraph, AutoGen, Claude SDK, and generic
- [x] Self-update mechanism — admin-only one-click update with audit logging
- [x] Multi-project task organization with per-project ticket prefixes
- [x] Skills Hub — browse, install, and security-scan skills from ClawdHub and skills.sh registries
- [x] Bidirectional skill sync — disk ↔ DB with SHA-256 change detection (60s scheduler)
- [x] Local agent discovery — auto-detect agents from `~/.agents/`, `~/.codex/agents/`, `~/.claude/agents/`
- [x] Natural language recurring tasks with cron-based template spawning
- [x] Claude Code task bridge — read-only team task and config integration
- [ ] Agent-agnostic gateway support — connect any orchestration framework (OpenClaw, ZeroClaw, OpenFang, NeoBot, IronClaw, etc.), not just OpenClaw
- [ ] **[Flight Deck](https://github.com/splitlabs/flight-deck)** — native desktop companion app (Tauri v2) with real PTY terminal grid, stall inbox with native OS notifications, and system tray HUD. Currently in private beta.
- [ ] First-class per-agent cost breakdowns — dedicated panel with per-agent token usage and spend (currently derivable from per-session data)
- [ ] OAuth approval UI improvements
Contributions are welcome. See [CONTRIBUTING.md](CONTRIBUTING.md) for setup instructions.
To report a vulnerability, see [SECURITY.md](SECURITY.md).
## ❤️ Support the Project
If you find this project useful, consider supporting my open-source work.
[![Buy Me A Coffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-support-orange?logo=buymeacoffee)](https://buymeacoffee.com/nyk_builderz)
**Solana donations**
`BYLu8XD8hGDUtdRBWpGWu5HKoiPrWqCxYFSh4oxXuvPg`
## License
[MIT](LICENSE) © 2026 [Builderz Labs](https://github.com/builderz-labs)


Mission Control handles authentication credentials and API keys. When deploying:
- Always set strong values for `AUTH_PASS` and `API_KEY`.
- Use `MC_ALLOWED_HOSTS` to restrict network access in production.
- Keep `.env` files out of version control (already in `.gitignore`).
- Enable `MC_COOKIE_SECURE=1` when serving over HTTPS.
- Review the [Environment Variables](README.md#environment-variables) section for all security-relevant configuration.
## Hardening Checklist
Run `bash scripts/security-audit.sh` to check your deployment automatically.
### Credentials
- [ ] `AUTH_PASS` is a strong, unique password (12+ characters)
- [ ] `API_KEY` is a random hex string (not the default)
- [ ] `AUTH_SECRET` is a random string
- [ ] `.env` file permissions are `600` (owner read/write only)
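One way to verify the permission item above on Linux (GNU `stat`; on macOS use `stat -f '%Lp'` instead):

```bash
# Check that a file's mode is exactly 600 (owner read/write only).
envfile=$(mktemp)            # stand-in for your real .env
chmod 600 "$envfile"
mode=$(stat -c '%a' "$envfile")
echo ".env mode: $mode"
rm -f "$envfile"
```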
### Network
- [ ] `MC_ALLOWED_HOSTS` is configured (not `MC_ALLOW_ANY_HOST=1`)
- [ ] Dashboard is behind a reverse proxy with TLS (Caddy, nginx, Tailscale)
- [ ] `MC_ENABLE_HSTS=1` is set for HTTPS deployments
- [ ] `MC_COOKIE_SECURE=1` is set for HTTPS deployments
- [ ] `MC_COOKIE_SAMESITE=strict`
### Docker (if applicable)
- [ ] Use the hardened compose overlay: `docker compose -f docker-compose.yml -f docker-compose.hardened.yml up`
- [ ] Container runs as non-root user (default: `nextjs`, UID 1001)
- [ ] Read-only filesystem with tmpfs for temp dirs
- [ ] All Linux capabilities dropped except `NET_BIND_SERVICE`
- [ ] `no-new-privileges` security option enabled
- [ ] Log rotation configured (max-size, max-file)
### OpenClaw Gateway
- [ ] Gateway bound to localhost (`OPENCLAW_GATEWAY_HOST=127.0.0.1`)
- [ ] Gateway token configured (`OPENCLAW_GATEWAY_TOKEN`)
- [ ] Gateway token NOT exposed via `NEXT_PUBLIC_*` variables
### Monitoring
- [ ] Rate limiting is active (`MC_DISABLE_RATE_LIMIT` is NOT set)
- [ ] Audit logging is enabled with appropriate retention
- [ ] Regular database backups configured
See [docs/SECURITY-HARDENING.md](docs/SECURITY-HARDENING.md) for the full hardening guide.


---
name: mission-control
description: "Interact with Mission Control — AI agent orchestration dashboard. Use when registering agents, managing tasks, syncing skills, or querying agent/task status via MC APIs."
---
# Mission Control Agent Skill
Mission Control (MC) is an AI agent orchestration dashboard with real-time SSE/WebSocket, a skill registry, framework adapters, and RBAC. This skill teaches agents how to interact with MC APIs programmatically.
## Quick Start
**Base URL:** `http://localhost:3000` (default Next.js dev) or your deployed host.
**Auth header:** `x-api-key: <your-api-key>`
**Register + heartbeat in two calls:**
```bash
# 1. Register
curl -X POST http://localhost:3000/api/adapters \
-H "Content-Type: application/json" \
-H "x-api-key: $MC_API_KEY" \
-d '{
"framework": "generic",
"action": "register",
"payload": { "agentId": "my-agent-01", "name": "My Agent" }
}'
# 2. Heartbeat (repeat every 5 minutes)
curl -X POST http://localhost:3000/api/adapters \
-H "Content-Type: application/json" \
-H "x-api-key: $MC_API_KEY" \
-d '{
"framework": "generic",
"action": "heartbeat",
"payload": { "agentId": "my-agent-01", "status": "online" }
}'
```
## Authentication
MC supports two auth methods:
| Method | Header | Use Case |
|--------|--------|----------|
| API Key | `x-api-key: <key>` or `Authorization: Bearer <key>` | Agents, scripts, CI/CD |
| Session cookie | `Cookie: mc-session=<token>` | Browser UI |
**Roles (hierarchical):** `viewer` < `operator` < `admin`
- **viewer** — Read-only access (GET endpoints)
- **operator** — Create/update agents, tasks, skills, use adapters
- **admin** — Full access including user management
API key auth grants `admin` role by default. The key is set via `API_KEY` env var or the `security.api_key` DB setting.
Agents can identify themselves with the optional `X-Agent-Name` header for attribution in audit logs.
## Agent Lifecycle
```
register → heartbeat (5m interval) → fetch assignments → report task status → disconnect
```
All lifecycle actions go through the adapter protocol (`POST /api/adapters`).
### 1. Register
```json
{
"framework": "generic",
"action": "register",
"payload": {
"agentId": "my-agent-01",
"name": "My Agent",
"metadata": { "version": "1.0", "capabilities": ["code", "review"] }
}
}
```
### 2. Heartbeat
Send every ~5 minutes to stay marked as online.
```json
{
"framework": "generic",
"action": "heartbeat",
"payload": {
"agentId": "my-agent-01",
"status": "online",
"metrics": { "tasks_completed": 5, "uptime_seconds": 3600 }
}
}
```
### 3. Fetch Assignments
Returns up to 5 pending tasks sorted by priority (critical → low), then due date.
```json
{
"framework": "generic",
"action": "assignments",
"payload": { "agentId": "my-agent-01" }
}
```
**Response:**
```json
{
"assignments": [
{ "taskId": "42", "description": "Fix login bug\nUsers cannot log in with SSO", "priority": 1 }
],
"framework": "generic"
}
```
### 4. Report Task Progress
```json
{
"framework": "generic",
"action": "report",
"payload": {
"taskId": "42",
"agentId": "my-agent-01",
"progress": 75,
"status": "in_progress",
"output": "Fixed SSO handler, running tests..."
}
}
```
`status` values: `in_progress`, `done`, `failed`, `blocked`
### 5. Disconnect
```json
{
"framework": "generic",
"action": "disconnect",
"payload": { "agentId": "my-agent-01" }
}
```
## Core API Reference
### Agents — `/api/agents`
| Method | Min Role | Description |
|--------|----------|-------------|
| GET | viewer | List agents. Query: `?status=online&role=dev&limit=50&offset=0` |
| POST | operator | Create agent. Body: `{ name, role, status?, config?, template?, session_key?, soul_content? }` |
| PUT | operator | Update agent. Body: `{ name, status?, role?, config?, session_key?, soul_content?, last_activity? }` |
**GET response shape:**
```json
{
"agents": [{
"id": 1, "name": "scout", "role": "researcher", "status": "online",
"config": {}, "taskStats": { "total": 10, "assigned": 2, "in_progress": 1, "completed": 7 }
}],
"total": 1, "page": 1, "limit": 50
}
```
### Tasks — `/api/tasks`
| Method | Min Role | Description |
|--------|----------|-------------|
| GET | viewer | List tasks. Query: `?status=in_progress&assigned_to=scout&priority=high&project_id=1&limit=50&offset=0` |
| POST | operator | Create task. Body: `{ title, description?, status?, priority?, assigned_to?, project_id?, tags?, metadata?, due_date?, estimated_hours? }` |
| PUT | operator | Bulk status update. Body: `{ tasks: [{ id, status }] }` |
**Priority values:** `critical`, `high`, `medium`, `low`
**Status values:** `inbox`, `assigned`, `in_progress`, `review`, `done`, `failed`, `blocked`, `cancelled`
Note: Moving a task to `done` via PUT requires an Aegis quality review approval.
**POST response:**
```json
{
"task": {
"id": 42, "title": "Fix login bug", "status": "assigned",
"priority": "high", "assigned_to": "scout", "ticket_ref": "GEN-001",
"tags": ["bug"], "metadata": {}
}
}
```
### Skills — `/api/skills`
| Method | Min Role | Description |
|--------|----------|-------------|
| GET | viewer | List all skills across roots |
| GET `?mode=content&source=...&name=...` | viewer | Read a skill's SKILL.md content |
| GET `?mode=check&source=...&name=...` | viewer | Run security check on a skill |
| POST | operator | Create/upsert skill. Body: `{ source, name, content }` |
| PUT | operator | Update skill content. Body: `{ source, name, content }` |
| DELETE `?source=...&name=...` | operator | Delete a skill |
**Skill sources:** `user-agents`, `user-codex`, `project-agents`, `project-codex`, `openclaw`
### Status — `/api/status`
| Action | Min Role | Description |
|--------|----------|-------------|
| GET `?action=overview` | viewer | System status (uptime, memory, disk, sessions) |
| GET `?action=dashboard` | viewer | Aggregated dashboard data with DB stats |
| GET `?action=gateway` | viewer | Gateway process status and port check |
| GET `?action=models` | viewer | Available AI models (catalog + local Ollama) |
| GET `?action=health` | viewer | Health checks (gateway, disk, memory) |
| GET `?action=capabilities` | viewer | Feature flags: gateway reachable, Claude home, subscriptions |
### Adapters — `/api/adapters`
| Method | Min Role | Description |
|--------|----------|-------------|
| GET | viewer | List available framework adapter names |
| POST | operator | Execute adapter action (see Agent Lifecycle above) |
## Framework Adapter Protocol
All agent lifecycle operations use a single endpoint:
```
POST /api/adapters
Content-Type: application/json
x-api-key: <key>
{
"framework": "<adapter-name>",
"action": "<action>",
"payload": { ... }
}
```
**Available frameworks:** `generic`, `openclaw`, `crewai`, `langgraph`, `autogen`, `claude-sdk`
**Available actions:** `register`, `heartbeat`, `report`, `assignments`, `disconnect`
All adapters implement the same `FrameworkAdapter` interface — choose the one matching your agent framework, or use `generic` as a universal fallback.
**Payload shapes by action:**
| Action | Required Fields | Optional Fields |
|--------|----------------|-----------------|
| `register` | `agentId`, `name` | `metadata` |
| `heartbeat` | `agentId` | `status`, `metrics` |
| `report` | `taskId`, `agentId` | `progress`, `status`, `output` |
| `assignments` | `agentId` | — |
| `disconnect` | `agentId` | — |
## Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `API_KEY` | — | API key for agent/script authentication |
| `OPENCLAW_GATEWAY_HOST` | `127.0.0.1` | Gateway host address |
| `OPENCLAW_GATEWAY_PORT` | `18789` | Gateway port |
| `MISSION_CONTROL_DB_PATH` | `.data/mission-control.db` | SQLite database path |
| `OPENCLAW_STATE_DIR` | `~/.openclaw` | OpenClaw state directory |
| `OPENCLAW_CONFIG_PATH` | `<state-dir>/openclaw.json` | Gateway config file path |
| `MC_CLAUDE_HOME` | `~/.claude` | Claude home directory |
## Real-Time Events
MC broadcasts events via SSE (`/api/events`) and WebSocket. Key event types:
- `agent.created`, `agent.updated`, `agent.status_changed`
- `task.created`, `task.updated`, `task.status_changed`
Subscribe to SSE for live dashboard updates when building integrations.


```yaml
# Production hardening overlay
# Usage: docker compose -f docker-compose.yml -f docker-compose.hardened.yml up -d
services:
  mission-control:
    logging:
      driver: json-file
      options:
        max-size: "10m"
        max-file: "3"
    environment:
      - MC_ALLOWED_HOSTS=localhost,127.0.0.1
      - MC_COOKIE_SECURE=1
      - MC_COOKIE_SAMESITE=strict
      - MC_ENABLE_HSTS=1
    networks:
      mc-internal:

networks:
  mc-internal:
    driver: bridge
    internal: true
```


```yaml
# docker-compose.yml (excerpt — hardened service settings)
      required: false
    volumes:
      - mc-data:/app/.data
    read_only: true
    tmpfs:
      - /tmp
      - /app/.next/cache
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    security_opt:
      - no-new-privileges:true
    pids_limit: 256
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
    networks:
      - mc-net
    restart: unless-stopped

volumes:
  mc-data:

networks:
  mc-net:
    driver: bridge
```


# Mission Control — Landing Page Handoff
> Last updated: 2026-03-07 | Version: 1.3.0 | Branch: `fix/refactor` (bb5029e)
This document contains all copy, stats, features, and structure needed to build or update the Mission Control landing page. Everything below reflects the current state of the shipped product.
---
## Hero Section
**Headline:**
The Open-Source Dashboard for AI Agent Orchestration
**Subheadline:**
Manage agent fleets, track tasks, monitor costs, and orchestrate workflows — all from a single pane of glass. Zero external dependencies. One `pnpm start` to run.
**CTA:** `Get Started` -> GitHub repo | `Live Demo` -> demo instance (if available)
**Badges:**
- MIT License
- Next.js 16
- React 19
- TypeScript 5.7
- SQLite (WAL mode)
- 165 unit tests (Vitest)
- 295 E2E tests (Playwright)
**Hero image:** `docs/mission-control.jpg` (current dashboard screenshot — should be refreshed with latest UI)
---
## Key Stats (above the fold)
| Stat | Value |
|------|-------|
| Panels | 31 feature panels |
| API routes | 98 REST endpoints |
| Schema migrations | 36 |
| Test coverage | 165 unit + 295 E2E |
| Total commits | 239+ |
| External dependencies required | 0 (SQLite only, no Redis/Postgres/Docker) |
| Auth methods | 3 (session, API key, Google OAuth) |
| Framework adapters | 6 (OpenClaw, CrewAI, LangGraph, AutoGen, Claude SDK, Generic) |
---
## Feature Grid
### 1. Task Board (Kanban)
Six-column kanban (Inbox > Assigned > In Progress > Review > Quality Review > Done) with drag-and-drop, priority levels, assignments, threaded comments, and inline sub-agent spawning. Multi-project support with per-project ticket prefixes (e.g. `PA-001`).
### 2. Agent Management
Full lifecycle — register, heartbeat, wake, retire. Redesigned agent detail modal with compact overview, inline model selector, editable sub-agent configuration, and SOUL personality system. Local agent discovery from `~/.agents/`, `~/.codex/agents/`, `~/.claude/agents/`.
### 3. Real-Time Monitoring
Live activity feed, session inspector, and log viewer with filtering. WebSocket + SSE push updates with smart polling that pauses when you're away. Gateway connection state with live dot indicators.
### 4. Cost Tracking
Token usage dashboard with per-model breakdowns, trend charts, and cost analysis. Per-agent cost panels with session-level granularity.
### 5. Quality Gates (Aegis)
Built-in review system that blocks task completion without sign-off. Automated Aegis quality review — scheduler polls review tasks and approves/rejects based on configurable criteria.
### 6. Recurring Tasks
Natural language scheduling — "every morning at 9am", "every 2 hours". Zero-dependency schedule parser converts to cron. Template-clone pattern spawns dated child tasks (e.g. "Daily Report — Mar 07").
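The mapping can be pictured like this (a toy lookup only — the built-in parser handles far more phrasings):

```bash
# Toy natural-language → cron lookup; not the real parser.
to_cron() {
  case "$1" in
    "every morning at 9am") echo "0 9 * * *" ;;
    "every 2 hours")        echo "0 */2 * * *" ;;
    "every monday")         echo "0 0 * * 1" ;;
    *)                      echo "unsupported" ;;
  esac
}

to_cron "every 2 hours"
```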
### 7. Task Dispatch
Scheduler polls assigned tasks and runs agents via CLI. Dispatched tasks link to agent sessions for full traceability.
### 8. Skills Hub
Browse, install, and manage agent skills from local directories and external registries (ClawdHub, skills.sh). Built-in security scanner checks for prompt injection, credential leaks, data exfiltration, and obfuscated content. Bidirectional disk-DB sync with SHA-256 change detection.
### 9. Claude Code Integration
- **Session tracking** — auto-discovers sessions from `~/.claude/projects/`, extracts tokens, model info, costs
- **Task bridge** — read-only integration surfaces Claude Code team tasks and configs
- **Direct CLI** — connect Claude Code, Codex, or any CLI directly without a gateway
### 10. Memory Knowledge Graph
Visual knowledge graph for agent memory in gateway mode. Interactive node-edge visualization of agent memory relationships.
### 11. Agent Messaging (Comms)
Session-threaded inter-agent communication via comms API (`a2a:*`, `coord:*`, `session:*`). Coordinator inbox support with runtime tool-call visibility.
### 12. Multi-Gateway
Connect to multiple agent gateways simultaneously. OS-level gateway discovery (systemd, Tailscale Serve). Auto-connect with health probes.
### 13. Framework Adapters
Built-in adapter layer for multi-agent registration: OpenClaw, CrewAI, LangGraph, AutoGen, Claude SDK, and generic fallback. Each normalizes registration, heartbeats, and task reporting.
### 14. Background Automation
Scheduled tasks for DB backups, stale record cleanup, agent heartbeat monitoring, recurring task spawning, and automated quality reviews.
### 15. Webhooks & Alerts
Outbound webhooks with delivery history, retry with exponential backoff, circuit breaker, and HMAC-SHA256 signature verification. Configurable alert rules with cooldowns.
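A receiver can verify an HMAC-SHA256 signature along these lines (the header name, secret format, and hex encoding are assumptions — consult the webhook settings for the exact scheme):

```bash
# Recompute the HMAC over the raw request body and compare to the received value.
secret="whsec_example"                       # hypothetical shared secret
body='{"event":"task.updated","id":42}'
sig=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "X-MC-Signature: sha256=$sig"           # hypothetical header name
```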
### 16. GitHub Sync
Bidirectional GitHub Issues sync with label and assignee mapping. Full parity sync implementation.
### 17. Security
- Ed25519 device identity for gateway handshake
- scrypt password hashing
- RBAC (viewer, operator, admin)
- CSRF origin checks
- CSP headers
- Rate limiting with trusted proxy support
- Per-agent rate limiting with `x-agent-name` identity-based quotas
- Skill security scanner
### 18. Self-Update
GitHub release check with banner notification. One-click admin update (git pull, pnpm install, pnpm build). Dirty working trees rejected. All updates audit-logged.
### 19. Audit Trail
Complete action type coverage with grouped filters. Full audit history for compliance and debugging.
### 20. Pipelines & Workflows
Pipeline orchestration with workflow templates. Start, monitor, and manage multi-step agent workflows.
---
## "How It Works" Section
```
1. Clone & Start git clone ... && pnpm install && pnpm dev
2. Agents Register Via gateway, CLI, or self-registration endpoint
3. Tasks Flow Kanban board with automatic dispatch and quality gates
4. Monitor & Scale Real-time dashboards, cost tracking, recurring automation
```
---
## Tech Stack Section
| Layer | Technology |
|-------|------------|
| Framework | Next.js 16 (App Router) |
| UI | React 19, Tailwind CSS 3.4 |
| Language | TypeScript 5.7 |
| Database | SQLite via better-sqlite3 (WAL mode) |
| State | Zustand 5 |
| Charts | Recharts 3 |
| Real-time | WebSocket + Server-Sent Events |
| Auth | scrypt hashing, session tokens, RBAC |
| Validation | Zod 4 |
| Testing | Vitest + Playwright |
---
## Auth & Access Section
**Three auth methods:**
1. Session cookie — username/password login (7-day expiry)
2. API key — `x-api-key` header for headless/agent access
3. Google Sign-In — OAuth with admin approval workflow
**Three roles:**
| Role | Access |
|------|--------|
| Viewer | Read-only dashboard access |
| Operator | Read + write (tasks, agents, chat, spawn) |
| Admin | Full access (users, settings, system ops, webhooks) |
---
## Architecture Diagram (simplified)
```
mission-control/
src/
app/api/ 98 REST API routes
components/
panels/ 31 feature panels
dashboard/ Overview dashboard
chat/ Agent chat workspace
layout/ NavRail, HeaderBar, LiveFeed
lib/
auth.ts Session + API key + Google OAuth
db.ts SQLite (WAL mode, 36 migrations)
scheduler.ts Background automation
websocket.ts Gateway WebSocket client
adapters/ 6 framework adapters
.data/ Runtime SQLite DB + token logs
```
---
## Quick Start Section
```bash
git clone https://github.com/builderz-labs/mission-control.git
cd mission-control
pnpm install
cp .env.example .env # edit with your values
pnpm dev # http://localhost:3000
```
Initial login seeded from `AUTH_USER` / `AUTH_PASS` on first run.
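A minimal `.env` sketch for that first run (keys as documented; every value is a placeholder to replace):

```env
AUTH_USER=admin
AUTH_PASS=replace-with-a-long-random-password
API_KEY=replace-with-32-plus-hex-chars
AUTH_SECRET=replace-with-a-unique-random-string
```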
---
## Social Proof / Traction
- 239+ commits of active development
- Open-source MIT license
- Used in production for multi-agent orchestration
- Supports 6 agent frameworks out of the box
- Zero-config SQLite — no Docker, Redis, or Postgres required
---
## Roadmap / Coming Soon
- Agent-agnostic gateway support (OpenClaw, ZeroClaw, OpenFang, NeoBot, IronClaw, etc.)
- **Flight Deck** — native desktop companion app (Tauri v2) with real PTY terminal grid and system tray HUD (private beta)
- First-class per-agent cost breakdowns panel
- OAuth approval UI improvements
- API token rotation UI
---
## Recent Changelog (latest 20 notable changes)
1. **Memory knowledge graph** — interactive visualization for agent memory in gateway mode
2. **Agent detail modal redesign** — minimal header, compact overview, inline model selector
3. **Spawn/task unification** — spawn moved inline to task board, sub-agent config to agent detail
4. **Agent comms hardening** — session-threaded messaging with runtime tool visibility
5. **Audit trail** — complete action type coverage with grouped filters
6. **OS-level gateway discovery** — detect gateways via systemd and Tailscale Serve
7. **GitHub sync** — full parity sync with loading state fixes
8. **Automated Aegis quality review** — scheduler-driven approve/reject
9. **Task dispatch** — scheduler polls and runs agents via CLI with session linking
10. **Natural language recurring tasks** — zero-dep schedule parser + template spawning
11. **Claude Code task bridge** — read-only team task and config integration
12. **Agent card redesign** — gateway badge tooltips, ws:// localhost support
13. **Skills Hub** — registry integration, bidirectional sync, security scanner
14. **Per-agent rate limiting** — identity-based quotas via `x-agent-name`
15. **Agent self-registration** — autonomous onboarding endpoint
16. **Framework adapters** — OpenClaw, CrewAI, LangGraph, AutoGen, Claude SDK, generic
17. **Self-update mechanism** — one-click update with audit logging
18. **Local agent discovery** — auto-detect from ~/.agents, ~/.codex, ~/.claude
19. **Chat workspace** — embedded chat with local session continuation
20. **Ed25519 device identity** — secure gateway challenge-response handshake
---
## Footer
MIT License | 2026 Builderz Labs
GitHub: github.com/builderz-labs/mission-control

docs/SECURITY-HARDENING.md (new file, +277 lines)
# Security Hardening Guide
Comprehensive security hardening guide for Mission Control and OpenClaw Gateway deployments.
## Quick Assessment
Run the automated security audit:
```bash
bash scripts/security-audit.sh # Check .env and configuration
bash scripts/station-doctor.sh # Check runtime health
```
Or use the diagnostics API (admin only):
```bash
curl -H "x-api-key: $API_KEY" http://localhost:3000/api/diagnostics
curl -H "x-api-key: $API_KEY" http://localhost:3000/api/security-audit?timeframe=day
```
The `posture.score` field (0-100) gives a quick posture assessment. The **Security Audit Panel** (`/security` in the dashboard) provides a full real-time view with timeline charts, agent trust scores, and eval results.
---
## Mission Control Hardening
### 1. Credentials
**Generate strong credentials** using the included script:
```bash
bash scripts/generate-env.sh # Generates .env with random secrets
chmod 600 .env # Lock down permissions
```
The installer (`install.sh`) does this automatically. If you set up manually, ensure:
- `AUTH_PASS` is 12+ characters, not a dictionary word
- `API_KEY` is 32+ hex characters
- `AUTH_SECRET` is a unique random string
- `.env` file permissions are `600`
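For a manual setup, the checklist above can be satisfied with stock `openssl`. This is an illustrative equivalent of what `generate-env.sh` automates, not its exact output:

```bash
# Generate candidate secrets that satisfy the minimums above.
API_KEY="$(openssl rand -hex 32)"       # 64 hex chars, comfortably over the 32 minimum
AUTH_SECRET="$(openssl rand -base64 48)"
AUTH_PASS="$(openssl rand -base64 18)"  # 24 chars after base64 encoding

printf 'API_KEY is %s hex chars\n' "${#API_KEY}"
```

Write the values into `.env` yourself, then `chmod 600 .env` as shown above.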
### 2. Network Access Control
Mission Control uses a host allowlist in production:
```env
# Only allow connections from these hosts (comma-separated)
MC_ALLOWED_HOSTS=localhost,127.0.0.1
# For Tailscale: MC_ALLOWED_HOSTS=localhost,127.0.0.1,100.*
# For a domain: MC_ALLOWED_HOSTS=mc.example.com,localhost
# NEVER set this in production:
# MC_ALLOW_ANY_HOST=1
```
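The allowlist check amounts to normalize-then-match. A sketch of that logic under stated assumptions: exact matches only (the app also handles wildcard entries like `100.*`), and `host_allowed` is an invented helper name, not shipped code:

```bash
# Normalize an incoming Host header (drop :port, trailing DNS dot, case)
# and test it against the comma-separated MC_ALLOWED_HOSTS list.
host_allowed() {
  local host allowed
  host="${1%%:*}"    # strip :port
  host="${host%.}"   # strip trailing dot (DNS root form)
  host="$(printf '%s' "$host" | tr '[:upper:]' '[:lower:]')"
  for allowed in $(printf '%s' "$MC_ALLOWED_HOSTS" | tr ',' ' '); do
    [ "$host" = "$allowed" ] && return 0
  done
  return 1
}

MC_ALLOWED_HOSTS=localhost,127.0.0.1
host_allowed "Localhost.:3000" && echo "allowed"
```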
Deploy behind a reverse proxy with TLS (Caddy, nginx, Tailscale Funnel) for any network-accessible deployment.
### 3. HTTPS & Cookies
For HTTPS deployments:
```env
MC_COOKIE_SECURE=1 # Cookies only sent over HTTPS
MC_COOKIE_SAMESITE=strict # CSRF protection
MC_ENABLE_HSTS=1 # HTTP Strict Transport Security
```
### 4. Rate Limiting
Rate limiting is enabled by default:
| Endpoint Type | Limit |
|--------------|-------|
| Login | 5 attempts/min (always active) |
| Mutations | 60 requests/min |
| Reads | 120 requests/min |
| Heavy operations | 10 requests/min |
| Agent heartbeat | 30/min per agent |
| Agent task polling | 20/min per agent |
Never set `MC_DISABLE_RATE_LIMIT=1` in production.
### 5. Docker Hardening
Use the production compose overlay:
```bash
docker compose -f docker-compose.yml -f docker-compose.hardened.yml up -d
```
This enables:
- **Read-only filesystem** with tmpfs for `/tmp` and `/app/.next/cache`
- **Capability dropping** — all Linux capabilities dropped, only `NET_BIND_SERVICE` retained
- **No new privileges** — prevents privilege escalation
- **PID limit** — prevents fork bombs
- **Memory/CPU limits** — prevents resource exhaustion
- **Log rotation** — prevents disk filling from verbose logging
- **HSTS, secure cookies** — forced via environment
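The bullets above map onto standard Compose fields. A sketch of such an overlay (the repo's actual `docker-compose.hardened.yml` may differ in names and values):

```yaml
services:
  mission-control:
    read_only: true                  # read-only root filesystem
    tmpfs:
      - /tmp
      - /app/.next/cache
    cap_drop: [ALL]                  # drop every Linux capability...
    cap_add: [NET_BIND_SERVICE]      # ...retain only port binding
    security_opt:
      - no-new-privileges:true
    pids_limit: 256                  # fork-bomb guard
    logging:
      options:
        max-size: "10m"              # log rotation
        max-file: "3"
```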
### 6. Security Headers
Mission Control sets these headers automatically:
| Header | Value |
|--------|-------|
| `Content-Security-Policy` | `default-src 'self'; script-src 'self' 'unsafe-inline' 'nonce-...'` |
| `X-Frame-Options` | `DENY` |
| `X-Content-Type-Options` | `nosniff` |
| `Referrer-Policy` | `strict-origin-when-cross-origin` |
| `Permissions-Policy` | `camera=(), microphone=(), geolocation=()` |
| `X-Request-Id` | Unique per-request UUID for log correlation |
| `Strict-Transport-Security` | Set when `MC_ENABLE_HSTS=1` |
### 7. Audit Logging
All security-relevant events are logged to the audit trail:
- Login attempts (success and failure)
- Task mutations
- User management actions
- Settings changes
- Update operations
Additionally, the **security event system** automatically logs:
- Auth failures (invalid passwords, expired tokens, access denials)
- Rate limit hits (429 responses with IP/agent correlation)
- Injection attempts (prompt injection, command injection, exfiltration)
- Secret exposures (AWS keys, GitHub tokens, Stripe keys, JWTs, private keys detected in agent messages)
- MCP tool calls (agent, tool, duration, success/failure)
These events feed into the **Security Audit Panel** (`/security`) which provides:
- **Posture score** (0-100) with level badges (hardened/secure/needs-attention/at-risk)
- **Agent trust scores** — weighted calculation based on auth failures, injection attempts, and task success rates
- **MCP call audit** — tool-use frequency, success/failure rates per agent
- **Timeline visualization** — event density over selected timeframe
Configure retention: `MC_RETAIN_AUDIT_DAYS=365` (default: 1 year).
### 8. Hook Profiles
Security strictness is tunable via hook profiles in Settings > Security Profiles:
| Profile | Secret Scanning | MCP Auditing | Block on Secrets | Rate Limit Multiplier |
|---------|----------------|--------------|------------------|----------------------|
| **minimal** | Off | Off | No | 2x (relaxed) |
| **standard** (default) | On | On | No | 1x |
| **strict** | On | On | Yes (blocks messages) | 0.5x (tighter) |
Set via the Settings panel or the `hook_profile` key in the settings API.
### 9. Agent Eval Framework
The four-layer eval stack helps detect degrading agent quality:
- **Output evals** — score task completion against golden datasets
- **Trace evals** — convergence scoring (>3.0 indicates looping behavior)
- **Component evals** — tool reliability from MCP call logs (p50/p95/p99 latency)
- **Drift detection** — 10% threshold vs 4-week rolling baseline triggers alerts
Access via `/api/agents/evals` or the Security Audit Panel's eval section.
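The drift rule reduces to a relative-delta check against the baseline. A sketch using the 10% threshold stated above (helper name and metric values are illustrative):

```bash
# Succeed (exit 0) when the current metric deviates more than 10% from baseline.
drift_exceeded() {
  awk -v cur="$1" -v base="$2" 'BEGIN {
    d = (cur - base) / base; if (d < 0) d = -d
    exit !(d > 0.10)   # awk exit status: 0 means the threshold was breached
  }'
}

drift_exceeded 0.78 0.92 && echo "drift alert: review the eval section"
```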
### 10. Data Retention
```env
MC_RETAIN_ACTIVITIES_DAYS=90 # Activity feed
MC_RETAIN_AUDIT_DAYS=365 # Security audit trail
MC_RETAIN_LOGS_DAYS=30 # Application logs
MC_RETAIN_NOTIFICATIONS_DAYS=60 # Notifications
MC_RETAIN_PIPELINE_RUNS_DAYS=90 # Pipeline logs
MC_RETAIN_TOKEN_USAGE_DAYS=90 # Token/cost records
MC_RETAIN_GATEWAY_SESSIONS_DAYS=90 # Gateway session history
```
---
## OpenClaw Gateway Hardening
Mission Control acts as the mothership for your OpenClaw fleet. The installer automatically checks and repairs common OpenClaw configuration issues.
### 1. Network Security
- **Never expose the gateway publicly.** It runs on port 18789 by default.
- **Bind to localhost:** Set `gateway.bind: "loopback"` in `openclaw.json`.
- **Use SSH tunneling or Tailscale** for remote access.
- **Docker users:** Be aware that Docker can bypass UFW rules. Use `DOCKER-USER` chain rules.
### 2. Authentication
- **Always enable gateway auth** with a strong random token.
- Generate: `openclaw doctor --generate-gateway-token`
- Store in `OPENCLAW_GATEWAY_TOKEN` env var (never in `NEXT_PUBLIC_*` variables).
- Rotate regularly.
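For example (env var name as documented above; the `openssl` fallback is an assumption for hosts where the `openclaw` CLI is not installed):

```bash
# Generate a long random token and keep it in the environment only.
# Prefer `openclaw doctor --generate-gateway-token` when the CLI is present.
OPENCLAW_GATEWAY_TOKEN="$(openssl rand -hex 32)"
export OPENCLAW_GATEWAY_TOKEN
printf 'token length: %s\n' "${#OPENCLAW_GATEWAY_TOKEN}"
```

Persist it through your service manager's environment file rather than a tracked config file.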
### 3. Hardened Gateway Configuration
```json
{
"gateway": {
"mode": "local",
"bind": "loopback",
"auth": {
"mode": "token",
"token": "replace-with-long-random-token"
}
},
"session": {
"dmScope": "per-channel-peer"
},
"tools": {
"profile": "messaging",
"deny": ["group:automation", "group:runtime", "group:fs", "sessions_spawn", "sessions_send"],
"fs": { "workspaceOnly": true },
"exec": { "security": "deny", "ask": "always" }
},
"elevated": { "enabled": false }
}
```
### 4. File Permissions
```bash
chmod 700 ~/.openclaw
chmod 600 ~/.openclaw/openclaw.json
chmod 600 ~/.openclaw/credentials/*
```
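A quick audit that the modes above actually stuck (`check_mode` is a local helper sketch, not shipped tooling; it prints only entries that are looser than expected):

```bash
# Report any standard OpenClaw path whose mode differs from the expected one.
check_mode() {
  local path="$1" want="$2" got
  [ -e "$path" ] || return 0   # silently skip missing paths
  got="$(stat -c '%a' "$path" 2>/dev/null || stat -f '%Lp' "$path")"
  [ "$got" = "$want" ] || echo "LOOSE: $path is $got, expected $want"
}

check_mode "$HOME/.openclaw" 700
check_mode "$HOME/.openclaw/openclaw.json" 600
```

The `stat -c`/`stat -f` pair covers both GNU (Linux) and BSD (macOS) variants.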
### 5. Tool Security
- Apply the principle of least privilege — only grant tools the agent needs.
- Audit third-party skills before installing (Mission Control's Skills Hub runs automatic security scans).
- Run agents processing untrusted content in a sandbox with a minimal toolset.
### 6. Monitoring
- Enable comprehensive logging: `logging.redactSensitive: "tools"`
- Store logs separately where agents cannot modify them.
- Use Mission Control's diagnostics API to monitor gateway health.
- Have an incident response plan: stop gateway, revoke API keys, review audit logs.
### 7. Known CVEs
Keep OpenClaw updated. Notable past vulnerabilities:
| CVE | Severity | Description | Fixed In |
|-----|----------|-------------|----------|
| CVE-2026-25253 | Critical | RCE via Control UI token hijack | v2026.1.29 |
| CVE-2026-26327 | High | Auth bypass via gateway spoofing | v2026.2.25 |
| CVE-2026-26322 | High | SSRF | v2026.2.25 |
| CVE-2026-26329 | High | Path traversal | v2026.2.25 |
| CVE-2026-26319 | Medium | Missing webhook auth | v2026.2.25 |
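A quick way to confirm a running build is at or past the latest patched release in the table, using `sort -V` for version ordering (the installed version string here is a placeholder):

```bash
# Succeed when $1 >= $2 under version-sort ordering.
version_at_least() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="2026.3.1"   # e.g. taken from: openclaw --version
version_at_least "$installed" "2026.2.25" && echo "patched against the CVEs above"
```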
---
## Deployment Architecture
For production, the recommended architecture is:
```
Internet
|
[Reverse Proxy (Caddy/nginx) + TLS]
|
[Mission Control :3000] ---- [SQLite .data/]
|
[OpenClaw Gateway :18789 (localhost only)]
|
[Agent Workspaces]
```
- Reverse proxy handles TLS termination, rate limiting, and access control
- Mission Control listens on localhost or a private network
- OpenClaw Gateway is bound to loopback only
- Agent workspaces are isolated per-agent directories
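The proxy layer in that topology can be as small as a two-line Caddyfile (hostname is a placeholder; Caddy provisions TLS automatically for public hostnames):

```
mc.example.com {
    reverse_proxy 127.0.0.1:3000
}
```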


@ -48,6 +48,32 @@ PORT=3000 pnpm start
**Important:** The production build bundles platform-specific native binaries. You must run `pnpm install` and `pnpm build` on the same OS and architecture as the target server. A build created on macOS will not work on Linux.
## Production (Standalone)
Use this for bare-metal deployments that run Next's standalone server directly.
This path is preferred over ad hoc `node .next/standalone/server.js` because it
syncs `.next/static` and `public/` into the standalone bundle before launch.
```bash
pnpm install --frozen-lockfile
pnpm build
pnpm start:standalone
```
For a full in-place update on the target host:
```bash
BRANCH=fix/refactor PORT=3000 pnpm deploy:standalone
```
What `deploy:standalone` does:
- fetches and fast-forwards the requested branch
- reinstalls dependencies with the lockfile
- rebuilds from a clean `.next/`
- stops the old process bound to the target port
- starts the standalone server through `scripts/start-standalone.sh`
- verifies that the rendered login page references a CSS asset and that the CSS is served as `text/css`
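That last check can be reproduced by hand. A sketch of the CSS-reference extraction it performs (`extract_css_path` is an invented helper; feed it the login page HTML):

```bash
# Pull the first /_next/static CSS path referenced by a page.
extract_css_path() { grep -o '/_next/static/css/[^"]*\.css' | head -n1; }

html='<link rel="stylesheet" href="/_next/static/css/abc123.css"/>'
printf '%s' "$html" | extract_css_path
```

Against a live host: `curl -sf http://localhost:3000/login | extract_css_path`, then request the returned path with `curl -sfI` and confirm a `text/css` content type.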
## Production (Docker)
```bash


@ -0,0 +1,31 @@
# Onboarding + Walkthrough hardening plan
Base branch: `fix/refactor`
Working branch: `fix/refactor-onboarding-walkthrough`
## Goals
- Verify current onboarding and walkthrough flows are functional.
- Fix edge cases for first-run, skip, replay, and recovery states.
- Improve UX discoverability of walkthrough entry points.
- Add regression tests to keep flows stable.
## Phase 1: audit and test map
1. Identify current onboarding/walkthrough code paths.
2. Document triggers, persistence flags, and routing.
3. Add failing tests for first-run, skip, replay, and already-seen states.
## Phase 2: implementation hardening
1. Fix state transitions and persistence updates.
2. Ensure walkthrough can be reopened from primary UI.
3. Add visible hint/help entry to improve discoverability.
4. Handle corrupted or partial onboarding state safely.
## Phase 3: verification
1. Run targeted tests for onboarding/walkthrough.
2. Run full project checks.
3. Validate end-to-end flow manually in local dev.
## Deliverables
- Code changes in onboarding/walkthrough modules
- Automated tests covering key onboarding paths
- Updated docs/help text for walkthrough discoverability

install.sh (new executable file, +429 lines)
#!/usr/bin/env bash
# Mission Control — One-Command Installer
# The mothership for your OpenClaw fleet.
#
# Usage:
# curl -fsSL https://raw.githubusercontent.com/builderz-labs/mission-control/main/install.sh | bash
# # or
# bash install.sh [--docker|--local] [--port PORT] [--data-dir DIR]
#
# Installs Mission Control and optionally repairs/configures OpenClaw.
set -euo pipefail
# ── Defaults ──────────────────────────────────────────────────────────────────
MC_PORT="${MC_PORT:-3000}"
MC_DATA_DIR=""
DEPLOY_MODE=""
SKIP_OPENCLAW=false
REPO_URL="https://github.com/builderz-labs/mission-control.git"
INSTALL_DIR="${MC_INSTALL_DIR:-$(pwd)/mission-control}"
# ── Parse arguments ───────────────────────────────────────────────────────────
while [[ $# -gt 0 ]]; do
case "$1" in
--docker) DEPLOY_MODE="docker"; shift ;;
--local) DEPLOY_MODE="local"; shift ;;
--port) MC_PORT="$2"; shift 2 ;;
--data-dir) MC_DATA_DIR="$2"; shift 2 ;;
--skip-openclaw) SKIP_OPENCLAW=true; shift ;;
--dir) INSTALL_DIR="$2"; shift 2 ;;
-h|--help)
echo "Usage: install.sh [--docker|--local] [--port PORT] [--data-dir DIR] [--dir INSTALL_DIR] [--skip-openclaw]"
exit 0 ;;
*) echo "Unknown option: $1"; exit 1 ;;
esac
done
# ── Helpers ───────────────────────────────────────────────────────────────────
info() { echo -e "\033[1;34m[MC]\033[0m $*"; }
ok() { echo -e "\033[1;32m[OK]\033[0m $*"; }
warn() { echo -e "\033[1;33m[!!]\033[0m $*"; }
err() { echo -e "\033[1;31m[ERR]\033[0m $*" >&2; }
die() { err "$*"; exit 1; }
command_exists() { command -v "$1" &>/dev/null; }
detect_os() {
local os arch
os="$(uname -s)"
arch="$(uname -m)"
case "$os" in
Linux) OS="linux" ;;
Darwin) OS="darwin" ;;
*) die "Unsupported OS: $os" ;;
esac
case "$arch" in
x86_64|amd64) ARCH="x64" ;;
aarch64|arm64) ARCH="arm64" ;;
*) die "Unsupported architecture: $arch" ;;
esac
ok "Detected $OS/$ARCH"
}
check_prerequisites() {
local has_docker=false has_node=false
if command_exists docker && docker info &>/dev/null; then
has_docker=true
ok "Docker available ($(docker --version | head -1))"
fi
if command_exists node; then
local node_major
node_major=$(node -v | sed 's/v//' | cut -d. -f1)
if [[ "$node_major" -ge 20 ]]; then
has_node=true
ok "Node.js $(node -v) available"
else
warn "Node.js $(node -v) found but v20+ required"
fi
fi
if ! $has_docker && ! $has_node; then
die "Either Docker or Node.js 20+ is required. Install one and retry."
fi
# Auto-select deploy mode if not specified
if [[ -z "$DEPLOY_MODE" ]]; then
if $has_docker; then
DEPLOY_MODE="docker"
info "Auto-selected Docker deployment (use --local to override)"
else
DEPLOY_MODE="local"
info "Auto-selected local deployment (Docker not available)"
fi
fi
# Validate chosen mode
if [[ "$DEPLOY_MODE" == "docker" ]] && ! $has_docker; then
die "Docker deployment requested but Docker is not available"
fi
if [[ "$DEPLOY_MODE" == "local" ]] && ! $has_node; then
die "Local deployment requested but Node.js 20+ is not available"
fi
if [[ "$DEPLOY_MODE" == "local" ]] && ! command_exists pnpm; then
info "Installing pnpm via corepack..."
corepack enable && corepack prepare pnpm@latest --activate
ok "pnpm installed"
fi
}
# ── Clone or update repo ─────────────────────────────────────────────────────
fetch_source() {
if [[ -d "$INSTALL_DIR/.git" ]]; then
info "Updating existing installation at $INSTALL_DIR..."
cd "$INSTALL_DIR"
git fetch --tags
local latest_tag
latest_tag=$(git describe --tags --abbrev=0 origin/main 2>/dev/null || echo "")
if [[ -n "$latest_tag" ]]; then
git checkout "$latest_tag"
ok "Checked out $latest_tag"
else
git pull origin main
ok "Updated to latest main"
fi
else
info "Cloning Mission Control..."
if command_exists git; then
git clone --depth 1 "$REPO_URL" "$INSTALL_DIR"
cd "$INSTALL_DIR"
ok "Cloned to $INSTALL_DIR"
else
die "git is required to clone the repository"
fi
fi
}
# ── Generate .env ─────────────────────────────────────────────────────────────
setup_env() {
if [[ -f "$INSTALL_DIR/.env" ]]; then
info "Existing .env found — keeping current configuration"
return
fi
info "Generating secure .env configuration..."
bash "$INSTALL_DIR/scripts/generate-env.sh" "$INSTALL_DIR/.env"
# Set the port if non-default
if [[ "$MC_PORT" != "3000" ]]; then
if [[ "$(uname)" == "Darwin" ]]; then
sed -i '' "s|^# PORT=3000|PORT=$MC_PORT|" "$INSTALL_DIR/.env"
else
sed -i "s|^# PORT=3000|PORT=$MC_PORT|" "$INSTALL_DIR/.env"
fi
fi
ok "Secure .env generated"
}
# ── Docker deployment ─────────────────────────────────────────────────────────
deploy_docker() {
info "Starting Docker deployment..."
export MC_PORT
docker compose up -d --build
# Wait for healthy
info "Waiting for Mission Control to become healthy..."
local retries=30
while [[ $retries -gt 0 ]]; do
if docker compose ps --format json 2>/dev/null | grep -q '"Health":"healthy"'; then
break
fi
# Fallback: try HTTP check
if curl -sf "http://localhost:$MC_PORT/login" &>/dev/null; then
break
fi
sleep 2
((retries--))
done
if [[ $retries -eq 0 ]]; then
warn "Timeout waiting for health check — container may still be starting"
docker compose logs --tail 20
else
ok "Mission Control is running in Docker"
fi
}
# ── Local deployment ──────────────────────────────────────────────────────────
deploy_local() {
info "Starting local deployment..."
cd "$INSTALL_DIR"
pnpm install --frozen-lockfile 2>/dev/null || pnpm install
ok "Dependencies installed"
info "Building Mission Control..."
pnpm build
ok "Build complete"
# Create systemd service on Linux if systemctl is available
if [[ "$OS" == "linux" ]] && command_exists systemctl; then
setup_systemd
fi
info "Starting Mission Control..."
PORT="$MC_PORT" nohup pnpm start > "$INSTALL_DIR/.data/mc.log" 2>&1 &
local pid=$!
echo "$pid" > "$INSTALL_DIR/.data/mc.pid"
sleep 3
if kill -0 "$pid" 2>/dev/null; then
ok "Mission Control running (PID $pid)"
else
err "Failed to start. Check logs: $INSTALL_DIR/.data/mc.log"
exit 1
fi
}
# ── Systemd service ──────────────────────────────────────────────────────────
setup_systemd() {
local service_file="/etc/systemd/system/mission-control.service"
if [[ -f "$service_file" ]]; then
info "Systemd service already exists"
return
fi
info "Creating systemd service..."
local user
user="$(whoami)"
local node_path
node_path="$(which node)"
cat > /tmp/mission-control.service <<UNIT
[Unit]
Description=Mission Control - OpenClaw Agent Dashboard
After=network.target
[Service]
Type=simple
User=$user
WorkingDirectory=$INSTALL_DIR
ExecStart=$node_path $INSTALL_DIR/.next/standalone/server.js
Restart=on-failure
RestartSec=5
Environment=NODE_ENV=production
Environment=PORT=$MC_PORT
EnvironmentFile=$INSTALL_DIR/.env
[Install]
WantedBy=multi-user.target
UNIT
if [[ "$(id -u)" -eq 0 ]]; then
mv /tmp/mission-control.service "$service_file"
systemctl daemon-reload
systemctl enable mission-control
ok "Systemd service installed and enabled"
else
info "Run as root to install systemd service:"
info " sudo mv /tmp/mission-control.service $service_file"
info " sudo systemctl daemon-reload && sudo systemctl enable --now mission-control"
fi
}
# ── OpenClaw fleet check ─────────────────────────────────────────────────────
check_openclaw() {
if $SKIP_OPENCLAW; then
info "Skipping OpenClaw checks (--skip-openclaw)"
return
fi
echo ""
info "=== OpenClaw Fleet Check ==="
# Check if openclaw binary exists
if command_exists openclaw; then
local oc_version
oc_version="$(openclaw --version 2>/dev/null || echo 'unknown')"
ok "OpenClaw binary found: $oc_version"
elif command_exists clawdbot; then
local cb_version
cb_version="$(clawdbot --version 2>/dev/null || echo 'unknown')"
ok "ClawdBot binary found: $cb_version (legacy)"
warn "Consider upgrading to openclaw CLI"
else
info "OpenClaw CLI not found — install it to enable agent orchestration"
info " See: https://github.com/builderz-labs/openclaw"
return
fi
# Check OpenClaw home directory
local oc_home="${OPENCLAW_HOME:-$HOME/.openclaw}"
if [[ -d "$oc_home" ]]; then
ok "OpenClaw home: $oc_home"
# Check config
local oc_config="$oc_home/openclaw.json"
if [[ -f "$oc_config" ]]; then
ok "Config found: $oc_config"
else
warn "No openclaw.json found at $oc_config"
info "Mission Control will create a default config on first gateway connection"
fi
# Check for stale PID files
local stale_count=0
for pidfile in "$oc_home"/*.pid "$oc_home"/pids/*.pid; do
[[ -f "$pidfile" ]] || continue
local pid
pid="$(cat "$pidfile" 2>/dev/null)" || continue
if ! kill -0 "$pid" 2>/dev/null; then
rm -f "$pidfile"
stale_count=$((stale_count + 1)) # ((x++)) returns status 1 when x is 0, which would trip set -e
fi
done
if [[ $stale_count -gt 0 ]]; then
ok "Cleaned $stale_count stale PID file(s)"
fi
# Check logs directory size
local logs_dir="$oc_home/logs"
if [[ -d "$logs_dir" ]]; then
local logs_size
logs_size="$(du -sh "$logs_dir" 2>/dev/null | cut -f1)"
info "Logs directory: $logs_size ($logs_dir)"
# Clean old logs (> 30 days)
local old_logs
old_logs=$(find "$logs_dir" -name "*.log" -mtime +30 2>/dev/null | wc -l | tr -d ' ')
if [[ "$old_logs" -gt 0 ]]; then
find "$logs_dir" -name "*.log" -mtime +30 -delete 2>/dev/null || true
ok "Cleaned $old_logs log file(s) older than 30 days"
fi
fi
# Check workspace directory
local workspace="$oc_home/workspace"
if [[ -d "$workspace" ]]; then
local agent_count
agent_count=$(find "$workspace" -maxdepth 1 -type d 2>/dev/null | wc -l | tr -d ' ')
agent_count=$((agent_count - 1)) # subtract the workspace dir itself; avoids the ((x--)) exit-status trap under set -e
info "Workspace: $agent_count agent workspace(s) in $workspace"
fi
else
info "OpenClaw home not found at $oc_home"
info "Set OPENCLAW_HOME in .env to point to your OpenClaw state directory"
fi
# Check gateway port
local gw_host="${OPENCLAW_GATEWAY_HOST:-127.0.0.1}"
local gw_port="${OPENCLAW_GATEWAY_PORT:-18789}"
if nc -z "$gw_host" "$gw_port" 2>/dev/null || (echo > "/dev/tcp/$gw_host/$gw_port") 2>/dev/null; then
ok "Gateway reachable at $gw_host:$gw_port"
else
info "Gateway not reachable at $gw_host:$gw_port (start it with: openclaw gateway start)"
fi
}
# ── Main ──────────────────────────────────────────────────────────────────────
main() {
echo ""
echo " ╔══════════════════════════════════════╗"
echo " ║ Mission Control Installer ║"
echo " ║ The mothership for your fleet ║"
echo " ╚══════════════════════════════════════╝"
echo ""
detect_os
check_prerequisites
# If running from within an existing clone, use current dir
if [[ -f "$(pwd)/package.json" ]] && grep -q '"mission-control"' "$(pwd)/package.json" 2>/dev/null; then
INSTALL_DIR="$(pwd)"
info "Running from existing clone at $INSTALL_DIR"
else
fetch_source
fi
# Ensure data directory exists
mkdir -p "$INSTALL_DIR/.data"
setup_env
case "$DEPLOY_MODE" in
docker) deploy_docker ;;
local) deploy_local ;;
*) die "Unknown deploy mode: $DEPLOY_MODE" ;;
esac
check_openclaw
# ── Print summary ──
echo ""
echo " ╔══════════════════════════════════════╗"
echo " ║ Installation Complete ║"
echo " ╚══════════════════════════════════════╝"
echo ""
info "Dashboard: http://localhost:$MC_PORT"
info "Mode: $DEPLOY_MODE"
info "Data: $INSTALL_DIR/.data/"
echo ""
info "Credentials are in: $INSTALL_DIR/.env"
echo ""
if [[ "$DEPLOY_MODE" == "docker" ]]; then
info "Manage:"
info " docker compose logs -f # view logs"
info " docker compose restart # restart"
info " docker compose down # stop"
else
info "Manage:"
info " cat $INSTALL_DIR/.data/mc.log # view logs"
info " kill \$(cat $INSTALL_DIR/.data/mc.pid) # stop"
fi
echo ""
}
main "$@"


@ -1,6 +1,9 @@
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'standalone',
+ outputFileTracingExcludes: {
+ '/*': ['./.data/**/*'],
+ },
turbopack: {},
// Transpile ESM-only packages so they resolve correctly in all environments
transpilePackages: ['react-markdown', 'remark-gfm'],
@ -11,12 +14,13 @@ const nextConfig = {
const csp = [
`default-src 'self'`,
- `script-src 'self' 'unsafe-inline'${googleEnabled ? ' https://accounts.google.com' : ''}`,
+ `script-src 'self' 'unsafe-inline' blob:${googleEnabled ? ' https://accounts.google.com' : ''}`,
`style-src 'self' 'unsafe-inline'`,
- `connect-src 'self' ws: wss: http://127.0.0.1:* http://localhost:*`,
+ `connect-src 'self' ws: wss: http://127.0.0.1:* http://localhost:* https://cdn.jsdelivr.net`,
`img-src 'self' data: blob:${googleEnabled ? ' https://*.googleusercontent.com https://lh3.googleusercontent.com' : ''}`,
`font-src 'self' data:`,
`frame-src 'self'${googleEnabled ? ' https://accounts.google.com' : ''}`,
+ `worker-src 'self' blob:`,
].join('; ')
return [

openclaw_hardening_guide.md (new file, +124 lines)
# OpenClaw Gateway Security and Hardening Best Practices
This document consolidates security and hardening best practices for the OpenClaw Gateway, drawing from official documentation and recent security advisories.
## 1. Core Security Model & Deployment Considerations
OpenClaw is designed primarily for a **personal assistant deployment model**, assuming one trusted operator per gateway. It is **not intended for multi-tenant environments** with untrusted or adversarial users. For such scenarios, run separate gateway instances for each trust boundary.
## 2. Hardened Baseline Configuration
For a secure starting point, consider the following configuration, which keeps the Gateway local, isolates DMs, and disables potentially dangerous tools by default:
```json
{
"gateway": {
"mode": "local",
"bind": "loopback",
"auth": {
"mode": "token",
"token": "replace-with-long-random-token"
}
},
"session": {
"dmScope": "per-channel-peer"
},
"tools": {
"profile": "messaging",
"deny": ["group:automation", "group:runtime", "group:fs", "sessions_spawn", "sessions_send"],
"fs": {
"workspaceOnly": true
},
"exec": {
"security": "deny",
"ask": "always"
}
},
"elevated": {
"enabled": false
},
"channels": {
"whatsapp": {
"dmPolicy": "pairing",
"groups": {
"*": {
"requireMention": true
}
}
}
}
}
```
## 3. Key Hardening Recommendations
### 3.1. Network Security
* **Do Not Expose Publicly:** Never expose the OpenClaw gateway directly to the public internet. It typically runs on port 18789. Publicly exposed gateways are easily discoverable.
* **Bind to Localhost:** Configure the gateway to listen only for connections from the local machine by binding it to `127.0.0.1` (localhost) or `loopback` in your `openclaw.json`.
* **Firewall Rules:** Implement strict firewall rules to block all unnecessary inbound and outbound connections, allowing only essential traffic.
* **Secure Remote Access:** For remote access, use secure methods like SSH tunneling or a VPN (e.g., Tailscale) instead of direct exposure.
* **Docker Considerations:** If using Docker, be aware that it can bypass UFW rules. Configure rules in the `DOCKER-USER` chain to control exposure.
### 3.2. Authentication and Access Control
* **Enable Gateway Authentication:** Always enable gateway authentication and use a strong, randomly generated authentication token. Generate a token with `openclaw doctor --generate-gateway-token`.
* **Manage Access Tokens:** Treat your gateway authentication token like a password. Rotate it regularly and store it securely (e.g., as an environment variable, not in plaintext config files).
* **Restrict Chat and Messaging:** If integrating with chat platforms, use allowlists to specify which user IDs can interact with your agent.
* **Direct Messages (DMs) and Groups:**
* For DMs, use the default `pairing` policy (`dmPolicy: "pairing"`) to require approval for unknown senders.
* For group chats, require the bot to be explicitly mentioned to respond (`requireMention: true`).
* Isolate DM sessions using `session.dmScope: "per-channel-peer"` to prevent context leakage.
### 3.3. Isolation and Sandboxing
* **Run in a Docker Container:** The recommended approach is to run OpenClaw within a Docker container for process isolation, filesystem restrictions, and network controls.
* **Harden Docker Configuration:**
* Do not mount your home directory or the Docker socket.
* Use read-only filesystems where possible.
* Drop unnecessary Linux capabilities.
* Run the container as a non-root user.
* **Enable Sandbox Mode:** For tasks that execute code, enable OpenClaw's sandbox mode to prevent malicious or compromised prompts from accessing your system or network. Configure this in `agents.defaults.sandbox`.
### 3.4. Credential and Secret Management
* **Avoid Plaintext Storage:** Never store API keys, tokens, or other sensitive information in plaintext configuration files.
* **Use Secure Storage Mechanisms:** Load credentials from environment variables or use dedicated secrets management solutions (e.g., Hashicorp Vault, AWS Secrets Manager).
### 3.5. File System Permissions
* Ensure your configuration and state files are private.
* `~/.openclaw/openclaw.json` should have permissions `600` (user read/write only).
* The `~/.openclaw` directory should have permissions `700` (user access only).
* `~/.openclaw/credentials/` and its contents should also be `600`.
### 3.6. Tool and Skill Security
* **Principle of Least Privilege:** Only grant the agent the permissions and tools it absolutely needs.
* **Audit Third-Party Skills:** Be extremely cautious with third-party skills, as they can contain malicious code. Research has shown a significant number of skills on marketplaces may be malicious.
### 3.7. Prompt Injection Mitigation
* Lock down who can message the bot using DM pairing and allowlists.
* Require mentions in group chats.
* Run agents that process untrusted content in a sandbox with a minimal toolset.
* Use the latest, most powerful models, as they are generally more resistant to prompt injection.
### 3.8. Monitoring and Incident Response
* **Enable Logging:** Turn on comprehensive logging for all agent activities (command executions, API calls, file access). Store logs in a secure, separate location where the agent cannot modify them.
* **Log Redaction:** Keep log redaction enabled (`logging.redactSensitive: "tools"`) to prevent sensitive information from leaking into logs.
* **Incident Response Plan:** Have a plan for suspected compromises, including stopping the gateway and revoking API keys.
## 4. Staying Updated and Aware of Vulnerabilities
The OpenClaw project is under active development, and new vulnerabilities continue to be discovered.
* **Keep Software Updated:** Regularly update OpenClaw and its dependencies to ensure you have the latest security patches.
* **Be Aware of Recent Threats:** Stay informed about new vulnerabilities. Notable past vulnerabilities include:
* **ClawJacked (High Severity):** Allowed malicious websites to hijack locally running OpenClaw instances by connecting over WebSocket and brute-forcing the password. Patched in v2026.2.25.
* **Remote Code Execution (Critical - CVE-2026-25253):** A malicious link could trick the Control UI into sending an auth token to an attacker-controlled server, leading to RCE. Patched in v2026.1.29.
* **Authentication Bypass (High Severity - CVE-2026-26327):** Allowed attackers on the same local network to intercept credentials by spoofing a legitimate gateway.
* **Other Vulnerabilities:** Server-Side Request Forgery (SSRF - CVE-2026-26322), missing webhook authentication (CVE-2026-26319), and path traversal (CVE-2026-26329).
By diligently applying these practices, you can significantly enhance the security posture of your OpenClaw Gateway deployment.


@ -1,28 +1,33 @@
{
"name": "mission-control",
"version": "1.3.0",
"version": "2.0.0",
"description": "OpenClaw Mission Control — open-source agent orchestration dashboard",
"scripts": {
"dev": "next dev --hostname 127.0.0.1 --port ${PORT:-3000}",
"build": "next build",
"start": "next start --hostname 0.0.0.0 --port ${PORT:-3000}",
"lint": "eslint .",
"typecheck": "tsc --noEmit",
"test": "vitest run",
"verify:node": "node scripts/check-node-version.mjs",
"dev": "pnpm run verify:node && next dev --hostname 127.0.0.1 --port ${PORT:-3000}",
"build": "pnpm run verify:node && next build",
"start": "pnpm run verify:node && next start --hostname 0.0.0.0 --port ${PORT:-3000}",
"start:standalone": "pnpm run verify:node && bash scripts/start-standalone.sh",
"deploy:standalone": "pnpm run verify:node && bash scripts/deploy-standalone.sh",
"lint": "pnpm run verify:node && eslint .",
"typecheck": "pnpm run verify:node && tsc --noEmit",
"test": "pnpm run verify:node && vitest run",
"test:watch": "vitest",
"test:ui": "vitest --ui",
"test:e2e": "playwright test",
"test:e2e:openclaw:local": "E2E_GATEWAY_EXPECTED=0 playwright test -c playwright.openclaw.local.config.ts",
"test:e2e:openclaw:gateway": "E2E_GATEWAY_EXPECTED=1 playwright test -c playwright.openclaw.gateway.config.ts",
"test:e2e": "pnpm run verify:node && playwright test",
"test:e2e:openclaw:local": "pnpm run verify:node && E2E_GATEWAY_EXPECTED=0 playwright test -c playwright.openclaw.local.config.ts",
"test:e2e:openclaw:gateway": "pnpm run verify:node && E2E_GATEWAY_EXPECTED=1 playwright test -c playwright.openclaw.gateway.config.ts",
"test:e2e:openclaw": "pnpm test:e2e:openclaw:local && pnpm test:e2e:openclaw:gateway",
"test:all": "pnpm lint && pnpm typecheck && pnpm test && pnpm build && pnpm test:e2e",
"quality:gate": "pnpm test:all"
},
"dependencies": {
"@radix-ui/react-slot": "^1.2.4",
"@scalar/api-reference-react": "^0.8.66",
"@xyflow/react": "^12.10.0",
"autoprefixer": "^10.4.20",
"better-sqlite3": "^12.6.2",
"class-variance-authority": "^0.7.1",
"clsx": "^2.1.1",
"eslint": "^9.18.0",
"eslint-config-next": "^16.1.6",
@ -34,6 +39,7 @@
"react-dom": "^19.0.1",
"react-markdown": "^10.1.0",
"reactflow": "^11.11.4",
"reagraph": "^4.30.8",
"recharts": "^3.7.0",
"remark-gfm": "^4.0.1",
"tailwind-merge": "^3.4.0",
@ -60,7 +66,7 @@
"vitest": "^2.1.5"
},
"engines": {
"node": ">=20"
"node": ">=22 <23"
},
"keywords": [
"openclaw",


@ -18,14 +18,13 @@ export default defineConfig({
{ name: 'chromium', use: { ...devices['Desktop Chrome'] } }
],
webServer: {
command: 'node .next/standalone/server.js',
command: 'node scripts/e2e-openclaw/start-e2e-server.mjs --mode=local',
url: 'http://127.0.0.1:3005',
reuseExistingServer: true,
timeout: 120_000,
env: {
...process.env,
HOSTNAME: process.env.HOSTNAME || '127.0.0.1',
PORT: process.env.PORT || '3005',
MISSION_CONTROL_TEST_MODE: process.env.MISSION_CONTROL_TEST_MODE || '1',
MC_DISABLE_RATE_LIMIT: process.env.MC_DISABLE_RATE_LIMIT || '1',
MC_WORKLOAD_QUEUE_DEPTH_THROTTLE: process.env.MC_WORKLOAD_QUEUE_DEPTH_THROTTLE || '1000',
MC_WORKLOAD_QUEUE_DEPTH_SHED: process.env.MC_WORKLOAD_QUEUE_DEPTH_SHED || '2000',
@ -34,7 +33,6 @@ export default defineConfig({
API_KEY: process.env.API_KEY || 'test-api-key-e2e-12345',
AUTH_USER: process.env.AUTH_USER || 'testadmin',
AUTH_PASS: process.env.AUTH_PASS || 'testpass1234!',
OPENCLAW_MEMORY_DIR: process.env.OPENCLAW_MEMORY_DIR || '.data/e2e-memory',
},
}
})

File diff suppressed because it is too large.

Binary files added (diffs not shown):
- unnamed asset, 61 KiB
- public/brand/codex-logo.png, 2.8 KiB
- unnamed asset, 25 KiB
- unnamed asset, 17 KiB
- unnamed asset, 63 KiB
- unnamed asset, 279 KiB
- unnamed asset, 25 KiB
- public/mc-logo.png, 1.8 MiB
- public/mc.png, 2.3 MiB


@ -0,0 +1,16 @@
#!/usr/bin/env node
const REQUIRED_NODE_MAJOR = 22
const current = process.versions.node
const currentMajor = Number.parseInt(current.split('.')[0] || '', 10)
if (currentMajor !== REQUIRED_NODE_MAJOR) {
console.error(
[
`error: Mission Control requires Node ${REQUIRED_NODE_MAJOR}.x, but found ${current}.`,
'Use `nvm use 22` (or your version manager equivalent) before installing, building, or starting the app.',
].join('\n')
)
process.exit(1)
}


@ -0,0 +1,251 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
BRANCH="${BRANCH:-$(git -C "$PROJECT_ROOT" branch --show-current)}"
PORT="${PORT:-3000}"
LISTEN_HOST="${MC_HOSTNAME:-0.0.0.0}"
LOG_PATH="${LOG_PATH:-/tmp/mc.log}"
VERIFY_HOST="${VERIFY_HOST:-127.0.0.1}"
PID_FILE="${PID_FILE:-$PROJECT_ROOT/.next/standalone/server.pid}"
SOURCE_DATA_DIR="$PROJECT_ROOT/.data"
BUILD_DATA_DIR="$PROJECT_ROOT/.next/build-runtime"
NODE_VERSION_FILE="$PROJECT_ROOT/.nvmrc"
use_project_node() {
if [[ ! -f "$NODE_VERSION_FILE" ]]; then
return
fi
if [[ -z "${NVM_DIR:-}" ]]; then
export NVM_DIR="$HOME/.nvm"
fi
if [[ -s "$NVM_DIR/nvm.sh" ]]; then
# shellcheck disable=SC1090
source "$NVM_DIR/nvm.sh"
nvm use >/dev/null
fi
}
list_listener_pids() {
local combined=""
if command -v lsof >/dev/null 2>&1; then
combined+="$(
lsof -tiTCP:"$PORT" -sTCP:LISTEN 2>/dev/null || true
)"$'\n'
fi
if command -v ss >/dev/null 2>&1; then
combined+="$(
ss -ltnp 2>/dev/null | awk -v port=":$PORT" '
index($4, port) || index($5, port) {
if (match($0, /pid=[0-9]+/)) {
print substr($0, RSTART + 4, RLENGTH - 4)
}
}
'
)"$'\n'
fi
printf '%s\n' "$combined" | awk -v port="$PORT" '
/^[0-9]+$/ {
seen[$0] = 1
}
END {
for (pid in seen) {
print pid
}
}
' | sort -u
}
stop_pid() {
local pid="$1"
local label="$2"
if [[ -z "$pid" ]] || ! kill -0 "$pid" 2>/dev/null; then
return
fi
echo "==> stopping $label (pid=$pid)"
kill "$pid" 2>/dev/null || true
for _ in $(seq 1 10); do
if ! kill -0 "$pid" 2>/dev/null; then
return
fi
sleep 1
done
echo "==> force stopping $label (pid=$pid)"
kill -9 "$pid" 2>/dev/null || true
}
stop_existing_server() {
local -a candidate_pids=()
if [[ -f "$PID_FILE" ]]; then
candidate_pids+=("$(cat "$PID_FILE" 2>/dev/null || true)")
fi
while IFS= read -r pid; do
candidate_pids+=("$pid")
done < <(list_listener_pids)
if command -v pgrep >/dev/null 2>&1; then
while IFS= read -r pid; do
candidate_pids+=("$pid")
done < <(pgrep -f "$PROJECT_ROOT/.next/standalone/server.js" || true)
fi
if [[ ${#candidate_pids[@]} -eq 0 ]]; then
return
fi
declare -A seen=()
for pid in "${candidate_pids[@]}"; do
[[ -z "$pid" ]] && continue
[[ -n "${seen[$pid]:-}" ]] && continue
seen[$pid]=1
stop_pid "$pid" "standalone server"
done
for _ in $(seq 1 10); do
if [[ -z "$(list_listener_pids | head -n1)" ]]; then
rm -f "$PID_FILE"
return
fi
sleep 1
done
echo "error: port $PORT is still in use after stopping existing server" >&2
exit 1
}
load_env() {
set -a
if [[ -f .env ]]; then
# shellcheck disable=SC1091
source .env
fi
if [[ -f .env.local ]]; then
# shellcheck disable=SC1091
source .env.local
fi
set +a
}
migrate_runtime_data_dir() {
local target_data_dir="${MISSION_CONTROL_DATA_DIR:-$SOURCE_DATA_DIR}"
if [[ "$target_data_dir" == "$SOURCE_DATA_DIR" ]]; then
return
fi
mkdir -p "$target_data_dir"
local source_db="$SOURCE_DATA_DIR/mission-control.db"
local target_db="$target_data_dir/mission-control.db"
if [[ -s "$target_db" || ! -s "$source_db" ]]; then
return
fi
echo "==> migrating runtime data to $target_data_dir"
if command -v sqlite3 >/dev/null 2>&1; then
local target_db_tmp="$target_db.tmp"
rm -f "$target_db_tmp"
sqlite3 "$source_db" ".backup '$target_db_tmp'"
mv "$target_db_tmp" "$target_db"
if [[ -f "$SOURCE_DATA_DIR/mission-control-tokens.json" ]]; then
cp "$SOURCE_DATA_DIR/mission-control-tokens.json" "$target_data_dir/mission-control-tokens.json"
fi
if [[ -d "$SOURCE_DATA_DIR/backups" ]]; then
rsync -a "$SOURCE_DATA_DIR/backups"/ "$target_data_dir/backups"/
fi
else
rsync -a \
--exclude 'mission-control.db-shm' \
--exclude 'mission-control.db-wal' \
--exclude '*.db-shm' \
--exclude '*.db-wal' \
"$SOURCE_DATA_DIR"/ "$target_data_dir"/
fi
}
cd "$PROJECT_ROOT"
use_project_node
echo "==> fetching branch $BRANCH"
git fetch origin "$BRANCH"
git merge --ff-only FETCH_HEAD
load_env
migrate_runtime_data_dir
echo "==> stopping existing standalone server before rebuild"
stop_existing_server
echo "==> installing dependencies"
pnpm install --frozen-lockfile
echo "==> rebuilding standalone bundle"
rm -rf .next
mkdir -p "$BUILD_DATA_DIR"
MISSION_CONTROL_DATA_DIR="$BUILD_DATA_DIR" \
MISSION_CONTROL_DB_PATH="$BUILD_DATA_DIR/mission-control.db" \
MISSION_CONTROL_TOKENS_PATH="$BUILD_DATA_DIR/mission-control-tokens.json" \
pnpm build
echo "==> starting standalone server"
load_env
PORT="$PORT" HOSTNAME="$LISTEN_HOST" nohup bash "$PROJECT_ROOT/scripts/start-standalone.sh" >"$LOG_PATH" 2>&1 &
new_pid=$!
echo "$new_pid" > "$PID_FILE"
echo "==> verifying process and static assets"
for _ in $(seq 1 20); do
if curl -fsS "http://$VERIFY_HOST:$PORT/login" >/dev/null 2>&1; then
break
fi
sleep 1
done
login_html="$(curl -fsS "http://$VERIFY_HOST:$PORT/login")"
css_path="$(printf '%s\n' "$login_html" | sed -n 's|.*\(/_next/static/chunks/[^"]*\.css\).*|\1|p' | sed -n '1p')"
if [[ -z "${css_path:-}" ]]; then
echo "error: no css asset found in rendered login HTML" >&2
exit 1
fi
listener_pid="$(list_listener_pids | head -n1)"
if [[ -z "${listener_pid:-}" ]]; then
echo "error: no listener detected on port $PORT after startup" >&2
exit 1
fi
if [[ "$listener_pid" != "$new_pid" ]]; then
echo "error: port $PORT is owned by pid=$listener_pid, expected new pid=$new_pid" >&2
exit 1
fi
css_disk_path="$PROJECT_ROOT/.next/standalone/.next${css_path#/_next}"
if [[ ! -f "$css_disk_path" ]]; then
echo "error: rendered css asset missing on disk: $css_disk_path" >&2
exit 1
fi
content_type="$(curl -fsSI "http://$VERIFY_HOST:$PORT$css_path" | awk 'BEGIN{IGNORECASE=1} /^content-type:/ {print $2}' | tr -d '\r')"
if [[ "${content_type:-}" != text/css* ]]; then
echo "error: css asset served with unexpected content-type: ${content_type:-missing}" >&2
exit 1
fi
echo "==> deployed commit $(git rev-parse --short HEAD)"
echo " pid=$new_pid port=$PORT css=$css_path"


@ -1,9 +1,30 @@
#!/usr/bin/env node
import { spawn } from 'node:child_process'
import fs from 'node:fs'
import net from 'node:net'
import path from 'node:path'
import process from 'node:process'
async function findAvailablePort(host = '127.0.0.1') {
return await new Promise((resolve, reject) => {
const server = net.createServer()
server.unref()
server.on('error', reject)
server.listen(0, host, () => {
const address = server.address()
if (!address || typeof address === 'string') {
server.close(() => reject(new Error('failed to resolve dynamic port')))
return
}
const { port } = address
server.close((err) => {
if (err) reject(err)
else resolve(port)
})
})
})
}
const modeArg = process.argv.find((arg) => arg.startsWith('--mode='))
const mode = modeArg ? modeArg.split('=')[1] : 'local'
if (mode !== 'local' && mode !== 'gateway') {
@ -16,6 +37,7 @@ const fixtureSource = path.join(repoRoot, 'tests', 'fixtures', 'openclaw')
const runtimeRoot = path.join(repoRoot, '.tmp', 'e2e-openclaw', mode)
const dataDir = path.join(runtimeRoot, 'data')
const mockBinDir = path.join(repoRoot, 'scripts', 'e2e-openclaw', 'bin')
const skillsRoot = path.join(runtimeRoot, 'skills')
fs.rmSync(runtimeRoot, { recursive: true, force: true })
fs.mkdirSync(runtimeRoot, { recursive: true })
@ -23,13 +45,14 @@ fs.mkdirSync(dataDir, { recursive: true })
fs.cpSync(fixtureSource, runtimeRoot, { recursive: true })
const gatewayHost = '127.0.0.1'
const gatewayPort = '18789'
const gatewayPort = String(await findAvailablePort(gatewayHost))
const baseEnv = {
...process.env,
API_KEY: process.env.API_KEY || 'test-api-key-e2e-12345',
AUTH_USER: process.env.AUTH_USER || 'admin',
AUTH_PASS: process.env.AUTH_PASS || 'admin',
MISSION_CONTROL_TEST_MODE: process.env.MISSION_CONTROL_TEST_MODE || '1',
MC_DISABLE_RATE_LIMIT: '1',
MISSION_CONTROL_DATA_DIR: dataDir,
MISSION_CONTROL_DB_PATH: path.join(dataDir, 'mission-control.db'),
@ -39,11 +62,17 @@ const baseEnv = {
OPENCLAW_GATEWAY_PORT: gatewayPort,
OPENCLAW_BIN: path.join(mockBinDir, 'openclaw'),
CLAWDBOT_BIN: path.join(mockBinDir, 'clawdbot'),
MC_SKILLS_USER_AGENTS_DIR: path.join(skillsRoot, 'user-agents'),
MC_SKILLS_USER_CODEX_DIR: path.join(skillsRoot, 'user-codex'),
MC_SKILLS_PROJECT_AGENTS_DIR: path.join(skillsRoot, 'project-agents'),
MC_SKILLS_PROJECT_CODEX_DIR: path.join(skillsRoot, 'project-codex'),
MC_SKILLS_OPENCLAW_DIR: path.join(skillsRoot, 'openclaw'),
PATH: `${mockBinDir}:${process.env.PATH || ''}`,
E2E_GATEWAY_EXPECTED: mode === 'gateway' ? '1' : '0',
}
const children = []
let app = null
if (mode === 'gateway') {
const gw = spawn('node', ['scripts/e2e-openclaw/mock-gateway.mjs'], {
@ -51,11 +80,24 @@ if (mode === 'gateway') {
env: baseEnv,
stdio: 'inherit',
})
gw.on('error', (err) => {
process.stderr.write(`[openclaw-e2e] mock gateway failed to start: ${String(err)}\n`)
shutdown('SIGTERM')
process.exit(1)
})
gw.on('exit', (code, signal) => {
const exitCode = code ?? (signal ? 1 : 0)
if (exitCode !== 0) {
process.stderr.write(`[openclaw-e2e] mock gateway exited unexpectedly (code=${exitCode}, signal=${signal ?? 'none'})\n`)
shutdown('SIGTERM')
process.exit(exitCode)
}
})
children.push(gw)
}
const standaloneServerPath = path.join(repoRoot, '.next', 'standalone', 'server.js')
const app = fs.existsSync(standaloneServerPath)
app = fs.existsSync(standaloneServerPath)
? spawn('node', [standaloneServerPath], {
cwd: repoRoot,
env: {

scripts/generate-env.sh (new executable file, 81 lines)

@ -0,0 +1,81 @@
#!/usr/bin/env bash
# Generate a secure .env file from .env.example with random secrets.
# Usage: bash scripts/generate-env.sh [output-path]
#
# If output-path is omitted, writes to .env in the project root.
# Will NOT overwrite an existing .env unless --force is passed.
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
EXAMPLE_FILE="$PROJECT_ROOT/.env.example"
FORCE=false
OUTPUT=""
for arg in "$@"; do
case "$arg" in
--force) FORCE=true ;;
*) OUTPUT="$arg" ;;
esac
done
OUTPUT="${OUTPUT:-$PROJECT_ROOT/.env}"
if [[ -f "$OUTPUT" ]] && ! $FORCE; then
echo "Error: $OUTPUT already exists. Use --force to overwrite." >&2
exit 1
fi
if [[ ! -f "$EXAMPLE_FILE" ]]; then
echo "Error: .env.example not found at $EXAMPLE_FILE" >&2
exit 1
fi
# Generate cryptographically random values
generate_password() {
local len="${1:-24}"
# Use openssl if available, fallback to /dev/urandom
if command -v openssl &>/dev/null; then
openssl rand -base64 "$((len * 3 / 4 + 1))" | tr -dc 'A-Za-z0-9' | head -c "$len"
else
head -c "$((len * 2))" /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | head -c "$len"
fi
}
generate_hex() {
local len="${1:-32}"
if command -v openssl &>/dev/null; then
openssl rand -hex "$((len / 2))"
else
head -c "$((len / 2))" /dev/urandom | od -An -tx1 | tr -d ' \n' | head -c "$len"
fi
}
AUTH_PASS="$(generate_password 24)"
API_KEY="$(generate_hex 32)"
AUTH_SECRET="$(generate_password 32)"
# Copy .env.example and replace default secrets
cp "$EXAMPLE_FILE" "$OUTPUT"
# Replace the insecure defaults with generated values
if [[ "$(uname)" == "Darwin" ]]; then
sed -i '' "s|^AUTH_PASS=.*|AUTH_PASS=$AUTH_PASS|" "$OUTPUT"
sed -i '' "s|^API_KEY=.*|API_KEY=$API_KEY|" "$OUTPUT"
sed -i '' "s|^AUTH_SECRET=.*|AUTH_SECRET=$AUTH_SECRET|" "$OUTPUT"
else
sed -i "s|^AUTH_PASS=.*|AUTH_PASS=$AUTH_PASS|" "$OUTPUT"
sed -i "s|^API_KEY=.*|API_KEY=$API_KEY|" "$OUTPUT"
sed -i "s|^AUTH_SECRET=.*|AUTH_SECRET=$AUTH_SECRET|" "$OUTPUT"
fi
# Lock down permissions
chmod 600 "$OUTPUT"
echo "Generated secure .env at $OUTPUT"
echo " AUTH_USER: admin"
echo " AUTH_PASS: $AUTH_PASS"
echo " API_KEY: $API_KEY"
echo ""
echo "Save these credentials — they are not stored elsewhere."

scripts/security-audit.sh (new executable file, 168 lines)

@ -0,0 +1,168 @@
#!/usr/bin/env bash
# Mission Control Security Audit
# Run: bash scripts/security-audit.sh [--env-file .env]
set -euo pipefail
SCORE=0
MAX_SCORE=0
ISSUES=()
# Note: avoid ((VAR++)) here — with `set -e`, a post-increment from 0 evaluates
# to 0 (exit status 1) and would abort the script on the first check.
pass() { echo " [PASS] $1"; SCORE=$((SCORE + 1)); MAX_SCORE=$((MAX_SCORE + 1)); }
fail() { echo " [FAIL] $1"; ISSUES+=("$1"); MAX_SCORE=$((MAX_SCORE + 1)); }
warn() { echo " [WARN] $1"; MAX_SCORE=$((MAX_SCORE + 1)); }
info() { echo " [INFO] $1"; }
# Load .env if exists
ENV_FILE="${1:-.env}"
if [[ -f "$ENV_FILE" ]]; then
while IFS='=' read -r key value; do
[[ "$key" =~ ^#.*$ ]] && continue
[[ -z "$key" ]] && continue
declare "$key=$value" 2>/dev/null || true
done < "$ENV_FILE"
fi
echo "=== Mission Control Security Audit ==="
echo ""
# 1. .env file permissions
echo "--- File Permissions ---"
if [[ -f "$ENV_FILE" ]]; then
perms=$(stat -f '%A' "$ENV_FILE" 2>/dev/null || stat -c '%a' "$ENV_FILE" 2>/dev/null)
if [[ "$perms" == "600" ]]; then
pass ".env permissions are 600 (owner read/write only)"
else
fail ".env permissions are $perms (should be 600). Run: chmod 600 $ENV_FILE"
fi
else
warn ".env file not found at $ENV_FILE"
fi
# 2. Default passwords check
echo ""
echo "--- Credentials ---"
INSECURE_PASSWORDS=("admin" "password" "change-me-on-first-login" "changeme" "testpass123" "testpass1234")
AUTH_PASS_VAL="${AUTH_PASS:-}"
if [[ -z "$AUTH_PASS_VAL" ]]; then
fail "AUTH_PASS is not set"
else
insecure=false
for bad in "${INSECURE_PASSWORDS[@]}"; do
if [[ "$AUTH_PASS_VAL" == "$bad" ]]; then
insecure=true; break
fi
done
if $insecure; then
fail "AUTH_PASS is set to a known insecure default"
elif [[ ${#AUTH_PASS_VAL} -lt 12 ]]; then
fail "AUTH_PASS is too short (${#AUTH_PASS_VAL} chars, minimum 12)"
else
pass "AUTH_PASS is set to a non-default value (${#AUTH_PASS_VAL} chars)"
fi
fi
API_KEY_VAL="${API_KEY:-}"
if [[ -z "$API_KEY_VAL" || "$API_KEY_VAL" == "generate-a-random-key" ]]; then
fail "API_KEY is not set or uses the default value"
else
pass "API_KEY is configured"
fi
# 3. Network config
echo ""
echo "--- Network Security ---"
MC_ALLOWED="${MC_ALLOWED_HOSTS:-}"
MC_ANY="${MC_ALLOW_ANY_HOST:-}"
if [[ "$MC_ANY" == "1" || "$MC_ANY" == "true" ]]; then
fail "MC_ALLOW_ANY_HOST is enabled (any host can connect)"
elif [[ -n "$MC_ALLOWED" ]]; then
pass "MC_ALLOWED_HOSTS is configured: $MC_ALLOWED"
else
warn "MC_ALLOWED_HOSTS is not set (defaults apply)"
fi
# 4. Cookie/HTTPS config
echo ""
echo "--- HTTPS & Cookies ---"
COOKIE_SECURE="${MC_COOKIE_SECURE:-}"
if [[ "$COOKIE_SECURE" == "1" || "$COOKIE_SECURE" == "true" ]]; then
pass "MC_COOKIE_SECURE is enabled"
else
warn "MC_COOKIE_SECURE is not enabled (cookies sent over HTTP)"
fi
SAMESITE="${MC_COOKIE_SAMESITE:-strict}"
if [[ "$SAMESITE" == "strict" ]]; then
pass "MC_COOKIE_SAMESITE is strict"
else
warn "MC_COOKIE_SAMESITE is '$SAMESITE' (strict recommended)"
fi
HSTS="${MC_ENABLE_HSTS:-}"
if [[ "$HSTS" == "1" ]]; then
pass "HSTS is enabled"
else
warn "HSTS is not enabled (set MC_ENABLE_HSTS=1 for HTTPS deployments)"
fi
# 5. Rate limiting
echo ""
echo "--- Rate Limiting ---"
RL_DISABLED="${MC_DISABLE_RATE_LIMIT:-}"
if [[ "$RL_DISABLED" == "1" ]]; then
fail "Rate limiting is disabled (MC_DISABLE_RATE_LIMIT=1)"
else
pass "Rate limiting is active"
fi
# 6. Docker security (if running in Docker)
echo ""
echo "--- Docker Security ---"
if command -v docker &>/dev/null; then
if docker ps --filter name=mission-control --format '{{.Names}}' 2>/dev/null | grep -q mission-control; then
ro=$(docker inspect mission-control --format '{{.HostConfig.ReadonlyRootfs}}' 2>/dev/null || echo "false")
if [[ "$ro" == "true" ]]; then
pass "Container filesystem is read-only"
else
warn "Container filesystem is writable (use read_only: true)"
fi
nnp=$(docker inspect mission-control --format '{{.HostConfig.SecurityOpt}}' 2>/dev/null || echo "[]")
if echo "$nnp" | grep -q "no-new-privileges"; then
pass "no-new-privileges is set"
else
warn "no-new-privileges not set"
fi
user=$(docker inspect mission-control --format '{{.Config.User}}' 2>/dev/null || echo "")
if [[ -n "$user" && "$user" != "root" && "$user" != "0" ]]; then
pass "Container runs as non-root user ($user)"
else
warn "Container may be running as root"
fi
else
info "Mission Control container not running"
fi
else
info "Docker not installed (skipping container checks)"
fi
# Summary
echo ""
echo "=== Security Score: $SCORE / $MAX_SCORE ==="
if [[ ${#ISSUES[@]} -gt 0 ]]; then
echo ""
echo "Issues to fix:"
for issue in "${ISSUES[@]}"; do
echo " - $issue"
done
fi
if [[ $SCORE -eq $MAX_SCORE ]]; then
echo "All checks passed!"
elif [[ $SCORE -ge $((MAX_SCORE * 7 / 10)) ]]; then
echo "Good security posture with minor improvements needed."
else
echo "Security improvements recommended before production use."
fi

scripts/smoke-staging.mjs (new executable file, 168 lines)

@ -0,0 +1,168 @@
#!/usr/bin/env node
const baseUrl = (process.env.STAGING_BASE_URL || process.env.BASE_URL || '').replace(/\/$/, '')
const apiKey = process.env.STAGING_API_KEY || process.env.API_KEY || ''
const authUser = process.env.STAGING_AUTH_USER || process.env.AUTH_USER || ''
const authPass = process.env.STAGING_AUTH_PASS || process.env.AUTH_PASS || ''
if (!baseUrl) {
console.error('Missing STAGING_BASE_URL (or BASE_URL).')
process.exit(1)
}
if (!apiKey) {
console.error('Missing STAGING_API_KEY (or API_KEY).')
process.exit(1)
}
if (!authUser || !authPass) {
console.error('Missing STAGING_AUTH_USER/STAGING_AUTH_PASS (or AUTH_USER/AUTH_PASS).')
process.exit(1)
}
const headers = {
'x-api-key': apiKey,
'content-type': 'application/json',
}
let createdProjectId = null
let createdTaskId = null
let createdAgentId = null
async function call(path, options = {}) {
const res = await fetch(`${baseUrl}${path}`, options)
const text = await res.text()
let body = null
try {
body = text ? JSON.parse(text) : null
} catch {
body = { raw: text }
}
return { res, body }
}
function assertStatus(actual, expected, label) {
if (actual !== expected) {
throw new Error(`${label} failed: expected ${expected}, got ${actual}`)
}
console.log(`PASS ${label}`)
}
async function run() {
const login = await call('/api/auth/login', {
method: 'POST',
headers: { 'content-type': 'application/json' },
body: JSON.stringify({ username: authUser, password: authPass }),
})
assertStatus(login.res.status, 200, 'login')
const workspaces = await call('/api/workspaces', { headers })
assertStatus(workspaces.res.status, 200, 'GET /api/workspaces')
const suffix = `${Date.now()}-${Math.random().toString(36).slice(2, 7)}`
const ticketPrefix = `S${String(Date.now()).slice(-5)}`
const projectCreate = await call('/api/projects', {
method: 'POST',
headers,
body: JSON.stringify({
name: `staging-smoke-${suffix}`,
ticket_prefix: ticketPrefix,
}),
})
assertStatus(projectCreate.res.status, 201, 'POST /api/projects')
createdProjectId = projectCreate.body?.project?.id
if (!createdProjectId) throw new Error('project id missing')
const projectGet = await call(`/api/projects/${createdProjectId}`, { headers })
assertStatus(projectGet.res.status, 200, 'GET /api/projects/[id]')
const projectPatch = await call(`/api/projects/${createdProjectId}`, {
method: 'PATCH',
headers,
body: JSON.stringify({ description: 'staging smoke update' }),
})
assertStatus(projectPatch.res.status, 200, 'PATCH /api/projects/[id]')
const agentCreate = await call('/api/agents', {
method: 'POST',
headers,
body: JSON.stringify({ name: `smoke-agent-${suffix}`, role: 'tester' }),
})
assertStatus(agentCreate.res.status, 201, 'POST /api/agents')
createdAgentId = agentCreate.body?.agent?.id
const assign = await call(`/api/projects/${createdProjectId}/agents`, {
method: 'POST',
headers,
body: JSON.stringify({ agent_name: `smoke-agent-${suffix}`, role: 'member' }),
})
assertStatus(assign.res.status, 201, 'POST /api/projects/[id]/agents')
const projectTasksCreate = await call('/api/tasks', {
method: 'POST',
headers,
body: JSON.stringify({
title: `smoke-task-${suffix}`,
project_id: createdProjectId,
priority: 'medium',
status: 'inbox',
}),
})
assertStatus(projectTasksCreate.res.status, 201, 'POST /api/tasks (project scoped)')
createdTaskId = projectTasksCreate.body?.task?.id
const projectTasksGet = await call(`/api/projects/${createdProjectId}/tasks`, { headers })
assertStatus(projectTasksGet.res.status, 200, 'GET /api/projects/[id]/tasks')
const unassign = await call(`/api/projects/${createdProjectId}/agents?agent_name=${encodeURIComponent(`smoke-agent-${suffix}`)}`, {
method: 'DELETE',
headers,
})
assertStatus(unassign.res.status, 200, 'DELETE /api/projects/[id]/agents')
if (createdTaskId) {
const deleteTask = await call(`/api/tasks/${createdTaskId}`, {
method: 'DELETE',
headers,
})
assertStatus(deleteTask.res.status, 200, 'DELETE /api/tasks/[id]')
createdTaskId = null
}
if (createdProjectId) {
const deleteProject = await call(`/api/projects/${createdProjectId}?mode=delete`, {
method: 'DELETE',
headers,
})
assertStatus(deleteProject.res.status, 200, 'DELETE /api/projects/[id]?mode=delete')
createdProjectId = null
}
if (createdAgentId) {
const deleteAgent = await call(`/api/agents/${createdAgentId}`, {
method: 'DELETE',
headers,
})
if (deleteAgent.res.status !== 200 && deleteAgent.res.status !== 404) {
throw new Error(`DELETE /api/agents/[id] cleanup failed: ${deleteAgent.res.status}`)
}
createdAgentId = null
console.log('PASS cleanup agent')
}
console.log(`\nSmoke test passed for ${baseUrl}`)
}
run().catch(async (error) => {
console.error(`\nSmoke test failed: ${error.message}`)
if (createdTaskId) {
await call(`/api/tasks/${createdTaskId}`, { method: 'DELETE', headers }).catch(() => {})
}
if (createdProjectId) {
await call(`/api/projects/${createdProjectId}?mode=delete`, { method: 'DELETE', headers }).catch(() => {})
}
if (createdAgentId) {
await call(`/api/agents/${createdAgentId}`, { method: 'DELETE', headers }).catch(() => {})
}
process.exit(1)
})


@ -0,0 +1,33 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
STANDALONE_DIR="$PROJECT_ROOT/.next/standalone"
STANDALONE_NEXT_DIR="$STANDALONE_DIR/.next"
STANDALONE_STATIC_DIR="$STANDALONE_NEXT_DIR/static"
SOURCE_STATIC_DIR="$PROJECT_ROOT/.next/static"
SOURCE_PUBLIC_DIR="$PROJECT_ROOT/public"
STANDALONE_PUBLIC_DIR="$STANDALONE_DIR/public"
if [[ ! -f "$STANDALONE_DIR/server.js" ]]; then
echo "error: standalone server missing at $STANDALONE_DIR/server.js" >&2
echo "run 'pnpm build' first" >&2
exit 1
fi
mkdir -p "$STANDALONE_NEXT_DIR"
if [[ -d "$SOURCE_STATIC_DIR" ]]; then
rm -rf "$STANDALONE_STATIC_DIR"
cp -R "$SOURCE_STATIC_DIR" "$STANDALONE_STATIC_DIR"
fi
if [[ -d "$SOURCE_PUBLIC_DIR" ]]; then
rm -rf "$STANDALONE_PUBLIC_DIR"
cp -R "$SOURCE_PUBLIC_DIR" "$STANDALONE_PUBLIC_DIR"
fi
cd "$STANDALONE_DIR"
exec node server.js

scripts/station-doctor.sh (new executable file, 189 lines)

@ -0,0 +1,189 @@
#!/usr/bin/env bash
# Mission Control Station Doctor
# Local diagnostics — no auth required, runs on the host.
#
# Usage: bash scripts/station-doctor.sh [--port PORT]
set -euo pipefail
MC_PORT=3000
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/.." && pwd)"
# Parse args (a `for arg in "$@"` loop with `shift` inside does not consume
# positional parameters reliably, so use a while loop instead)
while [[ $# -gt 0 ]]; do
  case "$1" in
    --port) MC_PORT="${2:?--port requires a value}"; shift 2 ;;
    *) shift ;;
  esac
done
PASS=0
WARN=0
FAIL=0
# Note: avoid ((VAR++)) here — with `set -e`, a post-increment from 0 evaluates
# to 0 (exit status 1) and would abort the script on the first check.
pass() { echo " [PASS] $1"; PASS=$((PASS + 1)); }
warn() { echo " [WARN] $1"; WARN=$((WARN + 1)); }
fail() { echo " [FAIL] $1"; FAIL=$((FAIL + 1)); }
info() { echo " [INFO] $1"; }
echo "=== Mission Control Station Doctor ==="
echo ""
# ── 1. Process / Container check ─────────────────────────────────────────────
echo "--- Service Status ---"
RUNNING_IN_DOCKER=false
if command -v docker &>/dev/null; then
if docker ps --filter name=mission-control --format '{{.Names}}' 2>/dev/null | grep -q mission-control; then
RUNNING_IN_DOCKER=true
health=$(docker inspect mission-control --format '{{.State.Health.Status}}' 2>/dev/null || echo "none")
if [[ "$health" == "healthy" ]]; then
pass "Docker container is healthy"
elif [[ "$health" == "starting" ]]; then
warn "Docker container is starting"
else
fail "Docker container health: $health"
fi
fi
fi
if ! $RUNNING_IN_DOCKER; then
if pgrep -f "node.*server.js" &>/dev/null || pgrep -f "next-server" &>/dev/null; then
pass "Mission Control process is running"
else
fail "Mission Control process not found"
fi
fi
# ── 2. Port check ────────────────────────────────────────────────────────────
echo ""
echo "--- Network ---"
if curl -sf "http://localhost:$MC_PORT/login" &>/dev/null; then
pass "Port $MC_PORT is responding"
else
fail "Port $MC_PORT is not responding"
fi
# ── 3. API health ─────────────────────────────────────────────────────────────
# Try unauthenticated — will get 401 but proves the server is up
http_code=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:$MC_PORT/api/status?action=health" 2>/dev/null) || http_code="000"
if [[ "$http_code" == "200" ]]; then
pass "Health API responding (200)"
elif [[ "$http_code" == "401" ]]; then
pass "Health API responding (auth required — expected)"
elif [[ "$http_code" == "000" ]]; then
fail "Health API not reachable"
else
warn "Health API returned HTTP $http_code"
fi
# ── 4. Disk space ─────────────────────────────────────────────────────────────
echo ""
echo "--- Disk ---"
usage_pct=$(df -h "$PROJECT_ROOT" 2>/dev/null | tail -1 | awk '{for(i=1;i<=NF;i++) if($i ~ /%/) print $i}' | tr -d '%')
if [[ -n "$usage_pct" ]]; then
if [[ "$usage_pct" -lt 85 ]]; then
pass "Disk usage: ${usage_pct}%"
elif [[ "$usage_pct" -lt 95 ]]; then
warn "Disk usage: ${usage_pct}% (getting full)"
else
fail "Disk usage: ${usage_pct}% (critical)"
fi
fi
# ── 5. Database integrity ────────────────────────────────────────────────────
echo ""
echo "--- Database ---"
DB_PATH="$PROJECT_ROOT/.data/mission-control.db"
if [[ -f "$DB_PATH" ]]; then
db_size=$(du -h "$DB_PATH" 2>/dev/null | cut -f1)
pass "Database exists ($db_size)"
# SQLite integrity check
if command -v sqlite3 &>/dev/null; then
integrity=$(sqlite3 "$DB_PATH" "PRAGMA integrity_check;" 2>/dev/null || echo "error")
if [[ "$integrity" == "ok" ]]; then
pass "Database integrity check passed"
else
fail "Database integrity check failed: $integrity"
fi
# WAL mode check
journal=$(sqlite3 "$DB_PATH" "PRAGMA journal_mode;" 2>/dev/null || echo "unknown")
if [[ "$journal" == "wal" ]]; then
pass "WAL mode enabled"
else
warn "Journal mode: $journal (WAL recommended)"
fi
else
info "sqlite3 not found — skipping integrity check"
fi
else
if $RUNNING_IN_DOCKER; then
info "Database is inside Docker volume (cannot check directly)"
else
warn "Database not found at $DB_PATH"
fi
fi
# ── 6. Backup age ────────────────────────────────────────────────────────────
echo ""
echo "--- Backups ---"
BACKUP_DIR="$PROJECT_ROOT/.data/backups"
if [[ -d "$BACKUP_DIR" ]]; then
latest_backup=$(find "$BACKUP_DIR" -name "*.db" -type f 2>/dev/null | sort -r | head -1)
if [[ -n "$latest_backup" ]]; then
if [[ "$(uname)" == "Darwin" ]]; then
backup_age_days=$(( ($(date +%s) - $(stat -f %m "$latest_backup")) / 86400 ))
else
backup_age_days=$(( ($(date +%s) - $(stat -c %Y "$latest_backup")) / 86400 ))
fi
backup_name=$(basename "$latest_backup")
if [[ "$backup_age_days" -lt 1 ]]; then
pass "Latest backup: $backup_name (today)"
elif [[ "$backup_age_days" -lt 7 ]]; then
pass "Latest backup: $backup_name (${backup_age_days}d ago)"
elif [[ "$backup_age_days" -lt 30 ]]; then
warn "Latest backup: $backup_name (${backup_age_days}d ago — consider more frequent backups)"
else
fail "Latest backup: $backup_name (${backup_age_days}d ago — stale!)"
fi
else
warn "No backups found in $BACKUP_DIR"
fi
else
warn "No backup directory at $BACKUP_DIR"
fi
# ── 7. OpenClaw gateway ─────────────────────────────────────────────────────
echo ""
echo "--- OpenClaw Gateway ---"
GW_HOST="${OPENCLAW_GATEWAY_HOST:-127.0.0.1}"
GW_PORT="${OPENCLAW_GATEWAY_PORT:-18789}"
if nc -z -w 2 "$GW_HOST" "$GW_PORT" 2>/dev/null || (echo > "/dev/tcp/$GW_HOST/$GW_PORT") 2>/dev/null; then
pass "Gateway reachable at $GW_HOST:$GW_PORT"
else
info "Gateway not reachable at $GW_HOST:$GW_PORT"
fi
# ── Summary ──────────────────────────────────────────────────────────────────
echo ""
TOTAL=$((PASS + WARN + FAIL))
echo "=== Results: $PASS passed, $WARN warnings, $FAIL failures (of $TOTAL checks) ==="
if [[ $FAIL -gt 0 ]]; then
echo "Status: UNHEALTHY"
exit 1
elif [[ $WARN -gt 0 ]]; then
echo "Status: DEGRADED"
exit 0
else
echo "Status: HEALTHY"
exit 0
fi


@@ -0,0 +1,68 @@
# Mission Control Installer Skill
Install and configure Mission Control on any Linux or macOS system.
## What This Skill Does
1. Detects the target OS and available runtimes (Docker or Node.js 20+)
2. Clones or updates the Mission Control repository
3. Generates a secure `.env` with random credentials
4. Starts the dashboard via Docker Compose or local Node.js
5. Runs an OpenClaw fleet health check (cleans stale PIDs, old logs, validates gateway)
6. Prints the access URL and admin credentials
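Step 1 can be sketched as a small shell function. This is not `install.sh` itself, just an illustration of the documented preference order (Docker first, then Node.js 20+); the function name and exact checks are assumptions:

```bash
# Hedged sketch of deployment-mode detection (not the real install.sh logic).
# Prefers Docker with Compose v2, falls back to Node.js 20+, else reports none.
detect_mode() {
  if command -v docker >/dev/null 2>&1 && docker compose version >/dev/null 2>&1; then
    echo "docker"
  elif command -v node >/dev/null 2>&1 \
    && [ "$(node -p 'parseInt(process.versions.node, 10)')" -ge 20 ]; then
    echo "local"
  else
    echo "none"
  fi
}
detect_mode
```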
## Usage
Run the installer script:
```bash
# Auto-detect deployment mode (prefers Docker)
bash install.sh
# Force Docker deployment
bash install.sh --docker
# Force local deployment (Node.js + pnpm)
bash install.sh --local
# Custom port
bash install.sh --port 8080
# Skip OpenClaw fleet check
bash install.sh --skip-openclaw
```
Or as a one-liner:
```bash
curl -fsSL https://raw.githubusercontent.com/builderz-labs/mission-control/main/install.sh | bash
```
## Prerequisites
- **Docker mode**: Docker Engine with Docker Compose v2
- **Local mode**: Node.js 20+, pnpm (auto-installed via corepack if missing)
- **Both**: git (to clone the repository)
## Post-Install
After installation:
1. Open `http://localhost:3000` (or your configured port)
2. Log in with the credentials printed by the installer (also in `.env`)
3. Configure your OpenClaw gateway connection in Settings
4. Register agents via the Agents panel
## Environment Configuration
The installer generates a `.env` from `.env.example` with secure random values for:
- `AUTH_PASS` — 24-character random password
- `API_KEY` — 32-character hex API key
- `AUTH_SECRET` — 32-character session secret
To regenerate credentials independently:
```bash
bash scripts/generate-env.sh --force
```
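For reference, values with these shapes could be derived as below. The exact commands are assumptions (not necessarily what `scripts/generate-env.sh` does); only the variable names and lengths come from the list above:

```bash
# Hedged sketch: one way to produce values matching the documented shapes.
AUTH_PASS=$(head -c 256 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | cut -c1-24)  # 24-char password
API_KEY=$(openssl rand -hex 16)      # 32 hex characters
AUTH_SECRET=$(openssl rand -hex 16)  # 32-char session secret
printf 'AUTH_PASS=%s\nAPI_KEY=%s\nAUTH_SECRET=%s\n' "$AUTH_PASS" "$API_KEY" "$AUTH_SECRET"
```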


@@ -0,0 +1,27 @@
{
"name": "mission-control-installer",
"version": "1.0.0",
"description": "Install and configure Mission Control — the OpenClaw agent orchestration dashboard",
"author": "Builderz Labs",
"license": "MIT",
"tools": ["exec", "fs"],
"parameters": {
"deployment_mode": {
"type": "string",
"enum": ["docker", "local"],
"default": "docker",
"description": "How to deploy Mission Control"
},
"port": {
"type": "number",
"default": 3000,
"description": "Port for the Mission Control dashboard"
},
"install_dir": {
"type": "string",
"default": "",
"description": "Installation directory (defaults to ./mission-control)"
}
},
"tags": ["mission-control", "dashboard", "installer", "docker"]
}


@@ -0,0 +1,104 @@
# Mission Control Management Skill
Manage a running Mission Control instance programmatically.
## API Endpoints
All endpoints require authentication via `x-api-key` header or session cookie.
### Health Check
```bash
# Quick health status
curl -H "x-api-key: $API_KEY" "http://localhost:3000/api/status?action=health"
# Response: { "status": "healthy", "version": "1.3.0", "checks": [...] }
```
Possible statuses: `healthy`, `degraded`, `unhealthy`
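When wiring this into a monitoring system, the three statuses can be mapped to numeric exit codes. The 0/1/2 Nagios-style convention below is an assumption, not part of the API:

```bash
# Hedged helper: translate documented statuses into monitoring exit codes.
status_to_exit() {
  case "$1" in
    healthy)   echo 0 ;;
    degraded)  echo 1 ;;
    unhealthy) echo 2 ;;
    *)         echo 3 ;;  # unknown status or unreachable instance
  esac
}
```

A wrapper would then end with `exit "$(status_to_exit "$STATUS")"`.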
### System Overview
```bash
# Full system status (memory, disk, sessions, processes)
curl -H "x-api-key: $API_KEY" "http://localhost:3000/api/status?action=overview"
```
### Diagnostics (Admin Only)
```bash
# Comprehensive diagnostics including security posture
curl -H "x-api-key: $API_KEY" http://localhost:3000/api/diagnostics
# Response includes:
# - system: node version, platform, memory, docker detection
# - security: score (0-100) with individual checks
# - database: size, WAL mode, migration version
# - gateway: configured, reachable, host/port
# - agents: total count, by status
# - retention: configured retention policies
```
### Check for Updates
```bash
curl -H "x-api-key: $API_KEY" http://localhost:3000/api/releases/check
# Response: { "updateAvailable": true, "currentVersion": "1.3.0", "latestVersion": "1.4.0", ... }
```
### Trigger Update
```bash
# Apply available update (bare-metal only; Docker returns instructions)
curl -X POST -H "x-api-key: $API_KEY" http://localhost:3000/api/releases/update
```
### Database Backup
```bash
curl -X POST -H "x-api-key: $API_KEY" http://localhost:3000/api/backup
```
### Agent Management
```bash
# List agents
curl -H "x-api-key: $API_KEY" http://localhost:3000/api/agents
# Register an agent
curl -X POST -H "x-api-key: $API_KEY" \
-H "Content-Type: application/json" \
-d '{"name": "my-agent", "type": "openclaw"}' \
http://localhost:3000/api/agents
```
## Station Doctor
For local diagnostics without API access:
```bash
bash scripts/station-doctor.sh
```
Checks: Docker health, port availability, disk space, DB integrity, backup age.
## Common Workflows
### Automated Health Monitoring
```bash
# Check health and alert if unhealthy
STATUS=$(curl -sf -H "x-api-key: $API_KEY" "http://localhost:3000/api/status?action=health" | jq -r '.status')
if [ "$STATUS" != "healthy" ]; then
echo "ALERT: Mission Control is $STATUS"
fi
```
### Pre-Upgrade Checklist
1. Check for updates: `GET /api/releases/check`
2. Create backup: `POST /api/backup`
3. Run diagnostics: `GET /api/diagnostics` (verify no active tasks)
4. Apply update: `POST /api/releases/update` (or `docker pull` + recreate for Docker)
5. Verify health: `GET /api/status?action=health`


@@ -0,0 +1,20 @@
{
"name": "mission-control-manage",
"version": "1.0.0",
"description": "Manage a running Mission Control instance — health checks, diagnostics, upgrades, backups",
"author": "Builderz Labs",
"license": "MIT",
"tools": ["exec", "http"],
"parameters": {
"base_url": {
"type": "string",
"default": "http://localhost:3000",
"description": "Mission Control base URL"
},
"api_key": {
"type": "string",
"description": "API key for authentication (x-api-key header)"
}
},
"tags": ["mission-control", "management", "health", "upgrade", "backup"]
}


@@ -1,18 +1,15 @@
'use client'
import { useEffect, useState } from 'react'
import { createElement, useEffect, useState } from 'react'
import { usePathname, useRouter } from 'next/navigation'
import { NavRail } from '@/components/layout/nav-rail'
import { HeaderBar } from '@/components/layout/header-bar'
import { LiveFeed } from '@/components/layout/live-feed'
import { Dashboard } from '@/components/dashboard/dashboard'
import { AgentSpawnPanel } from '@/components/panels/agent-spawn-panel'
import { LogViewerPanel } from '@/components/panels/log-viewer-panel'
import { CronManagementPanel } from '@/components/panels/cron-management-panel'
import { MemoryBrowserPanel } from '@/components/panels/memory-browser-panel'
import { TokenDashboardPanel } from '@/components/panels/token-dashboard-panel'
import { AgentCostPanel } from '@/components/panels/agent-cost-panel'
import { SessionDetailsPanel } from '@/components/panels/session-details-panel'
import { CostTrackerPanel } from '@/components/panels/cost-tracker-panel'
import { TaskBoardPanel } from '@/components/panels/task-board-panel'
import { ActivityFeedPanel } from '@/components/panels/activity-feed-panel'
import { AgentSquadPanelPhase3 } from '@/components/panels/agent-squad-panel-phase3'
@@ -22,7 +19,6 @@ import { OrchestrationBar } from '@/components/panels/orchestration-bar'
import { NotificationsPanel } from '@/components/panels/notifications-panel'
import { UserManagementPanel } from '@/components/panels/user-management-panel'
import { AuditTrailPanel } from '@/components/panels/audit-trail-panel'
import { AgentHistoryPanel } from '@/components/panels/agent-history-panel'
import { WebhookPanel } from '@/components/panels/webhook-panel'
import { SettingsPanel } from '@/components/panels/settings-panel'
import { GatewayConfigPanel } from '@/components/panels/gateway-config-panel'
@@ -32,36 +28,176 @@ import { MultiGatewayPanel } from '@/components/panels/multi-gateway-panel'
import { SuperAdminPanel } from '@/components/panels/super-admin-panel'
import { OfficePanel } from '@/components/panels/office-panel'
import { GitHubSyncPanel } from '@/components/panels/github-sync-panel'
import { DocumentsPanel } from '@/components/panels/documents-panel'
import { SkillsPanel } from '@/components/panels/skills-panel'
import { LocalAgentsDocPanel } from '@/components/panels/local-agents-doc-panel'
import { ChannelsPanel } from '@/components/panels/channels-panel'
import { DebugPanel } from '@/components/panels/debug-panel'
import { SecurityAuditPanel } from '@/components/panels/security-audit-panel'
import { NodesPanel } from '@/components/panels/nodes-panel'
import { ExecApprovalPanel } from '@/components/panels/exec-approval-panel'
import { ChatPagePanel } from '@/components/panels/chat-page-panel'
import { ChatPanel } from '@/components/chat/chat-panel'
import { getPluginPanel } from '@/lib/plugins'
import { ErrorBoundary } from '@/components/ErrorBoundary'
import { LocalModeBanner } from '@/components/layout/local-mode-banner'
import { UpdateBanner } from '@/components/layout/update-banner'
import { PromoBanner } from '@/components/layout/promo-banner'
import { OpenClawUpdateBanner } from '@/components/layout/openclaw-update-banner'
import { OpenClawDoctorBanner } from '@/components/layout/openclaw-doctor-banner'
import { OnboardingWizard } from '@/components/onboarding/onboarding-wizard'
import { Loader } from '@/components/ui/loader'
import { ProjectManagerModal } from '@/components/modals/project-manager-modal'
import { ExecApprovalOverlay } from '@/components/modals/exec-approval-overlay'
import { useWebSocket } from '@/lib/websocket'
import { useServerEvents } from '@/lib/use-server-events'
import { completeNavigationTiming } from '@/lib/navigation-metrics'
import { panelHref, useNavigateToPanel } from '@/lib/navigation'
import { clearOnboardingDismissedThisSession, clearOnboardingReplayFromStart, getOnboardingSessionDecision, markOnboardingReplayFromStart, readOnboardingDismissedThisSession } from '@/lib/onboarding-session'
import { Button } from '@/components/ui/button'
import { useMissionControl } from '@/store'
interface GatewaySummary {
id: number
is_primary: number
}
function renderPluginPanel(panelId: string) {
const pluginPanel = getPluginPanel(panelId)
return pluginPanel ? createElement(pluginPanel) : <Dashboard />
}
function isLocalHost(hostname: string): boolean {
return hostname === 'localhost' || hostname === '127.0.0.1' || hostname === '::1'
}
export default function Home() {
const router = useRouter()
const { connect } = useWebSocket()
const { activeTab, setActiveTab, setCurrentUser, setDashboardMode, setGatewayAvailable, setSubscription, setUpdateAvailable, liveFeedOpen, toggleLiveFeed } = useMissionControl()
const { activeTab, setActiveTab, setCurrentUser, setDashboardMode, setGatewayAvailable, setCapabilitiesChecked, setSubscription, setDefaultOrgName, setUpdateAvailable, setOpenclawUpdate, showOnboarding, setShowOnboarding, liveFeedOpen, toggleLiveFeed, showProjectManagerModal, setShowProjectManagerModal, fetchProjects, setChatPanelOpen, bootComplete, setBootComplete, setAgents, setSessions, setProjects, setInterfaceMode, setMemoryGraphAgents, setSkillsData } = useMissionControl()
// Sync URL → Zustand activeTab
const pathname = usePathname()
const panelFromUrl = pathname === '/' ? 'overview' : pathname.slice(1)
const normalizedPanel = panelFromUrl === 'sessions' ? 'chat' : panelFromUrl
useEffect(() => {
setActiveTab(panelFromUrl)
}, [panelFromUrl, setActiveTab])
completeNavigationTiming(pathname)
}, [pathname])
useEffect(() => {
completeNavigationTiming(panelHref(activeTab))
}, [activeTab])
useEffect(() => {
setActiveTab(normalizedPanel)
if (normalizedPanel === 'chat') {
setChatPanelOpen(false)
}
if (panelFromUrl === 'sessions') {
router.replace('/chat')
}
}, [panelFromUrl, normalizedPanel, router, setActiveTab, setChatPanelOpen])
// Connect to SSE for real-time local DB events (tasks, agents, chat, etc.)
useServerEvents()
const [isClient, setIsClient] = useState(false)
const [initSteps, setInitSteps] = useState<Array<{ key: string; label: string; status: 'pending' | 'done' }>>([
{ key: 'auth', label: 'Authenticating operator', status: 'pending' },
{ key: 'capabilities', label: 'Detecting station mode', status: 'pending' },
{ key: 'config', label: 'Loading control config', status: 'pending' },
{ key: 'connect', label: 'Connecting runtime links', status: 'pending' },
{ key: 'agents', label: 'Syncing agent registry', status: 'pending' },
{ key: 'sessions', label: 'Loading active sessions', status: 'pending' },
{ key: 'projects', label: 'Hydrating workspace board', status: 'pending' },
{ key: 'memory', label: 'Mapping memory graph', status: 'pending' },
{ key: 'skills', label: 'Indexing skill catalog', status: 'pending' },
])
const markStep = (key: string) => {
setInitSteps(prev => prev.map(s => s.key === key ? { ...s, status: 'done' } : s))
}
useEffect(() => {
if (!bootComplete && initSteps.every(s => s.status === 'done')) {
const t = setTimeout(() => setBootComplete(), 400)
return () => clearTimeout(t)
}
}, [initSteps, bootComplete, setBootComplete])
// Security console warning (anti-self-XSS)
useEffect(() => {
if (!bootComplete) return
if (typeof window === 'undefined') return
const key = 'mc-console-warning'
if (sessionStorage.getItem(key)) return
sessionStorage.setItem(key, '1')
console.log(
'%c Stop! ',
'color: #fff; background: #e53e3e; font-size: 40px; font-weight: bold; padding: 4px 16px; border-radius: 4px;'
)
console.log(
'%cThis is a browser feature intended for developers.\n\nIf someone told you to copy-paste something here to enable a feature or "hack" an account, it is a scam and will give them access to your account.',
'font-size: 14px; color: #e2e8f0; padding: 8px 0;'
)
console.log(
'%cLearn more: https://en.wikipedia.org/wiki/Self-XSS',
'font-size: 12px; color: #718096;'
)
}, [bootComplete])
useEffect(() => {
setIsClient(true)
// OpenClaw control-ui device identity requires a secure browser context.
// Redirect remote HTTP sessions to HTTPS automatically to avoid handshake failures.
if (window.location.protocol === 'http:' && !isLocalHost(window.location.hostname)) {
const secureUrl = new URL(window.location.href)
secureUrl.protocol = 'https:'
window.location.replace(secureUrl.toString())
return
}
const connectWithEnvFallback = () => {
const explicitWsUrl = process.env.NEXT_PUBLIC_GATEWAY_URL || ''
const gatewayPort = process.env.NEXT_PUBLIC_GATEWAY_PORT || '18789'
const gatewayHost = process.env.NEXT_PUBLIC_GATEWAY_HOST || window.location.hostname
const gatewayProto =
process.env.NEXT_PUBLIC_GATEWAY_PROTOCOL ||
(window.location.protocol === 'https:' ? 'wss' : 'ws')
const wsUrl = explicitWsUrl || `${gatewayProto}://${gatewayHost}:${gatewayPort}`
connect(wsUrl)
}
const connectWithPrimaryGateway = async (): Promise<{ attempted: boolean; connected: boolean }> => {
try {
const gatewaysRes = await fetch('/api/gateways')
if (!gatewaysRes.ok) return { attempted: false, connected: false }
const gatewaysJson = await gatewaysRes.json().catch(() => ({}))
const gateways = Array.isArray(gatewaysJson?.gateways) ? gatewaysJson.gateways as GatewaySummary[] : []
if (gateways.length === 0) return { attempted: false, connected: false }
const primaryGateway = gateways.find(gw => Number(gw?.is_primary) === 1) || gateways[0]
if (!primaryGateway?.id) return { attempted: true, connected: false }
const connectRes = await fetch('/api/gateways/connect', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ id: primaryGateway.id }),
})
if (!connectRes.ok) return { attempted: true, connected: false }
const payload = await connectRes.json().catch(() => ({}))
const wsUrl = typeof payload?.ws_url === 'string' ? payload.ws_url : ''
const wsToken = typeof payload?.token === 'string' ? payload.token : ''
if (!wsUrl) return { attempted: true, connected: false }
connect(wsUrl, wsToken)
return { attempted: true, connected: true }
} catch {
return { attempted: false, connected: false }
}
}
// Fetch current user
fetch('/api/auth/me')
.then(async (res) => {
@@ -71,8 +207,8 @@ export default function Home() {
}
return null
})
.then(data => { if (data?.user) setCurrentUser(data.user) })
.catch(() => {})
.then(data => { if (data?.user) setCurrentUser(data.user); markStep('auth') })
.catch(() => { markStep('auth') })
// Check for available updates
fetch('/api/releases/check')
@@ -88,16 +224,41 @@ export default function Home() {
})
.catch(() => {})
// Check for OpenClaw updates
fetch('/api/openclaw/version')
.then(res => res.ok ? res.json() : null)
.then(data => {
if (data?.updateAvailable) {
setOpenclawUpdate({
installed: data.installed,
latest: data.latest,
releaseUrl: data.releaseUrl,
releaseNotes: data.releaseNotes,
updateCommand: data.updateCommand,
})
}
})
.catch(() => {})
// Check capabilities, then conditionally connect to gateway
fetch('/api/status?action=capabilities')
.then(res => res.ok ? res.json() : null)
.then(data => {
.then(async data => {
if (data?.subscription) {
setSubscription(data.subscription)
}
if (data?.processUser) {
setDefaultOrgName(data.processUser)
}
if (data?.interfaceMode === 'essential' || data?.interfaceMode === 'full') {
setInterfaceMode(data.interfaceMode)
}
if (data && data.gateway === false) {
setDashboardMode('local')
setGatewayAvailable(false)
setCapabilitiesChecked(true)
markStep('capabilities')
markStep('connect')
// Skip WebSocket connect — no gateway to talk to
return
}
@@ -105,45 +266,86 @@ export default function Home() {
setDashboardMode('full')
setGatewayAvailable(true)
}
// Connect to gateway WebSocket
const wsToken = process.env.NEXT_PUBLIC_GATEWAY_TOKEN || process.env.NEXT_PUBLIC_WS_TOKEN || ''
const explicitWsUrl = process.env.NEXT_PUBLIC_GATEWAY_URL || ''
const gatewayPort = process.env.NEXT_PUBLIC_GATEWAY_PORT || '18789'
const gatewayHost = process.env.NEXT_PUBLIC_GATEWAY_HOST || window.location.hostname
const gatewayProto =
process.env.NEXT_PUBLIC_GATEWAY_PROTOCOL ||
(window.location.protocol === 'https:' ? 'wss' : 'ws')
const wsUrl = explicitWsUrl || `${gatewayProto}://${gatewayHost}:${gatewayPort}`
connect(wsUrl, wsToken)
setCapabilitiesChecked(true)
markStep('capabilities')
const primaryConnect = await connectWithPrimaryGateway()
if (!primaryConnect.connected && !primaryConnect.attempted) {
connectWithEnvFallback()
}
markStep('connect')
})
.catch(() => {
// If capabilities check fails, still try to connect
const wsToken = process.env.NEXT_PUBLIC_GATEWAY_TOKEN || process.env.NEXT_PUBLIC_WS_TOKEN || ''
const explicitWsUrl = process.env.NEXT_PUBLIC_GATEWAY_URL || ''
const gatewayPort = process.env.NEXT_PUBLIC_GATEWAY_PORT || '18789'
const gatewayHost = process.env.NEXT_PUBLIC_GATEWAY_HOST || window.location.hostname
const gatewayProto =
process.env.NEXT_PUBLIC_GATEWAY_PROTOCOL ||
(window.location.protocol === 'https:' ? 'wss' : 'ws')
const wsUrl = explicitWsUrl || `${gatewayProto}://${gatewayHost}:${gatewayPort}`
connect(wsUrl, wsToken)
setCapabilitiesChecked(true)
markStep('capabilities')
markStep('connect')
connectWithEnvFallback()
})
}, [connect, pathname, router, setCurrentUser, setDashboardMode, setGatewayAvailable, setSubscription, setUpdateAvailable])
if (!isClient) {
return (
<div className="flex items-center justify-center min-h-screen">
<div className="flex flex-col items-center gap-3">
<div className="w-10 h-10 rounded-xl bg-primary flex items-center justify-center">
<span className="text-primary-foreground font-bold text-sm">MC</span>
</div>
<div className="flex items-center gap-2">
<div className="w-1.5 h-1.5 rounded-full bg-primary animate-pulse" />
<span className="text-sm text-muted-foreground">Loading Mission Control...</span>
</div>
</div>
</div>
)
// Check onboarding state
fetch('/api/onboarding')
.then(res => res.ok ? res.json() : null)
.then(data => {
const decision = getOnboardingSessionDecision({
isAdmin: data?.isAdmin === true,
serverShowOnboarding: data?.showOnboarding === true,
completed: data?.completed === true,
skipped: data?.skipped === true,
dismissedThisSession: readOnboardingDismissedThisSession(),
})
if (decision.shouldOpen) {
clearOnboardingDismissedThisSession()
if (decision.replayFromStart) {
markOnboardingReplayFromStart()
} else {
clearOnboardingReplayFromStart()
}
setShowOnboarding(true)
}
markStep('config')
})
.catch(() => { markStep('config') })
// Preload workspace data in parallel
Promise.allSettled([
fetch('/api/agents')
.then(r => r.ok ? r.json() : null)
.then((agentsData) => {
if (agentsData?.agents) setAgents(agentsData.agents)
})
.finally(() => { markStep('agents') }),
fetch('/api/sessions')
.then(r => r.ok ? r.json() : null)
.then((sessionsData) => {
if (sessionsData?.sessions) setSessions(sessionsData.sessions)
})
.finally(() => { markStep('sessions') }),
fetch('/api/projects')
.then(r => r.ok ? r.json() : null)
.then((projectsData) => {
if (projectsData?.projects) setProjects(projectsData.projects)
})
.finally(() => { markStep('projects') }),
fetch('/api/memory/graph?agent=all')
.then(r => r.ok ? r.json() : null)
.then((graphData) => {
if (graphData?.agents) setMemoryGraphAgents(graphData.agents)
})
.finally(() => { markStep('memory') }),
fetch('/api/skills')
.then(r => r.ok ? r.json() : null)
.then((skillsData) => {
if (skillsData?.skills) setSkillsData(skillsData.skills, skillsData.groups || [], skillsData.total || 0)
})
.finally(() => { markStep('skills') }),
]).catch(() => { /* panels will lazy-load as fallback */ })
// eslint-disable-next-line react-hooks/exhaustive-deps -- boot once on mount, not on every pathname change
}, [connect, router, setCurrentUser, setDashboardMode, setGatewayAvailable, setCapabilitiesChecked, setSubscription, setUpdateAvailable, setShowOnboarding, setAgents, setSessions, setProjects, setInterfaceMode, setMemoryGraphAgents, setSkillsData])
if (!isClient || !bootComplete) {
return <Loader variant="page" steps={isClient ? initSteps : undefined} />
}
return (
@@ -151,33 +353,49 @@ export default function Home() {
<a href="#main-content" className="sr-only focus:not-sr-only focus:absolute focus:z-50 focus:top-2 focus:left-2 focus:px-4 focus:py-2 focus:bg-primary focus:text-primary-foreground focus:rounded-md focus:text-sm focus:font-medium">
Skip to main content
</a>
{/* Left: Icon rail navigation (hidden on mobile, shown as bottom bar instead) */}
<NavRail />
{!showOnboarding && <NavRail />}
{/* Center: Header + Content */}
<div className="flex-1 flex flex-col min-w-0">
<HeaderBar />
<LocalModeBanner />
<UpdateBanner />
<PromoBanner />
<main id="main-content" className="flex-1 overflow-auto pb-16 md:pb-0" role="main">
<div aria-live="polite">
{!showOnboarding && (
<>
<HeaderBar />
<LocalModeBanner />
<UpdateBanner />
<OpenClawUpdateBanner />
<OpenClawDoctorBanner />
</>
)}
<main
id="main-content"
className={`flex-1 overflow-auto pb-16 md:pb-0 ${showOnboarding ? 'pointer-events-none select-none blur-[2px] opacity-30' : ''}`}
role="main"
aria-hidden={showOnboarding}
>
<div aria-live="polite" className="flex flex-col min-h-full">
<ErrorBoundary key={activeTab}>
<ContentRouter tab={activeTab} />
</ErrorBoundary>
</div>
<footer className="px-4 pb-4 pt-2">
<p className="text-2xs text-muted-foreground/50 text-center">
Built with care by <a href="https://x.com/nyk_builderz" target="_blank" rel="noopener noreferrer" className="text-muted-foreground/70 hover:text-primary transition-colors duration-200">nyk</a>.
</p>
</footer>
</main>
</div>
{/* Right: Live feed (hidden on mobile) */}
{liveFeedOpen && (
{!showOnboarding && liveFeedOpen && (
<div className="hidden lg:flex h-full">
<LiveFeed />
</div>
)}
{/* Floating button to reopen LiveFeed when closed */}
{!liveFeedOpen && (
{!showOnboarding && !liveFeedOpen && (
<button
onClick={toggleLiveFeed}
className="hidden lg:flex fixed right-0 top-1/2 -translate-y-1/2 z-30 w-6 h-12 items-center justify-center bg-card border border-r-0 border-border rounded-l-md text-muted-foreground hover:text-foreground hover:bg-secondary transition-all duration-200"
@@ -190,22 +408,70 @@ export default function Home() {
)}
{/* Chat panel overlay */}
<ChatPanel />
{!showOnboarding && <ChatPanel />}
{/* Global exec approval overlay (shown regardless of active panel) */}
{!showOnboarding && <ExecApprovalOverlay />}
{/* Global Project Manager Modal */}
{!showOnboarding && showProjectManagerModal && (
<ProjectManagerModal
onClose={() => setShowProjectManagerModal(false)}
onChanged={async () => { await fetchProjects() }}
/>
)}
<OnboardingWizard />
</div>
)
}
const ESSENTIAL_PANELS = new Set([
'overview', 'agents', 'tasks', 'chat', 'activity', 'logs', 'settings',
])
function ContentRouter({ tab }: { tab: string }) {
const { dashboardMode } = useMissionControl()
const { dashboardMode, interfaceMode, setInterfaceMode } = useMissionControl()
const navigateToPanel = useNavigateToPanel()
const isLocal = dashboardMode === 'local'
// Guard: show nudge for non-essential panels in essential mode
if (interfaceMode === 'essential' && !ESSENTIAL_PANELS.has(tab)) {
return (
<div className="flex flex-col items-center justify-center py-24 text-center gap-4">
<p className="text-sm text-muted-foreground">
<span className="font-medium text-foreground capitalize">{tab.replace(/-/g, ' ')}</span> is available in Full mode.
</p>
<div className="flex items-center gap-2">
<Button
variant="outline"
size="sm"
onClick={async () => {
setInterfaceMode('full')
try { await fetch('/api/settings', { method: 'PUT', headers: { 'Content-Type': 'application/json' }, body: JSON.stringify({ settings: { 'general.interface_mode': 'full' } }) }) } catch {}
}}
>
Switch to Full
</Button>
<Button
variant="ghost"
size="sm"
onClick={() => navigateToPanel('overview')}
>
Go to Overview
</Button>
</div>
</div>
)
}
switch (tab) {
case 'overview':
return (
<>
<Dashboard />
{!isLocal && (
<div className="mt-4 mx-4 mb-4 rounded-xl border border-border bg-card overflow-hidden">
<div className="mt-4 mx-4 mb-4 rounded-lg border border-border bg-card overflow-hidden">
<AgentCommsPanel />
</div>
)}
@@ -217,38 +483,31 @@ function ContentRouter({ tab }: { tab: string }) {
return (
<>
<OrchestrationBar />
{isLocal && <LocalAgentsDocPanel />}
<AgentSquadPanelPhase3 />
{!isLocal && (
<div className="mt-4 mx-4 mb-4 rounded-xl border border-border bg-card overflow-hidden">
<AgentCommsPanel />
</div>
)}
</>
)
case 'activity':
return <ActivityFeedPanel />
case 'notifications':
return <NotificationsPanel />
case 'standup':
return <StandupPanel />
case 'spawn':
return <AgentSpawnPanel />
case 'sessions':
return <SessionDetailsPanel />
return <ChatPagePanel />
case 'logs':
return <LogViewerPanel />
case 'cron':
return <CronManagementPanel />
case 'memory':
return <MemoryBrowserPanel />
case 'cost-tracker':
case 'tokens':
return <TokenDashboardPanel />
case 'agent-costs':
return <AgentCostPanel />
return <CostTrackerPanel />
case 'users':
return <UserManagementPanel />
case 'history':
return <AgentHistoryPanel />
case 'activity':
return <ActivityFeedPanel />
case 'audit':
return <AuditTrailPanel />
case 'webhooks':
@@ -256,24 +515,53 @@ function ContentRouter({ tab }: { tab: string }) {
case 'alerts':
return <AlertRulesPanel />
case 'gateways':
if (isLocal) return <LocalModeUnavailable panel={tab} />
return <MultiGatewayPanel />
case 'gateway-config':
if (isLocal) return <LocalModeUnavailable panel={tab} />
return <GatewayConfigPanel />
case 'integrations':
return <IntegrationsPanel />
case 'settings':
return <SettingsPanel />
case 'super-admin':
return <SuperAdminPanel />
case 'github':
return <GitHubSyncPanel />
case 'office':
return <OfficePanel />
case 'documents':
return <DocumentsPanel />
case 'super-admin':
return <SuperAdminPanel />
case 'workspaces':
return <SuperAdminPanel />
default:
return <Dashboard />
case 'skills':
return <SkillsPanel />
case 'channels':
if (isLocal) return <LocalModeUnavailable panel={tab} />
return <ChannelsPanel />
case 'nodes':
if (isLocal) return <LocalModeUnavailable panel={tab} />
return <NodesPanel />
case 'security':
return <SecurityAuditPanel />
case 'debug':
return <DebugPanel />
case 'exec-approvals':
if (isLocal) return <LocalModeUnavailable panel={tab} />
return <ExecApprovalPanel />
case 'chat':
return <ChatPagePanel />
default: {
return renderPluginPanel(tab)
}
}
}
function LocalModeUnavailable({ panel }: { panel: string }) {
return (
<div className="flex flex-col items-center justify-center py-24 text-center">
<p className="text-sm text-muted-foreground">
<span className="font-medium text-foreground">{panel}</span> requires an OpenClaw gateway connection.
</p>
<p className="text-xs text-muted-foreground mt-1">
Configure a gateway to enable this panel.
</p>
</div>
)
}


@@ -49,8 +49,14 @@ async function handleActivitiesRequest(request: NextRequest, workspaceId: number
const params: any[] = [workspaceId];
if (type) {
query += ' AND type = ?';
params.push(type);
const types = type.split(',').map(t => t.trim()).filter(Boolean);
if (types.length === 1) {
query += ' AND type = ?';
params.push(types[0]);
} else if (types.length > 1) {
query += ` AND type IN (${types.map(() => '?').join(',')})`;
params.push(...types);
}
}
if (actor) {
@@ -132,8 +138,14 @@ async function handleActivitiesRequest(request: NextRequest, workspaceId: number
const countParams: any[] = [workspaceId];
if (type) {
countQuery += ' AND type = ?';
countParams.push(type);
const types = type.split(',').map(t => t.trim()).filter(Boolean);
if (types.length === 1) {
countQuery += ' AND type = ?';
countParams.push(types[0]);
} else if (types.length > 1) {
countQuery += ` AND type IN (${types.map(() => '?').join(',')})`;
countParams.push(...types);
}
}
if (actor) {


@@ -0,0 +1,118 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { getAdapter, listAdapters } from '@/lib/adapters'
import { agentHeartbeatLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
/**
* GET /api/adapters - List available framework adapters.
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
return NextResponse.json({ adapters: listAdapters() })
}
/**
* POST /api/adapters - Framework-agnostic agent action dispatcher.
*
* Body: { framework, action, payload }
*
* Actions:
* register    - Register an agent via its framework adapter
* heartbeat   - Send a heartbeat/status update
* report      - Report task progress
* assignments - Get pending task assignments
* disconnect  - Disconnect an agent
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const rateLimited = agentHeartbeatLimiter(request)
if (rateLimited) return rateLimited
let body: any
try {
body = await request.json()
} catch {
return NextResponse.json({ error: 'Invalid JSON body' }, { status: 400 })
}
const framework = typeof body?.framework === 'string' ? body.framework.trim() : ''
const action = typeof body?.action === 'string' ? body.action.trim() : ''
const payload = body?.payload ?? {}
if (!framework || !action) {
return NextResponse.json({ error: 'framework and action are required' }, { status: 400 })
}
let adapter
try {
adapter = getAdapter(framework)
} catch {
return NextResponse.json({
error: `Unknown framework: ${framework}. Available: ${listAdapters().join(', ')}`,
}, { status: 400 })
}
try {
switch (action) {
case 'register': {
const { agentId, name, metadata } = payload
if (!agentId || !name) {
return NextResponse.json({ error: 'payload.agentId and payload.name required' }, { status: 400 })
}
await adapter.register({ agentId, name, framework, metadata })
return NextResponse.json({ ok: true, action: 'register', framework })
}
case 'heartbeat': {
const { agentId, status, metrics } = payload
if (!agentId) {
return NextResponse.json({ error: 'payload.agentId required' }, { status: 400 })
}
await adapter.heartbeat({ agentId, status: status || 'online', metrics })
return NextResponse.json({ ok: true, action: 'heartbeat', framework })
}
case 'report': {
const { taskId, agentId, progress, status: taskStatus, output } = payload
if (!taskId || !agentId) {
return NextResponse.json({ error: 'payload.taskId and payload.agentId required' }, { status: 400 })
}
await adapter.reportTask({ taskId, agentId, progress: progress ?? 0, status: taskStatus || 'in_progress', output })
return NextResponse.json({ ok: true, action: 'report', framework })
}
case 'assignments': {
const { agentId } = payload
if (!agentId) {
return NextResponse.json({ error: 'payload.agentId required' }, { status: 400 })
}
const assignments = await adapter.getAssignments(agentId)
return NextResponse.json({ assignments, framework })
}
case 'disconnect': {
const { agentId } = payload
if (!agentId) {
return NextResponse.json({ error: 'payload.agentId required' }, { status: 400 })
}
await adapter.disconnect(agentId)
return NextResponse.json({ ok: true, action: 'disconnect', framework })
}
default:
return NextResponse.json({
error: `Unknown action: ${action}. Use: register, heartbeat, report, assignments, disconnect`,
}, { status: 400 })
}
} catch (error) {
logger.error({ err: error, framework, action }, 'POST /api/adapters error')
return NextResponse.json({ error: 'Adapter action failed' }, { status: 500 })
}
}
export const dynamic = 'force-dynamic'
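The dispatcher's framework/action validation can be isolated as a pure function. `parseAdapterRequest` below is an illustrative helper (assumed name), mirroring the trimming and payload defaulting applied above before an adapter is resolved:

```typescript
// Sketch of the body validation the dispatcher performs: trim strings,
// reject missing framework/action, default payload to an empty object.
type AdapterRequest = { framework?: unknown; action?: unknown; payload?: unknown };

function parseAdapterRequest(
  body: AdapterRequest,
): { framework: string; action: string; payload: unknown } | null {
  const framework = typeof body?.framework === 'string' ? body.framework.trim() : '';
  const action = typeof body?.action === 'string' ? body.action.trim() : '';
  if (!framework || !action) return null; // maps to the 400 response above
  return { framework, action, payload: body?.payload ?? {} };
}
```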

View File

@ -0,0 +1,153 @@
import { NextRequest, NextResponse } from 'next/server'
import { getDatabase, db_helpers } from '@/lib/db'
import { requireRole } from '@/lib/auth'
import { config } from '@/lib/config'
import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs'
import { dirname, isAbsolute, resolve } from 'node:path'
import { resolveWithin } from '@/lib/paths'
import { getAgentWorkspaceCandidates, readAgentWorkspaceFile } from '@/lib/agent-workspace'
import { logger } from '@/lib/logger'
const ALLOWED_FILES = new Set([
'agent.md',
'identity.md',
'soul.md',
'WORKING.md',
'MEMORY.md',
'TOOLS.md',
'AGENTS.md',
'MISSION.md',
'USER.md',
])
const FILE_ALIASES: Record<string, string[]> = {
'agent.md': ['agent.md', 'AGENT.md', 'MISSION.md', 'USER.md'],
'identity.md': ['identity.md', 'IDENTITY.md'],
'soul.md': ['soul.md', 'SOUL.md'],
'WORKING.md': ['WORKING.md', 'working.md'],
'MEMORY.md': ['MEMORY.md', 'memory.md'],
'TOOLS.md': ['TOOLS.md', 'tools.md'],
'AGENTS.md': ['AGENTS.md', 'agents.md'],
'MISSION.md': ['MISSION.md', 'mission.md'],
'USER.md': ['USER.md', 'user.md'],
}
function resolveAgentWorkspacePath(workspace: string): string {
if (isAbsolute(workspace)) return resolve(workspace)
if (!config.openclawStateDir) throw new Error('OPENCLAW_STATE_DIR not configured')
return resolveWithin(config.openclawStateDir, workspace)
}
function getAgentByIdOrName(db: ReturnType<typeof getDatabase>, id: string, workspaceId: number): any | undefined {
if (isNaN(Number(id))) {
return db.prepare('SELECT * FROM agents WHERE name = ? AND workspace_id = ?').get(id, workspaceId)
}
return db.prepare('SELECT * FROM agents WHERE id = ? AND workspace_id = ?').get(Number(id), workspaceId)
}
export async function GET(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const { id } = await params
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const agent = getAgentByIdOrName(db, id, workspaceId)
if (!agent) return NextResponse.json({ error: 'Agent not found' }, { status: 404 })
const agentConfig = agent.config ? JSON.parse(agent.config) : {}
const candidates = getAgentWorkspaceCandidates(agentConfig, agent.name)
if (candidates.length === 0) {
return NextResponse.json({ error: 'Agent workspace is not configured' }, { status: 400 })
}
const safeWorkspace = candidates[0]
const requested = (new URL(request.url).searchParams.get('file') || '').trim()
const files = requested
? [requested]
: ['agent.md', 'identity.md', 'soul.md', 'WORKING.md', 'MEMORY.md', 'TOOLS.md', 'AGENTS.md', 'MISSION.md', 'USER.md']
const payload: Record<string, { exists: boolean; content: string }> = {}
for (const file of files) {
if (!ALLOWED_FILES.has(file)) {
return NextResponse.json({ error: `Unsupported file: ${file}` }, { status: 400 })
}
const aliases = FILE_ALIASES[file] || [file]
const match = readAgentWorkspaceFile(candidates, aliases)
payload[file] = { exists: match.exists, content: match.content }
}
return NextResponse.json({
agent: { id: agent.id, name: agent.name },
workspace: safeWorkspace,
files: payload,
})
} catch (error) {
logger.error({ err: error }, 'GET /api/agents/[id]/files error')
return NextResponse.json({ error: 'Failed to load workspace files' }, { status: 500 })
}
}
export async function PUT(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const { id } = await params
const body = await request.json()
const file = String(body?.file || '').trim()
const content = String(body?.content || '')
const MAX_WORKSPACE_FILE_SIZE = 1024 * 1024 // 1 MB
if (content.length > MAX_WORKSPACE_FILE_SIZE) {
return NextResponse.json({ error: `File content too large (max ${MAX_WORKSPACE_FILE_SIZE} bytes)` }, { status: 413 })
}
if (!ALLOWED_FILES.has(file)) {
return NextResponse.json({ error: `Unsupported file: ${file}` }, { status: 400 })
}
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const agent = getAgentByIdOrName(db, id, workspaceId)
if (!agent) return NextResponse.json({ error: 'Agent not found' }, { status: 404 })
const agentConfig = agent.config ? JSON.parse(agent.config) : {}
const candidates = getAgentWorkspaceCandidates(agentConfig, agent.name)
const safeWorkspace = candidates[0]
if (!safeWorkspace) {
return NextResponse.json({ error: 'Agent workspace is not configured' }, { status: 400 })
}
const safePath = resolveWithin(safeWorkspace, file)
mkdirSync(dirname(safePath), { recursive: true })
writeFileSync(safePath, content, 'utf-8')
if (file === 'soul.md') {
db.prepare('UPDATE agents SET soul_content = ?, updated_at = unixepoch() WHERE id = ? AND workspace_id = ?')
.run(content, agent.id, workspaceId)
}
if (file === 'WORKING.md') {
db.prepare('UPDATE agents SET working_memory = ?, updated_at = unixepoch() WHERE id = ? AND workspace_id = ?')
.run(content, agent.id, workspaceId)
}
db_helpers.logActivity(
'agent_workspace_file_updated',
'agent',
agent.id,
auth.user.username,
`${file} updated for ${agent.name}`,
{ file, size: content.length },
workspaceId
)
return NextResponse.json({ success: true, file, size: content.length })
} catch (error) {
logger.error({ err: error }, 'PUT /api/agents/[id]/files error')
return NextResponse.json({ error: 'Failed to save workspace file' }, { status: 500 })
}
}
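The `FILE_ALIASES` map above lets a canonical name like `soul.md` match casing variants on disk, first hit wins. An in-memory sketch of that fallback (the real `readAgentWorkspaceFile` walks candidate directories on disk; `lookupWithAliases` is a hypothetical stand-in):

```typescript
// In-memory stand-in for the alias fallback: try each alias in order,
// return the first match, or an empty "missing" result.
function lookupWithAliases(
  files: Record<string, string>,
  aliases: string[],
): { exists: boolean; content: string } {
  for (const name of aliases) {
    if (name in files) return { exists: true, content: files[name] };
  }
  return { exists: false, content: '' };
}
```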

View File

@ -1,6 +1,7 @@
import { NextRequest, NextResponse } from 'next/server';
import { getDatabase, db_helpers } from '@/lib/db';
import { requireRole } from '@/lib/auth';
import { agentHeartbeatLimiter } from '@/lib/rate-limit';
import { logger } from '@/lib/logger';
/**
@ -189,6 +190,9 @@ export async function POST(
const auth = requireRole(request, 'operator');
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status });
const rateLimited = agentHeartbeatLimiter(request);
if (rateLimited) return rateLimited;
let body: any = {};
try {
body = await request.json();

View File

@ -2,6 +2,17 @@ import { NextRequest, NextResponse } from 'next/server';
import { getDatabase, db_helpers } from '@/lib/db';
import { requireRole } from '@/lib/auth';
import { logger } from '@/lib/logger';
import { config } from '@/lib/config';
import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'node:fs';
import { dirname, isAbsolute, resolve } from 'node:path';
import { resolveWithin } from '@/lib/paths';
import { getAgentWorkspaceCandidates, readAgentWorkspaceFile } from '@/lib/agent-workspace';
function resolveAgentWorkspacePath(workspace: string): string {
if (isAbsolute(workspace)) return resolve(workspace)
if (!config.openclawStateDir) throw new Error('OPENCLAW_STATE_DIR not configured')
return resolveWithin(config.openclawStateDir, workspace)
}
/**
* GET /api/agents/[id]/memory - Get agent's working memory
@ -43,11 +54,28 @@ export async function GET(
db.exec("ALTER TABLE agents ADD COLUMN working_memory TEXT DEFAULT ''");
}
// Prefer workspace WORKING.md, fall back to DB working_memory
let workingMemory = '';
let source: 'workspace' | 'database' | 'none' = 'none';
try {
const agentConfig = agent.config ? JSON.parse(agent.config) : {};
const candidates = getAgentWorkspaceCandidates(agentConfig, agent.name);
const match = readAgentWorkspaceFile(candidates, ['WORKING.md', 'working.md', 'MEMORY.md', 'memory.md']);
if (match.exists) {
workingMemory = match.content;
source = 'workspace';
}
} catch (err) {
logger.warn({ err, agent: agent.name }, 'Failed to read WORKING.md from workspace');
}
// Get working memory content
const memoryStmt = db.prepare(`SELECT working_memory FROM agents WHERE ${isNaN(Number(agentId)) ? 'name' : 'id'} = ? AND workspace_id = ?`);
const result = memoryStmt.get(agentId, workspaceId) as any;
if (!workingMemory) {
workingMemory = result?.working_memory || '';
source = workingMemory ? 'database' : 'none';
}
return NextResponse.json({
agent: {
@ -56,6 +84,7 @@ export async function GET(
role: agent.role
},
working_memory: workingMemory,
source,
updated_at: agent.updated_at,
size: workingMemory.length
});
@ -118,6 +147,22 @@ export async function PUT(
}
const now = Math.floor(Date.now() / 1000);
// Best effort: sync workspace WORKING.md if agent workspace is configured
let savedToWorkspace = false;
try {
const agentConfig = agent.config ? JSON.parse(agent.config) : {};
const candidates = getAgentWorkspaceCandidates(agentConfig, agent.name);
const safeWorkspace = candidates[0];
if (safeWorkspace) {
const safeWorkingPath = resolveWithin(safeWorkspace, 'WORKING.md');
mkdirSync(dirname(safeWorkingPath), { recursive: true });
writeFileSync(safeWorkingPath, newContent, 'utf-8');
savedToWorkspace = true;
}
} catch (err) {
logger.warn({ err, agent: agent.name }, 'Failed to write WORKING.md to workspace');
}
// Update working memory
const updateStmt = db.prepare(`
@ -135,10 +180,11 @@ export async function PUT(
agent.id,
agent.name,
`Working memory ${append ? 'appended' : 'updated'} for agent ${agent.name}`,
{
content_length: newContent.length,
append_mode: append || false,
timestamp: now,
saved_to_workspace: savedToWorkspace
},
workspaceId
);
@ -147,6 +193,7 @@ export async function PUT(
success: true,
message: `Working memory ${append ? 'appended' : 'updated'} for ${agent.name}`,
working_memory: newContent,
saved_to_workspace: savedToWorkspace,
updated_at: now,
size: newContent.length
});
@ -185,6 +232,20 @@ export async function DELETE(
}
const now = Math.floor(Date.now() / 1000);
// Best effort: clear workspace WORKING.md if agent workspace is configured
try {
const agentConfig = agent.config ? JSON.parse(agent.config) : {};
const candidates = getAgentWorkspaceCandidates(agentConfig, agent.name);
const safeWorkspace = candidates[0];
if (safeWorkspace) {
const safeWorkingPath = resolveWithin(safeWorkspace, 'WORKING.md');
mkdirSync(dirname(safeWorkingPath), { recursive: true });
writeFileSync(safeWorkingPath, '', 'utf-8');
}
} catch (err) {
logger.warn({ err, agent: agent.name }, 'Failed to clear WORKING.md in workspace');
}
// Clear working memory
const updateStmt = db.prepare(`

View File

@ -1,9 +1,10 @@
import { NextRequest, NextResponse } from 'next/server'
import { getDatabase, db_helpers, logAuditEvent } from '@/lib/db'
import { requireRole } from '@/lib/auth'
import { writeAgentToConfig, enrichAgentConfigFromWorkspace, removeAgentFromConfig } from '@/lib/agent-sync'
import { eventBus } from '@/lib/event-bus'
import { logger } from '@/lib/logger'
import { runOpenClaw } from '@/lib/command'
/**
* GET /api/agents/[id] - Get a single agent by ID or name
@ -102,20 +103,9 @@ export async function PUT(
return writeBack
}
// Unified save: DB first (transactional, easy to revert), then gateway file.
// If gateway write fails after DB succeeds, revert DB to keep consistency.
try {
// Build update
const fields: string[] = ['updated_at = ?']
const values: any[] = [now]
@ -132,21 +122,33 @@ export async function PUT(
values.push(agent.id, workspaceId)
db.prepare(`UPDATE agents SET ${fields.join(', ')} WHERE id = ? AND workspace_id = ?`).run(...values)
} catch (err: any) {
return NextResponse.json({ error: `Save failed: ${err.message}` }, { status: 500 })
}
if (shouldWriteToGateway) {
try {
await writeAgentToConfig(getWriteBackPayload(gateway_config))
} catch (err: any) {
// Gateway write failed — revert DB to previous state
try {
const revertFields: string[] = ['updated_at = ?']
const revertValues: any[] = [agent.updated_at]
revertFields.push('role = ?')
revertValues.push(agent.role)
revertFields.push('config = ?')
revertValues.push(agent.config || '{}')
revertValues.push(agent.id, workspaceId)
db.prepare(`UPDATE agents SET ${revertFields.join(', ')} WHERE id = ? AND workspace_id = ?`).run(...revertValues)
} catch (revertErr: any) {
logger.error({ err: revertErr, agent: agent.name }, 'Failed to revert DB after gateway write failure')
}
return NextResponse.json(
{ error: `Save failed: unable to update gateway config: ${err.message}` },
{ status: 502 }
)
}
}
if (shouldWriteToGateway) {
const ipAddress = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown'
logAuditEvent({
@ -205,6 +207,13 @@ export async function DELETE(
const db = getDatabase()
const { id } = await params
const workspaceId = auth.user.workspace_id ?? 1;
let removeWorkspace = false
try {
const body = await request.json()
removeWorkspace = Boolean(body?.remove_workspace)
} catch {
// Optional body
}
let agent
if (isNaN(Number(id))) {
@ -217,6 +226,38 @@ export async function DELETE(
return NextResponse.json({ error: 'Agent not found' }, { status: 404 })
}
if (removeWorkspace) {
const agentConfig = agent.config ? JSON.parse(agent.config) : {}
const openclawId =
String(agentConfig?.openclawId || agent.name || '')
.toLowerCase()
.replace(/[^a-z0-9._-]+/g, '-')
.replace(/^-+|-+$/g, '') || agent.name
try {
await runOpenClaw(['agents', 'delete', openclawId, '--force'], { timeoutMs: 30000 })
} catch (err: any) {
logger.error({ err, openclawId, agent: agent.name }, 'Failed to remove OpenClaw agent/workspace')
return NextResponse.json(
{ error: `Failed to remove OpenClaw workspace for ${agent.name}: ${err?.message || 'unknown error'}` },
{ status: 502 }
)
}
}
let configCleanupWarning: string | null = null
try {
const agentConfig = agent.config ? JSON.parse(agent.config) : {}
const openclawId =
String(agentConfig?.openclawId || agent.name || '')
.toLowerCase()
.replace(/[^a-z0-9._-]+/g, '-')
.replace(/^-+|-+$/g, '') || agent.name
await removeAgentFromConfig({ id: openclawId, name: agent.name })
} catch (err: any) {
configCleanupWarning = `OpenClaw config cleanup skipped for ${agent.name}: ${err?.message || 'unknown error'}`
logger.warn({ err, agent: agent.name }, 'Failed to remove OpenClaw agent config entry')
}
db.prepare('DELETE FROM agents WHERE id = ? AND workspace_id = ?').run(agent.id, workspaceId)
db_helpers.logActivity(
@ -225,13 +266,18 @@ export async function DELETE(
agent.id,
auth.user.username,
`Deleted agent: ${agent.name}`,
{ name: agent.name, role: agent.role, remove_workspace: removeWorkspace },
workspaceId
)
eventBus.broadcast('agent.deleted', { id: agent.id, name: agent.name })
return NextResponse.json({
success: true,
deleted: agent.name,
remove_workspace: removeWorkspace,
...(configCleanupWarning ? { warning: configCleanupWarning } : {}),
})
} catch (error) {
logger.error({ err: error }, 'DELETE /api/agents/[id] error')
return NextResponse.json({ error: 'Failed to delete agent' }, { status: 500 })
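The `openclawId` normalization in the DELETE handler above lowercases the name, collapses runs of disallowed characters to a single dash, and trims edge dashes. As a standalone sketch (`toOpenClawId` is a hypothetical name for the inline expression):

```typescript
// Sketch of the OpenClaw id slug used in DELETE above:
// lowercase, collapse anything outside [a-z0-9._-] to '-', trim edge dashes.
function toOpenClawId(name: string): string {
  return name
    .toLowerCase()
    .replace(/[^a-z0-9._-]+/g, '-')
    .replace(/^-+|-+$/g, '');
}
```

Note the route falls back to the raw agent name when the slug comes out empty.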

View File

@ -4,6 +4,7 @@ import { readFileSync, existsSync, readdirSync, writeFileSync, mkdirSync } from
import { join, dirname, isAbsolute, resolve } from 'path';
import { config } from '@/lib/config';
import { resolveWithin } from '@/lib/paths';
import { getAgentWorkspaceCandidates, readAgentWorkspaceFile } from '@/lib/agent-workspace';
import { requireRole } from '@/lib/auth';
import { logger } from '@/lib/logger';
@ -47,13 +48,11 @@ export async function GET(
try {
const agentConfig = agent.config ? JSON.parse(agent.config) : {}
const candidates = getAgentWorkspaceCandidates(agentConfig, agent.name)
const match = readAgentWorkspaceFile(candidates, ['soul.md', 'SOUL.md'])
if (match.exists) {
soulContent = match.content
source = 'workspace'
}
} catch (err) {
logger.warn({ err, agent: agent.name }, 'Failed to read soul.md from workspace')
@ -163,8 +162,9 @@ export async function PUT(
let savedToWorkspace = false
try {
const agentConfig = agent.config ? JSON.parse(agent.config) : {}
const candidates = getAgentWorkspaceCandidates(agentConfig, agent.name)
const safeWorkspace = candidates[0]
if (safeWorkspace) {
const safeSoulPath = resolveWithin(safeWorkspace, 'soul.md')
mkdirSync(dirname(safeSoulPath), { recursive: true })
writeFileSync(safeSoulPath, newSoulContent || '', 'utf-8')

View File

@ -21,31 +21,46 @@ export async function GET(request: NextRequest) {
const since = searchParams.get("since")
const agent = searchParams.get("agent")
// Session-thread comms feed used by coordinator + runtime sessions
const commsPredicate = `
(
conversation_id LIKE 'a2a:%'
OR conversation_id LIKE 'coord:%'
OR conversation_id LIKE 'session:%'
OR conversation_id LIKE 'agent_%'
OR (json_valid(metadata) AND json_extract(metadata, '$.channel') = 'coordinator-inbox')
)
`
const humanNames = ["human", "system", "operator"]
const humanPlaceholders = humanNames.map(() => "?").join(",")
// 1. Get timeline messages (page latest rows but render chronologically)
let messagesWhere = `
FROM messages
WHERE workspace_id = ?
AND ${commsPredicate}
`
const messagesParams: any[] = [workspaceId]
if (since) {
messagesWhere += " AND created_at > ?"
messagesParams.push(parseInt(since, 10))
}
if (agent) {
messagesWhere += " AND (from_agent = ? OR to_agent = ?)"
messagesParams.push(agent, agent)
}
const messagesQuery = `
SELECT * FROM (
SELECT *
${messagesWhere}
ORDER BY created_at DESC, id DESC
LIMIT ? OFFSET ?
) recent
ORDER BY created_at ASC, id ASC
`
messagesParams.push(limit, offset)
const messages = db.prepare(messagesQuery).all(...messagesParams) as Message[]
@ -58,14 +73,15 @@ export async function GET(request: NextRequest) {
MAX(created_at) as last_message_at
FROM messages
WHERE workspace_id = ?
AND ${commsPredicate}
AND to_agent IS NOT NULL
AND lower(from_agent) NOT IN (${humanPlaceholders})
AND lower(to_agent) NOT IN (${humanPlaceholders})
`
const graphParams: any[] = [workspaceId, ...humanNames, ...humanNames]
if (since) {
graphQuery += " AND created_at > ?"
graphParams.push(parseInt(since, 10))
}
graphQuery += " GROUP BY from_agent, to_agent ORDER BY message_count DESC"
@ -75,15 +91,19 @@ export async function GET(request: NextRequest) {
const statsQuery = `
SELECT agent, SUM(sent) as sent, SUM(received) as received FROM (
SELECT from_agent as agent, COUNT(*) as sent, 0 as received
FROM messages WHERE workspace_id = ?
AND ${commsPredicate}
AND to_agent IS NOT NULL
AND lower(from_agent) NOT IN (${humanPlaceholders})
AND lower(to_agent) NOT IN (${humanPlaceholders})
GROUP BY from_agent
UNION ALL
SELECT to_agent as agent, 0 as sent, COUNT(*) as received
FROM messages WHERE workspace_id = ?
AND ${commsPredicate}
AND to_agent IS NOT NULL
AND lower(from_agent) NOT IN (${humanPlaceholders})
AND lower(to_agent) NOT IN (${humanPlaceholders})
GROUP BY to_agent
) GROUP BY agent ORDER BY (sent + received) DESC
`
@ -94,14 +114,12 @@ export async function GET(request: NextRequest) {
let countQuery = `
SELECT COUNT(*) as total FROM messages
WHERE workspace_id = ?
AND ${commsPredicate}
`
const countParams: any[] = [workspaceId]
if (since) {
countQuery += " AND created_at > ?"
countParams.push(parseInt(since, 10))
}
if (agent) {
countQuery += " AND (from_agent = ? OR to_agent = ?)"
@ -112,15 +130,13 @@ export async function GET(request: NextRequest) {
let seededCountQuery = `
SELECT COUNT(*) as seeded FROM messages
WHERE workspace_id = ?
AND ${commsPredicate}
AND conversation_id LIKE ?
`
const seededParams: any[] = [workspaceId, "conv-multi-%"]
if (since) {
seededCountQuery += " AND created_at > ?"
seededParams.push(parseInt(since, 10))
}
if (agent) {
seededCountQuery += " AND (from_agent = ? OR to_agent = ?)"
@ -142,7 +158,6 @@ export async function GET(request: NextRequest) {
try {
parsedMetadata = JSON.parse(msg.metadata)
} catch {
parsedMetadata = null
}
}
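The nested query above pages the newest rows (`LIMIT ? OFFSET ?` over a descending sort) but re-sorts the page ascending so the feed renders chronologically without visual jumps. An in-memory equivalent of that two-pass ordering (hypothetical `pageChronological` helper):

```typescript
// Take the latest `limit` rows after `offset`, then re-sort ascending
// so the newest page still reads oldest-to-newest, as in the SQL above.
type Row = { id: number; created_at: number };

function pageChronological(rows: Row[], limit: number, offset: number): Row[] {
  const recent = [...rows]
    .sort((a, b) => b.created_at - a.created_at || b.id - a.id)
    .slice(offset, offset + limit);
  return recent.sort((a, b) => a.created_at - b.created_at || a.id - b.id);
}
```

The `id` tiebreaker keeps ordering deterministic when rows share a timestamp.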

View File

@ -0,0 +1,171 @@
import { NextRequest, NextResponse } from 'next/server'
import { getDatabase } from '@/lib/db'
import { requireRole } from '@/lib/auth'
import { readLimiter, mutationLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
import {
runOutputEvals,
evalReasoningCoherence,
evalToolReliability,
runDriftCheck,
getDriftTimeline,
type EvalResult,
} from '@/lib/agent-evals'
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const rateCheck = readLimiter(request)
if (rateCheck) return rateCheck
try {
const { searchParams } = new URL(request.url)
const agent = searchParams.get('agent')
const action = searchParams.get('action')
const workspaceId = auth.user.workspace_id ?? 1
if (!agent) {
return NextResponse.json({ error: 'Missing required parameter: agent' }, { status: 400 })
}
// History mode
if (action === 'history') {
const weeks = parseInt(searchParams.get('weeks') || '4', 10)
const db = getDatabase()
const history = db.prepare(`
SELECT eval_layer, score, passed, detail, created_at
FROM eval_runs
WHERE agent_name = ? AND workspace_id = ?
ORDER BY created_at DESC
LIMIT ?
`).all(agent, workspaceId, weeks * 7) as any[]
const driftTimeline = getDriftTimeline(agent, weeks, workspaceId)
return NextResponse.json({
agent,
history,
driftTimeline,
})
}
// Default: latest eval results per layer
const db = getDatabase()
const latestByLayer = db.prepare(`
SELECT e.eval_layer, e.score, e.passed, e.detail, e.created_at
FROM eval_runs e
INNER JOIN (
SELECT eval_layer, MAX(created_at) as max_created
FROM eval_runs
WHERE agent_name = ? AND workspace_id = ?
GROUP BY eval_layer
) latest ON e.eval_layer = latest.eval_layer AND e.created_at = latest.max_created
WHERE e.agent_name = ? AND e.workspace_id = ?
`).all(agent, workspaceId, agent, workspaceId) as any[]
const driftResults = runDriftCheck(agent, workspaceId)
const hasDrift = driftResults.some(d => d.drifted)
return NextResponse.json({
agent,
layers: latestByLayer,
drift: {
hasDrift,
metrics: driftResults,
},
})
} catch (error) {
logger.error({ err: error }, 'GET /api/agents/evals error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
export async function POST(request: NextRequest) {
try {
const body = await request.json()
const { action } = body
if (action === 'run') {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const rateCheck = mutationLimiter(request)
if (rateCheck) return rateCheck
const { agent, layer } = body
if (!agent) return NextResponse.json({ error: 'Missing: agent' }, { status: 400 })
const workspaceId = auth.user.workspace_id ?? 1
const db = getDatabase()
const results: EvalResult[] = []
const layers = layer ? [layer] : ['output', 'trace', 'component', 'drift']
for (const l of layers) {
let evalResults: EvalResult[] = []
switch (l) {
case 'output':
evalResults = runOutputEvals(agent, 168, workspaceId)
break
case 'trace':
evalResults = [evalReasoningCoherence(agent, 24, workspaceId)]
break
case 'component':
evalResults = [evalToolReliability(agent, 24, workspaceId)]
break
case 'drift': {
const driftResults = runDriftCheck(agent, workspaceId)
const driftScore = driftResults.filter(d => !d.drifted).length / Math.max(driftResults.length, 1)
evalResults = [{
layer: 'drift',
score: Math.round(driftScore * 100) / 100,
passed: !driftResults.some(d => d.drifted),
detail: driftResults.map(d => `${d.metric}: ${d.drifted ? 'DRIFTED' : 'stable'} (delta=${d.delta})`).join('; '),
}]
break
}
}
for (const r of evalResults) {
db.prepare(`
INSERT INTO eval_runs (agent_name, eval_layer, score, passed, detail, workspace_id)
VALUES (?, ?, ?, ?, ?, ?)
`).run(agent, r.layer, r.score, r.passed ? 1 : 0, r.detail, workspaceId)
results.push(r)
}
}
return NextResponse.json({ agent, results })
}
if (action === 'golden-set') {
const auth = requireRole(request, 'admin')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const rateCheck = mutationLimiter(request)
if (rateCheck) return rateCheck
const { name, entries } = body
if (!name) return NextResponse.json({ error: 'Missing: name' }, { status: 400 })
const workspaceId = auth.user.workspace_id ?? 1
const db = getDatabase()
db.prepare(`
INSERT INTO eval_golden_sets (name, entries, created_by, workspace_id)
VALUES (?, ?, ?, ?)
ON CONFLICT(name, workspace_id)
DO UPDATE SET entries = excluded.entries, updated_at = unixepoch()
`).run(name, JSON.stringify(entries || []), auth.user.username, workspaceId)
return NextResponse.json({ success: true, name })
}
return NextResponse.json({ error: 'Unknown action' }, { status: 400 })
} catch (error) {
logger.error({ err: error }, 'POST /api/agents/evals error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
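The drift layer above scores the share of stable metrics, rounded to two decimals, and passes only when nothing drifted. That scoring in isolation (hypothetical `driftScore` name for the inline expression):

```typescript
// Share of non-drifted metrics, rounded to 2 decimal places, as above.
// Math.max(..., 1) avoids dividing by zero when no metrics were checked.
type Drift = { metric: string; drifted: boolean; delta: number };

function driftScore(results: Drift[]): number {
  const stable = results.filter(d => !d.drifted).length;
  return Math.round((stable / Math.max(results.length, 1)) * 100) / 100;
}
```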

View File

@ -5,6 +5,9 @@ import { requireRole } from '@/lib/auth'
import { validateBody, createMessageSchema } from '@/lib/validation'
import { mutationLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
import { scanForInjection } from '@/lib/injection-guard'
import { scanForSecrets } from '@/lib/secret-scanner'
import { logSecurityEvent } from '@/lib/security-events'
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'operator')
@ -19,6 +22,24 @@ export async function POST(request: NextRequest) {
const { to, message } = result.data
const from = auth.user.display_name || auth.user.username || 'system'
// Scan message for injection — this gets forwarded directly to an agent
const injectionReport = scanForInjection(message, { context: 'prompt' })
if (!injectionReport.safe) {
const criticals = injectionReport.matches.filter(m => m.severity === 'critical')
if (criticals.length > 0) {
logger.warn({ to, rules: criticals.map(m => m.rule) }, 'Blocked agent message: injection detected')
return NextResponse.json(
{ error: 'Message blocked: potentially unsafe content detected', injection: criticals.map(m => ({ rule: m.rule, description: m.description })) },
{ status: 422 }
)
}
}
const secretHits = scanForSecrets(message)
if (secretHits.length > 0) {
try { logSecurityEvent({ event_type: 'secret_exposure', severity: 'critical', source: 'agent-message', agent_name: from, detail: JSON.stringify({ count: secretHits.length, types: secretHits.map(s => s.type) }), workspace_id: auth.user.workspace_id ?? 1, tenant_id: 1 }) } catch {}
}
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1;
const agent = db

View File

@ -0,0 +1,102 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { readLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
import {
analyzeTokenEfficiency,
analyzeToolPatterns,
getFleetBenchmarks,
generateRecommendations,
} from '@/lib/agent-optimizer'
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const rateCheck = readLimiter(request)
if (rateCheck) return rateCheck
try {
const { searchParams } = new URL(request.url)
const agent = searchParams.get('agent')
const hours = parseInt(searchParams.get('hours') || '24', 10)
const workspaceId = auth.user.workspace_id ?? 1
if (!agent) {
return NextResponse.json({ error: 'Missing required parameter: agent' }, { status: 400 })
}
const efficiency = analyzeTokenEfficiency(agent, hours, workspaceId)
const toolPatterns = analyzeToolPatterns(agent, hours, workspaceId)
const fleet = getFleetBenchmarks(workspaceId)
const recommendations = generateRecommendations(agent, workspaceId)
// Calculate fleet percentile for tokens per session
const fleetTokens = fleet
.map(f => f.tokensPerTask)
.filter(t => t > 0)
.sort((a, b) => a - b)
const agentTokensPerTask = efficiency.sessionsCount > 0 ? efficiency.avgTokensPerSession : 0
const percentile = fleetTokens.length > 0
? Math.round((fleetTokens.filter(t => t >= agentTokensPerTask).length / fleetTokens.length) * 100)
: 50
// Fleet average cost
const fleetAvgCost = fleet.length > 0
? fleet.reduce((sum, f) => sum + f.costPerTask, 0) / fleet.length
: 0
// Tool analysis
const mostUsed = toolPatterns.topTools.slice(0, 5)
const leastEffective = toolPatterns.topTools
.filter(t => t.successRate < 80)
.sort((a, b) => a.successRate - b.successRate)
.slice(0, 5)
// Performance from fleet benchmarks
const agentBenchmark = fleet.find(f => f.agentName === agent)
return NextResponse.json({
agent,
analyzedAt: new Date().toISOString(),
efficiency: {
tokensPerTask: agentTokensPerTask,
fleetAverage: fleetTokens.length > 0
? Math.round(fleetTokens.reduce((a, b) => a + b, 0) / fleetTokens.length)
: 0,
percentile,
trend: efficiency.totalTokens,
costPerTask: efficiency.avgCostPerSession,
},
toolPatterns: {
mostUsed: mostUsed.map(t => ({
name: t.toolName,
count: t.count,
successRate: t.successRate,
})),
leastEffective: leastEffective.map(t => ({
name: t.toolName,
count: t.count,
successRate: t.successRate,
})),
unusedCapabilities: [],
},
performance: {
taskCompletionRate: agentBenchmark?.tasksCompleted ?? 0,
avgTaskDuration: toolPatterns.avgDurationMs,
errorRate: toolPatterns.failureRate,
fleetRanking: fleet.findIndex(f => f.agentName === agent) + 1 || fleet.length + 1,
},
recommendations: recommendations.map(r => ({
category: r.category,
priority: r.severity,
title: r.category.charAt(0).toUpperCase() + r.category.slice(1) + ' issue',
description: r.message,
expectedImpact: r.metric ?? null,
})),
})
} catch (error) {
logger.error({ err: error }, 'GET /api/agents/optimize error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
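The fleet-percentile math above is easiest to check in isolation. The sketch below restates the same ranking logic as a standalone helper (it is not the route's actual export): the percentile is the share of fleet agents whose tokens-per-task is at or above this agent's, so lower usage ranks as a higher percentile.

```typescript
// Sketch of the percentile ranking used above: share of fleet values
// at or above the agent's tokens-per-task (lower usage => higher percentile).
function tokenPercentile(fleetTokens: number[], agentTokens: number): number {
  const sorted = fleetTokens.filter(t => t > 0).sort((a, b) => a - b)
  if (sorted.length === 0) return 50 // no fleet data: assume median
  return Math.round((sorted.filter(t => t >= agentTokens).length / sorted.length) * 100)
}

console.log(tokenPercentile([100, 200, 300, 400], 150)) // 75: three of four fleet values >= 150
```

With an empty fleet the route (and this sketch) falls back to the 50th percentile rather than reporting 0 or 100.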

View File

@@ -0,0 +1,137 @@
import { NextRequest, NextResponse } from 'next/server'
import { getDatabase, db_helpers } from '@/lib/db'
import { requireRole } from '@/lib/auth'
import { selfRegisterLimiter } from '@/lib/rate-limit'
import { logAuditEvent } from '@/lib/db'
import { eventBus } from '@/lib/event-bus'
import { logger } from '@/lib/logger'
const NAME_RE = /^[a-zA-Z0-9][a-zA-Z0-9._-]{0,62}$/
const VALID_ROLES = ['coder', 'reviewer', 'tester', 'devops', 'researcher', 'assistant', 'agent']
/**
* POST /api/agents/register - Agent self-registration.
*
* Allows agents to register themselves with minimal auth (viewer role).
* If an agent with the same name already exists, returns the existing agent
* (idempotent upsert on status/last_seen).
*
* Body: { name, role?, capabilities?, framework? }
*
* Rate-limited to 5 registrations/min per IP to prevent spam.
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const limited = selfRegisterLimiter(request)
if (limited) return limited
let body: any
try {
body = await request.json()
} catch {
return NextResponse.json({ error: 'Request body required' }, { status: 400 })
}
const name = typeof body?.name === 'string' ? body.name.trim() : ''
const role = typeof body?.role === 'string' ? body.role.trim() : 'agent'
const capabilities = Array.isArray(body?.capabilities) ? body.capabilities.filter((c: any) => typeof c === 'string') : []
const framework = typeof body?.framework === 'string' ? body.framework.trim() : null
if (!name || !NAME_RE.test(name)) {
return NextResponse.json({
error: 'Invalid agent name. Use 1-63 alphanumeric characters, dots, hyphens, or underscores. Must start with alphanumeric.',
}, { status: 400 })
}
if (!VALID_ROLES.includes(role)) {
return NextResponse.json({
error: `Invalid role. Use: ${VALID_ROLES.join(', ')}`,
}, { status: 400 })
}
try {
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const now = Math.floor(Date.now() / 1000)
// Check if agent already exists — idempotent: update last_seen and status
const existing = db.prepare(
'SELECT * FROM agents WHERE name = ? AND workspace_id = ?'
).get(name, workspaceId) as any | undefined
if (existing) {
db.prepare(
'UPDATE agents SET status = ?, last_seen = ?, updated_at = ? WHERE id = ? AND workspace_id = ?'
).run('idle', now, now, existing.id, workspaceId)
return NextResponse.json({
agent: {
id: existing.id,
name: existing.name,
role: existing.role,
status: 'idle',
created_at: existing.created_at,
},
registered: false,
message: 'Agent already registered, status updated',
})
}
// Create new agent
const config: Record<string, any> = {}
if (capabilities.length > 0) config.capabilities = capabilities
if (framework) config.framework = framework
const result = db.prepare(`
INSERT INTO agents (name, role, status, config, created_at, updated_at, last_seen, workspace_id)
VALUES (?, ?, 'idle', ?, ?, ?, ?, ?)
`).run(name, role, JSON.stringify(config), now, now, now, workspaceId)
const agentId = Number(result.lastInsertRowid)
db_helpers.logActivity(
'agent_created',
'agent',
agentId,
name,
`Agent self-registered: ${name} (${role})${framework ? ` via ${framework}` : ''}`,
{ name, role, framework, capabilities, self_registered: true },
workspaceId,
)
logAuditEvent({
action: 'agent_self_register',
actor: auth.user.username,
actor_id: auth.user.id,
target_type: 'agent',
target_id: agentId,
detail: { name, role, framework, self_registered: true },
ip_address: request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown',
})
eventBus.broadcast('agent.created', { id: agentId, name, role, status: 'idle' })
return NextResponse.json({
agent: {
id: agentId,
name,
role,
status: 'idle',
created_at: now,
},
registered: true,
message: 'Agent registered successfully',
}, { status: 201 })
} catch (error: any) {
if (error.message?.includes('UNIQUE constraint')) {
// Race condition — another request registered the same name
return NextResponse.json({ error: 'Agent name already exists' }, { status: 409 })
}
logger.error({ err: error }, 'POST /api/agents/register error')
return NextResponse.json({ error: 'Registration failed' }, { status: 500 })
}
}
export const dynamic = 'force-dynamic'
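The name rule enforced by `NAME_RE` can be sanity-checked on its own. A few representative inputs against the same pattern:

```typescript
// Same pattern as NAME_RE above: 1-63 chars, must start alphanumeric,
// then alphanumerics, dots, hyphens, or underscores.
const NAME_RE = /^[a-zA-Z0-9][a-zA-Z0-9._-]{0,62}$/

console.log(NAME_RE.test('builder-01'))   // true
console.log(NAME_RE.test('-bad-start'))   // false: leading hyphen rejected
console.log(NAME_RE.test('a'.repeat(64))) // false: 64 chars exceeds the 63-char cap
```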

View File

@@ -64,7 +64,8 @@ export async function GET(request: NextRequest) {
COUNT(*) as total,
SUM(CASE WHEN status = 'assigned' THEN 1 ELSE 0 END) as assigned,
SUM(CASE WHEN status = 'in_progress' THEN 1 ELSE 0 END) as in_progress,
SUM(CASE WHEN status = 'done' THEN 1 ELSE 0 END) as completed
SUM(CASE WHEN status = 'quality_review' THEN 1 ELSE 0 END) as quality_review,
SUM(CASE WHEN status = 'done' THEN 1 ELSE 0 END) as done
FROM tasks
WHERE assigned_to = ? AND workspace_id = ?
`);
@@ -78,7 +79,9 @@ export async function GET(request: NextRequest) {
total: taskStats.total || 0,
assigned: taskStats.assigned || 0,
in_progress: taskStats.in_progress || 0,
completed: taskStats.completed || 0
quality_review: taskStats.quality_review || 0,
done: taskStats.done || 0,
completed: taskStats.done || 0
}
};
});
@@ -185,7 +188,7 @@ export async function POST(request: NextRequest) {
try {
await runOpenClaw(
['agents', 'add', openclawId, '--name', name, '--workspace', workspacePath, '--non-interactive'],
['agents', 'add', openclawId, '--workspace', workspacePath, '--non-interactive'],
{ timeoutMs: 20000 }
);
} catch (provisionError: any) {
@@ -244,7 +247,7 @@ export async function POST(request: NextRequest) {
const parsedAgent = {
...createdAgent,
config: JSON.parse(createdAgent.config || '{}'),
taskStats: { total: 0, assigned: 0, in_progress: 0, completed: 0 }
taskStats: { total: 0, assigned: 0, in_progress: 0, quality_review: 0, done: 0, completed: 0 }
};
// Broadcast to SSE clients

View File

@@ -1,17 +1,27 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { syncAgentsFromConfig, previewSyncDiff } from '@/lib/agent-sync'
import { syncLocalAgents } from '@/lib/local-agent-sync'
import { logger } from '@/lib/logger'
/**
* POST /api/agents/sync - Trigger agent config sync from openclaw.json
* POST /api/agents/sync - Trigger agent config sync
* ?source=local triggers a local disk scan instead of the openclaw.json sync.
* Requires admin role.
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'admin')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const { searchParams } = new URL(request.url)
const source = searchParams.get('source')
try {
if (source === 'local') {
const result = await syncLocalAgents()
return NextResponse.json(result)
}
const result = await syncAgentsFromConfig(auth.user.username)
if (result.error) {

View File

@@ -0,0 +1,43 @@
import { NextResponse } from 'next/server'
import { getUserFromRequest } from '@/lib/auth'
import { getDatabase, logAuditEvent } from '@/lib/db'
export async function POST(request: Request) {
const user = getUserFromRequest(request)
if (!user || user.id === 0) {
return NextResponse.json({ error: 'Authentication required' }, { status: 401 })
}
if (user.provider !== 'google') {
return NextResponse.json({ error: 'Account is not connected to Google' }, { status: 400 })
}
const db = getDatabase()
// Check that the user has a password set so they can still log in after disconnect
const row = db.prepare('SELECT password_hash FROM users WHERE id = ?').get(user.id) as { password_hash?: string } | undefined
if (!row?.password_hash) {
return NextResponse.json(
{ error: 'Cannot disconnect Google — no password set. Set a password first to avoid being locked out.' },
{ status: 400 }
)
}
db.prepare(`
UPDATE users
SET provider = 'local', provider_user_id = NULL, updated_at = (unixepoch())
WHERE id = ?
`).run(user.id)
const ipAddress = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown'
const userAgent = request.headers.get('user-agent') || undefined
logAuditEvent({
action: 'google_disconnect',
actor: user.username,
actor_id: user.id,
ip_address: ipAddress,
user_agent: userAgent,
})
return NextResponse.json({ ok: true })
}

View File

@@ -1,9 +1,10 @@
import { randomBytes } from 'crypto'
import { NextResponse } from 'next/server'
import { NextRequest, NextResponse } from 'next/server'
import { createSession } from '@/lib/auth'
import { getDatabase, logAuditEvent } from '@/lib/db'
import { verifyGoogleIdToken } from '@/lib/google-auth'
import { getMcSessionCookieOptions } from '@/lib/session-cookie'
import { loginLimiter } from '@/lib/rate-limit'
function upsertAccessRequest(input: {
email: string
@@ -25,7 +26,10 @@
`).run(input.email.toLowerCase(), input.providerUserId, input.displayName, input.avatarUrl || null)
}
export async function POST(request: Request) {
export async function POST(request: NextRequest) {
const rateCheck = loginLimiter(request)
if (rateCheck) return rateCheck
try {
const body = await request.json().catch(() => ({}))
const credential = String(body?.credential || '')
@@ -38,8 +42,10 @@ export async function POST(request: Request) {
const avatar = profile.picture ? String(profile.picture) : null
const row = db.prepare(`
SELECT id, username, display_name, role, provider, email, avatar_url, is_approved, created_at, updated_at, last_login_at, workspace_id
FROM users
SELECT u.id, u.username, u.display_name, u.role, u.provider, u.email, u.avatar_url, u.is_approved,
u.created_at, u.updated_at, u.last_login_at, u.workspace_id, COALESCE(w.tenant_id, 1) as tenant_id
FROM users u
LEFT JOIN workspaces w ON w.id = u.workspace_id
WHERE (provider = 'google' AND provider_user_id = ?) OR lower(email) = ?
ORDER BY id ASC
LIMIT 1
@@ -90,6 +96,7 @@ export async function POST(request: Request) {
email,
avatar_url: avatar,
workspace_id: row.workspace_id ?? 1,
tenant_id: row.tenant_id ?? 1,
},
})

View File

@@ -39,6 +39,7 @@ export async function POST(request: Request) {
email: user.email || null,
avatar_url: user.avatar_url || null,
workspace_id: user.workspace_id ?? 1,
tenant_id: user.tenant_id ?? 1,
},
})

View File

@@ -1,7 +1,8 @@
import { NextRequest, NextResponse } from 'next/server'
import { getUserFromRequest, updateUser, requireRole } from '@/lib/auth'
import { getUserFromRequest, updateUser, requireRole, destroyAllUserSessions, createSession } from '@/lib/auth'
import { logAuditEvent } from '@/lib/db'
import { verifyPassword } from '@/lib/password'
import { getMcSessionCookieOptions } from '@/lib/session-cookie'
import { logger } from '@/lib/logger'
export async function GET(request: Request) {
@@ -24,6 +25,7 @@ export async function GET(request: Request) {
email: user.email || null,
avatar_url: user.avatar_url || null,
workspace_id: user.workspace_id ?? 1,
tenant_id: user.tenant_id ?? 1,
},
})
}
@@ -87,14 +89,17 @@ export async function PATCH(request: NextRequest) {
}
const ipAddress = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown'
const userAgent = request.headers.get('user-agent') || undefined
if (updates.password) {
logAuditEvent({ action: 'password_change', actor: user.username, actor_id: user.id, ip_address: ipAddress })
// Revoke all existing sessions and issue a fresh one for this request
destroyAllUserSessions(user.id)
}
if (updates.display_name) {
logAuditEvent({ action: 'profile_update', actor: user.username, actor_id: user.id, detail: { display_name: updates.display_name }, ip_address: ipAddress })
}
return NextResponse.json({
const response = NextResponse.json({
success: true,
user: {
id: updated.id,
@@ -105,8 +110,21 @@
email: updated.email || null,
avatar_url: updated.avatar_url || null,
workspace_id: updated.workspace_id ?? 1,
tenant_id: updated.tenant_id ?? 1,
},
})
// Issue a fresh session cookie after password change (old ones were just revoked)
if (updates.password) {
const { token, expiresAt } = createSession(user.id, ipAddress, userAgent, user.workspace_id ?? 1)
const isSecureRequest = request.headers.get('x-forwarded-proto') === 'https'
|| new URL(request.url).protocol === 'https:'
response.cookies.set('mc-session', token, {
...getMcSessionCookieOptions({ maxAgeSeconds: expiresAt - Math.floor(Date.now() / 1000), isSecureRequest }),
})
}
return response
} catch (error) {
logger.error({ err: error }, 'PATCH /api/auth/me error')
return NextResponse.json({ error: 'Failed to update profile' }, { status: 500 })

View File

@@ -64,6 +64,7 @@ export async function POST(request: NextRequest) {
avatar_url: newUser.avatar_url || null,
is_approved: newUser.is_approved ?? 1,
workspace_id: newUser.workspace_id ?? 1,
tenant_id: newUser.tenant_id ?? 1,
}
}, { status: 201 })
} catch (error: any) {
@@ -130,6 +131,7 @@ export async function PUT(request: NextRequest) {
avatar_url: updated.avatar_url || null,
is_approved: updated.is_approved ?? 1,
workspace_id: updated.workspace_id ?? 1,
tenant_id: updated.tenant_id ?? 1,
}
})
} catch (error) {

View File

@@ -6,6 +6,7 @@ import { join, dirname } from 'path'
import { readdirSync, statSync, unlinkSync } from 'fs'
import { heavyLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
import { runOpenClaw } from '@/lib/command'
const BACKUP_DIR = join(dirname(config.dbPath), 'backups')
const MAX_BACKUPS = 10
@@ -48,6 +49,49 @@ export async function POST(request: NextRequest) {
const rateCheck = heavyLimiter(request)
if (rateCheck) return rateCheck
const target = request.nextUrl.searchParams.get('target')
// Gateway state backup via `openclaw backup create`
if (target === 'gateway') {
ensureDirExists(BACKUP_DIR)
const ipAddress = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown'
try {
let stdout: string
let stderr: string
try {
const result = await runOpenClaw(['backup', 'create', '--output', BACKUP_DIR], { timeoutMs: 60000 })
stdout = result.stdout
stderr = result.stderr
} catch (error: any) {
// openclaw backup may exit non-zero despite success — check output
stdout = error.stdout || ''
stderr = error.stderr || ''
const combined = `${stdout}\n${stderr}`
if (!combined.includes('Created')) {
const message = stderr || error.message || 'Unknown error'
logger.error({ err: error }, 'Gateway backup failed')
return NextResponse.json({ error: `Gateway backup failed: ${message}` }, { status: 500 })
}
}
const output = (stdout || stderr).trim()
logAuditEvent({
action: 'openclaw.backup',
actor: auth.user.username,
actor_id: auth.user.id,
detail: { output },
ip_address: ipAddress,
})
return NextResponse.json({ success: true, output })
} catch (error: any) {
logger.error({ err: error }, 'Gateway backup failed')
return NextResponse.json({ error: `Gateway backup failed: ${error.message}` }, { status: 500 })
}
}
// Default: MC SQLite backup
ensureDirExists(BACKUP_DIR)
const timestamp = new Date().toISOString().replace(/[:.]/g, '-').replace('T', '_').slice(0, 19)
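The backup filename transform on the last line is compact enough to merit a worked example; a fixed ISO string is used here for determinism:

```typescript
// Same transform as above: ISO timestamp -> filesystem-safe backup suffix.
const iso = '2024-05-01T12:34:56.789Z'
const timestamp = iso.replace(/[:.]/g, '-').replace('T', '_').slice(0, 19)
console.log(timestamp) // "2024-05-01_12-34-56"
```

The `slice(0, 19)` keeps the result second-granular and drops the milliseconds and trailing `Z`.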

View File

@@ -0,0 +1,436 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { config } from '@/lib/config'
import { logger } from '@/lib/logger'
import { getDetectedGatewayToken } from '@/lib/gateway-runtime'
import { callOpenClawGateway } from '@/lib/openclaw-gateway'
const gatewayInternalUrl = `http://${config.gatewayHost}:${config.gatewayPort}`
function gatewayHeaders(): Record<string, string> {
const token = getDetectedGatewayToken()
const headers: Record<string, string> = { 'Content-Type': 'application/json' }
if (token) headers['Authorization'] = `Bearer ${token}`
return headers
}
type GatewayData = unknown
function asRecord(value: unknown): Record<string, unknown> | null {
return value && typeof value === 'object' ? (value as Record<string, unknown>) : null
}
function readBoolean(value: unknown): boolean | undefined {
return typeof value === 'boolean' ? value : undefined
}
function readString(value: unknown): string | undefined {
return typeof value === 'string' ? value : undefined
}
function readNumber(value: unknown): number | undefined {
return typeof value === 'number' ? value : undefined
}
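These narrow readers deliberately return `undefined` rather than coercing. A quick standalone check, with the helpers re-declared under slightly different names so the snippet runs on its own:

```typescript
// Re-declared sketches of the readers above (not the route's exports).
function asRec(value: unknown): Record<string, unknown> | null {
  // typeof null === 'object', so the explicit null check matters
  return value !== null && typeof value === 'object' ? (value as Record<string, unknown>) : null
}
function readBool(value: unknown): boolean | undefined {
  return typeof value === 'boolean' ? value : undefined
}

const rec = asRec({ connected: 'yes', running: true })
console.log(readBool(rec?.connected)) // undefined: the string 'yes' is not coerced
console.log(readBool(rec?.running))   // true
```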
interface ChannelStatus {
configured: boolean
linked?: boolean
running: boolean
connected?: boolean
lastConnectedAt?: number | null
lastMessageAt?: number | null
lastStartAt?: number | null
lastError?: string | null
authAgeMs?: number | null
mode?: string | null
baseUrl?: string | null
publicKey?: string | null
probe?: GatewayData
profile?: GatewayData
}
interface ChannelAccount {
accountId: string
name?: string | null
configured?: boolean | null
linked?: boolean | null
running?: boolean | null
connected?: boolean | null
lastConnectedAt?: number | null
lastInboundAt?: number | null
lastOutboundAt?: number | null
lastError?: string | null
lastStartAt?: number | null
mode?: string | null
probe?: GatewayData
publicKey?: string | null
profile?: GatewayData
}
interface ChannelsSnapshot {
channels: Record<string, ChannelStatus>
channelAccounts: Record<string, ChannelAccount[]>
channelOrder: string[]
channelLabels: Record<string, string>
connected: boolean
updatedAt?: number
}
function transformGatewayChannels(data: GatewayData): ChannelsSnapshot {
const parsed = asRecord(data)
const rawChannels = asRecord(parsed?.channels) ?? {}
const rawAccounts = asRecord(parsed?.channelAccounts) ?? {}
const channelLabels = asRecord(parsed?.channelLabels)
const order = Array.isArray(parsed?.channelOrder)
? parsed.channelOrder.filter((value): value is string => typeof value === 'string')
: Object.keys(rawChannels)
const channels: Record<string, ChannelStatus> = {}
const channelAccounts: Record<string, ChannelAccount[]> = {}
const labels: Record<string, string> = Object.fromEntries(
Object.entries(channelLabels ?? {}).flatMap(([key, value]) => typeof value === 'string' ? [[key, value]] : [])
)
for (const key of order) {
const ch = asRecord(rawChannels[key])
if (!ch) continue
channels[key] = {
configured: !!readBoolean(ch.configured),
linked: readBoolean(ch.linked),
running: !!readBoolean(ch.running),
connected: readBoolean(ch.connected),
lastConnectedAt: readNumber(ch.lastConnectedAt) ?? null,
lastMessageAt: readNumber(ch.lastMessageAt) ?? null,
lastStartAt: readNumber(ch.lastStartAt) ?? null,
lastError: readString(ch.lastError) ?? null,
authAgeMs: readNumber(ch.authAgeMs) ?? null,
mode: readString(ch.mode) ?? null,
baseUrl: readString(ch.baseUrl) ?? null,
publicKey: readString(ch.publicKey) ?? null,
probe: ch.probe ?? null,
profile: ch.profile ?? null,
}
const accounts = rawAccounts[key] || []
const accountEntries = (Array.isArray(accounts) ? accounts : Object.values(accounts)) as GatewayData[]
channelAccounts[key] = accountEntries.map((acct) => {
const parsedAccount = asRecord(acct) ?? {}
return {
accountId: readString(parsedAccount.accountId) ?? 'default',
name: readString(parsedAccount.name) ?? null,
configured: readBoolean(parsedAccount.configured) ?? null,
linked: readBoolean(parsedAccount.linked) ?? null,
running: readBoolean(parsedAccount.running) ?? null,
connected: readBoolean(parsedAccount.connected) ?? null,
lastConnectedAt: readNumber(parsedAccount.lastConnectedAt) ?? null,
lastInboundAt: readNumber(parsedAccount.lastInboundAt) ?? null,
lastOutboundAt: readNumber(parsedAccount.lastOutboundAt) ?? null,
lastError: readString(parsedAccount.lastError) ?? null,
lastStartAt: readNumber(parsedAccount.lastStartAt) ?? null,
mode: readString(parsedAccount.mode) ?? null,
probe: parsedAccount.probe ?? null,
publicKey: readString(parsedAccount.publicKey) ?? null,
profile: parsedAccount.profile ?? null,
}
})
}
return {
channels,
channelAccounts,
channelOrder: order,
channelLabels: labels,
connected: true,
updatedAt: readNumber(parsed?.ts),
}
}
async function loadChannelsViaRpc(probe = false): Promise<ChannelsSnapshot> {
const payload = await callOpenClawGateway<GatewayData>(
'channels.status',
{ probe, timeoutMs: 8000 },
probe ? 20000 : 15000,
)
return {
...transformGatewayChannels(payload),
connected: true,
}
}
async function loadChannelsViaCli(probe = false): Promise<ChannelsSnapshot> {
const payload = await callOpenClawGateway<GatewayData>(
'channels.status',
{ probe, timeoutMs: 8000 },
probe ? 20000 : 15000,
).catch(() => null)
if (payload) {
return {
...transformGatewayChannels(payload),
connected: true,
}
}
const { runOpenClaw } = await import('@/lib/command')
const args = ['channels', 'status', '--json', '--timeout', '5000']
if (probe) args.push('--probe')
const { stdout } = await runOpenClaw(args, { timeoutMs: probe ? 20000 : 15000 })
return {
...transformGatewayChannels(JSON.parse(stdout)),
connected: true,
}
}
async function isGatewayReachable(): Promise<boolean> {
try {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 2000)
const res = await fetch(`${gatewayInternalUrl}/api/health`, {
headers: gatewayHeaders(),
signal: controller.signal,
})
clearTimeout(timeout)
return res.ok
} catch {
return false
}
}
/**
* GET /api/channels - Fetch channel status from the gateway
* Supports ?action=probe&channel=<name> to probe a specific channel
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const { searchParams } = new URL(request.url)
const action = searchParams.get('action')
// Probe a specific channel
if (action === 'probe') {
const channel = searchParams.get('channel')
if (!channel) {
return NextResponse.json({ error: 'channel parameter required' }, { status: 400 })
}
try {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 5000)
const res = await fetch(`${gatewayInternalUrl}/api/channels/probe`, {
method: 'POST',
headers: gatewayHeaders(),
body: JSON.stringify({ channel }),
signal: controller.signal,
})
clearTimeout(timeout)
if (!res.ok) {
if (res.status === 404) {
return NextResponse.json(await loadChannelsViaRpc(true).catch(() => loadChannelsViaCli(true)))
}
throw new Error(`Gateway channel probe failed with status ${res.status}`)
}
const data = await res.json()
return NextResponse.json(data)
} catch (err) {
try {
return NextResponse.json(await loadChannelsViaRpc(true).catch(() => loadChannelsViaCli(true)))
} catch (cliErr) {
logger.warn({ err, cliErr, channel }, 'Channel probe failed')
return NextResponse.json(
{ ok: false, error: 'Gateway unreachable' },
{ status: 502 },
)
}
}
}
// Default: fetch all channel statuses
try {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 5000)
const res = await fetch(`${gatewayInternalUrl}/api/channels/status`, {
headers: gatewayHeaders(),
signal: controller.signal,
})
clearTimeout(timeout)
if (!res.ok) {
if (res.status === 404) {
return NextResponse.json(await loadChannelsViaRpc(false).catch(() => loadChannelsViaCli(false)))
}
throw new Error(`Gateway channel status failed with status ${res.status}`)
}
const data = await res.json()
return NextResponse.json(transformGatewayChannels(data))
} catch (err) {
try {
return NextResponse.json(await loadChannelsViaRpc(false).catch(() => loadChannelsViaCli(false)))
} catch (cliErr) {
logger.warn({ err, cliErr }, 'Gateway unreachable for channel status')
const reachable = await isGatewayReachable()
return NextResponse.json({
channels: {},
channelAccounts: {},
channelOrder: [],
channelLabels: {},
connected: reachable,
} satisfies ChannelsSnapshot)
}
}
}
/**
* POST /api/channels - Platform-specific actions
* Body: { action: string, ...params }
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const body = await request.json().catch(() => null)
if (!body || !body.action) {
return NextResponse.json({ error: 'action required' }, { status: 400 })
}
const { action } = body
try {
switch (action) {
case 'whatsapp-link': {
const force = body.force === true
try {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 30000)
const res = await fetch(`${gatewayInternalUrl}/api/channels/whatsapp/link`, {
method: 'POST',
headers: gatewayHeaders(),
body: JSON.stringify({ force }),
signal: controller.signal,
})
clearTimeout(timeout)
if (res.ok) {
const data = await res.json()
return NextResponse.json(data)
}
if (res.status !== 404) {
const data = await res.json().catch(() => ({}))
return NextResponse.json(data, { status: res.status })
}
} catch {
// Fallback to RPC below.
}
return NextResponse.json(
await callOpenClawGateway('web.login.start', { force, timeoutMs: 30000 }, 32000)
)
}
case 'whatsapp-wait': {
try {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 120000)
const res = await fetch(`${gatewayInternalUrl}/api/channels/whatsapp/wait`, {
method: 'POST',
headers: gatewayHeaders(),
signal: controller.signal,
})
clearTimeout(timeout)
if (res.ok) {
const data = await res.json()
return NextResponse.json(data)
}
if (res.status !== 404) {
const data = await res.json().catch(() => ({}))
return NextResponse.json(data, { status: res.status })
}
} catch {
// Fallback to RPC below.
}
return NextResponse.json(
await callOpenClawGateway('web.login.wait', { timeoutMs: 120000 }, 122000)
)
}
case 'whatsapp-logout': {
try {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 10000)
const res = await fetch(`${gatewayInternalUrl}/api/channels/whatsapp/logout`, {
method: 'POST',
headers: gatewayHeaders(),
signal: controller.signal,
})
clearTimeout(timeout)
if (res.ok) {
const data = await res.json()
return NextResponse.json(data)
}
if (res.status !== 404) {
const data = await res.json().catch(() => ({}))
return NextResponse.json(data, { status: res.status })
}
} catch {
// Fallback to RPC below.
}
return NextResponse.json(
await callOpenClawGateway('channels.logout', { channel: 'whatsapp' }, 12000)
)
}
case 'nostr-profile-save': {
const accountId = body.accountId || 'default'
const profile = body.profile
if (!profile) {
return NextResponse.json({ error: 'profile required' }, { status: 400 })
}
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 10000)
const res = await fetch(
`${gatewayInternalUrl}/api/channels/nostr/${encodeURIComponent(accountId)}/profile`,
{
method: 'PUT',
headers: gatewayHeaders(),
body: JSON.stringify(profile),
signal: controller.signal,
},
)
clearTimeout(timeout)
const data = await res.json()
return NextResponse.json(data, { status: res.ok ? 200 : res.status })
}
case 'nostr-profile-import': {
const accountId = body.accountId || 'default'
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 15000)
const res = await fetch(
`${gatewayInternalUrl}/api/channels/nostr/${encodeURIComponent(accountId)}/profile/import`,
{
method: 'POST',
headers: gatewayHeaders(),
body: JSON.stringify({ autoMerge: true }),
signal: controller.signal,
},
)
clearTimeout(timeout)
const data = await res.json()
return NextResponse.json(data, { status: res.ok ? 200 : res.status })
}
default:
return NextResponse.json({ error: `Unknown action: ${action}` }, { status: 400 })
}
} catch (err) {
logger.warn({ err, action }, 'Channel action failed')
return NextResponse.json(
{ ok: false, error: 'Gateway unreachable' },
{ status: 502 },
)
}
}
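Each branch of this route repeats one pattern: try the gateway HTTP endpoint, fall back to the RPC call on 404 or error, then fall back to the CLI. A generic sketch of that chain follows; the loader names are illustrative stand-ins, not the real transports:

```typescript
type Loader<T> = () => Promise<T>

// Try each loader in order; the first success wins, the last error surfaces.
async function withFallback<T>(loaders: Loader<T>[]): Promise<T> {
  let lastError: unknown = new Error('no loaders provided')
  for (const load of loaders) {
    try {
      return await load()
    } catch (err) {
      lastError = err
    }
  }
  throw lastError
}

withFallback([
  async () => { throw new Error('http 404') }, // stand-in for the fetch attempt
  async () => 'rpc-payload',                   // stand-in for the RPC fallback
]).then(value => console.log(value))           // "rpc-payload"
```

Keeping the fallbacks in one ordered chain like this avoids the nested `catch` pyramids the inline version tends toward.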

View File

@@ -5,6 +5,8 @@ import { getAllGatewaySessions } from '@/lib/sessions'
import { eventBus } from '@/lib/event-bus'
import { requireRole } from '@/lib/auth'
import { logger } from '@/lib/logger'
import { scanForInjection, sanitizeForPrompt } from '@/lib/injection-guard'
import { callOpenClawGateway } from '@/lib/openclaw-gateway'
type ForwardInfo = {
attempted: boolean
@@ -14,6 +16,19 @@ type ForwardInfo = {
runId?: string
}
type ToolEvent = {
name: string
input?: string
output?: string
status?: string
}
type ChatAttachmentInput = {
name?: string
type?: string
dataUrl?: string
}
const COORDINATOR_AGENT =
String(process.env.MC_COORDINATOR_AGENT || process.env.NEXT_PUBLIC_COORDINATOR_AGENT || 'coordinator').trim() ||
'coordinator'
@@ -31,6 +46,35 @@ function parseGatewayJson(raw: string): any | null {
}
}
function toGatewayAttachments(value: unknown): Array<{ type: 'image'; mimeType: string; fileName?: string; content: string }> | undefined {
if (!Array.isArray(value)) return undefined
const attachments = value.flatMap((entry) => {
const file = entry as ChatAttachmentInput
if (!file || typeof file !== 'object' || typeof file.dataUrl !== 'string') return []
const match = /^data:([^;]+);base64,(.+)$/.exec(file.dataUrl)
if (!match) return []
if (!match[1].startsWith('image/')) return []
return [{
type: 'image' as const,
mimeType: match[1],
fileName: typeof file.name === 'string' ? file.name : undefined,
content: match[2],
}]
})
return attachments.length > 0 ? attachments : undefined
}
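The attachment filter hinges on a single regex split of the data URL; in isolation:

```typescript
// Same split as toGatewayAttachments above: capture MIME type and base64 body.
const DATA_URL_RE = /^data:([^;]+);base64,(.+)$/
const match = DATA_URL_RE.exec('data:image/png;base64,iVBORw0KGgo=')
console.log(match?.[1]) // "image/png"
console.log(match?.[2]) // "iVBORw0KGgo="

// The route then keeps the entry only when the MIME type starts with "image/".
console.log(match ? match[1].startsWith('image/') : false) // true
```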
function safeParseMetadata(raw: string | null | undefined): any | null {
if (!raw) return null
try {
return JSON.parse(raw)
} catch {
return null
}
}
function createChatReply(
db: ReturnType<typeof getDatabase>,
workspaceId: number,
@@ -38,7 +82,7 @@ function createChatReply(
fromAgent: string,
toAgent: string,
content: string,
messageType: 'text' | 'status' = 'status',
messageType: 'text' | 'status' | 'tool_call' = 'status',
metadata: Record<string, any> | null = null
) {
const replyInsert = db
@@ -62,7 +106,7 @@ function createChatReply(
eventBus.broadcast('chat.message', {
...row,
metadata: row.metadata ? JSON.parse(row.metadata) : null,
metadata: safeParseMetadata(row.metadata),
})
}
@@ -91,9 +135,108 @@ function extractReplyText(waitPayload: any): string | null {
}
}
if (Array.isArray(waitPayload.output)) {
const parts: string[] = []
for (const item of waitPayload.output) {
if (!item || typeof item !== 'object') continue
if (typeof item.text === 'string' && item.text.trim()) parts.push(item.text.trim())
if (item.type === 'message' && Array.isArray(item.content)) {
for (const block of item.content) {
if (!block || typeof block !== 'object') continue
const blockType = String(block.type || '')
if ((blockType === 'text' || blockType === 'output_text' || blockType === 'input_text') && typeof block.text === 'string' && block.text.trim()) {
parts.push(block.text.trim())
}
}
}
}
if (parts.length > 0) return parts.join('\n').slice(0, 8000)
}
return null
}
function normalizeToolEvent(raw: any): ToolEvent | null {
if (!raw || typeof raw !== 'object') return null
const name = String(raw.name || raw.tool || raw.toolName || raw.function || raw.call || '').trim()
if (!name) return null
const inputRaw = raw.input ?? raw.args ?? raw.arguments ?? raw.params
const outputRaw = raw.output ?? raw.result ?? raw.response
const statusRaw =
raw.status ??
(raw.isError === true ? 'error' : undefined) ??
(raw.ok === false ? 'error' : undefined) ??
(raw.success === true ? 'ok' : undefined)
const input =
typeof inputRaw === 'string'
? inputRaw.slice(0, 2000)
: inputRaw !== undefined
? JSON.stringify(inputRaw).slice(0, 2000)
: undefined
const output =
typeof outputRaw === 'string'
? outputRaw.slice(0, 4000)
: outputRaw !== undefined
? JSON.stringify(outputRaw).slice(0, 4000)
: undefined
const status = statusRaw !== undefined ? String(statusRaw).slice(0, 60) : undefined
return { name, input, output, status }
}
function extractToolEvents(waitPayload: any): ToolEvent[] {
if (!waitPayload || typeof waitPayload !== 'object') return []
const candidates = [
waitPayload.toolCalls,
waitPayload.tools,
waitPayload.calls,
waitPayload.events,
waitPayload.output?.toolCalls,
waitPayload.output?.tools,
waitPayload.output?.events,
]
const events: ToolEvent[] = []
for (const list of candidates) {
if (!Array.isArray(list)) continue
for (const item of list) {
const evt = normalizeToolEvent(item)
if (evt) events.push(evt)
if (events.length >= 20) return events
}
}
// OpenAI Responses-style output array
if (Array.isArray(waitPayload.output)) {
for (const item of waitPayload.output) {
if (!item || typeof item !== 'object') continue
const itemType = String(item.type || '').toLowerCase()
if (itemType === 'function_call' || itemType === 'tool_call') {
const evt = normalizeToolEvent({
name: item.name || item.tool_name || item.toolName,
arguments: item.arguments || item.input,
output: item.output || item.result,
status: item.status,
})
if (evt) events.push(evt)
} else if (itemType === 'message' && Array.isArray(item.content)) {
for (const block of item.content) {
const blockType = String(block?.type || '').toLowerCase()
if (blockType === 'tool_use' || blockType === 'tool_call' || blockType === 'function_call') {
const evt = normalizeToolEvent(block)
if (evt) events.push(evt)
}
}
}
if (events.length >= 20) return events
}
}
return events
}
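The normalization above collapses several vendor-specific field spellings into one shape. A condensed standalone version, for illustration only (field fallbacks and truncation limits copied from the route; the real function also checks `function`/`call` aliases and more status flags):

```typescript
// Condensed take on normalizeToolEvent: pick the first name alias,
// stringify-and-clip input/output, and infer an error status from isError.
type ToolEvent = { name: string; input?: string; output?: string; status?: string }

function normalizeTool(raw: any): ToolEvent | null {
  if (!raw || typeof raw !== 'object') return null
  const name = String(raw.name || raw.tool || raw.toolName || '').trim()
  if (!name) return null
  const clip = (v: unknown, n: number) =>
    v === undefined ? undefined : (typeof v === 'string' ? v : JSON.stringify(v)).slice(0, n)
  return {
    name,
    input: clip(raw.input ?? raw.args ?? raw.arguments, 2000),
    output: clip(raw.output ?? raw.result, 4000),
    status: raw.status ?? (raw.isError === true ? 'error' : undefined),
  }
}
```

Events with no recoverable name are dropped rather than emitted blank, matching the route's early `return null`.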
/**
* GET /api/chat/messages - List messages with filters
* Query params: conversation_id, from_agent, to_agent, limit, offset, since
@@ -144,7 +287,7 @@ export async function GET(request: NextRequest) {
const parsed = messages.map((msg) => ({
...msg,
metadata: msg.metadata ? JSON.parse(msg.metadata) : null
metadata: safeParseMetadata(msg.metadata),
}))
// Get total count for pagination
@@ -189,7 +332,11 @@ export async function POST(request: NextRequest) {
const workspaceId = auth.user.workspace_id ?? 1
const body = await request.json()
const from = auth.user.display_name || auth.user.username || 'system'
const requestedFrom = typeof body.from === 'string' ? body.from.trim() : ''
const isCoordinatorOverride = requestedFrom.toLowerCase() === COORDINATOR_AGENT.toLowerCase()
const from = isCoordinatorOverride
? COORDINATOR_AGENT
: (auth.user.display_name || auth.user.username || 'system')
const to = body.to ? (body.to as string).trim() : null
const content = (body.content || '').trim()
const message_type = body.message_type || 'text'
@@ -203,6 +350,21 @@ export async function POST(request: NextRequest) {
)
}
// Scan content for injection when it will be forwarded to an agent
if (body.forward && to) {
const injectionReport = scanForInjection(content, { context: 'prompt' })
if (!injectionReport.safe) {
const criticals = injectionReport.matches.filter(m => m.severity === 'critical')
if (criticals.length > 0) {
logger.warn({ to, rules: criticals.map(m => m.rule) }, 'Blocked chat message: injection detected')
return NextResponse.json(
{ error: 'Message blocked: potentially unsafe content detected', injection: criticals.map(m => ({ rule: m.rule, description: m.description })) },
{ status: 422 }
)
}
}
}
const stmt = db.prepare(`
INSERT INTO messages (conversation_id, from_agent, to_agent, content, message_type, metadata, workspace_id)
VALUES (?, ?, ?, ?, ?, ?, ?)
@@ -253,7 +415,10 @@ export async function POST(request: NextRequest) {
.prepare('SELECT * FROM agents WHERE lower(name) = lower(?) AND workspace_id = ?')
.get(to, workspaceId) as any
let sessionKey: string | null = agent?.session_key || null
// Use explicit session key from caller if provided, then DB, then on-disk lookup
let sessionKey: string | null = typeof body.sessionKey === 'string' && body.sessionKey
? body.sessionKey
: agent?.session_key || null
// Fallback: derive session from on-disk gateway session stores
if (!sessionKey) {
@@ -302,32 +467,53 @@
}
} else {
try {
const invokeParams: any = {
message: `Message from ${from}: ${content}`,
idempotencyKey: `mc-${messageId}-${Date.now()}`,
deliver: false,
}
if (sessionKey) invokeParams.sessionKey = sessionKey
else invokeParams.agentId = openclawAgentId
const idempotencyKey = `mc-${messageId}-${Date.now()}`
const invokeResult = await runOpenClaw(
[
'gateway',
'call',
'agent',
'--timeout',
'10000',
'--params',
JSON.stringify(invokeParams),
'--json',
],
{ timeoutMs: 12000 }
)
const acceptedPayload = parseGatewayJson(invokeResult.stdout)
forwardInfo.delivered = true
forwardInfo.session = sessionKey || openclawAgentId || undefined
if (typeof acceptedPayload?.runId === 'string' && acceptedPayload.runId) {
forwardInfo.runId = acceptedPayload.runId
if (sessionKey) {
const acceptedPayload = await callOpenClawGateway<any>(
'chat.send',
{
sessionKey,
message: content,
idempotencyKey,
deliver: false,
attachments: toGatewayAttachments(body.attachments),
},
12000,
)
const status = String(acceptedPayload?.status || '').toLowerCase()
forwardInfo.delivered = status === 'started' || status === 'ok' || status === 'in_flight'
forwardInfo.session = sessionKey
if (typeof acceptedPayload?.runId === 'string' && acceptedPayload.runId) {
forwardInfo.runId = acceptedPayload.runId
}
} else {
const invokeParams: any = {
message: `Message from ${from}: ${content}`,
idempotencyKey,
deliver: false,
}
invokeParams.agentId = openclawAgentId
const invokeResult = await runOpenClaw(
[
'gateway',
'call',
'agent',
'--timeout',
'10000',
'--params',
JSON.stringify(invokeParams),
'--json',
],
{ timeoutMs: 12000 }
)
const acceptedPayload = parseGatewayJson(invokeResult.stdout)
forwardInfo.delivered = true
forwardInfo.session = openclawAgentId || undefined
if (typeof acceptedPayload?.runId === 'string' && acceptedPayload.runId) {
forwardInfo.runId = acceptedPayload.runId
}
}
} catch (err) {
// OpenClaw may return accepted JSON on stdout but still emit a late stderr warning.
@@ -404,6 +590,29 @@ export async function POST(request: NextRequest) {
const waitPayload = parseGatewayJson(waitResult.stdout)
const waitStatus = String(waitPayload?.status || '').toLowerCase()
const toolEvents = extractToolEvents(waitPayload)
if (toolEvents.length > 0) {
for (const evt of toolEvents) {
createChatReply(
db,
workspaceId,
conversation_id,
COORDINATOR_AGENT,
from,
evt.name,
'tool_call',
{
event: 'tool_call',
toolName: evt.name,
input: evt.input || null,
output: evt.output || null,
status: evt.status || null,
runId: forwardInfo.runId || null,
}
)
}
}
if (waitStatus === 'error') {
const reason =
@@ -486,7 +695,10 @@ export async function POST(request: NextRequest) {
const created = db.prepare('SELECT * FROM messages WHERE id = ? AND workspace_id = ?').get(messageId, workspaceId) as Message
const parsedMessage = {
...created,
metadata: created.metadata ? JSON.parse(created.metadata) : null
metadata: {
...(safeParseMetadata(created.metadata) || {}),
forwardInfo: forwardInfo || undefined,
},
}
// Broadcast to SSE clients


@@ -0,0 +1,108 @@
import { NextRequest, NextResponse } from 'next/server'
import { getDatabase } from '@/lib/db'
import { requireRole } from '@/lib/auth'
import { logger } from '@/lib/logger'
const PREFS_KEY = 'chat.session_prefs.v1'
const ALLOWED_COLORS = new Set(['slate', 'blue', 'green', 'amber', 'red', 'purple', 'pink', 'teal'])
type SessionPref = {
name?: string
color?: string
}
type SessionPrefs = Record<string, SessionPref>
function loadPrefs(): SessionPrefs {
const db = getDatabase()
const row = db.prepare('SELECT value FROM settings WHERE key = ?').get(PREFS_KEY) as { value: string } | undefined
if (!row?.value) return {}
try {
const parsed = JSON.parse(row.value)
return parsed && typeof parsed === 'object' ? parsed : {}
} catch {
return {}
}
}
function savePrefs(prefs: SessionPrefs, username: string) {
const db = getDatabase()
const now = Math.floor(Date.now() / 1000)
db.prepare(`
INSERT INTO settings (key, value, description, category, updated_by, updated_at)
VALUES (?, ?, ?, ?, ?, ?)
ON CONFLICT(key) DO UPDATE SET
value = excluded.value,
updated_by = excluded.updated_by,
updated_at = excluded.updated_at
`).run(
PREFS_KEY,
JSON.stringify(prefs),
'Chat local session preferences (rename + color tags)',
'chat',
username,
now,
)
}
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
return NextResponse.json({ prefs: loadPrefs() })
} catch (error) {
logger.error({ err: error }, 'GET /api/chat/session-prefs error')
return NextResponse.json({ error: 'Failed to load preferences' }, { status: 500 })
}
}
/**
* PATCH /api/chat/session-prefs
* Body: { key: "claude-code:<sessionId>", name?: string, color?: string | null }
*/
export async function PATCH(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const body = await request.json().catch(() => ({}))
const key = typeof body?.key === 'string' ? body.key.trim() : ''
if (!key || !/^[a-zA-Z0-9_-]+:[a-zA-Z0-9._:-]+$/.test(key)) {
return NextResponse.json({ error: 'Invalid key' }, { status: 400 })
}
const nextName = body?.name === null ? '' : (typeof body?.name === 'string' ? body.name.trim() : undefined)
const nextColor = body?.color === null ? '' : (typeof body?.color === 'string' ? body.color.trim().toLowerCase() : undefined)
if (typeof nextName === 'string' && nextName.length > 80) {
return NextResponse.json({ error: 'name must be <= 80 chars' }, { status: 400 })
}
if (typeof nextColor === 'string' && nextColor && !ALLOWED_COLORS.has(nextColor)) {
return NextResponse.json({ error: 'Invalid color' }, { status: 400 })
}
const prefs = loadPrefs()
const existing = prefs[key] || {}
const updated: SessionPref = {
...existing,
...(typeof nextName === 'string' ? { name: nextName || undefined } : {}),
...(typeof nextColor === 'string' ? { color: nextColor || undefined } : {}),
}
if (!updated.name && !updated.color) {
delete prefs[key]
} else {
prefs[key] = updated
}
savePrefs(prefs, auth.user.username)
return NextResponse.json({ ok: true, pref: prefs[key] || null })
} catch (error) {
logger.error({ err: error }, 'PATCH /api/chat/session-prefs error')
return NextResponse.json({ error: 'Failed to update preferences' }, { status: 500 })
}
}
export const dynamic = 'force-dynamic'
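The PATCH validation above can be summarized as a standalone predicate. The helper name below is hypothetical; the key format (`<provider>:<sessionId>`), the 80-character name cap, and the color set are taken from the route:

```typescript
// Mirror of the PATCH /api/chat/session-prefs validation rules:
// returns the error message the route would send, or null when valid.
const ALLOWED = new Set(['slate', 'blue', 'green', 'amber', 'red', 'purple', 'pink', 'teal'])
const KEY_RE = /^[a-zA-Z0-9_-]+:[a-zA-Z0-9._:-]+$/

function validatePrefPatch(key: string, name?: string, color?: string): string | null {
  if (!KEY_RE.test(key)) return 'Invalid key'
  if (name !== undefined && name.length > 80) return 'name must be <= 80 chars'
  if (color !== undefined && color && !ALLOWED.has(color)) return 'Invalid color'
  return null // valid
}
```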


@@ -0,0 +1,17 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { getClaudeCodeTasks } from '@/lib/claude-tasks'
/**
* GET /api/claude-tasks - Returns Claude Code teams and tasks
* Read-only bridge: MC reads from ~/.claude/tasks/ and ~/.claude/teams/
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const force = request.nextUrl.searchParams.get('force') === 'true'
const result = getClaudeCodeTasks(force)
return NextResponse.json(result)
}


@@ -20,19 +20,22 @@ export async function GET(request: NextRequest) {
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const now = Math.floor(Date.now() / 1000)
const ret = config.retention
const preview = []
for (const { table, column, days, label } of getRetentionTargets()) {
for (const { table, column, days, label, scoped } of getRetentionTargets()) {
if (days <= 0) {
preview.push({ table: label, retention_days: 0, stale_count: 0, note: 'Retention disabled (keep forever)' })
continue
}
const cutoff = now - days * 86400
try {
const row = db.prepare(`SELECT COUNT(*) as c FROM ${table} WHERE ${column} < ?`).get(cutoff) as any
const wsClause = scoped ? ' AND workspace_id = ?' : ''
const params: any[] = scoped ? [cutoff, workspaceId] : [cutoff]
const row = db.prepare(`SELECT COUNT(*) as c FROM ${table} WHERE ${column} < ?${wsClause}`).get(...params) as any
preview.push({
table: label,
retention_days: days,
@@ -89,17 +92,20 @@ export async function POST(request: NextRequest) {
const dryRun = body.dry_run === true
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const now = Math.floor(Date.now() / 1000)
const results: CleanupResult[] = []
let totalDeleted = 0
for (const { table, column, days, label } of getRetentionTargets()) {
for (const { table, column, days, label, scoped } of getRetentionTargets()) {
if (days <= 0) continue
const cutoff = now - days * 86400
const wsClause = scoped ? ' AND workspace_id = ?' : ''
const params: any[] = scoped ? [cutoff, workspaceId] : [cutoff]
try {
if (dryRun) {
const row = db.prepare(`SELECT COUNT(*) as c FROM ${table} WHERE ${column} < ?`).get(cutoff) as any
const row = db.prepare(`SELECT COUNT(*) as c FROM ${table} WHERE ${column} < ?${wsClause}`).get(...params) as any
results.push({
table: label,
deleted: row.c,
@@ -108,7 +114,7 @@ export async function POST(request: NextRequest) {
})
totalDeleted += row.c
} else {
const res = db.prepare(`DELETE FROM ${table} WHERE ${column} < ?`).run(cutoff)
const res = db.prepare(`DELETE FROM ${table} WHERE ${column} < ?${wsClause}`).run(...params)
results.push({
table: label,
deleted: res.changes,
@@ -183,9 +189,9 @@ export async function POST(request: NextRequest) {
function getRetentionTargets() {
const ret = config.retention
return [
{ table: 'activities', column: 'created_at', days: ret.activities, label: 'Activities' },
{ table: 'audit_log', column: 'created_at', days: ret.auditLog, label: 'Audit Log' },
{ table: 'notifications', column: 'created_at', days: ret.notifications, label: 'Notifications' },
{ table: 'pipeline_runs', column: 'created_at', days: ret.pipelineRuns, label: 'Pipeline Runs' },
{ table: 'activities', column: 'created_at', days: ret.activities, label: 'Activities', scoped: true },
{ table: 'audit_log', column: 'created_at', days: ret.auditLog, label: 'Audit Log', scoped: false }, // instance-global, admin-only
{ table: 'notifications', column: 'created_at', days: ret.notifications, label: 'Notifications', scoped: true },
{ table: 'pipeline_runs', column: 'created_at', days: ret.pipelineRuns, label: 'Pipeline Runs', scoped: true },
]
}
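The retention change above boils down to one pattern: the cutoff is "N days ago" in epoch seconds, and tenant-scoped tables get an extra `workspace_id` predicate while instance-global tables (audit log) do not. A small illustrative helper, not taken from the route:

```typescript
// Sketch of how the cleanup routes assemble their retention statements.
function buildRetentionQuery(
  table: string, column: string, days: number, scoped: boolean,
  nowSec: number, workspaceId: number,
): { sql: string; params: number[] } {
  const cutoff = nowSec - days * 86400 // 86400 seconds per day
  const wsClause = scoped ? ' AND workspace_id = ?' : ''
  return {
    sql: `DELETE FROM ${table} WHERE ${column} < ?${wsClause}`,
    params: scoped ? [cutoff, workspaceId] : [cutoff],
  }
}
```

Because `table` and `column` are interpolated, callers must pass only the hardcoded target list, never user input, which is exactly how `getRetentionTargets()` is used above.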


@@ -188,6 +188,71 @@ export async function GET(request: NextRequest) {
return NextResponse.json({ logs })
}
if (action === 'history') {
const jobId = searchParams.get('jobId')
if (!jobId) {
return NextResponse.json({ error: 'Job ID required' }, { status: 400 })
}
const page = parseInt(searchParams.get('page') || '1', 10)
const query = searchParams.get('query') || ''
// Try to load run history from the cron runs log file
const openclawStateDir = config.openclawStateDir
if (!openclawStateDir) {
return NextResponse.json({ entries: [], total: 0, hasMore: false })
}
try {
const runsPath = path.join(openclawStateDir, 'cron', 'runs.json')
const raw = await readFile(runsPath, 'utf-8')
const runsData = JSON.parse(raw)
let entries: any[] = Array.isArray(runsData.runs) ? runsData.runs : Array.isArray(runsData) ? runsData : []
// Filter to this job
entries = entries.filter((r: any) => r.jobId === jobId || r.id === jobId)
// Apply search filter
if (query) {
const q = query.toLowerCase()
entries = entries.filter((r: any) =>
(r.status || '').toLowerCase().includes(q) ||
(r.error || '').toLowerCase().includes(q) ||
(r.deliveryStatus || '').toLowerCase().includes(q)
)
}
// Sort by timestamp descending
entries.sort((a: any, b: any) => (b.timestamp || b.startedAtMs || 0) - (a.timestamp || a.startedAtMs || 0))
const pageSize = 20
const start = (page - 1) * pageSize
const paged = entries.slice(start, start + pageSize)
return NextResponse.json({
entries: paged,
total: entries.length,
hasMore: start + pageSize < entries.length,
page,
})
} catch {
// No runs file — fall back to state-based info
const cronFile = await loadCronFile()
const job = cronFile?.jobs.find(j => j.id === jobId || j.name === jobId)
const entries: any[] = []
if (job?.state?.lastRunAtMs) {
entries.push({
jobId: job.id,
status: job.state.lastStatus || 'unknown',
timestamp: job.state.lastRunAtMs,
durationMs: job.state.lastDurationMs,
error: job.state.lastError,
})
}
return NextResponse.json({ entries, total: entries.length, hasMore: false, page: 1 })
}
}
return NextResponse.json({ error: 'Invalid action' }, { status: 400 })
} catch (error) {
logger.error({ err: error }, 'Cron API error')
@@ -249,11 +314,14 @@ export async function POST(request: NextRequest) {
}
// For OpenClaw cron jobs, trigger via the openclaw CLI
const triggerMode = body.mode || 'force'
const { runCommand } = await import('@/lib/command')
try {
const { stdout, stderr } = await runCommand(config.openclawBin, [
'cron', 'trigger', job.id
], { timeoutMs: 30000 })
const args = ['cron', 'trigger', job.id]
if (triggerMode === 'due') {
args.push('--if-due')
}
const { stdout, stderr } = await runCommand(config.openclawBin, args, { timeoutMs: 30000 })
return NextResponse.json({
success: true,
@@ -296,7 +364,7 @@ export async function POST(request: NextRequest) {
}
if (action === 'add') {
const { schedule, command, model, description } = body
const { schedule, command, model, description, staggerSeconds } = body
const name = jobName || body.name
if (!schedule || !command || !name) {
return NextResponse.json(
@@ -320,6 +388,9 @@ export async function POST(request: NextRequest) {
schedule: {
kind: 'cron',
expr: schedule,
...(typeof staggerSeconds === 'number' && staggerSeconds > 0
? { staggerMs: staggerSeconds * 1000 } as any
: {}),
},
payload: {
kind: 'agentTurn',
@@ -341,6 +412,49 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ success: true })
}
if (action === 'clone') {
const id = jobId || jobName
if (!id) {
return NextResponse.json({ error: 'Job ID required' }, { status: 400 })
}
const cronFile = await loadCronFile()
if (!cronFile) {
return NextResponse.json({ error: 'Cron file not found' }, { status: 404 })
}
const sourceJob = cronFile.jobs.find(j => j.id === id || j.name === id)
if (!sourceJob) {
return NextResponse.json({ error: 'Job not found' }, { status: 404 })
}
// Generate unique clone name
const existingNames = new Set(cronFile.jobs.map(j => j.name.toLowerCase()))
let cloneName = `${sourceJob.name} (copy)`
let counter = 2
while (existingNames.has(cloneName.toLowerCase())) {
cloneName = `${sourceJob.name} (copy ${counter})`
counter++
}
const clonedJob: OpenClawCronJob = {
...JSON.parse(JSON.stringify(sourceJob)),
id: `mc-${Date.now().toString(36)}`,
name: cloneName,
createdAtMs: Date.now(),
updatedAtMs: Date.now(),
state: {},
}
cronFile.jobs.push(clonedJob)
if (!(await saveCronFile(cronFile))) {
return NextResponse.json({ error: 'Failed to save cron file' }, { status: 500 })
}
return NextResponse.json({ success: true, clonedName: cloneName })
}
return NextResponse.json({ error: 'Invalid action' }, { status: 400 })
} catch (error) {
logger.error({ err: error }, 'Cron management error')
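The clone action's unique-name loop above ("(copy)", "(copy 2)", …) is easy to extract as a standalone function. This is an illustrative restatement, matching the route's case-insensitive comparison:

```typescript
// Standalone version of the clone-name generator used by the cron "clone"
// action: append "(copy)" and bump a counter until the name is unique.
function uniqueCloneName(sourceName: string, existingNames: string[]): string {
  const taken = new Set(existingNames.map((n) => n.toLowerCase()))
  let candidate = `${sourceName} (copy)`
  let counter = 2
  while (taken.has(candidate.toLowerCase())) {
    candidate = `${sourceName} (copy ${counter})`
    counter++
  }
  return candidate
}
```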

src/app/api/debug/route.ts

@@ -0,0 +1,146 @@
import { NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { config } from '@/lib/config'
import { logger } from '@/lib/logger'
const GATEWAY_BASE = `http://${config.gatewayHost}:${config.gatewayPort}`
async function gatewayFetch(
path: string,
options: { method?: string; body?: string; timeoutMs?: number } = {}
): Promise<Response> {
const { method = 'GET', body, timeoutMs = 5000 } = options
const controller = new AbortController()
const timer = setTimeout(() => controller.abort(), timeoutMs)
try {
const res = await fetch(`${GATEWAY_BASE}${path}`, {
method,
signal: controller.signal,
headers: body ? { 'Content-Type': 'application/json' } : undefined,
body,
})
return res
} finally {
clearTimeout(timer)
}
}
export async function GET(request: Request) {
const auth = requireRole(request, 'admin')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const { searchParams } = new URL(request.url)
const action = searchParams.get('action') || 'status'
try {
switch (action) {
case 'status': {
try {
const res = await gatewayFetch('/api/status')
const data = await res.json()
return NextResponse.json(data)
} catch (err) {
logger.warn({ err }, 'debug: gateway unreachable for status')
return NextResponse.json({ gatewayReachable: false })
}
}
case 'health': {
try {
const res = await gatewayFetch('/api/health')
const data = await res.json()
return NextResponse.json(data)
} catch (err) {
logger.warn({ err }, 'debug: gateway unreachable for health')
return NextResponse.json({ healthy: false, error: 'Gateway unreachable' })
}
}
case 'models': {
try {
const res = await gatewayFetch('/api/models')
const data = await res.json()
return NextResponse.json(data)
} catch (err) {
logger.warn({ err }, 'debug: gateway unreachable for models')
return NextResponse.json({ models: [] })
}
}
case 'heartbeat': {
const start = performance.now()
try {
const res = await gatewayFetch('/api/heartbeat', { timeoutMs: 3000 })
const latencyMs = Math.round(performance.now() - start)
const ok = res.ok
return NextResponse.json({ ok, latencyMs, timestamp: Date.now() })
} catch {
const latencyMs = Math.round(performance.now() - start)
return NextResponse.json({ ok: false, latencyMs, timestamp: Date.now() })
}
}
default:
return NextResponse.json({ error: `Unknown action: ${action}` }, { status: 400 })
}
} catch (err) {
logger.error({ err }, 'debug: unexpected error')
return NextResponse.json({ error: 'Internal error' }, { status: 500 })
}
}
export async function POST(request: Request) {
const auth = requireRole(request, 'admin')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const { searchParams } = new URL(request.url)
const action = searchParams.get('action')
if (action !== 'call') {
return NextResponse.json({ error: 'POST only supports action=call' }, { status: 400 })
}
let body: { method?: string; path?: string; body?: any }
try {
body = await request.json()
} catch {
return NextResponse.json({ error: 'Invalid JSON body' }, { status: 400 })
}
const { method, path, body: callBody } = body
if (!method || !['GET', 'POST'].includes(method)) {
return NextResponse.json({ error: 'method must be GET or POST' }, { status: 400 })
}
if (!path || typeof path !== 'string' || !path.startsWith('/api/')) {
return NextResponse.json({ error: 'path must start with /api/' }, { status: 400 })
}
try {
const res = await gatewayFetch(path, {
method,
body: callBody ? JSON.stringify(callBody) : undefined,
timeoutMs: 5000,
})
let responseBody: any
const contentType = res.headers.get('content-type') || ''
if (contentType.includes('application/json')) {
responseBody = await res.json()
} else {
responseBody = await res.text()
}
return NextResponse.json({
status: res.status,
statusText: res.statusText,
contentType,
body: responseBody,
})
} catch (err) {
logger.warn({ err, path }, 'debug: gateway call failed')
return NextResponse.json({ error: 'Gateway unreachable', path }, { status: 502 })
}
}
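The POST handler above only proxies admin calls that use GET or POST against gateway paths under `/api/`. The same guard as a standalone predicate (illustrative; the route returns these strings as 400-level error bodies):

```typescript
// Mirror of the debug proxy's request validation: returns the error
// message the route would send, or null when the call may be proxied.
function validateProxyCall(method: unknown, path: unknown): string | null {
  if (typeof method !== 'string' || !['GET', 'POST'].includes(method)) {
    return 'method must be GET or POST'
  }
  if (typeof path !== 'string' || !path.startsWith('/api/')) {
    return 'path must start with /api/'
  }
  return null // ok to proxy
}
```

Restricting the path prefix keeps the debug endpoint from being used as a general request forwarder to arbitrary gateway routes.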


@@ -0,0 +1,211 @@
import { NextRequest, NextResponse } from 'next/server'
import net from 'node:net'
import { existsSync, statSync } from 'node:fs'
import { requireRole } from '@/lib/auth'
import { config } from '@/lib/config'
import { getDatabase } from '@/lib/db'
import { runOpenClaw } from '@/lib/command'
import { logger } from '@/lib/logger'
import { APP_VERSION } from '@/lib/version'
const INSECURE_PASSWORDS = new Set([
'admin',
'password',
'change-me-on-first-login',
'changeme',
'testpass123',
])
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'admin')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const [version, security, database, agents, sessions, gateway] = await Promise.all([
getVersionInfo(),
getSecurityInfo(),
getDatabaseInfo(),
getAgentInfo(),
getSessionInfo(),
getGatewayInfo(),
])
return NextResponse.json({
system: {
nodeVersion: process.version,
platform: process.platform,
arch: process.arch,
processMemory: process.memoryUsage(),
processUptime: process.uptime(),
isDocker: existsSync('/.dockerenv'),
},
version,
security,
database,
agents,
sessions,
gateway,
retention: config.retention,
})
} catch (error) {
logger.error({ err: error }, 'Diagnostics API error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
async function getVersionInfo() {
let openclaw: string | null = null
try {
const { stdout } = await runOpenClaw(['--version'], { timeoutMs: 3000 })
openclaw = stdout.trim()
} catch {
// openclaw not available
}
return { app: APP_VERSION, openclaw }
}
function getSecurityInfo() {
const checks: Array<{ name: string; pass: boolean; detail: string }> = []
const apiKey = process.env.API_KEY || ''
checks.push({
name: 'API key configured',
pass: Boolean(apiKey) && apiKey !== 'generate-a-random-key',
detail: !apiKey ? 'API_KEY is not set' : apiKey === 'generate-a-random-key' ? 'API_KEY is default value' : 'API_KEY is set',
})
const authPass = process.env.AUTH_PASS || ''
checks.push({
name: 'Auth password secure',
pass: Boolean(authPass) && !INSECURE_PASSWORDS.has(authPass),
detail: !authPass ? 'AUTH_PASS is not set' : INSECURE_PASSWORDS.has(authPass) ? 'AUTH_PASS is a known insecure password' : 'AUTH_PASS is not a common default',
})
const allowedHosts = process.env.MC_ALLOWED_HOSTS || ''
checks.push({
name: 'Allowed hosts configured',
pass: Boolean(allowedHosts.trim()),
detail: allowedHosts.trim() ? 'MC_ALLOWED_HOSTS is configured' : 'MC_ALLOWED_HOSTS is not set',
})
const sameSite = process.env.MC_COOKIE_SAMESITE || ''
checks.push({
name: 'Cookie SameSite strict',
pass: sameSite.toLowerCase() === 'strict',
detail: sameSite ? `MC_COOKIE_SAMESITE is '${sameSite}'` : 'MC_COOKIE_SAMESITE is not set',
})
const hsts = process.env.MC_ENABLE_HSTS || ''
checks.push({
name: 'HSTS enabled',
pass: hsts === '1',
detail: hsts === '1' ? 'HSTS is enabled' : 'MC_ENABLE_HSTS is not set to 1',
})
const rateLimitDisabled = process.env.MC_DISABLE_RATE_LIMIT || ''
checks.push({
name: 'Rate limiting enabled',
pass: !rateLimitDisabled,
detail: rateLimitDisabled ? 'Rate limiting is disabled' : 'Rate limiting is active',
})
const gwHost = config.gatewayHost
checks.push({
name: 'Gateway bound to localhost',
pass: gwHost === '127.0.0.1' || gwHost === 'localhost',
detail: `Gateway host is '${gwHost}'`,
})
const passing = checks.filter(c => c.pass).length
const score = Math.round((passing / checks.length) * 100)
return { score, checks }
}
function getDatabaseInfo() {
try {
const db = getDatabase()
let sizeBytes = 0
try {
sizeBytes = statSync(config.dbPath).size
} catch {
// ignore
}
const journalRow = db.prepare('PRAGMA journal_mode').get() as { journal_mode: string } | undefined
const walMode = journalRow?.journal_mode === 'wal'
let migrationVersion: string | null = null
try {
const row = db.prepare(
"SELECT name FROM sqlite_master WHERE type='table' AND name='migrations'"
).get() as { name?: string } | undefined
if (row?.name) {
const latest = db.prepare(
'SELECT version FROM migrations ORDER BY rowid DESC LIMIT 1'
).get() as { version: string } | undefined
migrationVersion = latest?.version ?? null
}
} catch {
// migrations table may not exist
}
return { sizeBytes, walMode, migrationVersion }
} catch (err) {
logger.error({ err }, 'Diagnostics: database info error')
return { sizeBytes: 0, walMode: false, migrationVersion: null }
}
}
function getAgentInfo() {
try {
const db = getDatabase()
const rows = db.prepare(
'SELECT status, COUNT(*) as count FROM agents GROUP BY status'
).all() as Array<{ status: string; count: number }>
const byStatus: Record<string, number> = {}
let total = 0
for (const row of rows) {
byStatus[row.status] = row.count
total += row.count
}
return { total, byStatus }
} catch {
return { total: 0, byStatus: {} }
}
}
function getSessionInfo() {
try {
const db = getDatabase()
const totalRow = db.prepare('SELECT COUNT(*) as c FROM claude_sessions').get() as { c: number } | undefined
const activeRow = db.prepare(
"SELECT COUNT(*) as c FROM claude_sessions WHERE is_active = 1"
).get() as { c: number } | undefined
return { active: activeRow?.c ?? 0, total: totalRow?.c ?? 0 }
} catch {
return { active: 0, total: 0 }
}
}
async function getGatewayInfo() {
const host = config.gatewayHost
const port = config.gatewayPort
const configured = Boolean(host && port)
let reachable = false
if (configured) {
reachable = await new Promise<boolean>((resolve) => {
const socket = new net.Socket()
socket.setTimeout(1500)
socket.once('connect', () => { socket.destroy(); resolve(true) })
socket.once('timeout', () => { socket.destroy(); resolve(false) })
socket.once('error', () => { socket.destroy(); resolve(false) })
socket.connect(port, host)
})
}
return { configured, reachable, host, port }
}


@@ -25,8 +25,11 @@ export async function GET(request: NextRequest) {
encoder.encode(`data: ${JSON.stringify({ type: 'connected', data: null, timestamp: Date.now() })}\n\n`)
)
// Forward all server events to this SSE client
// Forward workspace-scoped server events to this SSE client
const userWorkspaceId = auth.user.workspace_id ?? 1
const handler = (event: ServerEvent) => {
// Skip events from other workspaces (if event carries workspace_id)
if (event.data?.workspace_id && event.data.workspace_id !== userWorkspaceId) return
try {
controller.enqueue(
encoder.encode(`data: ${JSON.stringify(event)}\n\n`)


@@ -0,0 +1,210 @@
import { NextRequest, NextResponse } from 'next/server'
import { createHash } from 'node:crypto'
import { requireRole } from '@/lib/auth'
import { config } from '@/lib/config'
import { logger } from '@/lib/logger'
import path from 'node:path'
function gatewayUrl(p: string): string {
return `http://${config.gatewayHost}:${config.gatewayPort}${p}`
}
function execApprovalsPath(): string {
return path.join(config.openclawHome, 'exec-approvals.json')
}
function computeHash(raw: string): string {
return createHash('sha256').update(raw, 'utf8').digest('hex')
}
/**
* GET /api/exec-approvals - Fetch pending execution approval requests
* GET /api/exec-approvals?action=allowlist - Fetch per-agent allowlists
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const action = request.nextUrl.searchParams.get('action')
if (action === 'allowlist') {
return getAllowlist()
}
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 5000)
try {
const res = await fetch(gatewayUrl('/api/exec-approvals'), {
signal: controller.signal,
headers: { 'Accept': 'application/json' },
})
clearTimeout(timeout)
if (!res.ok) {
logger.warn({ status: res.status }, 'Gateway exec-approvals endpoint returned error')
return NextResponse.json({ approvals: [] })
}
const data = await res.json()
return NextResponse.json(data)
} catch (err: any) {
clearTimeout(timeout)
if (err.name === 'AbortError') {
logger.warn('Gateway exec-approvals request timed out')
} else {
logger.warn({ err }, 'Gateway exec-approvals unreachable')
}
return NextResponse.json({ approvals: [] })
}
}
async function getAllowlist(): Promise<NextResponse> {
const filePath = execApprovalsPath()
try {
const { readFile } = require('fs/promises')
const raw = await readFile(filePath, 'utf-8')
const parsed = JSON.parse(raw)
const agents: Record<string, { pattern: string }[]> = {}
if (parsed?.agents && typeof parsed.agents === 'object') {
for (const [agentId, agentConfig] of Object.entries(parsed.agents)) {
const cfg = agentConfig as any
if (Array.isArray(cfg?.allowlist)) {
agents[agentId] = cfg.allowlist.map((e: any) => ({ pattern: String(e?.pattern ?? '') }))
} else {
agents[agentId] = []
}
}
}
return NextResponse.json({ agents, hash: computeHash(raw) })
} catch (err: any) {
if (err.code === 'ENOENT') {
return NextResponse.json({ agents: {}, hash: computeHash('') })
}
logger.warn({ err }, 'Failed to read exec-approvals config')
return NextResponse.json({ error: `Failed to read config: ${err.message}` }, { status: 500 })
}
}
/**
* PUT /api/exec-approvals - Save allowlist changes
* Body: { agents: Record<string, { pattern: string }[]>, hash?: string }
*/
export async function PUT(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
let body: { agents: Record<string, { pattern: string }[]>; hash?: string }
try {
body = await request.json()
} catch {
return NextResponse.json({ error: 'Invalid JSON body' }, { status: 400 })
}
if (!body.agents || typeof body.agents !== 'object') {
return NextResponse.json({ error: 'Missing required field: agents' }, { status: 400 })
}
const filePath = execApprovalsPath()
try {
const { readFile, writeFile, mkdir } = require('fs/promises')
const { existsSync } = require('fs')
let parsed: any = { version: 1, agents: {} }
try {
const raw = await readFile(filePath, 'utf-8')
parsed = JSON.parse(raw)
if (body.hash) {
const serverHash = computeHash(raw)
if (body.hash !== serverHash) {
return NextResponse.json(
{ error: 'Config has been modified. Please reload and try again.', code: 'CONFLICT' },
{ status: 409 },
)
}
}
} catch (err: any) {
if (err.code !== 'ENOENT') throw err
}
if (!parsed.agents) parsed.agents = {}
for (const [agentId, patterns] of Object.entries(body.agents)) {
if (!Array.isArray(patterns)) continue // skip malformed entries rather than crash on .length/.map below
if (!parsed.agents[agentId]) parsed.agents[agentId] = {}
if (patterns.length === 0) {
delete parsed.agents[agentId].allowlist
} else {
parsed.agents[agentId].allowlist = patterns.map((p: { pattern: string }) => ({
pattern: String(p.pattern ?? ''),
}))
}
}
const dir = path.dirname(filePath)
if (!existsSync(dir)) {
await mkdir(dir, { recursive: true })
}
const newRaw = JSON.stringify(parsed, null, 2) + '\n'
await writeFile(filePath, newRaw, { mode: 0o600 })
return NextResponse.json({ ok: true, hash: computeHash(newRaw) })
} catch (err: any) {
logger.error({ err }, 'Failed to save exec-approvals config')
return NextResponse.json({ error: `Failed to save: ${err.message}` }, { status: 500 })
}
}
/**
* POST /api/exec-approvals - Respond to an execution approval request
* Body: { id: string, action: 'approve' | 'deny' | 'always_allow', reason?: string }
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
let body: { id: string; action: string; reason?: string }
try {
body = await request.json()
} catch {
return NextResponse.json({ error: 'Invalid JSON body' }, { status: 400 })
}
if (!body.id || typeof body.id !== 'string') {
return NextResponse.json({ error: 'Missing required field: id' }, { status: 400 })
}
const validActions = ['approve', 'deny', 'always_allow']
if (!validActions.includes(body.action)) {
return NextResponse.json({ error: `Invalid action. Must be one of: ${validActions.join(', ')}` }, { status: 400 })
}
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 5000)
try {
const res = await fetch(gatewayUrl('/api/exec-approvals/respond'), {
method: 'POST',
signal: controller.signal,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
id: body.id,
action: body.action,
reason: body.reason,
}),
})
clearTimeout(timeout)
const data = await res.json().catch(() => ({}))
return NextResponse.json(data, { status: res.status })
} catch (err: any) {
clearTimeout(timeout)
if (err.name === 'AbortError') {
logger.error('Gateway exec-approvals respond request timed out')
return NextResponse.json({ error: 'Gateway request timed out' }, { status: 504 })
}
logger.error({ err }, 'Gateway exec-approvals respond failed')
return NextResponse.json({ error: 'Gateway unreachable' }, { status: 502 })
}
}
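The PUT handler above guards concurrent edits with a content hash rather than file locking: the client echoes back the hash it received from a GET, and the server rejects the write with 409 CONFLICT if the file changed in between. A minimal standalone sketch of that check (`computeHash` mirrors the route; `writeAllowed` is an illustrative helper, not part of the diff):

```typescript
import { createHash } from 'node:crypto'

// Same hashing scheme as the route: sha256 over the raw file text.
function computeHash(raw: string): string {
  return createHash('sha256').update(raw, 'utf8').digest('hex')
}

// True when the client's snapshot still matches the server file and the
// write may proceed; false maps to the route's 409 CONFLICT response.
function writeAllowed(serverRaw: string, clientHash?: string): boolean {
  if (!clientHash) return true // clients that omit the hash skip the check
  return clientHash === computeHash(serverRaw)
}

const v1 = '{"version":1,"agents":{}}'
const v2 = '{"version":1,"agents":{"a1":{}}}'
const snapshot = computeHash(v1)
console.log(writeAllowed(v1, snapshot)) // → true (still fresh)
console.log(writeAllowed(v2, snapshot)) // → false (stale, conflict)
```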

View File

@@ -53,6 +53,7 @@ export async function GET(request: NextRequest) {
switch (type) {
case 'audit': {
// audit_log is instance-global (no workspace_id column); export is admin-only so this is safe
rows = db.prepare(`SELECT * FROM audit_log ${where} ORDER BY created_at DESC LIMIT ?`).all(...params, limit)
headers = ['id', 'action', 'actor', 'actor_id', 'target_type', 'target_id', 'detail', 'ip_address', 'user_agent', 'created_at']
filename = 'audit-log'
@@ -77,7 +78,10 @@ export async function GET(request: NextRequest) {
break
}
case 'pipelines': {
rows = db.prepare(`SELECT pr.*, wp.name as pipeline_name FROM pipeline_runs pr LEFT JOIN workflow_pipelines wp ON pr.pipeline_id = wp.id ${where ? where.replace('created_at', 'pr.created_at') : ''} ORDER BY pr.created_at DESC LIMIT ?`).all(...params, limit)
conditions.unshift('pr.workspace_id = ?')
params.unshift(workspaceId)
const scopedWhere = conditions.length > 0 ? `WHERE ${conditions.map(c => c.replace(/^created_at/, 'pr.created_at')).join(' AND ')}` : ''
rows = db.prepare(`SELECT pr.*, wp.name as pipeline_name FROM pipeline_runs pr LEFT JOIN workflow_pipelines wp ON pr.pipeline_id = wp.id ${scopedWhere} ORDER BY pr.created_at DESC LIMIT ?`).all(...params, limit)
headers = ['id', 'pipeline_id', 'pipeline_name', 'status', 'current_step', 'steps_snapshot', 'started_at', 'completed_at', 'triggered_by', 'created_at']
filename = 'pipeline-runs'
break

View File

@@ -1,22 +1,45 @@
import { NextRequest, NextResponse } from 'next/server'
import { createHash } from 'node:crypto'
import { requireRole } from '@/lib/auth'
import { logAuditEvent } from '@/lib/db'
import { config } from '@/lib/config'
import { validateBody, gatewayConfigUpdateSchema } from '@/lib/validation'
import { mutationLimiter } from '@/lib/rate-limit'
import { parseJsonRelaxed } from '@/lib/json-relaxed'
import { getDetectedGatewayToken } from '@/lib/gateway-runtime'
function getConfigPath(): string | null {
return config.openclawConfigPath || null
}
function gatewayUrl(path: string): string {
return `http://${config.gatewayHost}:${config.gatewayPort}${path}`
}
function gatewayHeaders(): Record<string, string> {
const token = getDetectedGatewayToken()
const headers: Record<string, string> = { 'Content-Type': 'application/json' }
if (token) headers['Authorization'] = `Bearer ${token}`
return headers
}
function computeHash(raw: string): string {
return createHash('sha256').update(raw, 'utf8').digest('hex')
}
/**
* GET /api/gateway-config - Read the gateway configuration
* GET /api/gateway-config?action=schema - Get the config JSON schema
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'admin')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const action = request.nextUrl.searchParams.get('action')
if (action === 'schema') {
return getSchema()
}
const configPath = getConfigPath()
if (!configPath) {
return NextResponse.json({ error: 'OPENCLAW_CONFIG_PATH not configured' }, { status: 404 })
@@ -25,7 +48,8 @@ export async function GET(request: NextRequest) {
try {
const { readFile } = require('fs/promises')
const raw = await readFile(configPath, 'utf-8')
const parsed = parseJsonRelaxed<any>(raw)
const parsed = JSON.parse(raw)
const hash = computeHash(raw)
// Redact sensitive fields for display
const redacted = redactSensitive(JSON.parse(JSON.stringify(parsed)))
@@ -34,6 +58,7 @@ export async function GET(request: NextRequest) {
path: configPath,
config: redacted,
raw_size: raw.length,
hash,
})
} catch (err: any) {
if (err.code === 'ENOENT') {
@@ -43,12 +68,38 @@ export async function GET(request: NextRequest) {
}
}
async function getSchema(): Promise<NextResponse> {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 5000)
try {
const res = await fetch(gatewayUrl('/api/config/schema'), {
signal: controller.signal,
headers: gatewayHeaders(),
})
clearTimeout(timeout)
if (!res.ok) {
return NextResponse.json(
{ error: `Gateway returned ${res.status}` },
{ status: 502 },
)
}
const data = await res.json()
return NextResponse.json(data)
} catch (err: any) {
clearTimeout(timeout)
return NextResponse.json(
{ error: err.name === 'AbortError' ? 'Gateway timeout' : 'Gateway unreachable' },
{ status: 502 },
)
}
}
/**
* PUT /api/gateway-config - Update specific config fields
* Body: { updates: { "path.to.key": value, ... } }
* PUT /api/gateway-config?action=apply - Hot-apply config via gateway RPC
* PUT /api/gateway-config?action=update - System update via gateway RPC
*
* Uses dot-notation paths to set nested values.
* CRITICAL: Preserves gateway.auth.password and other sensitive fields.
* Body: { updates: { "path.to.key": value, ... }, hash?: string }
*/
export async function PUT(request: NextRequest) {
const auth = requireRole(request, 'admin')
@@ -57,6 +108,16 @@ export async function PUT(request: NextRequest) {
const rateCheck = mutationLimiter(request)
if (rateCheck) return rateCheck
const action = request.nextUrl.searchParams.get('action')
if (action === 'apply') {
return applyConfig(request, auth)
}
if (action === 'update') {
return updateSystem(request, auth)
}
const configPath = getConfigPath()
if (!configPath) {
return NextResponse.json({ error: 'OPENCLAW_CONFIG_PATH not configured' }, { status: 404 })
@@ -77,7 +138,30 @@ export async function PUT(request: NextRequest) {
try {
const { readFile, writeFile } = require('fs/promises')
const raw = await readFile(configPath, 'utf-8')
const parsed = parseJsonRelaxed<any>(raw)
// Hash-based concurrency check
const clientHash = (body as any).hash
if (clientHash) {
const serverHash = computeHash(raw)
if (clientHash !== serverHash) {
return NextResponse.json(
{ error: 'Config has been modified by another user. Please reload and try again.', code: 'CONFLICT' },
{ status: 409 },
)
}
}
const parsed = JSON.parse(raw)
for (const dotPath of Object.keys(body.updates)) {
const [rootKey] = dotPath.split('.')
if (!rootKey || !(rootKey in parsed)) {
return NextResponse.json(
{ error: `Unknown config root: ${rootKey || dotPath}` },
{ status: 400 },
)
}
}
// Apply updates via dot-notation
const appliedKeys: string[] = []
@@ -87,7 +171,8 @@ export async function PUT(request: NextRequest) {
}
// Write back with pretty formatting
await writeFile(configPath, JSON.stringify(parsed, null, 2) + '\n')
const newRaw = JSON.stringify(parsed, null, 2) + '\n'
await writeFile(configPath, newRaw)
const ipAddress = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown'
logAuditEvent({
@@ -98,12 +183,92 @@ export async function PUT(request: NextRequest) {
ip_address: ipAddress,
})
return NextResponse.json({ updated: appliedKeys, count: appliedKeys.length })
return NextResponse.json({
updated: appliedKeys,
count: appliedKeys.length,
hash: computeHash(newRaw),
})
} catch (err: any) {
return NextResponse.json({ error: `Failed to update config: ${err.message}` }, { status: 500 })
}
}
async function applyConfig(request: NextRequest, auth: any): Promise<NextResponse> {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 10000)
try {
const res = await fetch(gatewayUrl('/api/config/apply'), {
method: 'POST',
signal: controller.signal,
headers: gatewayHeaders(),
})
clearTimeout(timeout)
const ipAddress = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown'
logAuditEvent({
action: 'gateway_config_apply',
actor: auth.user.username,
actor_id: auth.user.id,
detail: { status: res.status },
ip_address: ipAddress,
})
if (!res.ok) {
const text = await res.text().catch(() => '')
return NextResponse.json(
{ error: `Apply failed (${res.status}): ${text}` },
{ status: 502 },
)
}
const data = await res.json().catch(() => ({}))
return NextResponse.json({ ok: true, ...data })
} catch (err: any) {
clearTimeout(timeout)
return NextResponse.json(
{ error: err.name === 'AbortError' ? 'Gateway timeout' : 'Gateway unreachable' },
{ status: 502 },
)
}
}
async function updateSystem(request: NextRequest, auth: any): Promise<NextResponse> {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 15000)
try {
const res = await fetch(gatewayUrl('/api/config/update'), {
method: 'POST',
signal: controller.signal,
headers: gatewayHeaders(),
})
clearTimeout(timeout)
const ipAddress = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown'
logAuditEvent({
action: 'gateway_config_system_update',
actor: auth.user.username,
actor_id: auth.user.id,
detail: { status: res.status },
ip_address: ipAddress,
})
if (!res.ok) {
const text = await res.text().catch(() => '')
return NextResponse.json(
{ error: `Update failed (${res.status}): ${text}` },
{ status: 502 },
)
}
const data = await res.json().catch(() => ({}))
return NextResponse.json({ ok: true, ...data })
} catch (err: any) {
clearTimeout(timeout)
return NextResponse.json(
{ error: err.name === 'AbortError' ? 'Gateway timeout' : 'Gateway unreachable' },
{ status: 502 },
)
}
}
/** Set a value in a nested object using dot-notation path */
function setNestedValue(obj: any, path: string, value: any) {
const keys = path.split('.')
@@ -124,7 +289,7 @@ function redactSensitive(obj: any, parentKey = ''): any {
for (const key of Object.keys(obj)) {
if (sensitiveKeys.some(sk => key.toLowerCase().includes(sk))) {
if (typeof obj[key] === 'string' && obj[key].length > 0) {
obj[key] = '••••••••'
obj[key] = '--------'
}
} else if (typeof obj[key] === 'object' && obj[key] !== null) {
redactSensitive(obj[key], key)
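The dot-notation update path leans on the `setNestedValue` helper, which the hunk above truncates. A plausible standalone version, shown only to illustrate the traversal (the route's actual implementation may differ in details):

```typescript
// Illustrative sketch of dot-notation assignment as used by the config route:
// walk intermediate keys, creating objects as needed, then set the leaf.
function setNestedValue(obj: any, path: string, value: any): void {
  const keys = path.split('.')
  let cursor = obj
  for (let i = 0; i < keys.length - 1; i++) {
    const key = keys[i]
    // Create intermediate objects so deep paths can be set on sparse configs.
    if (typeof cursor[key] !== 'object' || cursor[key] === null) cursor[key] = {}
    cursor = cursor[key]
  }
  cursor[keys[keys.length - 1]] = value
}

const cfg: any = { gateway: { port: 18789 } }
setNestedValue(cfg, 'gateway.auth.mode', 'token')
console.log(cfg.gateway.auth.mode) // → 'token'
console.log(cfg.gateway.port)      // → 18789 (siblings untouched)
```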

View File

@@ -1,13 +1,126 @@
import { NextRequest, NextResponse } from 'next/server'
import { readFileSync } from 'node:fs'
import { requireRole } from '@/lib/auth'
import { getDatabase } from '@/lib/db'
import { buildGatewayWebSocketUrl } from '@/lib/gateway-url'
import { getDetectedGatewayToken } from '@/lib/gateway-runtime'
interface GatewayEntry {
id: number
host: string
port: number
token: string
is_primary: number
}
function inferBrowserProtocol(request: NextRequest): 'http:' | 'https:' {
const forwardedProto = String(request.headers.get('x-forwarded-proto') || '').split(',')[0]?.trim().toLowerCase()
if (forwardedProto === 'https') return 'https:'
if (forwardedProto === 'http') return 'http:'
const origin = request.headers.get('origin') || request.headers.get('referer') || ''
if (origin) {
try {
const parsed = new URL(origin)
if (parsed.protocol === 'https:') return 'https:'
if (parsed.protocol === 'http:') return 'http:'
} catch {
// ignore and continue fallback resolution
}
}
if (request.nextUrl.protocol === 'https:') return 'https:'
return 'http:'
}
const LOCALHOST_HOSTS = new Set(['127.0.0.1', 'localhost', '::1'])
/**
* Detect whether Tailscale Serve is proxying a `/gw` route to the gateway.
*
* Checks in order:
* 1. `tailscale serve status --json`: look for a /gw handler (authoritative)
* 2. Fallback: `gateway.tailscale.mode === 'serve'` in openclaw.json (legacy)
*/
function detectTailscaleServe(): boolean {
// 1. Check live Tailscale Serve config for a /gw handler
try {
const { execFileSync } = require('node:child_process')
const raw = execFileSync('tailscale', ['serve', 'status', '--json'], {
timeout: 3000,
encoding: 'utf-8',
stdio: ['ignore', 'pipe', 'ignore'],
})
const config = JSON.parse(raw)
const web = config?.Web
if (web) {
for (const host of Object.values(web) as any[]) {
if ((host as any)?.Handlers?.['/gw']) return true
}
}
} catch {
// tailscale CLI not available or not running — fall through
}
// 2. Legacy: check openclaw.json config
const configPath = process.env.OPENCLAW_CONFIG_PATH || ''
if (!configPath) return false
try {
const raw = readFileSync(configPath, 'utf-8')
const config = JSON.parse(raw)
return config?.gateway?.tailscale?.mode === 'serve'
} catch {
return false
}
}
/** Cache Tailscale Serve detection with 60-second TTL. */
let _tailscaleServeCache: { value: boolean; expiresAt: number } | null = null
const TAILSCALE_CACHE_TTL_MS = 60_000
function isTailscaleServe(): boolean {
const now = Date.now()
if (!_tailscaleServeCache || now > _tailscaleServeCache.expiresAt) {
_tailscaleServeCache = { value: detectTailscaleServe(), expiresAt: now + TAILSCALE_CACHE_TTL_MS }
}
return _tailscaleServeCache.value
}
/** Extract the browser-facing hostname from the request. */
function getBrowserHostname(request: NextRequest): string {
const origin = request.headers.get('origin') || request.headers.get('referer') || ''
if (origin) {
try { return new URL(origin).hostname } catch { /* ignore */ }
}
const hostHeader = request.headers.get('host') || ''
return hostHeader.split(':')[0]
}
/**
* When the gateway is on localhost but the browser is remote, resolve the
* correct WebSocket URL the browser should use.
*
* - Tailscale Serve mode: `wss://<dashboard-host>/gw` (Tailscale proxies /gw to localhost gateway)
* - Otherwise: rewrite host to dashboard hostname with the gateway port
*/
function resolveRemoteGatewayUrl(
gateway: { host: string; port: number },
request: NextRequest,
): string | null {
const normalized = (gateway.host || '').toLowerCase().trim()
if (!LOCALHOST_HOSTS.has(normalized)) return null // remote host — use normal path
const browserHost = getBrowserHostname(request)
if (!browserHost || LOCALHOST_HOSTS.has(browserHost.toLowerCase())) return null // local access
// Browser is remote — determine the correct proxied URL
if (isTailscaleServe()) {
// Tailscale Serve proxies /gw → localhost:18789 with TLS
return `wss://${browserHost}/gw`
}
// No Tailscale Serve — try direct connection to dashboard host on gateway port
const protocol = inferBrowserProtocol(request) === 'https:' ? 'wss' : 'ws'
return `${protocol}://${browserHost}:${gateway.port}`
}
function ensureTable(db: ReturnType<typeof getDatabase>) {
@@ -35,7 +148,10 @@ function ensureTable(db: ReturnType<typeof getDatabase>) {
* Resolves websocket URL and token for a selected gateway without exposing tokens in list payloads.
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'operator')
// Any authenticated dashboard user may initiate a gateway websocket connect.
// Restricting this to operator can cause startup fallback to connect without auth,
// which then fails as "device identity required".
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const db = getDatabase()
@@ -53,19 +169,32 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ error: 'id is required' }, { status: 400 })
}
const gateway = db.prepare('SELECT id, host, port, token FROM gateways WHERE id = ?').get(id) as GatewayEntry | undefined
const gateway = db.prepare('SELECT id, host, port, token, is_primary FROM gateways WHERE id = ?').get(id) as GatewayEntry | undefined
if (!gateway) {
return NextResponse.json({ error: 'Gateway not found' }, { status: 404 })
}
const ws_url = buildGatewayWebSocketUrl({
// When gateway host is localhost but the browser is remote (e.g. Tailscale),
// resolve the correct browser-accessible WebSocket URL.
const remoteUrl = resolveRemoteGatewayUrl(gateway, request)
const ws_url = remoteUrl || buildGatewayWebSocketUrl({
host: gateway.host,
port: gateway.port,
browserProtocol: request.nextUrl.protocol,
browserProtocol: inferBrowserProtocol(request),
})
const envToken = (process.env.NEXT_PUBLIC_GATEWAY_TOKEN || process.env.NEXT_PUBLIC_WS_TOKEN || '').trim()
const token = (gateway.token || '').trim() || envToken
const dbToken = (gateway.token || '').trim()
const detectedToken = gateway.is_primary === 1 ? getDetectedGatewayToken() : ''
const token = detectedToken || dbToken
// Keep runtime DB aligned with detected OpenClaw gateway token for primary gateway.
if (gateway.is_primary === 1 && detectedToken && detectedToken !== dbToken) {
try {
db.prepare('UPDATE gateways SET token = ?, updated_at = (unixepoch()) WHERE id = ?').run(detectedToken, gateway.id)
} catch {
// Non-fatal: connect still succeeds with detected token even if persistence fails.
}
}
return NextResponse.json({
id: gateway.id,
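The protocol inference above boils down to a small precedence chain: `x-forwarded-proto` first, then Origin/Referer, then the request URL. The header-parsing step in isolation (a sketch; real requests flow through `NextRequest`):

```typescript
// Sketch of the x-forwarded-proto handling used by inferBrowserProtocol:
// the header may carry a comma-separated chain ("https, http") when several
// proxies are involved; only the first (client-facing) hop matters.
function protoFromForwardedHeader(header: string | null): 'http:' | 'https:' | null {
  const first = String(header || '').split(',')[0]?.trim().toLowerCase()
  if (first === 'https') return 'https:'
  if (first === 'http') return 'http:'
  return null // fall through to Origin/Referer, then request URL
}

console.log(protoFromForwardedHeader('https'))       // → 'https:'
console.log(protoFromForwardedHeader('https, http')) // → 'https:' (first hop wins)
console.log(protoFromForwardedHeader(null))          // → null
```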

View File

@@ -0,0 +1,98 @@
import { NextRequest, NextResponse } from 'next/server'
import { readFileSync } from 'node:fs'
import { execFileSync } from 'node:child_process'
import { requireRole } from '@/lib/auth'
interface DiscoveredGateway {
user: string
port: number
active: boolean
description: string
}
/**
* GET /api/gateways/discover
* Discovers OpenClaw gateways via systemd services and port scanning.
* Attempts each user's openclaw.json for the port but does not require it; falls back to socket inspection.
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const discovered: DiscoveredGateway[] = []
// Parse systemd services for openclaw-gateway instances
try {
const output = execFileSync('systemctl', [
'list-units', '--type=service', '--plain', '--no-legend', '--no-pager',
], { encoding: 'utf-8', timeout: 3000 })
const gwLines = output.split('\n').filter(l => l.includes('openclaw') && l.includes('gateway'))
for (const line of gwLines) {
// e.g. "openclaw-gateway@quant.service loaded active running OpenClaw Gateway (quant)"
const parts = line.trim().split(/\s+/)
const serviceName = parts[0] || ''
const state = parts[2] || '' // active/inactive
const description = parts.slice(4).join(' ') // "OpenClaw Gateway (quant)"
// Extract user from service name
let user = ''
const templateMatch = serviceName.match(/openclaw-gateway@(\w+)\.service/)
if (templateMatch) {
user = templateMatch[1]
} else {
// Custom service name like "openclaw-leads-gateway.service"
const customMatch = serviceName.match(/openclaw-(\w+)-gateway\.service/)
if (customMatch) user = customMatch[1]
}
if (!user) continue
// Resolve the port: prefer the user's openclaw.json, then fall back to inspecting listening sockets
let port = 0
try {
const configPath = `/home/${user}/.openclaw/openclaw.json`
const raw = readFileSync(configPath, 'utf-8')
const config = JSON.parse(raw)
if (typeof config?.gateway?.port === 'number') port = config.gateway.port
} catch {
// Can't read config — try to detect from ss output
}
// If we couldn't read config, try finding port via ss for the service PID
if (!port) {
try {
const pidOutput = execFileSync('systemctl', [
'show', serviceName, '--property=ExecMainPID', '--value',
], { encoding: 'utf-8', timeout: 2000 }).trim()
const pid = parseInt(pidOutput, 10)
if (pid > 0) {
const ssOutput = execFileSync('ss', ['-ltnp'], {
encoding: 'utf-8', timeout: 2000,
})
const pidPattern = `pid=${pid},`
for (const ssLine of ssOutput.split('\n')) {
if (ssLine.includes(pidPattern)) {
const portMatch = ssLine.match(/:(\d+)\s/)
if (portMatch) { port = parseInt(portMatch[1], 10); break }
}
}
}
} catch { /* ignore */ }
}
if (!port) continue
discovered.push({
user,
port,
active: state === 'active',
description: description.replace(/[()]/g, '').trim(),
})
}
} catch {
// systemctl not available or failed — fall back silently
}
return NextResponse.json({ gateways: discovered })
}
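The discovery loop extracts the owning user from two service-name shapes. That parsing step in isolation (a sketch; the regexes mirror the route above, and `userFromServiceName` is an illustrative name):

```typescript
// Mirrors the route's two service-name conventions:
//   templated: openclaw-gateway@<user>.service
//   custom:    openclaw-<user>-gateway.service
function userFromServiceName(serviceName: string): string {
  const templateMatch = serviceName.match(/openclaw-gateway@(\w+)\.service/)
  if (templateMatch) return templateMatch[1]
  const customMatch = serviceName.match(/openclaw-(\w+)-gateway\.service/)
  return customMatch ? customMatch[1] : ''
}

console.log(userFromServiceName('openclaw-gateway@quant.service')) // → 'quant'
console.log(userFromServiceName('openclaw-leads-gateway.service')) // → 'leads'
console.log(userFromServiceName('nginx.service'))                  // → ''
```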

View File

@@ -46,20 +46,95 @@ function hasOpenClaw32ToolsProfileRisk(version: string | null): boolean {
return minor >= 2
}
function isBlockedUrl(urlStr: string): boolean {
/** Check whether an IPv4 address falls within a CIDR block. */
function ipv4InCidr(ip: string, cidr: string): boolean {
const [base, bits] = cidr.split('/')
const mask = ~((1 << (32 - Number(bits))) - 1) >>> 0
const ipNum = ipv4ToNum(ip)
const baseNum = ipv4ToNum(base)
if (ipNum === null || baseNum === null) return false
return (ipNum & mask) === (baseNum & mask)
}
function ipv4ToNum(ip: string): number | null {
const parts = ip.split('.')
if (parts.length !== 4) return null
let num = 0
for (const p of parts) {
const n = Number(p)
if (!Number.isFinite(n) || n < 0 || n > 255) return null
num = (num << 8) | n
}
return num >>> 0
}
const BLOCKED_PRIVATE_CIDRS = [
'10.0.0.0/8',
'172.16.0.0/12',
'192.168.0.0/16',
'169.254.0.0/16',
'127.0.0.0/8',
]
const BLOCKED_HOSTNAMES = new Set([
'metadata.google.internal',
'metadata.internal',
'instance-data',
])
function isBlockedUrl(urlStr: string, userConfiguredHosts: Set<string>): boolean {
try {
const url = new URL(urlStr)
const hostname = url.hostname
// Block link-local / cloud metadata endpoints
if (hostname.startsWith('169.254.')) return true
// Allow user-configured gateway hosts (operators intentionally target their own infra)
if (userConfiguredHosts.has(hostname)) return false
// Block well-known cloud metadata hostnames
if (hostname === 'metadata.google.internal') return true
if (BLOCKED_HOSTNAMES.has(hostname)) return true
// Block private/reserved IPv4 ranges
if (/^\d{1,3}(\.\d{1,3}){3}$/.test(hostname)) {
for (const cidr of BLOCKED_PRIVATE_CIDRS) {
if (ipv4InCidr(hostname, cidr)) return true
}
}
return false
} catch {
return true // Block malformed URLs
}
}
function buildGatewayProbeUrl(host: string, port: number): string | null {
const rawHost = String(host || '').trim()
if (!rawHost) return null
const hasProtocol =
rawHost.startsWith('ws://') ||
rawHost.startsWith('wss://') ||
rawHost.startsWith('http://') ||
rawHost.startsWith('https://')
if (hasProtocol) {
try {
const parsed = new URL(rawHost)
if (parsed.protocol === 'ws:') parsed.protocol = 'http:'
if (parsed.protocol === 'wss:') parsed.protocol = 'https:'
if (!parsed.port && Number.isFinite(port) && port > 0) {
parsed.port = String(port)
}
if (!parsed.pathname) parsed.pathname = '/'
return parsed.toString()
} catch {
return null
}
}
if (!Number.isFinite(port) || port <= 0) return null
return `http://${rawHost}:${port}/`
}
/**
* POST /api/gateways/health - Server-side health probe for all gateways
* Probes gateways from the server where loopback addresses are reachable.
@@ -71,6 +146,15 @@ export async function POST(request: NextRequest) {
const db = getDatabase()
const gateways = db.prepare("SELECT * FROM gateways ORDER BY is_primary DESC, name ASC").all() as GatewayEntry[]
// Build set of user-configured gateway hosts so the SSRF filter allows them
const configuredHosts = new Set<string>()
for (const gw of gateways) {
const h = (gw.host || '').trim()
if (h) {
try { configuredHosts.add(new URL(h.includes('://') ? h : `http://${h}`).hostname) } catch { configuredHosts.add(h) }
}
}
// Prepare update statements once (avoids N+1)
const updateOnlineStmt = db.prepare(
"UPDATE gateways SET status = ?, latency = ?, last_seen = (unixepoch()), updated_at = (unixepoch()) WHERE id = ?"
@@ -82,9 +166,13 @@ export async function POST(request: NextRequest) {
const results: HealthResult[] = []
for (const gw of gateways) {
const probeUrl = "http://" + gw.host + ":" + gw.port + "/"
const probeUrl = buildGatewayProbeUrl(gw.host, gw.port)
if (!probeUrl) {
results.push({ id: gw.id, name: gw.name, status: 'error', latency: null, agents: [], sessions_count: 0, error: 'Invalid gateway address' })
continue
}
if (isBlockedUrl(probeUrl)) {
if (isBlockedUrl(probeUrl, configuredHosts)) {
results.push({ id: gw.id, name: gw.name, status: 'error', latency: null, agents: [], sessions_count: 0, error: 'Blocked URL' })
continue
}
@@ -106,8 +194,6 @@ export async function POST(request: NextRequest) {
? 'OpenClaw 2026.3.2+ defaults tools.profile=messaging; Mission Control should enforce coding profile when spawning.'
: undefined
updateOnlineStmt.run(status, latency, gw.id)
results.push({
id: gw.id,
name: gw.name,
@@ -119,8 +205,6 @@ export async function POST(request: NextRequest) {
compatibility_warning: compatibilityWarning,
})
} catch (err: any) {
updateOfflineStmt.run("offline", gw.id)
results.push({
id: gw.id,
name: gw.name,
@@ -133,5 +217,16 @@ export async function POST(request: NextRequest) {
}
}
// Persist all probe results in a single transaction
db.transaction(() => {
for (const r of results) {
if (r.status === 'online' || r.status === 'error') {
updateOnlineStmt.run(r.status, r.latency, r.id)
} else {
updateOfflineStmt.run(r.status, r.id)
}
}
})()
return NextResponse.json({ results, probed_at: Date.now() })
}
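The CIDR matcher that backs the SSRF filter above can be sanity-checked standalone. Reproducing `ipv4ToNum` and `ipv4InCidr` verbatim from the diff:

```typescript
// Reproduced from the route for a standalone sanity check.
function ipv4ToNum(ip: string): number | null {
  const parts = ip.split('.')
  if (parts.length !== 4) return null
  let num = 0
  for (const p of parts) {
    const n = Number(p)
    if (!Number.isFinite(n) || n < 0 || n > 255) return null
    num = (num << 8) | n
  }
  return num >>> 0 // force unsigned 32-bit
}

// True when ip falls inside the CIDR block (e.g. '10.0.0.0/8').
function ipv4InCidr(ip: string, cidr: string): boolean {
  const [base, bits] = cidr.split('/')
  const mask = ~((1 << (32 - Number(bits))) - 1) >>> 0
  const ipNum = ipv4ToNum(ip)
  const baseNum = ipv4ToNum(base)
  if (ipNum === null || baseNum === null) return false
  return (ipNum & mask) === (baseNum & mask)
}

console.log(ipv4InCidr('10.1.2.3', '10.0.0.0/8'))            // → true (blocked range)
console.log(ipv4InCidr('172.32.0.1', '172.16.0.0/12'))       // → false (just outside /12)
console.log(ipv4InCidr('169.254.169.254', '169.254.0.0/16')) // → true (metadata IP)
```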

View File

@@ -1,6 +1,7 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { getDatabase } from '@/lib/db'
import { getDetectedGatewayPort, getDetectedGatewayToken } from '@/lib/gateway-runtime'
interface GatewayEntry {
id: number
@@ -54,11 +55,8 @@ export async function GET(request: NextRequest) {
if (gateways.length === 0) {
const name = String(process.env.MC_DEFAULT_GATEWAY_NAME || 'primary')
const host = String(process.env.OPENCLAW_GATEWAY_HOST || '127.0.0.1')
const mainPort = parseInt(process.env.OPENCLAW_GATEWAY_PORT || process.env.GATEWAY_PORT || process.env.NEXT_PUBLIC_GATEWAY_PORT || '18789')
const mainToken =
process.env.OPENCLAW_GATEWAY_TOKEN ||
process.env.GATEWAY_TOKEN ||
''
const mainPort = getDetectedGatewayPort() || parseInt(process.env.NEXT_PUBLIC_GATEWAY_PORT || '18789')
const mainToken = getDetectedGatewayToken()
db.prepare(`
INSERT INTO gateways (name, host, port, token, is_primary) VALUES (?, ?, ?, ?, 1)

View File

@@ -14,6 +14,7 @@ import {
updateIssueState,
type GitHubIssue,
} from '@/lib/github'
import { initializeLabels, pullFromGitHub } from '@/lib/github-sync-engine'
/**
* GET /api/github?action=issues&repo=owner/repo&state=open&labels=bug
@@ -83,6 +84,10 @@ export async function POST(request: NextRequest) {
return await handleClose(body, auth.user.username, auth.user.workspace_id ?? 1)
case 'status':
return handleStatus(auth.user.workspace_id ?? 1)
case 'init-labels':
return await handleInitLabels(body, auth.user.workspace_id ?? 1)
case 'sync-project':
return await handleSyncProject(body, auth.user.username, auth.user.workspace_id ?? 1)
default:
return NextResponse.json({ error: 'Unknown action' }, { status: 400 })
}
@@ -417,6 +422,67 @@ async function handleGitHubStats() {
})
}
// ── Init Labels: create MC labels on repo ────────────────────────
async function handleInitLabels(
body: { repo?: string },
workspaceId: number
) {
const repo = body.repo || process.env.GITHUB_DEFAULT_REPO
if (!repo) {
return NextResponse.json({ error: 'repo is required' }, { status: 400 })
}
await initializeLabels(repo)
// Mark project labels as initialized
const db = getDatabase()
db.prepare(`
UPDATE projects
SET github_labels_initialized = 1, updated_at = unixepoch()
WHERE github_repo = ? AND workspace_id = ?
`).run(repo, workspaceId)
return NextResponse.json({ ok: true, repo })
}
// ── Sync Project: pull from GitHub for a project ─────────────────
async function handleSyncProject(
body: { project_id?: number },
actor: string,
workspaceId: number
) {
if (typeof body.project_id !== 'number') {
return NextResponse.json({ error: 'project_id is required' }, { status: 400 })
}
const db = getDatabase()
const project = db.prepare(`
SELECT id, github_repo, github_sync_enabled, github_default_branch
FROM projects
WHERE id = ? AND workspace_id = ? AND status = 'active'
`).get(body.project_id, workspaceId) as any | undefined
if (!project) {
return NextResponse.json({ error: 'Project not found' }, { status: 404 })
}
if (!project.github_repo || !project.github_sync_enabled) {
return NextResponse.json({ error: 'GitHub sync not enabled for this project' }, { status: 400 })
}
const result = await pullFromGitHub(project, workspaceId)
db_helpers.logActivity(
'github_sync', 'project', project.id, actor,
`Manual sync: pulled ${result.pulled}, pushed ${result.pushed}`,
{ repo: project.github_repo, ...result },
workspaceId
)
return NextResponse.json({ ok: true, ...result })
}
// ── Priority mapping helper ─────────────────────────────────────
function mapPriority(labels: string[]): 'critical' | 'high' | 'medium' | 'low' {

View File

@@ -0,0 +1,109 @@
import { NextRequest, NextResponse } from 'next/server'
import { getDatabase } from '@/lib/db'
import { requireRole } from '@/lib/auth'
import { logger } from '@/lib/logger'
import { pullFromGitHub } from '@/lib/github-sync-engine'
import { getSyncPollerStatus } from '@/lib/github-sync-poller'
/**
* GET /api/github/sync: returns sync status for all GitHub-linked projects.
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const syncs = db.prepare(`
SELECT
gs.project_id,
p.name as project_name,
p.github_repo,
MAX(gs.last_synced_at) as last_synced_at,
SUM(gs.changes_pushed) as total_pushed,
SUM(gs.changes_pulled) as total_pulled,
COUNT(*) as sync_count
FROM github_syncs gs
LEFT JOIN projects p ON p.id = gs.project_id AND p.workspace_id = gs.workspace_id
WHERE gs.workspace_id = ? AND gs.project_id IS NOT NULL
GROUP BY gs.project_id
ORDER BY last_synced_at DESC
`).all(workspaceId)
const poller = getSyncPollerStatus()
return NextResponse.json({ syncs, poller })
} catch (error) {
logger.error({ err: error }, 'GET /api/github/sync error')
return NextResponse.json({ error: 'Failed to fetch sync status' }, { status: 500 })
}
}
/**
* POST /api/github/sync: trigger a sync manually.
* Body: { action: 'trigger', project_id: number } or { action: 'trigger-all' }
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const body = await request.json()
const { action, project_id } = body
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
if (action === 'trigger' && typeof project_id === 'number') {
const project = db.prepare(`
SELECT id, github_repo, github_sync_enabled, github_default_branch
FROM projects
WHERE id = ? AND workspace_id = ? AND status = 'active'
`).get(project_id, workspaceId) as any | undefined
if (!project) {
return NextResponse.json({ error: 'Project not found' }, { status: 404 })
}
if (!project.github_repo || !project.github_sync_enabled) {
return NextResponse.json({ error: 'GitHub sync not enabled for this project' }, { status: 400 })
}
const result = await pullFromGitHub(project, workspaceId)
return NextResponse.json({ ok: true, ...result })
}
if (action === 'trigger-all') {
const projects = db.prepare(`
SELECT id, github_repo, github_sync_enabled, github_default_branch
FROM projects
WHERE github_sync_enabled = 1 AND github_repo IS NOT NULL AND workspace_id = ? AND status = 'active'
`).all(workspaceId) as any[]
let totalPulled = 0
let totalPushed = 0
for (const project of projects) {
try {
const result = await pullFromGitHub(project, workspaceId)
totalPulled += result.pulled
totalPushed += result.pushed
} catch (err) {
logger.error({ err, projectId: project.id }, 'Trigger-all: project sync failed')
}
}
return NextResponse.json({
ok: true,
projects_synced: projects.length,
pulled: totalPulled,
pushed: totalPushed,
})
}
return NextResponse.json({ error: 'Unknown action. Use trigger or trigger-all' }, { status: 400 })
} catch (error) {
logger.error({ err: error }, 'POST /api/github/sync error')
return NextResponse.json({ error: 'Sync trigger failed' }, { status: 500 })
}
}
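The `trigger-all` branch above runs one pull per project inside a try/catch so a single failing project cannot abort the batch, then reports aggregate counts. A generic sketch of that pattern (names are illustrative, not the actual sync-engine API):

```typescript
// Aggregate async jobs over a list, tolerating individual failures.
interface SyncResult { pulled: number; pushed: number }

async function syncAll<T>(
  items: T[],
  syncOne: (item: T) => Promise<SyncResult>,
): Promise<{ synced: number; failed: number; pulled: number; pushed: number }> {
  let pulled = 0
  let pushed = 0
  let failed = 0
  for (const item of items) {
    try {
      const r = await syncOne(item)
      pulled += r.pulled
      pushed += r.pushed
    } catch {
      failed++ // log and continue; one bad project must not abort the batch
    }
  }
  return { synced: items.length - failed, failed, pulled, pushed }
}
```

Note that the route reports `projects_synced: projects.length` even when some projects fail; tracking failures separately, as sketched here, makes partial failure visible to the caller.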

View File

@@ -0,0 +1,16 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { getHermesMemory } from '@/lib/hermes-memory'
/**
* GET /api/hermes/memory: returns Hermes memory file contents.
* Read-only bridge: MC reads from ~/.hermes/memories/
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const result = getHermesMemory()
return NextResponse.json(result)
}

src/app/api/hermes/route.ts (new file, 182 lines)
View File

@@ -0,0 +1,182 @@
import { NextRequest, NextResponse } from 'next/server'
import { existsSync, mkdirSync, writeFileSync, rmSync } from 'node:fs'
import { join } from 'node:path'
import { requireRole } from '@/lib/auth'
import { config } from '@/lib/config'
import { isHermesInstalled, isHermesGatewayRunning, scanHermesSessions } from '@/lib/hermes-sessions'
import { getHermesTasks } from '@/lib/hermes-tasks'
import { getHermesMemory } from '@/lib/hermes-memory'
import { logger } from '@/lib/logger'
const HERMES_HOME = join(config.homeDir, '.hermes')
const HOOK_DIR = join(HERMES_HOME, 'hooks', 'mission-control')
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const installed = isHermesInstalled()
const gatewayRunning = installed ? isHermesGatewayRunning() : false
const hookInstalled = existsSync(join(HOOK_DIR, 'HOOK.yaml'))
const activeSessions = installed ? scanHermesSessions(50).filter(s => s.isActive).length : 0
const cronJobCount = installed ? getHermesTasks().cronJobs.length : 0
const memoryEntries = installed ? getHermesMemory().agentMemoryEntries : 0
return NextResponse.json({
installed,
gatewayRunning,
hookInstalled,
activeSessions,
cronJobCount,
memoryEntries,
hookDir: HOOK_DIR,
})
} catch (err) {
logger.error({ err }, 'Hermes status check failed')
return NextResponse.json({ error: 'Failed to check hermes status' }, { status: 500 })
}
}
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'admin')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const body = await request.json()
const { action } = body
if (action === 'install-hook') {
if (!isHermesInstalled()) {
return NextResponse.json({ error: 'Hermes is not installed (~/.hermes/ not found)' }, { status: 400 })
}
mkdirSync(HOOK_DIR, { recursive: true })
// Write HOOK.yaml
writeFileSync(join(HOOK_DIR, 'HOOK.yaml'), HOOK_YAML, 'utf8')
// Write handler.py
writeFileSync(join(HOOK_DIR, 'handler.py'), HANDLER_PY, 'utf8')
logger.info('Installed Mission Control hook for Hermes Agent')
return NextResponse.json({ success: true, message: 'Hook installed', hookDir: HOOK_DIR })
}
if (action === 'uninstall-hook') {
if (existsSync(HOOK_DIR)) {
rmSync(HOOK_DIR, { recursive: true, force: true })
}
logger.info('Uninstalled Mission Control hook for Hermes Agent')
return NextResponse.json({ success: true, message: 'Hook uninstalled' })
}
return NextResponse.json({ error: 'Invalid action. Must be: install-hook, uninstall-hook' }, { status: 400 })
} catch (err: any) {
logger.error({ err }, 'Hermes hook management failed')
return NextResponse.json({ error: err.message || 'Hook operation failed' }, { status: 500 })
}
}
// ---------------------------------------------------------------------------
// Hook file contents
// ---------------------------------------------------------------------------
const HOOK_YAML = `name: mission-control
description: Reports agent telemetry to Mission Control
version: "1.0"
events:
- agent:start
- agent:end
- session:start
`
const HANDLER_PY = `"""
Mission Control hook for Hermes Agent.
Reports session telemetry to the MC /api/sessions endpoint.
Configuration (via ~/.hermes/.env or environment):
MC_URL - Mission Control base URL (default: http://localhost:3000)
MC_API_KEY - API key for authentication (optional)
"""
import os
import logging
from datetime import datetime, timezone
logger = logging.getLogger("hooks.mission-control")
MC_URL = os.environ.get("MC_URL", "http://localhost:3000")
MC_API_KEY = os.environ.get("MC_API_KEY", "")
def _headers():
h = {"Content-Type": "application/json"}
if MC_API_KEY:
h["X-Api-Key"] = MC_API_KEY
return h
async def handle_event(event_name: str, payload: dict) -> None:
"""
Called by the Hermes hook registry on matching events.
Fire-and-forget with a short timeout; never blocks the agent.
"""
try:
import httpx
except ImportError:
logger.debug("httpx not available, skipping MC telemetry")
return
try:
if event_name == "agent:start":
await _report_agent_start(payload)
elif event_name == "agent:end":
await _report_agent_end(payload)
elif event_name == "session:start":
await _report_session_start(payload)
except Exception as exc:
logger.debug("MC hook error (%s): %s", event_name, exc)
async def _report_agent_start(payload: dict) -> None:
import httpx
data = {
"name": payload.get("agent_name", "hermes"),
"role": "Hermes Agent",
"status": "active",
"source": "hermes-hook",
}
async with httpx.AsyncClient(timeout=2.0) as client:
await client.post(f"{MC_URL}/api/agents", json=data, headers=_headers())
async def _report_agent_end(payload: dict) -> None:
import httpx
data = {
"name": payload.get("agent_name", "hermes"),
"status": "idle",
"source": "hermes-hook",
}
async with httpx.AsyncClient(timeout=2.0) as client:
await client.post(f"{MC_URL}/api/agents", json=data, headers=_headers())
async def _report_session_start(payload: dict) -> None:
import httpx
data = {
"event": "session:start",
"session_id": payload.get("session_id", ""),
"source": payload.get("source", "cli"),
"timestamp": datetime.now(timezone.utc).isoformat(),
}
async with httpx.AsyncClient(timeout=2.0) as client:
await client.post(f"{MC_URL}/api/hermes/events", json=data, headers=_headers())
`
export const dynamic = 'force-dynamic'

View File

@@ -0,0 +1,17 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { getHermesTasks } from '@/lib/hermes-tasks'
/**
* GET /api/hermes/tasks: returns Hermes cron jobs.
* Read-only bridge: MC reads from ~/.hermes/cron/
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const force = request.nextUrl.searchParams.get('force') === 'true'
const result = getHermesTasks(force)
return NextResponse.json(result)
}

src/app/api/index/route.ts (new file, 183 lines)
View File

@@ -0,0 +1,183 @@
import { NextResponse } from 'next/server'
const VERSION = '1.3.0'
export const revalidate = 300
interface Endpoint {
path: string
methods: string[]
description: string
tag: string
auth: string
}
const endpoints: Endpoint[] = [
// ── Tasks ─────────────────────────────────────────
{ path: '/api/tasks', methods: ['GET', 'POST'], description: 'Task CRUD — list, create', tag: 'Tasks', auth: 'viewer/operator' },
{ path: '/api/tasks/:id', methods: ['GET', 'PATCH', 'DELETE'], description: 'Task detail — read, update, delete', tag: 'Tasks', auth: 'viewer/operator/admin' },
{ path: '/api/tasks/:id/comments', methods: ['GET', 'POST'], description: 'Task comments — list, add', tag: 'Tasks', auth: 'viewer/operator' },
{ path: '/api/tasks/:id/broadcast', methods: ['POST'], description: 'Broadcast task update via SSE', tag: 'Tasks', auth: 'operator' },
{ path: '/api/tasks/queue', methods: ['GET'], description: 'Task queue — next assignable tasks', tag: 'Tasks', auth: 'viewer' },
{ path: '/api/tasks/outcomes', methods: ['GET'], description: 'Task outcome analytics', tag: 'Tasks', auth: 'viewer' },
{ path: '/api/tasks/regression', methods: ['GET'], description: 'Task regression detection', tag: 'Tasks', auth: 'viewer' },
// ── Projects ──────────────────────────────────────
{ path: '/api/workspaces', methods: ['GET'], description: 'Tenant-scoped workspace listing', tag: 'Projects', auth: 'viewer' },
{ path: '/api/projects', methods: ['GET', 'POST'], description: 'Project CRUD — list, create', tag: 'Projects', auth: 'viewer/operator' },
{ path: '/api/projects/:id', methods: ['GET', 'PATCH', 'DELETE'], description: 'Project detail — read, update, archive/delete', tag: 'Projects', auth: 'viewer/operator/admin' },
{ path: '/api/projects/:id/tasks', methods: ['GET'], description: 'Tasks scoped to project', tag: 'Projects', auth: 'viewer' },
{ path: '/api/projects/:id/agents', methods: ['GET', 'POST', 'DELETE'], description: 'Project agent assignments — list, assign, unassign', tag: 'Projects', auth: 'viewer/operator' },
// ── Agents ────────────────────────────────────────
{ path: '/api/agents', methods: ['GET', 'POST'], description: 'Agent CRUD — list, register', tag: 'Agents', auth: 'viewer/operator' },
{ path: '/api/agents/:id', methods: ['GET', 'PATCH', 'DELETE'], description: 'Agent detail — read, update, delete', tag: 'Agents', auth: 'viewer/operator/admin' },
{ path: '/api/agents/:id/heartbeat', methods: ['POST'], description: 'Agent heartbeat ping', tag: 'Agents', auth: 'operator' },
{ path: '/api/agents/:id/wake', methods: ['POST'], description: 'Wake idle agent', tag: 'Agents', auth: 'operator' },
{ path: '/api/agents/:id/soul', methods: ['GET', 'PUT'], description: 'Agent soul file — read, write', tag: 'Agents', auth: 'viewer/operator' },
{ path: '/api/agents/:id/memory', methods: ['GET'], description: 'Agent memory files', tag: 'Agents', auth: 'viewer' },
{ path: '/api/agents/:id/files', methods: ['GET'], description: 'Agent workspace files', tag: 'Agents', auth: 'viewer' },
{ path: '/api/agents/:id/diagnostics', methods: ['GET'], description: 'Agent diagnostics', tag: 'Agents', auth: 'viewer' },
{ path: '/api/agents/:id/attribution', methods: ['GET'], description: 'Agent token usage attribution', tag: 'Agents', auth: 'viewer' },
{ path: '/api/agents/sync', methods: ['POST'], description: 'Sync agents from gateway sessions', tag: 'Agents', auth: 'operator' },
{ path: '/api/agents/comms', methods: ['GET'], description: 'Agent communication feed', tag: 'Agents', auth: 'viewer' },
{ path: '/api/agents/message', methods: ['POST'], description: 'Send message to agent', tag: 'Agents', auth: 'operator' },
// ── Chat ──────────────────────────────────────────
{ path: '/api/chat/messages', methods: ['GET', 'POST'], description: 'Chat messages — list, send', tag: 'Chat', auth: 'viewer/operator' },
{ path: '/api/chat/messages/:id', methods: ['PATCH'], description: 'Mark chat message read', tag: 'Chat', auth: 'operator' },
{ path: '/api/chat/conversations', methods: ['GET'], description: 'List conversations', tag: 'Chat', auth: 'viewer' },
{ path: '/api/chat/session-prefs', methods: ['GET', 'PATCH'], description: 'Local session chat preferences (rename + color)', tag: 'Chat', auth: 'viewer/operator' },
// ── Sessions ──────────────────────────────────────
{ path: '/api/sessions', methods: ['GET'], description: 'List gateway sessions', tag: 'Sessions', auth: 'viewer' },
{ path: '/api/sessions/:id/control', methods: ['POST'], description: 'Session control (stop, message)', tag: 'Sessions', auth: 'operator' },
{ path: '/api/sessions/continue', methods: ['POST'], description: 'Continue a local Claude/Codex session with a prompt', tag: 'Sessions', auth: 'operator' },
{ path: '/api/sessions/transcript', methods: ['GET'], description: 'Read local Claude/Codex session transcript snippets', tag: 'Sessions', auth: 'viewer' },
{ path: '/api/claude/sessions', methods: ['GET'], description: 'Claude CLI session scanner', tag: 'Sessions', auth: 'viewer' },
// ── Activities & Notifications ────────────────────
{ path: '/api/activities', methods: ['GET'], description: 'Activity feed', tag: 'Activities', auth: 'viewer' },
{ path: '/api/notifications', methods: ['GET', 'PATCH'], description: 'Notifications — list, mark read', tag: 'Notifications', auth: 'viewer/operator' },
{ path: '/api/notifications/deliver', methods: ['POST'], description: 'Deliver notification', tag: 'Notifications', auth: 'operator' },
// ── Quality & Standup ─────────────────────────────
{ path: '/api/quality-review', methods: ['GET', 'POST'], description: 'Quality review gate', tag: 'Quality', auth: 'viewer/operator' },
{ path: '/api/standup', methods: ['GET', 'POST'], description: 'Daily standup reports', tag: 'Standup', auth: 'viewer/operator' },
// ── Workflows & Pipelines ─────────────────────────
{ path: '/api/workflows', methods: ['GET', 'POST', 'PUT', 'DELETE'], description: 'Workflow templates CRUD', tag: 'Workflows', auth: 'viewer/operator' },
{ path: '/api/pipelines', methods: ['GET', 'POST', 'DELETE'], description: 'Pipeline CRUD', tag: 'Pipelines', auth: 'viewer/operator' },
{ path: '/api/pipelines/run', methods: ['POST'], description: 'Execute pipeline', tag: 'Pipelines', auth: 'operator' },
// ── Webhooks ──────────────────────────────────────
{ path: '/api/webhooks', methods: ['GET', 'POST', 'PATCH', 'DELETE'], description: 'Webhook CRUD', tag: 'Webhooks', auth: 'viewer/operator' },
{ path: '/api/webhooks/deliveries', methods: ['GET'], description: 'Webhook delivery history', tag: 'Webhooks', auth: 'viewer' },
{ path: '/api/webhooks/retry', methods: ['POST'], description: 'Retry webhook delivery', tag: 'Webhooks', auth: 'operator' },
{ path: '/api/webhooks/test', methods: ['POST'], description: 'Send test webhook', tag: 'Webhooks', auth: 'operator' },
{ path: '/api/webhooks/verify-docs', methods: ['GET'], description: 'Webhook verification docs', tag: 'Webhooks', auth: 'public' },
// ── Alerts ────────────────────────────────────────
{ path: '/api/alerts', methods: ['GET', 'POST', 'PATCH', 'DELETE'], description: 'Alert rules CRUD', tag: 'Alerts', auth: 'viewer/operator' },
// ── Auth ──────────────────────────────────────────
{ path: '/api/auth/login', methods: ['POST'], description: 'User login', tag: 'Auth', auth: 'public' },
{ path: '/api/auth/logout', methods: ['POST'], description: 'User logout', tag: 'Auth', auth: 'authenticated' },
{ path: '/api/auth/me', methods: ['GET'], description: 'Current user info', tag: 'Auth', auth: 'authenticated' },
{ path: '/api/auth/users', methods: ['GET', 'POST', 'PATCH', 'DELETE'], description: 'User management', tag: 'Auth', auth: 'admin' },
{ path: '/api/auth/google', methods: ['POST'], description: 'Google OAuth callback', tag: 'Auth', auth: 'public' },
{ path: '/api/auth/access-requests', methods: ['GET', 'PATCH'], description: 'Access request approvals', tag: 'Auth', auth: 'admin' },
// ── Tokens & Costs ────────────────────────────────
{ path: '/api/tokens', methods: ['GET', 'POST'], description: 'Token usage tracking', tag: 'Tokens', auth: 'viewer/operator' },
// ── Cron & Scheduler ──────────────────────────────
{ path: '/api/cron', methods: ['GET', 'POST', 'PATCH', 'DELETE'], description: 'Cron job management', tag: 'Cron', auth: 'viewer/operator' },
{ path: '/api/scheduler', methods: ['POST'], description: 'Scheduler tick (internal)', tag: 'Cron', auth: 'operator' },
// ── Spawn ─────────────────────────────────────────
{ path: '/api/spawn', methods: ['POST'], description: 'Spawn agent subprocess', tag: 'Spawn', auth: 'operator' },
// ── Memory ────────────────────────────────────────
{ path: '/api/memory', methods: ['GET', 'POST', 'PUT', 'DELETE'], description: 'Memory browser — list, read, write, delete', tag: 'Memory', auth: 'viewer/operator' },
// ── Search & Mentions ─────────────────────────────
{ path: '/api/search', methods: ['GET'], description: 'Full-text search across entities', tag: 'Search', auth: 'viewer' },
{ path: '/api/mentions', methods: ['GET'], description: 'Autocomplete for @mentions', tag: 'Search', auth: 'viewer' },
// ── Logs ──────────────────────────────────────────
{ path: '/api/logs', methods: ['GET'], description: 'Application logs', tag: 'Logs', auth: 'viewer' },
// ── Settings ──────────────────────────────────────
{ path: '/api/settings', methods: ['GET', 'PATCH'], description: 'System settings', tag: 'Settings', auth: 'viewer/admin' },
{ path: '/api/integrations', methods: ['GET', 'PATCH'], description: 'Integration configuration', tag: 'Settings', auth: 'viewer/admin' },
{ path: '/api/skills', methods: ['GET', 'POST', 'PUT', 'DELETE'], description: 'Installed skills index and disk CRUD', tag: 'Settings', auth: 'viewer/operator' },
// ── Gateway ───────────────────────────────────────
{ path: '/api/gateways', methods: ['GET', 'POST', 'PATCH', 'DELETE'], description: 'Gateway management', tag: 'Gateway', auth: 'admin' },
{ path: '/api/gateways/connect', methods: ['POST'], description: 'Connect to gateway WebSocket', tag: 'Gateway', auth: 'operator' },
{ path: '/api/gateways/health', methods: ['GET'], description: 'Gateway health check', tag: 'Gateway', auth: 'viewer' },
{ path: '/api/gateway-config', methods: ['GET', 'PATCH'], description: 'Gateway configuration', tag: 'Gateway', auth: 'admin' },
{ path: '/api/connect', methods: ['POST'], description: 'WebSocket connection info', tag: 'Gateway', auth: 'operator' },
// ── GitHub ────────────────────────────────────────
{ path: '/api/github', methods: ['GET', 'POST'], description: 'GitHub issue sync', tag: 'GitHub', auth: 'viewer/operator' },
// ── Super Admin ───────────────────────────────────
{ path: '/api/super/tenants', methods: ['GET', 'POST', 'PATCH', 'DELETE'], description: 'Tenant management', tag: 'Super Admin', auth: 'admin' },
{ path: '/api/super/tenants/:id/decommission', methods: ['POST'], description: 'Decommission tenant', tag: 'Super Admin', auth: 'admin' },
{ path: '/api/super/provision-jobs', methods: ['GET', 'POST'], description: 'Provision job management', tag: 'Super Admin', auth: 'admin' },
{ path: '/api/super/provision-jobs/:id', methods: ['GET'], description: 'Provision job detail', tag: 'Super Admin', auth: 'admin' },
{ path: '/api/super/provision-jobs/:id/run', methods: ['POST'], description: 'Execute provision job', tag: 'Super Admin', auth: 'admin' },
{ path: '/api/super/os-users', methods: ['GET'], description: 'OS user listing', tag: 'Super Admin', auth: 'admin' },
// ── System ────────────────────────────────────────
{ path: '/api/status', methods: ['GET'], description: 'System status & capabilities', tag: 'System', auth: 'public' },
{ path: '/api/audit', methods: ['GET'], description: 'Audit trail', tag: 'System', auth: 'admin' },
{ path: '/api/backup', methods: ['POST'], description: 'Database backup', tag: 'System', auth: 'admin' },
{ path: '/api/cleanup', methods: ['POST'], description: 'Database cleanup', tag: 'System', auth: 'admin' },
{ path: '/api/export', methods: ['GET'], description: 'Data export', tag: 'System', auth: 'viewer' },
{ path: '/api/workload', methods: ['GET'], description: 'Agent workload stats', tag: 'System', auth: 'viewer' },
{ path: '/api/releases/check', methods: ['GET'], description: 'Check for updates', tag: 'System', auth: 'public' },
{ path: '/api/openclaw/version', methods: ['GET'], description: 'Installed OpenClaw version and latest release metadata', tag: 'System', auth: 'public' },
{ path: '/api/openclaw/update', methods: ['POST'], description: 'Update OpenClaw to the latest stable release', tag: 'System', auth: 'admin' },
{ path: '/api/openclaw/doctor', methods: ['GET', 'POST'], description: 'Inspect and fix OpenClaw configuration drift', tag: 'System', auth: 'admin' },
// ── Local ─────────────────────────────────────────
{ path: '/api/local/flight-deck', methods: ['GET'], description: 'Local flight deck status', tag: 'Local', auth: 'viewer' },
{ path: '/api/local/agents-doc', methods: ['GET'], description: 'Local AGENTS.md discovery and content', tag: 'Local', auth: 'viewer' },
{ path: '/api/local/terminal', methods: ['POST'], description: 'Local terminal command', tag: 'Local', auth: 'admin' },
// ── Docs ──────────────────────────────────────────
{ path: '/api/docs', methods: ['GET'], description: 'OpenAPI spec (JSON)', tag: 'Docs', auth: 'public' },
{ path: '/api/docs/tree', methods: ['GET'], description: 'Documentation tree', tag: 'Docs', auth: 'public' },
{ path: '/api/docs/content', methods: ['GET'], description: 'Documentation page content', tag: 'Docs', auth: 'public' },
{ path: '/api/docs/search', methods: ['GET'], description: 'Documentation search', tag: 'Docs', auth: 'public' },
// ── Discovery ─────────────────────────────────────
{ path: '/api/index', methods: ['GET'], description: 'API endpoint catalog (this endpoint)', tag: 'Discovery', auth: 'public' },
]
const payload = {
version: VERSION,
generated_at: new Date().toISOString(),
total_endpoints: endpoints.length,
endpoints,
event_stream: {
path: '/api/events',
protocol: 'SSE',
description: 'Real-time server-sent events for tasks, agents, chat, and activity updates',
},
docs: {
openapi: '/api/docs',
tree: '/api/docs/tree',
search: '/api/docs/search',
},
}
export async function GET() {
return NextResponse.json(payload, {
headers: {
'Cache-Control': 'public, s-maxage=300, stale-while-revalidate=600',
},
})
}
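The catalog is served as a flat array, so a client that wants the sectioned view (Tasks, Agents, Gateway, …) groups entries by `tag` itself. A minimal sketch, with a trimmed-down `Endpoint` shape and illustrative sample data:

```typescript
// Client-side grouping over the /api/index catalog response.
interface Endpoint { path: string; methods: string[]; tag: string }

function groupByTag(endpoints: Endpoint[]): Map<string, Endpoint[]> {
  const groups = new Map<string, Endpoint[]>()
  for (const e of endpoints) {
    const bucket = groups.get(e.tag) ?? []
    bucket.push(e)
    groups.set(e.tag, bucket)
  }
  return groups
}

const grouped = groupByTag([
  { path: '/api/tasks', methods: ['GET', 'POST'], tag: 'Tasks' },
  { path: '/api/tasks/queue', methods: ['GET'], tag: 'Tasks' },
  { path: '/api/agents', methods: ['GET', 'POST'], tag: 'Agents' },
])
console.log(grouped.get('Tasks')?.length) // 2
```

Since the route sets `Cache-Control: public, s-maxage=300`, a client can safely re-fetch the catalog on navigation without hammering the server.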

View File

@@ -4,23 +4,42 @@ import { logAuditEvent } from '@/lib/db'
import { config } from '@/lib/config'
import { join } from 'path'
import { readFile, writeFile, rename } from 'fs/promises'
import { existsSync } from 'fs'
import os from 'os'
import { execFileSync } from 'child_process'
import { validateBody, integrationActionSchema } from '@/lib/validation'
import { mutationLimiter } from '@/lib/rate-limit'
import { detectProviderSubscriptions } from '@/lib/provider-subscriptions'
import { getPluginIntegrations, getPluginCategories } from '@/lib/plugins'
import type { PluginIntegrationDef } from '@/lib/plugins'
// ---------------------------------------------------------------------------
// Integration registry
// ---------------------------------------------------------------------------
type BuiltinCategory = 'ai' | 'search' | 'social' | 'messaging' | 'devtools' | 'security' | 'infra' | 'productivity' | 'browser'
interface IntegrationDef {
id: string
name: string
-category: 'ai' | 'search' | 'social' | 'messaging' | 'devtools' | 'security' | 'infra'
+category: string
envVars: string[]
vaultItem?: string // 1Password item name
testable?: boolean
recommendation?: string
}
interface IntegrationProbeSnapshot {
opAvailable: boolean
xint: { installed: boolean; oauthConfigured: boolean; envConfigured: boolean }
ollamaInstalled: boolean
ollamaReachable: boolean
gwsInstalled: boolean
}
let integrationProbeCache: { ts: number; value: IntegrationProbeSnapshot } | null = null
const INTEGRATION_PROBE_TTL_MS = 5000
const INTEGRATIONS: IntegrationDef[] = [
// AI Providers
{ id: 'anthropic', name: 'Anthropic', category: 'ai', envVars: ['ANTHROPIC_API_KEY'], vaultItem: 'openclaw-anthropic-api-key', testable: true },
@@ -34,7 +53,13 @@ const INTEGRATIONS: IntegrationDef[] = [
{ id: 'brave', name: 'Brave Search', category: 'search', envVars: ['BRAVE_API_KEY'], vaultItem: 'openclaw-brave-api-key' },
// Social
-{ id: 'x_twitter', name: 'X / Twitter', category: 'social', envVars: ['X_COOKIES_PATH'] },
+{
+  id: 'x_twitter',
+  name: 'X / Twitter',
+  category: 'social',
+  envVars: ['X_COOKIES_PATH'],
+  recommendation: 'Recommended: use xint CLI as default (`xint auth`) instead of manual cookies path.',
+},
{ id: 'linkedin', name: 'LinkedIn', category: 'social', envVars: ['LINKEDIN_ACCESS_TOKEN'] },
// Messaging — add entries here for each Telegram bot you run
@@ -43,11 +68,24 @@ const INTEGRATIONS: IntegrationDef[] = [
// Dev Tools
{ id: 'github', name: 'GitHub', category: 'devtools', envVars: ['GITHUB_TOKEN'], vaultItem: 'openclaw-github-token', testable: true },
// Productivity
{
id: 'google_workspace',
name: 'Google Workspace',
category: 'productivity',
envVars: ['GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE'],
testable: true,
recommendation: 'Install: npm i -g @googleworkspace/cli — then run `gws auth login` or set a service account credentials file.',
},
// Security
{ id: 'onepassword', name: '1Password', category: 'security', envVars: ['OP_SERVICE_ACCOUNT_TOKEN'] },
// Infrastructure
{ id: 'gateway', name: 'Gateway Auth', category: 'infra', envVars: ['OPENCLAW_GATEWAY_TOKEN'], vaultItem: 'openclaw-openclaw-gateway-token' },
// Browser Automation
{ id: 'hyperbrowser', name: 'Hyperbrowser', category: 'browser', envVars: ['HYPERBROWSER_API_KEY'], testable: true, recommendation: 'Cloud browser automation for AI agents. Get a key at hyperbrowser.ai' },
]
// Category metadata
@@ -59,6 +97,8 @@ const CATEGORIES: Record<string, { label: string; order: number }> = {
devtools: { label: 'Dev Tools', order: 4 },
security: { label: 'Security', order: 5 },
infra: { label: 'Infrastructure', order: 6 },
productivity: { label: 'Productivity', order: 7 },
browser: { label: 'Browser Automation', order: 8 },
}
// Vars that must never be written via this API
@@ -142,6 +182,95 @@ function isVarBlocked(key: string): boolean {
return BLOCKED_PREFIXES.some(p => key.startsWith(p))
}
function getEffectiveEnvValue(envMap: Map<string, string>, key: string): string {
const fromFile = envMap.get(key)
if (typeof fromFile === 'string' && fromFile.length > 0) return fromFile
const fromProcess = process.env[key]
if (typeof fromProcess === 'string' && fromProcess.length > 0) return fromProcess
return ''
}
function isPathLikeEnvVar(key: string): boolean {
return key.endsWith('_PATH') || key.endsWith('_FILE')
}
function isConfiguredValue(key: string, value: string): boolean {
if (!value || value.length === 0) return false
if (isPathLikeEnvVar(key)) {
try {
return existsSync(value)
} catch {
return false
}
}
return true
}
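The two helpers above encode the env-resolution rules: a non-empty value in the parsed env file wins over `process.env`, and `*_PATH`/`*_FILE` vars only count as configured if the file actually exists on disk. A pure sketch of the same logic, with the filesystem check injected so it can be exercised without touching disk (the route calls `fs.existsSync` directly):

```typescript
// File-env values take precedence over process env; empty means unset.
function effectiveEnvValue(
  fileEnv: Map<string, string>,
  processEnv: Record<string, string | undefined>,
  key: string,
): string {
  const fromFile = fileEnv.get(key)
  if (fromFile) return fromFile
  return processEnv[key] || ''
}

// Path-like vars are only "configured" when the referenced file exists.
function isConfigured(
  key: string,
  value: string,
  fileExists: (p: string) => boolean,
): boolean {
  if (!value) return false
  if (key.endsWith('_PATH') || key.endsWith('_FILE')) return fileExists(value)
  return true
}
```

This keeps a dangling `X_COOKIES_PATH` pointing at a deleted cookies file from showing the integration as green.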
function checkOpAuthenticated(opEnv?: NodeJS.ProcessEnv): boolean {
try {
execFileSync('op', ['whoami', '--format', 'json'], {
stdio: 'pipe',
timeout: 3000,
env: opEnv || process.env,
})
return true
} catch {
return false
}
}
function checkCommandAvailable(command: string): boolean {
try {
execFileSync('which', [command], { stdio: 'pipe', timeout: 3000 })
return true
} catch {
return false
}
}
function checkXintState(): { installed: boolean; oauthConfigured: boolean; envConfigured: boolean } {
const installed = checkCommandAvailable('xint')
const oauthPath = join(os.homedir(), '.xint', 'data', 'oauth-tokens.json')
const envPath = join(os.homedir(), '.xint', '.env')
const oauthConfigured = existsSync(oauthPath)
const envConfigured = existsSync(envPath)
return { installed, oauthConfigured, envConfigured }
}
function resolveOllamaBaseUrl(): string {
const raw = String(process.env.OLLAMA_HOST || '').trim()
if (!raw) return 'http://127.0.0.1:11434'
if (raw.startsWith('http://') || raw.startsWith('https://')) return raw
return `http://${raw}`
}
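The normalization above accepts both bare host:port values and full URLs. A parameterized copy for illustration (the real function reads OLLAMA_HOST from process.env; the name here is mine):

```typescript
// Parameterized sketch of resolveOllamaBaseUrl: default to the local
// daemon, pass full URLs through, and prefix bare hosts with http://.
function normalizeOllamaHost(raw: string): string {
  const host = raw.trim()
  if (!host) return 'http://127.0.0.1:11434'
  if (host.startsWith('http://') || host.startsWith('https://')) return host
  return `http://${host}`
}

console.log(normalizeOllamaHost(''))                       // http://127.0.0.1:11434
console.log(normalizeOllamaHost('gpu-box:11434'))          // http://gpu-box:11434
console.log(normalizeOllamaHost('https://ollama.internal')) // https://ollama.internal
```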
async function checkOllamaReachable(): Promise<boolean> {
try {
const base = resolveOllamaBaseUrl().replace(/\/+$/, '')
const res = await fetch(`${base}/api/tags`, { signal: AbortSignal.timeout(1200) })
return res.ok
} catch {
return false
}
}
async function getIntegrationProbeSnapshot(): Promise<IntegrationProbeSnapshot> {
const now = Date.now()
if (integrationProbeCache && (now - integrationProbeCache.ts) < INTEGRATION_PROBE_TTL_MS) {
return integrationProbeCache.value
}
const value: IntegrationProbeSnapshot = {
opAvailable: checkOpAvailable(),
xint: checkXintState(),
ollamaInstalled: checkCommandAvailable('ollama'),
ollamaReachable: await checkOllamaReachable(),
gwsInstalled: checkCommandAvailable('gws'),
}
integrationProbeCache = { ts: now, value }
return value
}
// Uses execFileSync (no shell) to avoid command injection
function checkOpAvailable(): boolean {
try {
@@ -194,16 +323,44 @@ export async function GET(request: NextRequest) {
}
}
const opAvailable = checkOpAvailable()
const probe = await getIntegrationProbeSnapshot()
const { opAvailable, xint, ollamaInstalled, ollamaReachable, gwsInstalled } = probe
const providerSubscriptions = detectProviderSubscriptions()
const integrations = INTEGRATIONS.map(def => {
// Merge plugin integrations and categories
const pluginIntegrations = getPluginIntegrations()
const allIntegrations: IntegrationDef[] = [...INTEGRATIONS]
const pluginIntegrationMap = new Map<string, PluginIntegrationDef>()
for (const pi of pluginIntegrations) {
if (!allIntegrations.some(i => i.id === pi.id)) {
allIntegrations.push({
id: pi.id,
name: pi.name,
category: pi.category,
envVars: pi.envVars,
vaultItem: pi.vaultItem,
testable: pi.testable,
recommendation: pi.recommendation,
})
}
pluginIntegrationMap.set(pi.id, pi)
}
const allCategories = { ...CATEGORIES }
for (const pc of getPluginCategories()) {
if (!(pc.id in allCategories)) {
allCategories[pc.id] = { label: pc.label, order: pc.order }
}
}
const integrations = allIntegrations.map(def => {
const vars: Record<string, { redacted: string; set: boolean }> = {}
let allSet = true
let anySet = false
for (const envVar of def.envVars) {
const val = envMap.get(envVar)
if (val && val.length > 0) {
const val = getEffectiveEnvValue(envMap, envVar)
if (isConfiguredValue(envVar, val)) {
vars[envVar] = { redacted: redactValue(val), set: true }
anySet = true
} else {
@@ -212,23 +369,90 @@ export async function GET(request: NextRequest) {
}
}
if (def.id === 'onepassword' && !anySet && opAvailable) {
const opEnv = { ...process.env }
const fileToken = envMap.get('OP_SERVICE_ACCOUNT_TOKEN')
if (fileToken) opEnv.OP_SERVICE_ACCOUNT_TOKEN = fileToken
if (checkOpAuthenticated(opEnv)) {
vars.OP_SERVICE_ACCOUNT_TOKEN = {
redacted: fileToken ? redactValue(fileToken) : 'op session',
set: true,
}
allSet = true
anySet = true
}
}
// Support OAuth/subscription-based auth for providers that may not expose API keys.
if ((def.id === 'anthropic' || def.id === 'openai') && !anySet) {
const sub = providerSubscriptions.active[def.id]
if (sub) {
const primaryVar = def.envVars[0]
vars[primaryVar] = {
redacted: `${sub.type} (${sub.source})`,
set: true,
}
allSet = true
anySet = true
}
}
// Local Ollama can be available without API key-based auth.
if (def.id === 'ollama' && !anySet) {
const primaryVar = def.envVars[0]
if (ollamaReachable) {
vars[primaryVar] = { redacted: 'local daemon', set: true }
allSet = true
anySet = true
} else if (ollamaInstalled) {
vars[primaryVar] = { redacted: 'installed (daemon not reachable)', set: true }
allSet = false
anySet = true
}
}
// Google Workspace CLI detection
if (def.id === 'google_workspace' && !anySet) {
const primaryVar = def.envVars[0]
if (gwsInstalled) {
vars[primaryVar] = { redacted: 'gws CLI installed (run `gws auth login`)', set: true }
allSet = false
anySet = true
}
}
// X integration should default to xint auth when present.
if (def.id === 'x_twitter' && !anySet) {
const primaryVar = def.envVars[0]
if (xint.oauthConfigured) {
vars[primaryVar] = { redacted: 'xint oauth', set: true }
allSet = true
anySet = true
} else if (xint.installed || xint.envConfigured) {
vars[primaryVar] = { redacted: 'xint installed (run `xint auth`)', set: true }
allSet = false
anySet = true
}
}
const status = allSet && anySet ? 'connected' : anySet ? 'partial' : 'not_configured'
return {
id: def.id,
name: def.name,
category: def.category,
categoryLabel: CATEGORIES[def.category]?.label ?? def.category,
categoryLabel: allCategories[def.category]?.label ?? def.category,
envVars: vars,
status,
vaultItem: def.vaultItem ?? null,
testable: def.testable ?? false,
recommendation: def.recommendation ?? null,
}
})
return NextResponse.json({
integrations,
categories: Object.entries(CATEGORIES)
categories: Object.entries(allCategories)
.sort(([, a], [, b]) => a.order - b.order)
.map(([id, meta]) => ({ id, label: meta.label })),
opAvailable,
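The status derivation used per integration reduces to a small pure function: connected only when every env var is set, partial when at least one is. A standalone sketch (the function name is mine, not from the route):

```typescript
// Illustrative status derivation mirroring the ternary above:
// all vars set -> 'connected', some set -> 'partial', none -> 'not_configured'.
function integrationStatus(flags: boolean[]): 'connected' | 'partial' | 'not_configured' {
  const anySet = flags.some(Boolean)
  const allSet = flags.length > 0 && flags.every(Boolean)
  return allSet && anySet ? 'connected' : anySet ? 'partial' : 'not_configured'
}

console.log(integrationStatus([true, true]))   // connected
console.log(integrationStatus([true, false]))  // partial
console.log(integrationStatus([false, false])) // not_configured
```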
@@ -377,7 +601,22 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ error: 'integrationId required' }, { status: 400 })
}
const integration = INTEGRATIONS.find(i => i.id === body.integrationId)
let integration: IntegrationDef | undefined = INTEGRATIONS.find(i => i.id === body.integrationId)
if (!integration) {
// Check plugin integrations
const pi = getPluginIntegrations().find(i => i.id === body.integrationId)
if (pi) {
integration = {
id: pi.id,
name: pi.name,
category: pi.category,
envVars: pi.envVars,
vaultItem: pi.vaultItem,
testable: pi.testable,
recommendation: pi.recommendation,
}
}
}
if (!integration) {
return NextResponse.json({ error: `Unknown integration: ${body.integrationId}` }, { status: 404 })
}
@@ -418,10 +657,11 @@ async function handleTest(
try {
let result: { ok: boolean; detail: string }
const providerSubscriptions = detectProviderSubscriptions()
switch (integration.id) {
case 'telegram': {
const token = envMap.get(integration.envVars[0])
const token = getEffectiveEnvValue(envMap, integration.envVars[0])
if (!token) return NextResponse.json({ ok: false, detail: 'Token not set' })
const res = await fetch(`https://api.telegram.org/bot${token}/getMe`, { signal: AbortSignal.timeout(5000) })
const data = await res.json()
@@ -432,7 +672,7 @@ async function handleTest(
}
case 'github': {
const token = envMap.get('GITHUB_TOKEN')
const token = getEffectiveEnvValue(envMap, 'GITHUB_TOKEN')
if (!token) return NextResponse.json({ ok: false, detail: 'Token not set' })
const res = await fetch('https://api.github.com/user', {
headers: { Authorization: `Bearer ${token}`, 'User-Agent': 'MissionControl/1.0' },
@@ -448,8 +688,12 @@
}
case 'anthropic': {
const key = envMap.get('ANTHROPIC_API_KEY')
if (!key) return NextResponse.json({ ok: false, detail: 'API key not set' })
const key = getEffectiveEnvValue(envMap, 'ANTHROPIC_API_KEY')
if (!key) {
const sub = providerSubscriptions.active.anthropic
if (sub) return NextResponse.json({ ok: true, detail: `OAuth/subscription detected: ${sub.type}` })
return NextResponse.json({ ok: false, detail: 'API key not set' })
}
const res = await fetch('https://api.anthropic.com/v1/models', {
method: 'GET',
headers: { 'x-api-key': key, 'anthropic-version': '2023-06-01' },
@@ -462,8 +706,12 @@
}
case 'openai': {
const key = envMap.get('OPENAI_API_KEY')
if (!key) return NextResponse.json({ ok: false, detail: 'API key not set' })
const key = getEffectiveEnvValue(envMap, 'OPENAI_API_KEY')
if (!key) {
const sub = providerSubscriptions.active.openai
if (sub) return NextResponse.json({ ok: true, detail: `OAuth/subscription detected: ${sub.type}` })
return NextResponse.json({ ok: false, detail: 'API key not set' })
}
const res = await fetch('https://api.openai.com/v1/models', {
headers: { Authorization: `Bearer ${key}` },
signal: AbortSignal.timeout(5000),
@@ -475,7 +723,7 @@
}
case 'openrouter': {
const key = envMap.get('OPENROUTER_API_KEY')
const key = getEffectiveEnvValue(envMap, 'OPENROUTER_API_KEY')
if (!key) return NextResponse.json({ ok: false, detail: 'API key not set' })
const res = await fetch('https://openrouter.ai/api/v1/models', {
headers: { Authorization: `Bearer ${key}` },
@@ -487,8 +735,70 @@
break
}
default:
return NextResponse.json({ error: 'Test not implemented for this integration' }, { status: 400 })
case 'hyperbrowser': {
const key = getEffectiveEnvValue(envMap, 'HYPERBROWSER_API_KEY')
if (!key) return NextResponse.json({ ok: false, detail: 'API key not set' })
const res = await fetch('https://app.hyperbrowser.ai/api/v2/sessions', {
headers: { 'x-api-key': key },
signal: AbortSignal.timeout(5000),
})
result = res.ok
? { ok: true, detail: 'API key valid' }
: { ok: false, detail: `HTTP ${res.status}` }
break
}
case 'google_workspace': {
const credsFile = getEffectiveEnvValue(envMap, 'GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE')
const gwsAvail = checkCommandAvailable('gws')
if (!gwsAvail) {
result = { ok: false, detail: 'gws CLI not installed — run: npm i -g @googleworkspace/cli' }
break
}
try {
const env: NodeJS.ProcessEnv = { ...process.env }
if (credsFile) env.GOOGLE_WORKSPACE_CLI_CREDENTIALS_FILE = credsFile
execFileSync('gws', ['auth', 'status'], {
timeout: 10000,
stdio: ['pipe', 'pipe', 'pipe'],
env,
})
result = { ok: true, detail: 'Authenticated' }
} catch (err: any) {
const stderr = err.stderr?.toString() || ''
result = { ok: false, detail: stderr.slice(0, 120) || 'Not authenticated — run `gws auth login`' }
}
break
}
default: {
// Check plugin testHandler first
const pluginDef = getPluginIntegrations().find(pi => pi.id === integration.id)
if (pluginDef?.testHandler) {
result = await pluginDef.testHandler(envMap)
break
}
// Generic connectivity test: attempt a HEAD request to known base URLs
const baseUrls: Record<string, string> = {
nvidia: 'https://api.nvidia.com',
moonshot: 'https://api.moonshot.cn',
brave: 'https://api.search.brave.com',
linkedin: 'https://api.linkedin.com',
ollama: resolveOllamaBaseUrl(),
gateway: String(process.env.OPENCLAW_GATEWAY_URL || '').trim() || '',
}
const url = baseUrls[integration.id]
if (url) {
const res = await fetch(url, { method: 'HEAD', signal: AbortSignal.timeout(5000) })
result = (res.ok || res.status < 500)
? { ok: true, detail: `Reachable (HTTP ${res.status})` }
: { ok: false, detail: `Unreachable (HTTP ${res.status})` }
} else {
return NextResponse.json({ ok: false, detail: 'No test available — configure the integration URL to enable testing' })
}
break
}
}
const ipAddress = request.headers.get('x-forwarded-for') || request.headers.get('x-real-ip') || 'unknown'


@@ -0,0 +1,53 @@
import { NextRequest, NextResponse } from 'next/server'
import { access, readFile } from 'node:fs/promises'
import { constants } from 'node:fs'
import { join } from 'node:path'
import { homedir } from 'node:os'
import { requireRole } from '@/lib/auth'
async function findFirstReadable(paths: string[]): Promise<string | null> {
for (const p of paths) {
try {
await access(p, constants.R_OK)
return p
} catch {
// Try next candidate
}
}
return null
}
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const cwd = process.cwd()
const home = homedir()
const candidates = [
join(cwd, 'AGENTS.md'),
join(cwd, 'agents.md'),
join(home, '.codex', 'AGENTS.md'),
join(home, '.agents', 'AGENTS.md'),
join(home, '.config', 'codex', 'AGENTS.md'),
]
const found = await findFirstReadable(candidates)
if (!found) {
return NextResponse.json({
found: false,
path: null,
content: null,
candidates,
})
}
const content = await readFile(found, 'utf8')
return NextResponse.json({
found: true,
path: found,
content,
candidates,
})
}
export const dynamic = 'force-dynamic'
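The candidate resolution above can be exercised standalone. This synchronous sketch shows the first-readable-wins behavior (firstReadable is my name; the route uses the async fs/promises form):

```typescript
import { accessSync, constants, mkdtempSync, writeFileSync } from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'

// Synchronous sketch of findFirstReadable: return the first candidate
// path that exists and is readable, else null.
function firstReadable(paths: string[]): string | null {
  for (const p of paths) {
    try {
      accessSync(p, constants.R_OK)
      return p
    } catch {
      // Try next candidate
    }
  }
  return null
}

const dir = mkdtempSync(join(tmpdir(), 'agents-'))
const fallback = join(dir, 'agents.md')
writeFileSync(fallback, '# AGENTS')
// The missing AGENTS.md candidate is skipped; the readable fallback wins.
console.log(firstReadable([join(dir, 'AGENTS.md'), fallback]) === fallback) // true
```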


@@ -0,0 +1,32 @@
import { NextRequest, NextResponse } from 'next/server'
import { config } from '@/lib/config'
import { requireRole } from '@/lib/auth'
import { readLimiter } from '@/lib/rate-limit'
import { generateContextPayload } from '@/lib/memory-utils'
import { logger } from '@/lib/logger'
const MEMORY_PATH = config.memoryDir
/**
* Context injection endpoint generates a payload for agent session start.
* Returns workspace tree, recent files, health summary, and maintenance signals.
*/
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const limited = readLimiter(request)
if (limited) return limited
if (!MEMORY_PATH) {
return NextResponse.json({ error: 'Memory directory not configured' }, { status: 500 })
}
try {
const payload = await generateContextPayload(MEMORY_PATH)
return NextResponse.json(payload)
} catch (err) {
logger.error({ err }, 'Memory context API error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}


@@ -0,0 +1,115 @@
import { NextRequest, NextResponse } from 'next/server'
import { existsSync, readdirSync, statSync } from 'fs'
import path from 'path'
import Database from 'better-sqlite3'
import { config } from '@/lib/config'
import { requireRole } from '@/lib/auth'
import { readLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
interface AgentFileInfo {
path: string
chunks: number
textSize: number
}
interface AgentGraphData {
name: string
dbSize: number
totalChunks: number
totalFiles: number
files: AgentFileInfo[]
}
const memoryDbDir = config.openclawStateDir
? path.join(config.openclawStateDir, 'memory')
: ''
function getAgentData(dbPath: string, agentName: string): AgentGraphData | null {
try {
const dbStat = statSync(dbPath)
const db = new Database(dbPath, { readonly: true, fileMustExist: true })
let files: AgentFileInfo[] = []
let totalChunks = 0
let totalFiles = 0
try {
// Check if chunks table exists
const tableCheck = db
.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='chunks'")
.get() as { name: string } | undefined
if (tableCheck) {
// Use COUNT only — skip SUM(LENGTH(text)) which forces a full data scan
const rows = db
.prepare(
'SELECT path, COUNT(*) as chunks FROM chunks GROUP BY path ORDER BY chunks DESC'
)
.all() as Array<{ path: string; chunks: number }>
files = rows.map((r) => ({
path: r.path || '(unknown)',
chunks: r.chunks,
textSize: 0,
}))
totalChunks = files.reduce((sum, f) => sum + f.chunks, 0)
totalFiles = files.length
}
} finally {
db.close()
}
return {
name: agentName,
dbSize: dbStat.size,
totalChunks,
totalFiles,
files,
}
} catch (err) {
logger.warn(`Failed to read memory DB for agent "${agentName}": ${err}`)
return null
}
}
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const limited = readLimiter(request)
if (limited) return limited
if (!memoryDbDir || !existsSync(memoryDbDir)) {
return NextResponse.json(
{ error: 'Memory directory not available', agents: [] },
{ status: 404 }
)
}
const agentFilter = request.nextUrl.searchParams.get('agent') || 'all'
try {
const entries = readdirSync(memoryDbDir).filter((f) => f.endsWith('.sqlite'))
const agents: AgentGraphData[] = []
for (const entry of entries) {
const agentName = entry.slice(0, -'.sqlite'.length)
if (agentFilter !== 'all' && agentName !== agentFilter) continue
const dbPath = path.join(memoryDbDir, entry)
const data = getAgentData(dbPath, agentName)
if (data) agents.push(data)
}
// Sort by total chunks descending
agents.sort((a, b) => b.totalChunks - a.totalChunks)
return NextResponse.json({ agents })
} catch (err) {
logger.error(`Failed to build memory graph data: ${err}`)
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
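The per-file rollup the route builds from the chunks table is a pure aggregation. An illustrative sketch with invented sample rows (the route does the grouping and ordering in SQL):

```typescript
// Illustrative rollup matching the route above: per-file chunk counts
// become file entries plus totals, sorted by chunk count descending.
interface FileRow { path: string; chunks: number }

function summarize(rows: FileRow[]) {
  const files = rows
    .map((r) => ({ path: r.path || '(unknown)', chunks: r.chunks, textSize: 0 }))
    .sort((a, b) => b.chunks - a.chunks)
  return {
    files,
    totalChunks: files.reduce((sum, f) => sum + f.chunks, 0),
    totalFiles: files.length,
  }
}

const s = summarize([{ path: 'notes.md', chunks: 3 }, { path: '', chunks: 5 }])
console.log(s.totalChunks)    // 8
console.log(s.files[0].path)  // (unknown), empty paths are normalized
```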


@@ -0,0 +1,28 @@
import { NextRequest, NextResponse } from 'next/server'
import { config } from '@/lib/config'
import { requireRole } from '@/lib/auth'
import { readLimiter } from '@/lib/rate-limit'
import { runHealthDiagnostics } from '@/lib/memory-utils'
import { logger } from '@/lib/logger'
const MEMORY_PATH = config.memoryDir
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const limited = readLimiter(request)
if (limited) return limited
if (!MEMORY_PATH) {
return NextResponse.json({ error: 'Memory directory not configured' }, { status: 500 })
}
try {
const report = await runHealthDiagnostics(MEMORY_PATH)
return NextResponse.json(report)
} catch (err) {
logger.error({ err }, 'Memory health API error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}


@@ -0,0 +1,74 @@
import { NextRequest, NextResponse } from 'next/server'
import { config } from '@/lib/config'
import { requireRole } from '@/lib/auth'
import { readLimiter } from '@/lib/rate-limit'
import { buildLinkGraph, extractWikiLinks } from '@/lib/memory-utils'
import { readFile } from 'fs/promises'
import { join, basename, extname } from 'path'
import { logger } from '@/lib/logger'
const MEMORY_PATH = config.memoryDir
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const limited = readLimiter(request)
if (limited) return limited
if (!MEMORY_PATH) {
return NextResponse.json({ error: 'Memory directory not configured' }, { status: 500 })
}
const { searchParams } = new URL(request.url)
const filePath = searchParams.get('file')
try {
if (filePath) {
// Return links for a specific file
const fullPath = join(MEMORY_PATH, filePath)
// Path traversal check: join() resolves "..", and comparing against the
// base with a trailing separator also rejects sibling-prefix directories
const base = MEMORY_PATH.endsWith('/') ? MEMORY_PATH : `${MEMORY_PATH}/`
if (!fullPath.startsWith(base)) {
return NextResponse.json({ error: 'Invalid path' }, { status: 400 })
}
const content = await readFile(fullPath, 'utf-8')
const links = extractWikiLinks(content)
// Also find backlinks from the full graph
const graph = await buildLinkGraph(MEMORY_PATH)
const node = graph.nodes[filePath]
const incoming = node?.incoming ?? []
const outgoing = node?.outgoing ?? []
return NextResponse.json({
file: filePath,
wikiLinks: links,
outgoing,
incoming,
})
}
// Return full link graph
const graph = await buildLinkGraph(MEMORY_PATH)
// Serialize for the frontend (strip wikiLinks detail for the full graph)
const nodes = Object.values(graph.nodes).map((n) => ({
path: n.path,
name: n.name,
outgoing: n.outgoing,
incoming: n.incoming,
linkCount: n.outgoing.length + n.incoming.length,
hasSchema: n.schema !== null,
}))
return NextResponse.json({
nodes,
totalFiles: graph.totalFiles,
totalLinks: graph.totalLinks,
orphans: graph.orphans,
})
} catch (err) {
logger.error({ err }, 'Memory links API error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}


@@ -0,0 +1,59 @@
import { NextRequest, NextResponse } from 'next/server'
import { config } from '@/lib/config'
import { requireRole } from '@/lib/auth'
import { mutationLimiter } from '@/lib/rate-limit'
import { reflectPass, reweavePass, generateMOCs } from '@/lib/memory-utils'
import { logger } from '@/lib/logger'
const MEMORY_PATH = config.memoryDir
/**
* Processing pipeline endpoint runs knowledge maintenance operations.
* Actions: reflect, reweave, generate-moc
*
* These mirror Ars Contexta's 6 Rs processing pipeline, adapted for MC:
* - reflect: Find connection opportunities between files
* - reweave: Identify stale files needing updates from newer linked files
* - generate-moc: Auto-generate Maps of Content from file clusters
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const rateCheck = mutationLimiter(request)
if (rateCheck) return rateCheck
if (!MEMORY_PATH) {
return NextResponse.json({ error: 'Memory directory not configured' }, { status: 500 })
}
try {
const body = await request.json()
const { action } = body
if (action === 'reflect') {
const result = await reflectPass(MEMORY_PATH)
return NextResponse.json(result)
}
if (action === 'reweave') {
const result = await reweavePass(MEMORY_PATH)
return NextResponse.json(result)
}
if (action === 'generate-moc') {
const mocs = await generateMOCs(MEMORY_PATH)
return NextResponse.json({
action: 'generate-moc',
groups: mocs,
totalGroups: mocs.length,
totalEntries: mocs.reduce((s, g) => s + g.entries.length, 0),
})
}
return NextResponse.json({ error: 'Invalid action. Use: reflect, reweave, generate-moc' }, { status: 400 })
} catch (err) {
logger.error({ err }, 'Memory process API error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}


@@ -3,10 +3,12 @@ import { readdir, readFile, stat, lstat, realpath, writeFile, mkdir, unlink } fr
import { existsSync, mkdirSync } from 'fs'
import { join, dirname, sep } from 'path'
import { config } from '@/lib/config'
import { db_helpers } from '@/lib/db'
import { resolveWithin } from '@/lib/paths'
import { requireRole } from '@/lib/auth'
import { readLimiter, mutationLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
import { validateSchema, extractWikiLinks } from '@/lib/memory-utils'
const MEMORY_PATH = config.memoryDir
const MEMORY_ALLOWED_PREFIXES = (config.memoryAllowedPrefixes || []).map((p) => p.replace(/\\/g, '/'))
@@ -85,7 +87,11 @@ async function resolveSafeMemoryPath(baseDir: string, relativePath: string): Pro
return fullPath
}
async function buildFileTree(dirPath: string, relativePath: string = ''): Promise<MemoryFile[]> {
async function buildFileTree(
dirPath: string,
relativePath: string = '',
maxDepth: number = Number.POSITIVE_INFINITY,
): Promise<MemoryFile[]> {
try {
const items = await readdir(dirPath, { withFileTypes: true })
const files: MemoryFile[] = []
@@ -101,7 +107,10 @@ async function buildFileTree(dirPath: string, relativePath: string = ''): Promis
const stats = await stat(itemPath)
if (item.isDirectory()) {
const children = await buildFileTree(itemPath, itemRelativePath)
const children =
maxDepth > 0
? await buildFileTree(itemPath, itemRelativePath, maxDepth - 1)
: undefined
files.push({
path: itemRelativePath,
name: item.name,
@@ -147,12 +156,26 @@ export async function GET(request: NextRequest) {
const { searchParams } = new URL(request.url)
const path = searchParams.get('path')
const action = searchParams.get('action')
const depthParam = Number.parseInt(searchParams.get('depth') || '', 10)
const maxDepth = Number.isFinite(depthParam) ? Math.max(0, Math.min(depthParam, 8)) : Number.POSITIVE_INFINITY
if (action === 'tree') {
// Return the file tree
if (!MEMORY_PATH) {
return NextResponse.json({ tree: [] })
}
if (path) {
if (!isPathAllowed(path)) {
return NextResponse.json({ error: 'Path not allowed' }, { status: 403 })
}
const fullPath = await resolveSafeMemoryPath(MEMORY_PATH, path)
const stats = await stat(fullPath).catch(() => null)
if (!stats?.isDirectory()) {
return NextResponse.json({ error: 'Directory not found' }, { status: 404 })
}
const tree = await buildFileTree(fullPath, path, maxDepth)
return NextResponse.json({ tree })
}
if (MEMORY_ALLOWED_PREFIXES.length) {
const tree: MemoryFile[] = []
for (const prefix of MEMORY_ALLOWED_PREFIXES) {
@ -167,7 +190,7 @@ export async function GET(request: NextRequest) {
name: folder,
type: 'directory',
modified: stats.mtime.getTime(),
children: await buildFileTree(fullPath, folder),
children: await buildFileTree(fullPath, folder, maxDepth),
})
} catch {
// Skip unreadable roots
@ -175,7 +198,7 @@ export async function GET(request: NextRequest) {
}
return NextResponse.json({ tree })
}
const tree = await buildFileTree(MEMORY_PATH)
const tree = await buildFileTree(MEMORY_PATH, '', maxDepth)
return NextResponse.json({ tree })
}
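The depth handling above (clamp ?depth= to 0..8, unlimited when absent or non-numeric) can be sketched as a pure function; parseDepth is my name, not the route's:

```typescript
// Sketch of the ?depth= parsing above: clamp numeric values into 0..8,
// and treat a missing or non-numeric parameter as unlimited depth.
function parseDepth(raw: string | null): number {
  const depthParam = Number.parseInt(raw || '', 10)
  return Number.isFinite(depthParam)
    ? Math.max(0, Math.min(depthParam, 8))
    : Number.POSITIVE_INFINITY
}

console.log(parseDepth('3'))  // 3
console.log(parseDepth('99')) // 8, clamped to the maximum
console.log(parseDepth('-2')) // 0, clamped to the minimum
console.log(parseDepth(null)) // Infinity
```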
@@ -192,12 +215,19 @@ export async function GET(request: NextRequest) {
try {
const content = await readFile(fullPath, 'utf-8')
const stats = await stat(fullPath)
// Extract wiki-links and schema validation for .md files
const isMarkdown = path.endsWith('.md')
const wikiLinks = isMarkdown ? extractWikiLinks(content) : []
const schemaResult = isMarkdown ? validateSchema(content) : null
return NextResponse.json({
content,
size: stats.size,
modified: stats.mtime.getTime(),
path
path,
wikiLinks,
schema: schemaResult,
})
} catch (error) {
return NextResponse.json({ error: 'File not found' }, { status: 404 })
@@ -321,8 +351,19 @@ export async function POST(request: NextRequest) {
return NextResponse.json({ error: 'Content is required for save action' }, { status: 400 })
}
// Validate schema if present (warn but don't block save)
const schemaResult = path.endsWith('.md') ? validateSchema(content) : null
const schemaWarnings = schemaResult?.errors ?? []
await writeFile(fullPath, content, 'utf-8')
return NextResponse.json({ success: true, message: 'File saved successfully' })
try {
db_helpers.logActivity('memory_file_saved', 'memory', 0, auth.user.username || 'unknown', `Updated ${path}`, { path, size: content.length })
} catch { /* best-effort */ }
return NextResponse.json({
success: true,
message: 'File saved successfully',
schemaWarnings,
})
}
if (action === 'create') {
@ -345,6 +386,9 @@ export async function POST(request: NextRequest) {
}
await writeFile(fullPath, content || '', 'utf-8')
try {
db_helpers.logActivity('memory_file_created', 'memory', 0, auth.user.username || 'unknown', `Created ${path}`, { path })
} catch { /* best-effort */ }
return NextResponse.json({ success: true, message: 'File created successfully' })
}
@ -387,6 +431,9 @@ export async function DELETE(request: NextRequest) {
}
await unlink(fullPath)
try {
db_helpers.logActivity('memory_file_deleted', 'memory', 0, auth.user.username || 'unknown', `Deleted ${path}`, { path })
} catch { /* best-effort */ }
return NextResponse.json({ success: true, message: 'File deleted successfully' })
}

src/app/api/nodes/route.ts (new file, 132 lines)

@@ -0,0 +1,132 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { config } from '@/lib/config'
import { logger } from '@/lib/logger'
const GATEWAY_TIMEOUT = 5000
function gatewayUrl(path: string): string {
return `http://${config.gatewayHost}:${config.gatewayPort}${path}`
}
async function fetchGateway(path: string, init?: RequestInit): Promise<Response> {
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), GATEWAY_TIMEOUT)
try {
return await fetch(gatewayUrl(path), {
...init,
signal: controller.signal,
})
} finally {
clearTimeout(timeout)
}
}
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const action = request.nextUrl.searchParams.get('action') || 'list'
if (action === 'list') {
try {
const res = await fetchGateway('/api/presence')
if (!res.ok) {
logger.warn({ status: res.status }, 'Gateway presence endpoint returned non-OK')
return NextResponse.json({ nodes: [], connected: false })
}
const data = await res.json()
return NextResponse.json(data)
} catch (err) {
logger.warn({ err }, 'Gateway unreachable for presence listing')
return NextResponse.json({ nodes: [], connected: false })
}
}
if (action === 'devices') {
try {
const res = await fetchGateway('/api/devices')
if (!res.ok) {
logger.warn({ status: res.status }, 'Gateway devices endpoint returned non-OK')
return NextResponse.json({ devices: [] })
}
const data = await res.json()
return NextResponse.json(data)
} catch (err) {
logger.warn({ err }, 'Gateway unreachable for device listing')
return NextResponse.json({ devices: [] })
}
}
return NextResponse.json({ error: `Unknown action: ${action}` }, { status: 400 })
}
const VALID_DEVICE_ACTIONS = ['approve', 'reject', 'rotate-token', 'revoke-token'] as const
type DeviceAction = (typeof VALID_DEVICE_ACTIONS)[number]
const ACTION_RPC_MAP: Record<DeviceAction, { method: string; paramKey: 'requestId' | 'deviceId' }> = {
'approve': { method: 'device.pair.approve', paramKey: 'requestId' },
'reject': { method: 'device.pair.reject', paramKey: 'requestId' },
'rotate-token': { method: 'device.token.rotate', paramKey: 'deviceId' },
'revoke-token': { method: 'device.token.revoke', paramKey: 'deviceId' },
}
/**
* POST /api/nodes - Device management actions
* Body: { action: DeviceAction, requestId?: string, deviceId?: string, role?: string, scopes?: string[] }
*/
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
let body: Record<string, unknown>
try {
body = await request.json()
} catch {
return NextResponse.json({ error: 'Invalid JSON body' }, { status: 400 })
}
const action = body.action as string
if (!action || !VALID_DEVICE_ACTIONS.includes(action as DeviceAction)) {
return NextResponse.json(
{ error: `Invalid action. Must be one of: ${VALID_DEVICE_ACTIONS.join(', ')}` },
{ status: 400 },
)
}
const spec = ACTION_RPC_MAP[action as DeviceAction]
// Validate required param
const id = body[spec.paramKey] as string | undefined
if (!id || typeof id !== 'string') {
return NextResponse.json({ error: `Missing required field: ${spec.paramKey}` }, { status: 400 })
}
// Build RPC params
const params: Record<string, unknown> = { [spec.paramKey]: id }
if ((action === 'rotate-token' || action === 'revoke-token') && body.role) {
params.role = body.role
}
if (action === 'rotate-token' && Array.isArray(body.scopes)) {
params.scopes = body.scopes
}
try {
const res = await fetchGateway('/api/rpc', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ method: spec.method, params }),
})
const data = await res.json()
return NextResponse.json(data, { status: res.status })
} catch (err: unknown) {
const name = err instanceof Error ? err.name : ''
if (name === 'AbortError') {
logger.error('Gateway device action request timed out')
return NextResponse.json({ error: 'Gateway request timed out' }, { status: 504 })
}
logger.error({ err }, 'Gateway device action failed')
return NextResponse.json({ error: 'Gateway unreachable' }, { status: 502 })
}
}
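The action-to-RPC dispatch can be sketched as a pure function mirroring ACTION_RPC_MAP; buildRpc is illustrative, not part of the route:

```typescript
// Illustrative dispatch mirroring the POST handler above: each device
// action resolves to a gateway RPC method plus the id field it requires.
const ACTION_RPC_MAP: Record<string, { method: string; paramKey: 'requestId' | 'deviceId' }> = {
  'approve': { method: 'device.pair.approve', paramKey: 'requestId' },
  'reject': { method: 'device.pair.reject', paramKey: 'requestId' },
  'rotate-token': { method: 'device.token.rotate', paramKey: 'deviceId' },
  'revoke-token': { method: 'device.token.revoke', paramKey: 'deviceId' },
}

function buildRpc(
  action: string,
  body: Record<string, unknown>,
): { method?: string; params?: Record<string, unknown>; error?: string } {
  const spec = ACTION_RPC_MAP[action]
  if (!spec) return { error: `Invalid action: ${action}` }
  const id = body[spec.paramKey]
  if (typeof id !== 'string' || !id) return { error: `Missing required field: ${spec.paramKey}` }
  return { method: spec.method, params: { [spec.paramKey]: id } }
}

// approve requires requestId; token actions require deviceId
console.log(buildRpc('approve', { requestId: 'req-1' }))
console.log(buildRpc('rotate-token', {}))
```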


@@ -0,0 +1,150 @@
import { NextRequest, NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { getDatabase } from '@/lib/db'
import { logger } from '@/lib/logger'
import { nextIncompleteStepIndex, parseCompletedSteps, shouldShowOnboarding, markStepCompleted } from '@/lib/onboarding-state'
const ONBOARDING_STEPS = [
{ id: 'welcome', title: 'Welcome' },
{ id: 'interface-mode', title: 'Interface' },
{ id: 'gateway-link', title: 'Gateway' },
{ id: 'credentials', title: 'Credentials' },
] as const
const ONBOARDING_SETTING_KEYS = {
completed: 'onboarding.completed',
completedAt: 'onboarding.completed_at',
skipped: 'onboarding.skipped',
completedSteps: 'onboarding.completed_steps',
checklistDismissed: 'onboarding.checklist_dismissed',
} as const
type OnboardingSettingKey = typeof ONBOARDING_SETTING_KEYS[keyof typeof ONBOARDING_SETTING_KEYS]
function scopedOnboardingKey(key: OnboardingSettingKey, username: string): string {
return `user.${username}.${key}`
}
function getOnboardingSetting(key: string): string {
try {
const db = getDatabase()
const row = db.prepare('SELECT value FROM settings WHERE key = ?').get(key) as { value: string } | undefined
return row?.value ?? ''
} catch {
return ''
}
}
function setOnboardingSetting(key: string, value: string, actor: string) {
const db = getDatabase()
db.prepare(`
INSERT INTO settings (key, value, description, category, updated_by, updated_at)
VALUES (?, ?, ?, 'onboarding', ?, unixepoch())
ON CONFLICT(key) DO UPDATE SET
value = excluded.value,
updated_by = excluded.updated_by,
updated_at = unixepoch()
`).run(key, value, `Onboarding: ${key}`, actor)
}
function readUserOnboardingSetting(key: OnboardingSettingKey, username: string): string {
const scopedValue = getOnboardingSetting(scopedOnboardingKey(key, username))
if (scopedValue !== '') return scopedValue
return getOnboardingSetting(key)
}
function writeUserOnboardingSetting(key: OnboardingSettingKey, value: string, actor: string) {
setOnboardingSetting(scopedOnboardingKey(key, actor), value, actor)
}
export async function GET(request: NextRequest) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const completed = readUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completed, auth.user.username) === 'true'
const skipped = readUserOnboardingSetting(ONBOARDING_SETTING_KEYS.skipped, auth.user.username) === 'true'
const checklistDismissed = readUserOnboardingSetting(ONBOARDING_SETTING_KEYS.checklistDismissed, auth.user.username) === 'true'
const completedStepsRaw = readUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completedSteps, auth.user.username)
const completedSteps = parseCompletedSteps(completedStepsRaw, ONBOARDING_STEPS)
const isAdmin = auth.user.role === 'admin'
const showOnboarding = shouldShowOnboarding({ completed, skipped, isAdmin })
const steps = ONBOARDING_STEPS.map((s) => ({
...s,
completed: completedSteps.includes(s.id),
}))
const currentStep = nextIncompleteStepIndex(ONBOARDING_STEPS, completedSteps)
return NextResponse.json({
showOnboarding,
completed,
skipped,
checklistDismissed,
isAdmin,
currentStep: currentStep === -1 ? steps.length - 1 : currentStep,
steps,
})
} catch (error) {
logger.error({ err: error }, 'Onboarding GET error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}
export async function POST(request: NextRequest) {
const auth = requireRole(request, 'admin')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const body = await request.json()
const { action, step } = body as { action: string; step?: string }
switch (action) {
case 'complete_step': {
if (!step) return NextResponse.json({ error: 'step is required' }, { status: 400 })
const valid = ONBOARDING_STEPS.some(s => s.id === step)
if (!valid) return NextResponse.json({ error: 'Invalid step' }, { status: 400 })
const raw = readUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completedSteps, auth.user.username)
const parsed = parseCompletedSteps(raw, ONBOARDING_STEPS)
const steps = markStepCompleted(parsed, step, ONBOARDING_STEPS)
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completedSteps, JSON.stringify(steps), auth.user.username)
return NextResponse.json({ ok: true, completedSteps: steps })
}
case 'complete': {
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completed, 'true', auth.user.username)
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completedAt, String(Date.now()), auth.user.username)
return NextResponse.json({ ok: true })
}
case 'skip': {
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.skipped, 'true', auth.user.username)
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completedAt, String(Date.now()), auth.user.username)
return NextResponse.json({ ok: true })
}
case 'dismiss_checklist': {
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.checklistDismissed, 'true', auth.user.username)
return NextResponse.json({ ok: true })
}
case 'reset': {
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completed, 'false', auth.user.username)
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completedAt, '', auth.user.username)
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.skipped, 'false', auth.user.username)
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.completedSteps, '[]', auth.user.username)
writeUserOnboardingSetting(ONBOARDING_SETTING_KEYS.checklistDismissed, 'false', auth.user.username)
return NextResponse.json({ ok: true })
}
default:
return NextResponse.json({ error: 'Invalid action' }, { status: 400 })
}
} catch (error) {
logger.error({ err: error }, 'Onboarding POST error')
return NextResponse.json({ error: 'Internal server error' }, { status: 500 })
}
}


@ -0,0 +1,127 @@
import { NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { runOpenClaw } from '@/lib/command'
import { config } from '@/lib/config'
import { getDatabase } from '@/lib/db'
import { logger } from '@/lib/logger'
import { archiveOrphanTranscriptsForStateDir } from '@/lib/openclaw-doctor-fix'
import { parseOpenClawDoctorOutput } from '@/lib/openclaw-doctor'
function getCommandDetail(error: unknown): { detail: string; code: number | null } {
const err = error as {
stdout?: string
stderr?: string
message?: string
code?: number | null
}
return {
detail: [err?.stdout, err?.stderr, err?.message].filter(Boolean).join('\n').trim(),
code: typeof err?.code === 'number' ? err.code : null,
}
}
function isMissingOpenClaw(detail: string): boolean {
return /enoent|not installed|not reachable|command not found/i.test(detail)
}
export async function GET(request: Request) {
const auth = requireRole(request, 'admin')
if ('error' in auth) {
return NextResponse.json({ error: auth.error }, { status: auth.status })
}
try {
const result = await runOpenClaw(['doctor'], { timeoutMs: 15000 })
return NextResponse.json(parseOpenClawDoctorOutput(`${result.stdout}\n${result.stderr}`, result.code ?? 0, {
stateDir: config.openclawStateDir,
}), {
headers: { 'Cache-Control': 'no-store' },
})
} catch (error) {
const { detail, code } = getCommandDetail(error)
if (isMissingOpenClaw(detail)) {
return NextResponse.json({ error: 'OpenClaw is not installed or not reachable' }, { status: 400 })
}
return NextResponse.json(parseOpenClawDoctorOutput(detail, code ?? 1, {
stateDir: config.openclawStateDir,
}), {
headers: { 'Cache-Control': 'no-store' },
})
}
}
export async function POST(request: Request) {
const auth = requireRole(request, 'admin')
if ('error' in auth) {
return NextResponse.json({ error: auth.error }, { status: auth.status })
}
try {
const progress: Array<{ step: string; detail: string }> = []
const fixResult = await runOpenClaw(['doctor', '--fix'], { timeoutMs: 120000 })
progress.push({ step: 'doctor', detail: 'Applied OpenClaw doctor config fixes.' })
try {
await runOpenClaw(['sessions', 'cleanup', '--all-agents', '--enforce', '--fix-missing'], { timeoutMs: 120000 })
progress.push({ step: 'sessions', detail: 'Pruned missing transcript entries from session stores.' })
} catch (error) {
const { detail } = getCommandDetail(error)
progress.push({ step: 'sessions', detail: detail || 'Session cleanup skipped.' })
}
const orphanFix = archiveOrphanTranscriptsForStateDir(config.openclawStateDir)
progress.push({
step: 'orphans',
detail:
orphanFix.archivedOrphans > 0
? `Archived ${orphanFix.archivedOrphans} orphan transcript file(s) across ${orphanFix.storesScanned} session store(s).`
: `No orphan transcript files found across ${orphanFix.storesScanned} session store(s).`,
})
const postFix = await runOpenClaw(['doctor'], { timeoutMs: 15000 })
const status = parseOpenClawDoctorOutput(`${postFix.stdout}\n${postFix.stderr}`, postFix.code ?? 0, {
stateDir: config.openclawStateDir,
})
try {
const db = getDatabase()
db.prepare(
'INSERT INTO audit_log (action, actor, detail) VALUES (?, ?, ?)'
).run(
'openclaw.doctor.fix',
auth.user.username,
JSON.stringify({ level: status.level, healthy: status.healthy, issues: status.issues })
)
} catch {
// Non-critical.
}
return NextResponse.json({
success: true,
output: `${fixResult.stdout}\n${fixResult.stderr}`.trim(),
progress,
status,
})
} catch (error) {
const { detail, code } = getCommandDetail(error)
if (isMissingOpenClaw(detail)) {
return NextResponse.json({ error: 'OpenClaw is not installed or not reachable' }, { status: 400 })
}
logger.error({ err: error }, 'OpenClaw doctor fix failed')
return NextResponse.json(
{
error: 'OpenClaw doctor fix failed',
detail,
status: parseOpenClawDoctorOutput(detail, code ?? 1, {
stateDir: config.openclawStateDir,
}),
},
{ status: 500 }
)
}
}


@ -0,0 +1,71 @@
import { NextResponse } from 'next/server'
import { requireRole } from '@/lib/auth'
import { runOpenClaw } from '@/lib/command'
import { getDatabase } from '@/lib/db'
import { logger } from '@/lib/logger'
export async function POST(request: Request) {
const auth = requireRole(request, 'admin')
if ('error' in auth) {
return NextResponse.json({ error: auth.error }, { status: auth.status })
}
let installedBefore: string | null = null
try {
const vResult = await runOpenClaw(['--version'], { timeoutMs: 3000 })
const match = vResult.stdout.match(/(\d+\.\d+\.\d+)/)
if (match) installedBefore = match[1]
} catch {
return NextResponse.json(
{ error: 'OpenClaw is not installed or not reachable' },
{ status: 400 }
)
}
try {
const result = await runOpenClaw(['update', '--channel', 'stable'], {
timeoutMs: 5 * 60 * 1000,
})
// Read new version after update
let installedAfter: string | null = null
try {
const vResult = await runOpenClaw(['--version'], { timeoutMs: 3000 })
const match = vResult.stdout.match(/(\d+\.\d+\.\d+)/)
if (match) installedAfter = match[1]
} catch { /* keep null */ }
// Audit log
try {
const db = getDatabase()
db.prepare(
'INSERT INTO audit_log (action, actor, detail) VALUES (?, ?, ?)'
).run(
'openclaw.update',
auth.user.username,
JSON.stringify({ previousVersion: installedBefore, newVersion: installedAfter })
)
} catch { /* non-critical */ }
return NextResponse.json({
success: true,
previousVersion: installedBefore,
newVersion: installedAfter,
output: result.stdout,
})
} catch (err) {
const failure = err as { stdout?: string; stderr?: string; message?: string }
const detail =
failure.stderr?.trim() ||
failure.stdout?.trim() ||
failure.message ||
'Unknown error during OpenClaw update'
logger.error({ err }, 'OpenClaw update failed')
return NextResponse.json(
{ error: 'OpenClaw update failed', detail },
{ status: 500 }
)
}
}


@ -0,0 +1,77 @@
import { NextResponse } from 'next/server'
import { runOpenClaw } from '@/lib/command'
const GITHUB_RELEASES_URL =
'https://api.github.com/repos/openclaw/openclaw/releases/latest'
function compareSemver(a: string, b: string): number {
const pa = a.replace(/^v/, '').split('.').map(Number)
const pb = b.replace(/^v/, '').split('.').map(Number)
for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
const na = pa[i] ?? 0
const nb = pb[i] ?? 0
if (na > nb) return 1
if (na < nb) return -1
}
return 0
}
const headers = { 'Cache-Control': 'public, max-age=3600' }
export async function GET() {
let installed: string | null = null
try {
const result = await runOpenClaw(['--version'], { timeoutMs: 3000 })
const match = result.stdout.match(/(\d+\.\d+\.\d+)/)
if (match) installed = match[1]
} catch {
// OpenClaw not installed or not reachable
return NextResponse.json(
{ installed: null, latest: null, updateAvailable: false },
{ headers }
)
}
if (!installed) {
return NextResponse.json(
{ installed: null, latest: null, updateAvailable: false },
{ headers }
)
}
try {
const res = await fetch(GITHUB_RELEASES_URL, {
headers: { Accept: 'application/vnd.github+json' },
next: { revalidate: 3600 },
})
if (!res.ok) {
return NextResponse.json(
{ installed, latest: null, updateAvailable: false },
{ headers }
)
}
const release = await res.json()
const latest = (release.tag_name ?? '').replace(/^v/, '')
const updateAvailable = compareSemver(latest, installed) > 0
return NextResponse.json(
{
installed,
latest,
updateAvailable,
releaseUrl: release.html_url ?? '',
releaseNotes: release.body ?? '',
updateCommand: 'openclaw update --channel stable',
},
{ headers }
)
} catch {
return NextResponse.json(
{ installed, latest: null, updateAvailable: false },
{ headers }
)
}
}


@ -0,0 +1,179 @@
import { NextRequest, NextResponse } from 'next/server'
import { getDatabase } from '@/lib/db'
import { requireRole } from '@/lib/auth'
import { mutationLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
import {
ensureTenantWorkspaceAccess,
ForbiddenError
} from '@/lib/workspaces'
function toProjectId(raw: string): number {
const id = Number.parseInt(raw, 10)
return Number.isFinite(id) ? id : NaN
}
export async function GET(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const auth = requireRole(request, 'viewer')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
try {
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const tenantId = auth.user.tenant_id ?? 1
const forwardedFor = (request.headers.get('x-forwarded-for') || '').split(',')[0]?.trim() || null
ensureTenantWorkspaceAccess(db, tenantId, workspaceId, {
actor: auth.user.username,
actorId: auth.user.id,
route: '/api/projects/[id]/agents',
ipAddress: forwardedFor,
userAgent: request.headers.get('user-agent'),
})
const { id } = await params
const projectId = toProjectId(id)
if (Number.isNaN(projectId)) return NextResponse.json({ error: 'Invalid project ID' }, { status: 400 })
const projectScope = db.prepare(`
SELECT p.id
FROM projects p
JOIN workspaces w ON w.id = p.workspace_id
WHERE p.id = ? AND p.workspace_id = ? AND w.tenant_id = ?
LIMIT 1
`).get(projectId, workspaceId, tenantId)
if (!projectScope) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
// Verify project belongs to workspace
const project = db.prepare(`SELECT id FROM projects WHERE id = ? AND workspace_id = ?`).get(projectId, workspaceId)
if (!project) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const assignments = db.prepare(`
SELECT id, project_id, agent_name, role, assigned_at
FROM project_agent_assignments
WHERE project_id = ?
ORDER BY assigned_at ASC
`).all(projectId)
return NextResponse.json({ assignments })
} catch (error) {
if (error instanceof ForbiddenError) {
return NextResponse.json({ error: error.message }, { status: error.status })
}
logger.error({ err: error }, 'GET /api/projects/[id]/agents error')
return NextResponse.json({ error: 'Failed to fetch agent assignments' }, { status: 500 })
}
}
export async function POST(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const rateCheck = mutationLimiter(request)
if (rateCheck) return rateCheck
try {
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const tenantId = auth.user.tenant_id ?? 1
const forwardedFor = (request.headers.get('x-forwarded-for') || '').split(',')[0]?.trim() || null
ensureTenantWorkspaceAccess(db, tenantId, workspaceId, {
actor: auth.user.username,
actorId: auth.user.id,
route: '/api/projects/[id]/agents',
ipAddress: forwardedFor,
userAgent: request.headers.get('user-agent'),
})
const { id } = await params
const projectId = toProjectId(id)
if (Number.isNaN(projectId)) return NextResponse.json({ error: 'Invalid project ID' }, { status: 400 })
const projectScope = db.prepare(`
SELECT p.id
FROM projects p
JOIN workspaces w ON w.id = p.workspace_id
WHERE p.id = ? AND p.workspace_id = ? AND w.tenant_id = ?
LIMIT 1
`).get(projectId, workspaceId, tenantId)
if (!projectScope) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const project = db.prepare(`SELECT id FROM projects WHERE id = ? AND workspace_id = ?`).get(projectId, workspaceId)
if (!project) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const body = await request.json()
const agentName = String(body?.agent_name || '').trim()
const role = String(body?.role || 'member').trim()
if (!agentName) return NextResponse.json({ error: 'agent_name is required' }, { status: 400 })
db.prepare(`
INSERT OR IGNORE INTO project_agent_assignments (project_id, agent_name, role)
VALUES (?, ?, ?)
`).run(projectId, agentName, role)
return NextResponse.json({ success: true }, { status: 201 })
} catch (error) {
if (error instanceof ForbiddenError) {
return NextResponse.json({ error: error.message }, { status: error.status })
}
logger.error({ err: error }, 'POST /api/projects/[id]/agents error')
return NextResponse.json({ error: 'Failed to assign agent' }, { status: 500 })
}
}
export async function DELETE(
request: NextRequest,
{ params }: { params: Promise<{ id: string }> }
) {
const auth = requireRole(request, 'operator')
if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
const rateCheck = mutationLimiter(request)
if (rateCheck) return rateCheck
try {
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const tenantId = auth.user.tenant_id ?? 1
const forwardedFor = (request.headers.get('x-forwarded-for') || '').split(',')[0]?.trim() || null
ensureTenantWorkspaceAccess(db, tenantId, workspaceId, {
actor: auth.user.username,
actorId: auth.user.id,
route: '/api/projects/[id]/agents',
ipAddress: forwardedFor,
userAgent: request.headers.get('user-agent'),
})
const { id } = await params
const projectId = toProjectId(id)
if (Number.isNaN(projectId)) return NextResponse.json({ error: 'Invalid project ID' }, { status: 400 })
const projectScope = db.prepare(`
SELECT p.id
FROM projects p
JOIN workspaces w ON w.id = p.workspace_id
WHERE p.id = ? AND p.workspace_id = ? AND w.tenant_id = ?
LIMIT 1
`).get(projectId, workspaceId, tenantId)
if (!projectScope) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const project = db.prepare(`SELECT id FROM projects WHERE id = ? AND workspace_id = ?`).get(projectId, workspaceId)
if (!project) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const agentName = new URL(request.url).searchParams.get('agent_name')
if (!agentName) return NextResponse.json({ error: 'agent_name query parameter is required' }, { status: 400 })
db.prepare(`
DELETE FROM project_agent_assignments
WHERE project_id = ? AND agent_name = ?
`).run(projectId, agentName)
return NextResponse.json({ success: true })
} catch (error) {
if (error instanceof ForbiddenError) {
return NextResponse.json({ error: error.message }, { status: error.status })
}
logger.error({ err: error }, 'DELETE /api/projects/[id]/agents error')
return NextResponse.json({ error: 'Failed to unassign agent' }, { status: 500 })
}
}


@ -3,6 +3,10 @@ import { getDatabase } from '@/lib/db'
import { requireRole } from '@/lib/auth'
import { mutationLimiter } from '@/lib/rate-limit'
import { logger } from '@/lib/logger'
import {
ensureTenantWorkspaceAccess,
ForbiddenError
} from '@/lib/workspaces'
function normalizePrefix(input: string): string {
const normalized = input.trim().toUpperCase().replace(/[^A-Z0-9]/g, '')
@ -24,19 +28,48 @@ export async function GET(
try {
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const tenantId = auth.user.tenant_id ?? 1
const forwardedFor = (request.headers.get('x-forwarded-for') || '').split(',')[0]?.trim() || null
ensureTenantWorkspaceAccess(db, tenantId, workspaceId, {
actor: auth.user.username,
actorId: auth.user.id,
route: '/api/projects/[id]',
ipAddress: forwardedFor,
userAgent: request.headers.get('user-agent'),
})
const { id } = await params
const projectId = toProjectId(id)
if (Number.isNaN(projectId)) return NextResponse.json({ error: 'Invalid project ID' }, { status: 400 })
const projectScope = db.prepare(`
SELECT p.id
FROM projects p
JOIN workspaces w ON w.id = p.workspace_id
WHERE p.id = ? AND p.workspace_id = ? AND w.tenant_id = ?
LIMIT 1
`).get(projectId, workspaceId, tenantId)
if (!projectScope) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const project = db.prepare(`
SELECT id, workspace_id, name, slug, description, ticket_prefix, ticket_counter, status, created_at, updated_at
FROM projects
WHERE id = ? AND workspace_id = ?
`).get(projectId, workspaceId)
if (!project) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const row = db.prepare(`
SELECT p.id, p.workspace_id, p.name, p.slug, p.description, p.ticket_prefix, p.ticket_counter, p.status,
p.github_repo, p.deadline, p.color, p.github_sync_enabled, p.github_labels_initialized, p.github_default_branch, p.created_at, p.updated_at,
(SELECT COUNT(*) FROM tasks t WHERE t.project_id = p.id) as task_count,
(SELECT GROUP_CONCAT(paa.agent_name) FROM project_agent_assignments paa WHERE paa.project_id = p.id) as assigned_agents_csv
FROM projects p
WHERE p.id = ? AND p.workspace_id = ?
`).get(projectId, workspaceId) as Record<string, unknown> | undefined
if (!row) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const project = {
...row,
assigned_agents: row.assigned_agents_csv ? String(row.assigned_agents_csv).split(',') : [],
assigned_agents_csv: undefined,
}
return NextResponse.json({ project })
} catch (error) {
if (error instanceof ForbiddenError) {
return NextResponse.json({ error: error.message }, { status: error.status })
}
logger.error({ err: error }, 'GET /api/projects/[id] error')
return NextResponse.json({ error: 'Failed to fetch project' }, { status: 500 })
}
@ -55,9 +88,26 @@ export async function PATCH(
try {
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const tenantId = auth.user.tenant_id ?? 1
const forwardedFor = (request.headers.get('x-forwarded-for') || '').split(',')[0]?.trim() || null
ensureTenantWorkspaceAccess(db, tenantId, workspaceId, {
actor: auth.user.username,
actorId: auth.user.id,
route: '/api/projects/[id]',
ipAddress: forwardedFor,
userAgent: request.headers.get('user-agent'),
})
const { id } = await params
const projectId = toProjectId(id)
if (Number.isNaN(projectId)) return NextResponse.json({ error: 'Invalid project ID' }, { status: 400 })
const projectScope = db.prepare(`
SELECT p.id
FROM projects p
JOIN workspaces w ON w.id = p.workspace_id
WHERE p.id = ? AND p.workspace_id = ? AND w.tenant_id = ?
LIMIT 1
`).get(projectId, workspaceId, tenantId)
if (!projectScope) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const current = db.prepare(`SELECT * FROM projects WHERE id = ? AND workspace_id = ?`).get(projectId, workspaceId) as any
if (!current) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
@ -99,6 +149,30 @@ export async function PATCH(
updates.push('status = ?')
paramsList.push(status)
}
if (body?.github_repo !== undefined) {
updates.push('github_repo = ?')
paramsList.push(typeof body.github_repo === 'string' ? body.github_repo.trim() || null : null)
}
if (body?.deadline !== undefined) {
updates.push('deadline = ?')
paramsList.push(typeof body.deadline === 'number' ? body.deadline : null)
}
if (body?.color !== undefined) {
updates.push('color = ?')
paramsList.push(typeof body.color === 'string' ? body.color.trim() || null : null)
}
if (body?.github_sync_enabled !== undefined) {
updates.push('github_sync_enabled = ?')
paramsList.push(body.github_sync_enabled ? 1 : 0)
}
if (body?.github_default_branch !== undefined) {
updates.push('github_default_branch = ?')
paramsList.push(typeof body.github_default_branch === 'string' ? body.github_default_branch.trim() || 'main' : 'main')
}
if (body?.github_labels_initialized !== undefined) {
updates.push('github_labels_initialized = ?')
paramsList.push(body.github_labels_initialized ? 1 : 0)
}
if (updates.length === 0) return NextResponse.json({ error: 'No fields to update' }, { status: 400 })
@ -110,13 +184,17 @@ export async function PATCH(
`).run(...paramsList, projectId, workspaceId)
const project = db.prepare(`
SELECT id, workspace_id, name, slug, description, ticket_prefix, ticket_counter, status, created_at, updated_at
SELECT id, workspace_id, name, slug, description, ticket_prefix, ticket_counter, status,
github_repo, deadline, color, github_sync_enabled, github_labels_initialized, github_default_branch, created_at, updated_at
FROM projects
WHERE id = ? AND workspace_id = ?
`).get(projectId, workspaceId)
return NextResponse.json({ project })
} catch (error) {
if (error instanceof ForbiddenError) {
return NextResponse.json({ error: error.message }, { status: error.status })
}
logger.error({ err: error }, 'PATCH /api/projects/[id] error')
return NextResponse.json({ error: 'Failed to update project' }, { status: 500 })
}
@ -135,9 +213,26 @@ export async function DELETE(
try {
const db = getDatabase()
const workspaceId = auth.user.workspace_id ?? 1
const tenantId = auth.user.tenant_id ?? 1
const forwardedFor = (request.headers.get('x-forwarded-for') || '').split(',')[0]?.trim() || null
ensureTenantWorkspaceAccess(db, tenantId, workspaceId, {
actor: auth.user.username,
actorId: auth.user.id,
route: '/api/projects/[id]',
ipAddress: forwardedFor,
userAgent: request.headers.get('user-agent'),
})
const { id } = await params
const projectId = toProjectId(id)
if (Number.isNaN(projectId)) return NextResponse.json({ error: 'Invalid project ID' }, { status: 400 })
const projectScope = db.prepare(`
SELECT p.id
FROM projects p
JOIN workspaces w ON w.id = p.workspace_id
WHERE p.id = ? AND p.workspace_id = ? AND w.tenant_id = ?
LIMIT 1
`).get(projectId, workspaceId, tenantId)
if (!projectScope) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
const current = db.prepare(`SELECT * FROM projects WHERE id = ? AND workspace_id = ?`).get(projectId, workspaceId) as any
if (!current) return NextResponse.json({ error: 'Project not found' }, { status: 404 })
@ -171,6 +266,9 @@ export async function DELETE(
return NextResponse.json({ success: true, mode: 'delete' })
} catch (error) {
if (error instanceof ForbiddenError) {
return NextResponse.json({ error: error.message }, { status: error.status })
}
logger.error({ err: error }, 'DELETE /api/projects/[id] error')
return NextResponse.json({ error: 'Failed to delete project' }, { status: 500 })
}

Some files were not shown because too many files have changed in this diff.