Merge pull request #466 from builderz-labs/feat/api-parity-tranche-c-cli-audit
feat: API parity tranche C — CLI, MCP server, TUI, task routing fixes
commit 60f6dc07a1

@@ -27,9 +27,17 @@ jobs:
          node-version-file: '.nvmrc'
          cache: 'pnpm'

      - name: Configure git identity
        run: |
          git config --global user.email "ci@mission-control.dev"
          git config --global user.name "CI"

      - name: Install dependencies
        run: pnpm install --frozen-lockfile

      - name: API contract parity
        run: pnpm api:parity

      - name: Lint
        run: pnpm lint

25 CLAUDE.md

@@ -72,6 +72,31 @@ Database path: `MISSION_CONTROL_DB_PATH` (defaults to `.data/mission-control.db`
- **Icons**: No icon libraries -- use raw text/emoji in components
- **Standalone output**: `next.config.js` sets `output: 'standalone'`

## Agent Control Interfaces

Mission Control provides three interfaces for autonomous agents:

### MCP Server (recommended for agents)
```bash
# Add to any Claude Code agent:
claude mcp add mission-control -- node /path/to/mission-control/scripts/mc-mcp-server.cjs

# Environment config:
MC_URL=http://127.0.0.1:3000 MC_API_KEY=<key>
```
35 tools: agents, tasks, sessions, memory, soul, comments, tokens, skills, cron, status.
See `docs/cli-agent-control.md` for the full tool list.

### CLI
```bash
pnpm mc agents list --json
pnpm mc tasks queue --agent Aegis --max-capacity 2 --json
pnpm mc events watch --types agent,task
```

### REST API
OpenAPI spec: `openapi.json`. Interactive docs at `/docs` when running.

## Common Pitfalls

- **Standalone mode**: Use `node .next/standalone/server.js`, not `pnpm start` (which requires full `node_modules`)

40 README.md

@@ -2,9 +2,9 @@

 # Mission Control

-**The open-source dashboard for AI agent orchestration.**
+**Open-source dashboard for AI agent orchestration.**

-Manage agent fleets, track tasks, monitor costs, and orchestrate workflows — all from a single pane of glass.
+Manage AI agent fleets, dispatch tasks, track costs, and coordinate multi-agent workflows — self-hosted, zero external dependencies, powered by SQLite.

 [](LICENSE)
 [](https://nextjs.org/)

@@ -146,6 +146,42 @@ bash scripts/station-doctor.sh
bash scripts/security-audit.sh
```

## Getting Started with Agents

Once Mission Control is running, set up your first agent in under 5 minutes:

```bash
export MC_URL=http://localhost:3000
export MC_API_KEY=your-api-key  # shown in Settings after first login

# Register an agent
curl -X POST "$MC_URL/api/agents/register" \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "scout", "role": "researcher"}'

# Create a task
curl -X POST "$MC_URL/api/tasks" \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"title": "Research competitors", "assigned_to": "scout", "priority": "medium"}'

# Poll the queue as the agent
curl "$MC_URL/api/tasks/queue?agent=scout" \
  -H "Authorization: Bearer $MC_API_KEY"
```

No gateway or OpenClaw needed — this works with pure HTTP.

For the full walkthrough, see the **[Quickstart Guide](docs/quickstart.md)**.

| Guide | What you'll learn |
|-------|-------------------|
| [Quickstart](docs/quickstart.md) | Register an agent, create a task, complete it — 5 minutes |
| [Agent Setup](docs/agent-setup.md) | SOUL personalities, config, heartbeats, agent sources |
| [Orchestration](docs/orchestration.md) | Multi-agent workflows, auto-dispatch, quality review gates |
| [CLI Reference](docs/cli-agent-control.md) | Full CLI command list for headless/scripted usage |

## Project Status

### What Works

@@ -275,3 +275,83 @@ Internet
- Mission Control listens on localhost or a private network
- OpenClaw Gateway is bound to loopback only
- Agent workspaces are isolated per-agent directories

---

## Agent Auth: Least-Privilege Key Guidance

### The Problem

The global API key (`API_KEY` env var) grants full `admin` access. When agents use it, they can:
- Create/delete other agents
- Modify any task or project
- Rotate the API key itself
- Access all workspaces

This violates least-privilege. A compromised agent session leaks admin access.

### Recommended: Agent-Scoped Keys

Create per-agent keys with limited scopes:

```bash
# Create a scoped key for agent "Aegis" (via CLI)
pnpm mc raw --method POST --path /api/agents/5/keys --body '{
  "name": "aegis-worker",
  "scopes": ["viewer", "agent:self", "agent:diagnostics", "tasks:write"],
  "expires_in_days": 30
}' --json
```

Scoped keys:
- Can only act as the agent they belong to (no cross-agent access)
- Have explicit scope lists (viewer, agent:self, tasks:write, etc.)
- Auto-expire after a set period
- Can be revoked without affecting other agents
- Are logged separately in the audit trail
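
A client-side wrapper can check a key's scope list before attempting a privileged call. A minimal sketch, assuming the scope names listed above (the helper itself is hypothetical, not part of the CLI):

```shell
# Hypothetical helper: does a key's scope list include the scope we need?
# Scope names ("viewer", "agent:self", "tasks:write") come from the docs above.
has_scope() {
  needed="$1"; shift
  for s in "$@"; do
    [ "$s" = "$needed" ] && return 0
  done
  return 1
}

has_scope tasks:write viewer agent:self tasks:write && echo "granted"
has_scope agents:delete viewer agent:self tasks:write || echo "denied"
```

The server enforces scopes regardless; a check like this only saves a round trip and a noisy 403 in the audit log.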

### Auth Hierarchy

| Method | Role | Use Case |
|--------|------|----------|
| Agent-scoped key (`mca_...`) | Per-scope | Autonomous agents (recommended) |
| Global API key | admin | Admin scripts, CI/CD, initial setup |
| Session cookie | Per-user role | Human operators via web UI |
| Proxy header | Per-user role | SSO/gateway-authenticated users |

### Monitoring Global Key Usage

Mission Control logs a security event (`global_api_key_used`) every time the global API key is used. Monitor these in the audit log:

```bash
pnpm mc raw --method GET --path '/api/security-audit?event_type=global_api_key_used&timeframe=day' --json
```

Goal: drive global key usage to zero in production by replacing it with scoped agent keys.

### Rate Limiting by Agent Identity

Agent-facing endpoints use per-agent rate limiters (keyed by the `x-agent-name` header):
- Heartbeat: 30/min per agent
- Task polling: 20/min per agent
- Self-registration: 5/min per IP

This prevents a runaway agent from consuming the entire rate limit budget.
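
The per-key budget amounts to a fixed-window counter. A toy sketch of the idea in shell (a simplified model only; the server's actual limiter is an in-memory `Map`, not this script):

```shell
# Toy fixed-window rate limiter keyed by agent name + window id.
# (Assumption: simplified model of the server's per-agent limiter.)
allow() {  # allow <agent> <window-id> <limit> -> succeeds if under the limit
  key="hits_$1_$2"
  eval "count=\${$key:-0}"
  count=$((count + 1))
  eval "$key=$count"
  [ "$count" -le "$3" ]
}

# 5/min self-registration budget: the 6th call in the same window is refused.
for i in 1 2 3 4 5 6; do
  allow scout w1 5 && echo "call $i: ok" || echo "call $i: limited"
done
```

Because each agent gets its own key, one misbehaving agent exhausts only its own window.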

---

## Rate Limit Backend Strategy

Current: in-memory `Map` per process (suitable for single-instance deployments).

For multi-instance deployments, the rate limiter supports a pluggable backend via the `createRateLimiter` factory. Future options:
- **Redis**: shared state across instances (use Upstash or self-hosted)
- **SQLite WAL**: leverage the existing DB for cross-process coordination
- **Edge KV**: for edge-deployed instances

The current implementation includes:
- Periodic cleanup (60s interval)
- Capacity-bounded maps (default 10K entries, LRU eviction)
- Trusted proxy IP parsing (`MC_TRUSTED_PROXIES`)

No action needed for single-instance deployments. For multi-instance, implement a custom `RateLimitStore` interface when scaling beyond one node.

@@ -0,0 +1,342 @@
# Agent Setup Guide

This guide covers everything you need to configure agents in Mission Control: registration methods, SOUL personalities, working files, configuration, and liveness monitoring.

## Agent Registration

There are three ways to register agents with Mission Control.

### Method 1: API Self-Registration (Recommended for Autonomous Agents)

Agents register themselves at startup. This is the simplest path and requires no manual setup:

```bash
curl -X POST http://localhost:3000/api/agents/register \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "scout",
    "role": "researcher",
    "capabilities": ["web-search", "summarization"],
    "framework": "claude-sdk"
  }'
```

**Name rules**: 1-63 characters, alphanumeric plus `.`, `-`, `_`. Must start with a letter or digit.

**Valid roles**: `coder`, `reviewer`, `tester`, `devops`, `researcher`, `assistant`, `agent`

The endpoint is idempotent — registering the same name again updates the agent's status to `idle` and refreshes `last_seen`. Rate-limited to 5 registrations per minute per IP.
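
The name rules can be checked client-side before calling the endpoint. A sketch, where the regex is my reading of the rules above rather than the server's actual validator:

```shell
# Client-side check of the documented name rules: 1-63 chars, alphanumeric
# plus . - _, starting with a letter or digit.
# (Assumed regex, not the server's actual implementation.)
valid_agent_name() {
  printf '%s' "$1" | grep -Eq '^[A-Za-z0-9][A-Za-z0-9._-]{0,62}$'
}

valid_agent_name "scout"        && echo "scout: ok"
valid_agent_name "-scout"       || echo "-scout: rejected (leading dash)"
valid_agent_name "web.scout_01" && echo "web.scout_01: ok"
```

Validating locally gives a clearer error than a 400 from the API, especially for names generated at runtime.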

### Method 2: Manual Creation (UI or API)

Create agents through the dashboard UI or the API:

```bash
curl -X POST http://localhost:3000/api/agents \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "aegis",
    "role": "reviewer",
    "status": "offline",
    "soul_content": "You are Aegis, the quality reviewer...",
    "config": {
      "dispatchModel": "9router/cc/claude-opus-4-6",
      "openclawId": "aegis"
    }
  }'
```

This requires the `operator` role and supports additional fields like `soul_content`, `config`, and `template`.

### Method 3: Config Sync (OpenClaw or Local Discovery)

Mission Control can auto-discover agents from:

**OpenClaw config sync** — Reads agents from your `openclaw.json` file:

```bash
curl -X POST http://localhost:3000/api/agents/sync \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"source": "config"}'
```

Set `OPENCLAW_CONFIG_PATH` to point to your `openclaw.json`.

**Local agent discovery** — Scans standard directories for agent definitions:

```bash
curl -X POST http://localhost:3000/api/agents/sync \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"source": "local"}'
```

Scanned directories:
- `~/.agents/` — Top-level agent directories or `.md` files
- `~/.codex/agents/` — Codex agent definitions
- `~/.claude/agents/` — Claude Code agent definitions
- `~/.hermes/skills/` — Hermes skill definitions

Agent directories are detected by the presence of marker files: `soul.md`, `AGENT.md`, `identity.md`, `config.json`, or `agent.json`.

**Flat markdown files** (Claude Code format) are also supported:

```markdown
---
name: my-agent
description: A research assistant
model: claude-opus-4
tools: ["read", "write", "web-search"]
---
You are a research assistant specializing in competitive analysis...
```

## SOUL.md — Agent Personality

SOUL is the personality and capability definition for an agent. It's a markdown file that gets injected into dispatch prompts, shaping how the agent approaches tasks.

### What Goes in a SOUL

A SOUL defines:
- **Identity** — Who the agent is, its name, role
- **Expertise** — What domains it specializes in
- **Behavior** — How it approaches problems, communication style
- **Constraints** — What it should avoid, limitations

### Example: Developer Agent

```markdown
# Scout — Developer

You are Scout, a senior developer agent specializing in full-stack TypeScript development.

## Expertise
- Next.js, React, Node.js
- Database design (PostgreSQL, SQLite)
- API architecture and testing

## Approach
- Read existing code before proposing changes
- Write tests alongside implementation
- Keep changes minimal and focused

## Constraints
- Never commit secrets or credentials
- Ask for clarification on ambiguous requirements
- Flag security concerns immediately
```

### Example: Researcher Agent

```markdown
# Iris — Researcher

You are Iris, a research agent focused on gathering and synthesizing information.

## Expertise
- Web research and source verification
- Competitive analysis
- Data synthesis and report writing

## Approach
- Always cite sources with URLs
- Present findings in structured format
- Distinguish facts from inferences

## Output Format
- Use bullet points for key findings
- Include a "Sources" section at the end
- Highlight actionable insights
```

### Example: Reviewer Agent

```markdown
# Aegis — Quality Reviewer

You are Aegis, the quality gate for all agent work in Mission Control.

## Role
Review completed tasks for correctness, completeness, and quality.

## Review Criteria
- Does the output address all parts of the task?
- Are there factual errors or hallucinations?
- Is the work actionable and well-structured?

## Verdict Format
Respond with EXACTLY one of:

VERDICT: APPROVED
NOTES: <brief summary>

VERDICT: REJECTED
NOTES: <specific issues to fix>
```

### Managing SOUL Content

**Read** an agent's SOUL:

```bash
curl -s http://localhost:3000/api/agents/1/soul \
  -H "Authorization: Bearer $MC_API_KEY" | jq
```

Response:

```json
{
  "soul_content": "# Scout — Developer\n...",
  "source": "workspace",
  "available_templates": ["developer", "researcher", "reviewer"],
  "updated_at": 1711234567
}
```

The `source` field tells you where the SOUL was loaded from:
- `workspace` — Read from the agent's workspace `soul.md` file on disk
- `database` — Read from the MC database (no workspace file found)
- `none` — No SOUL content set

**Update** a SOUL:

```bash
curl -X PUT http://localhost:3000/api/agents/1/soul \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"soul_content": "# Scout — Developer\n\nYou are Scout..."}'
```

**Apply a template**:

```bash
curl -X PUT http://localhost:3000/api/agents/1/soul \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"template_name": "developer"}'
```

Templates support substitution variables: `{{AGENT_NAME}}`, `{{AGENT_ROLE}}`, `{{TIMESTAMP}}`.
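
The substitution can be pictured as plain global string replacement. A sed sketch under that assumption (the server's actual template engine may differ):

```shell
# Render a SOUL template by substituting the documented variables.
# (Assumed semantics: plain global replacement, as sed does below.)
render_soul() {  # render_soul <name> <role>  -- reads the template on stdin
  sed -e "s/{{AGENT_NAME}}/$1/g" \
      -e "s/{{AGENT_ROLE}}/$2/g" \
      -e "s/{{TIMESTAMP}}/$(date +%s)/g"
}

echo 'You are {{AGENT_NAME}}, working as a {{AGENT_ROLE}}.' |
  render_soul Scout developer
# -> You are Scout, working as a developer.
```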

SOUL content syncs bidirectionally — edits in the UI write back to the workspace `soul.md` file, and changes on disk are picked up on the next sync.

## WORKING.md — Runtime Scratchpad

`WORKING.md` is an agent's runtime state file. It tracks:
- Current task context
- Intermediate results
- Session notes from the agent's perspective

**Do not hand-edit WORKING.md** — it's written and managed by the agent during task execution. If you need to give an agent persistent instructions, use SOUL.md instead.

## Agent Configuration

Each agent has a JSON `config` object stored in the database. Key fields:

| Field | Type | Description |
|-------|------|-------------|
| `openclawId` | string | Gateway agent identifier (falls back to agent name) |
| `dispatchModel` | string | Model override for auto-dispatch (e.g., `9router/cc/claude-opus-4-6`) |
| `capabilities` | string[] | List of agent capabilities |
| `framework` | string | Framework that created the agent (e.g., `claude-sdk`, `crewai`) |

Example config:

```json
{
  "openclawId": "scout",
  "dispatchModel": "9router/cc/claude-sonnet-4-6",
  "capabilities": ["code-review", "testing", "documentation"],
  "framework": "claude-sdk"
}
```

Update via API:

```bash
curl -X PUT http://localhost:3000/api/agents \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "id": 1,
    "config": {
      "dispatchModel": "9router/cc/claude-opus-4-6"
    }
  }'
```

## Heartbeat and Liveness

Mission Control tracks agent health through heartbeats.

### How It Works

1. Agent sends `POST /api/agents/{id}/heartbeat` every 30 seconds
2. MC updates `status` to `idle` and refreshes `last_seen`
3. If no heartbeat for 10 minutes (configurable), agent is marked `offline`
4. Stale tasks (in_progress for 10+ min with offline agent) are requeued
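
The offline check in step 3 reduces to comparing `last_seen` against a timeout. An illustrative sketch (MC performs this check server-side; the function is not MC's actual code):

```shell
# Is an agent stale? last_seen is a unix timestamp; timeout is in minutes.
# (Illustrative only -- MC performs this check server-side.)
is_stale() {  # is_stale <last_seen> <timeout-minutes>
  now=$(date +%s)
  [ $(( now - $1 )) -gt $(( $2 * 60 )) ]
}

twenty_min_ago=$(( $(date +%s) - 1200 ))
is_stale "$twenty_min_ago" 10 && echo "mark offline"
```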

### Heartbeat Request

```bash
curl -X POST http://localhost:3000/api/agents/1/heartbeat \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "token_usage": {
      "model": "claude-sonnet-4-6",
      "inputTokens": 1500,
      "outputTokens": 300
    }
  }'
```

The heartbeat response includes pending work items (assigned tasks, mentions, notifications), so agents can use it as both a keepalive and a lightweight work check.

### Agent Status Values

| Status | Meaning |
|--------|---------|
| `offline` | No recent heartbeat, agent is unreachable |
| `idle` | Online and ready for work |
| `busy` | Currently executing a task |
| `sleeping` | Paused by user (wake with `POST /api/agents/{id}/wake`) |
| `error` | Agent reported an error state |

## Agent Sources

The `source` field on each agent indicates how it was registered:

| Source | Origin |
|--------|--------|
| `manual` | Created through UI or direct API call |
| `self` | Agent self-registered via `/api/agents/register` |
| `local` | Discovered from `~/.agents/`, `~/.claude/agents/`, etc. |
| `config` | Synced from `openclaw.json` |
| `gateway` | Registered by a gateway connection |

## Agent Templates

When creating agents via API, you can specify a `template` name to pre-populate the config:

```bash
curl -X POST http://localhost:3000/api/agents \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"name": "scout", "role": "coder", "template": "developer"}'
```

Templates define model tier, tool permissions, and default configuration. Available templates include:
- `developer` — Full coding toolset (read, write, edit, exec, bash)
- `researcher` — Read-only tools plus web and memory access
- `reviewer` — Read-only tools for code review and quality checks

## What's Next

- **[Quickstart](quickstart.md)** — 5-minute first agent tutorial
- **[Orchestration Patterns](orchestration.md)** — Multi-agent workflows, auto-dispatch, quality review
- **[CLI Reference](cli-agent-control.md)** — Full CLI command reference

@@ -0,0 +1,163 @@
# Mission Control CLI for Agent-Complete Operations (v2)

This repository includes a first-party CLI at:

- scripts/mc-cli.cjs

Designed for autonomous/headless usage first:
- API key auth support
- Profile persistence (~/.mission-control/profiles/*.json)
- Stable JSON mode (`--json`) with NDJSON for streaming
- Deterministic exit code categories
- SSE streaming for real-time event watching
- Compound subcommands for memory, soul, comments

## Quick start

1) Ensure the Mission Control API is running.
2) Set environment variables or use profile flags:

- MC_URL=http://127.0.0.1:3000
- MC_API_KEY=your-key

3) Run commands:

```bash
node scripts/mc-cli.cjs agents list --json
node scripts/mc-cli.cjs tasks queue --agent Aegis --max-capacity 2 --json
node scripts/mc-cli.cjs sessions control --id <session-id> --action terminate
```

## Command groups

### auth
- login --username --password
- logout
- whoami

### agents
- list
- get --id
- create --name --role [--body '{}']
- update --id [--body '{}']
- delete --id
- wake --id
- diagnostics --id
- heartbeat --id
- attribution --id [--hours 24] [--section identity,cost] [--privileged]
- memory get --id
- memory set --id --content "..." [--append]
- memory set --id --file ./memory.md
- memory clear --id
- soul get --id
- soul set --id --content "..."
- soul set --id --file ./soul.md
- soul set --id --template operator
- soul templates --id [--template name]

### tasks
- list
- get --id
- create --title [--body '{}']
- update --id [--body '{}']
- delete --id
- queue --agent <name> [--max-capacity 2]
- broadcast --id --message "..."
- comments list --id
- comments add --id --content "..." [--parent-id 5]

### sessions
- list
- control --id --action monitor|pause|terminate
- continue --kind claude-code|codex-cli --id --prompt "..."
- transcript --kind claude-code|codex-cli|hermes --id [--limit 40] [--source]

### connect
- register --tool-name --agent-name [--body '{}']
- list
- disconnect --connection-id

### tokens
- list [--timeframe hour|day|week|month|all]
- stats [--timeframe]
- by-agent [--days 30]
- agent-costs [--timeframe]
- task-costs [--timeframe]
- trends [--timeframe]
- export [--format json|csv] [--timeframe] [--limit]
- rotate (shows current key info)
- rotate --confirm (generates new key -- admin only)

### skills
- list
- content --source --name
- check --source --name
- upsert --source --name --file ./skill.md
- delete --source --name

### cron
- list
- create/update/pause/resume/remove/run [--body '{}']

### events
- watch [--types agent,task] [--timeout-ms 3600000]

Streams SSE events to stdout. In `--json` mode, outputs NDJSON (one JSON object per line). Press Ctrl+C to stop.

### status
- health (no auth required)
- overview
- dashboard
- gateway
- models
- capabilities

### export (admin)
- audit [--format json|csv] [--since <unix>] [--until <unix>] [--limit]
- tasks [--format json|csv] [--since] [--until] [--limit]
- activities [--format json|csv] [--since] [--until] [--limit]
- pipelines [--format json|csv] [--since] [--until] [--limit]

### raw
- raw --method GET --path /api/... [--body '{}']

## Exit code contract

- 0 success
- 2 usage error
- 3 auth error (401)
- 4 permission error (403)
- 5 network/timeout
- 6 server error (5xx)
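
In wrapper scripts, these categories let you decide whether a retry is worthwhile. A sketch of the branching (the mapping mirrors the contract above; the action names are illustrative):

```shell
# Map the CLI's exit-code contract to an action for a wrapper script.
handle_exit() {
  case "$1" in
    0)   echo "ok" ;;
    2)   echo "fix-invocation" ;;   # usage error: retrying won't help
    3|4) echo "fix-credentials" ;;  # auth/permission: rotate key or re-login
    5|6) echo "retry" ;;            # network or server error: transient
    *)   echo "unknown" ;;
  esac
}

handle_exit 0   # -> ok
handle_exit 5   # -> retry
```

A typical pattern is to run the CLI, capture `$?`, and loop with backoff only when `handle_exit` says `retry`.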

## API contract parity gate

To detect drift between Next.js route handlers and openapi.json, use:

```bash
node scripts/check-api-contract-parity.mjs \
  --root . \
  --openapi openapi.json \
  --ignore-file scripts/api-contract-parity.ignore
```

Machine output:

```bash
node scripts/check-api-contract-parity.mjs --json
```

The checker scans `src/app/api/**/route.ts(x)`, derives operations (METHOD + /api/path), compares against OpenAPI operations, and exits non-zero on mismatch.
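
The comparison itself is a set difference over operation strings. The core idea on fixture data (the real checker derives these lists from the route files and `openapi.json`, not from literals):

```shell
# Set-difference core of a parity check, on fixture data. The real checker
# derives these lists from src/app/api/**/route.ts(x) and openapi.json.
routes='GET /api/agents
POST /api/agents
POST /api/tasks'
spec='GET /api/agents
POST /api/agents'

# Print operations implemented in routes but absent from the spec.
missing=$(printf '%s\n' "$routes" | while read -r op; do
  case "$spec" in
    *"$op"*) ;;        # present in the spec
    *) echo "$op" ;;   # drift: implemented but undocumented
  esac
done)
echo "$missing"   # -> POST /api/tasks
```

The same loop run in the other direction finds documented-but-unimplemented operations; a non-empty result in either direction is what makes the gate exit non-zero.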

Baseline policy in this repo:
- `scripts/api-contract-parity.ignore` currently stores a temporary baseline of known drift.
- CI enforces no regressions beyond the baseline.
- When you fix a mismatch, remove its line from the ignore file in the same PR.
- The goal is a monotonic burn-down to an empty ignore file.

## Next steps

- Promote the script to a package.json bin entry (`mc`).
- Add retry/backoff for transient failures.
- Add integration tests that run the CLI against a test server fixture.
- Add richer pagination/filter flags for list commands.

@@ -115,6 +115,82 @@ See `.env.example` for the full list. Key variables:
| `OPENCLAW_HOME` | No | - | Path to OpenClaw installation |
| `MC_ALLOWED_HOSTS` | No | `localhost,127.0.0.1` | Allowed hosts in production |

## Kubernetes Sidecar Deployment

When running Mission Control alongside a gateway as containers in the same pod (sidecar pattern), agents are not discovered via the filesystem. Instead, use the gateway's agent registration API.

### Architecture

```
┌──────────────── Pod ────────────────┐
│  ┌─────────┐     ┌───────────────┐  │
│  │   MC    │◄───►│    Gateway    │  │
│  │  :3000  │     │    :18789     │  │
│  └─────────┘     └───────────────┘  │
│       ▲                  ▲          │
│       │    localhost     │          │
│       └──────────────────┘          │
└─────────────────────────────────────┘
```

### Required Configuration

**Environment variables** for the MC container:

```bash
AUTH_USER=admin
AUTH_PASS=<secure-password>
API_KEY=<your-api-key>
OPENCLAW_GATEWAY_HOST=127.0.0.1
NEXT_PUBLIC_GATEWAY_PORT=18789
```

### Agent Registration

The gateway must register its agents with MC on startup. Include the `agents` array in the gateway registration request:

```bash
curl -X POST http://localhost:3000/api/gateways \
  -H "Authorization: Bearer <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "sidecar-gateway",
    "host": "127.0.0.1",
    "port": 18789,
    "is_primary": true,
    "agents": [
      { "name": "developer-1", "role": "developer" },
      { "name": "researcher-1", "role": "researcher" }
    ]
  }'
```

To update the agent list on reconnect, use `PUT /api/gateways` with the same `agents` field.

Alternatively, each agent can register itself via the direct connection endpoint:

```bash
curl -X POST http://localhost:3000/api/connect \
  -H "Authorization: Bearer <API_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
    "tool_name": "openclaw-gateway",
    "agent_name": "developer-1",
    "agent_role": "developer"
  }'
```

### Health Checks

Agents must send heartbeats to stay visible:

```bash
curl -X POST http://localhost:3000/api/agents/<agent-id>/heartbeat \
  -H "Authorization: Bearer <API_KEY>"
```

Without heartbeats, agents will be marked offline after 10 minutes (configurable via the `general.agent_timeout_minutes` setting).

## Troubleshooting

### "Module not found: better-sqlite3"

@@ -208,3 +284,12 @@ Then point UI to:
```bash
NEXT_PUBLIC_GATEWAY_URL=wss://your-domain.com/gateway-ws
```

## Next Steps

Once deployed, set up your agents and orchestration:

- **[Quickstart](quickstart.md)** — Register your first agent and complete a task in 5 minutes
- **[Agent Setup](agent-setup.md)** — SOUL personalities, heartbeats, config sync, agent sources
- **[Orchestration Patterns](orchestration.md)** — Auto-dispatch, quality review, multi-agent workflows
- **[CLI Reference](cli-agent-control.md)** — Full CLI command list for headless/scripted usage

@@ -0,0 +1,335 @@
# Orchestration Patterns

This guide covers the task orchestration patterns available in Mission Control, from simple manual assignment to fully automated multi-agent workflows.

## Task Lifecycle

Every task in Mission Control follows this status flow:

```
inbox ──► assigned ──► in_progress ──► review ──► done
  │          │             │             │
  │          │             │             └──► rejected ──► assigned (retry)
  │          │             │
  │          │             └──► failed (max retries or timeout)
  │          │
  │          └──► cancelled
  │
  └──► assigned (triaged by human or auto-dispatch)
```

Key transitions:
- **inbox → assigned**: Human triages or auto-dispatch picks it up
- **assigned → in_progress**: Agent claims via queue poll or auto-dispatch sends it
- **in_progress → review**: Agent completes work, awaits quality check
- **review → done**: Aegis approves the work
- **review → assigned**: Aegis rejects, task is requeued with feedback
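
The allowed moves can be written down as a transition table, which is handy for client-side validation. A sketch derived from the lifecycle diagram (illustrative; MC enforces these transitions server-side, and this is not an exported API):

```shell
# Valid status transitions from the lifecycle diagram.
# (Illustrative; MC enforces these server-side.)
can_move() {  # can_move <from> <to>
  case "$1:$2" in
    inbox:assigned) return 0 ;;
    assigned:in_progress|assigned:cancelled) return 0 ;;
    in_progress:review|in_progress:failed) return 0 ;;
    review:done|review:assigned) return 0 ;;
    *) return 1 ;;
  esac
}

can_move review done       && echo "review -> done: ok"
can_move done in_progress  || echo "done -> in_progress: rejected"
```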

## Pattern 1: Manual Assignment

The simplest pattern. A human creates a task and assigns it to a specific agent.

```bash
# Create and assign in one step
curl -X POST "$MC_URL/api/tasks" \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Fix login page CSS",
    "description": "The login button overlaps the form on mobile viewports.",
    "priority": "high",
    "assigned_to": "scout"
  }'
```

The agent picks it up on the next queue poll:

```bash
curl "$MC_URL/api/tasks/queue?agent=scout" \
  -H "Authorization: Bearer $MC_API_KEY"
```

**When to use**: Small teams, well-known agent capabilities, human-driven task triage.

## Pattern 2: Queue-Based Dispatch

Agents poll the queue and MC assigns the highest-priority available task. No human triage needed.

### Setup

1. Create tasks in `inbox` status (no `assigned_to`):

```bash
curl -X POST "$MC_URL/api/tasks" \
  -H "Authorization: Bearer $MC_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Update API documentation",
    "priority": "medium"
  }'
```

2. Agents poll the queue. MC atomically claims the best task:

```bash
# Agent "scout" asks for work
curl "$MC_URL/api/tasks/queue?agent=scout" \
  -H "Authorization: Bearer $MC_API_KEY"

# Agent "iris" also asks — gets a different task (no race condition)
curl "$MC_URL/api/tasks/queue?agent=iris" \
  -H "Authorization: Bearer $MC_API_KEY"
```

### Priority Ordering

Tasks are assigned in this order:
1. **Priority**: critical > high > medium > low
2. **Due date**: Earliest due date first (null = last)
3. **Created at**: Oldest first (FIFO within same priority)
|
||||
|
||||
### Capacity Control
|
||||
|
||||
Each agent can set `max_capacity` to limit concurrent tasks:
|
||||
|
||||
```bash
|
||||
# Agent can handle 3 tasks at once
|
||||
curl "$MC_URL/api/tasks/queue?agent=scout&max_capacity=3" \
|
||||
-H "Authorization: Bearer $MC_API_KEY"
|
||||
```
|
||||
|
||||
If the agent already has `max_capacity` tasks in `in_progress`, the response returns `"reason": "at_capacity"` with no task.
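A worker built on this endpoint only needs to branch on the response's `reason` field. A minimal sketch, assuming `MC_URL`/`MC_API_KEY` are exported and `jq` is installed; `run_task` is a hypothetical stand-in for your agent's actual work:

```shell
# Decide what a worker should do from the queue response's "reason" field.
next_action() {
  case "$1" in
    assigned|continue_current) echo work ;;   # there is a task to execute
    at_capacity)               echo wait ;;   # finish in-progress tasks first
    no_tasks_available)        echo sleep ;;  # back off, poll again later
    *)                         echo error ;;  # unknown reason: surface it
  esac
}

# Polling loop built on the helper (defined but not invoked here;
# run_task is a placeholder for your agent's real work).
poll_loop() {
  local resp
  while :; do
    resp=$(curl -s "$MC_URL/api/tasks/queue?agent=scout&max_capacity=3" \
      -H "Authorization: Bearer $MC_API_KEY")
    if [ "$(next_action "$(echo "$resp" | jq -r '.reason')")" = "work" ]; then
      run_task "$(echo "$resp" | jq -r '.task.id')"
    else
      sleep 30
    fi
  done
}
```

Keeping the decision logic in a pure function makes it easy to test without a running server.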
|
||||
|
||||
**When to use**: Multiple agents with overlapping capabilities where you want automatic load balancing.
|
||||
|
||||
## Pattern 3: Auto-Dispatch (Gateway Required)
|
||||
|
||||
The scheduler automatically dispatches `assigned` tasks to agents through the OpenClaw gateway. This is the fully hands-off mode.
|
||||
|
||||
### How It Works
|
||||
|
||||
1. Tasks are created with `assigned_to` set
|
||||
2. The scheduler's `dispatchAssignedTasks` job runs periodically
|
||||
3. For each task, MC:
|
||||
- Marks it `in_progress`
|
||||
- Classifies the task complexity to select a model
|
||||
- Sends the task prompt to the agent via the gateway
|
||||
- Parses the response and stores the resolution
|
||||
- Moves the task to `review` status
|
||||
|
||||
### Model Routing
|
||||
|
||||
MC automatically selects a model based on task content:
|
||||
|
||||
| Tier | Model | Signals |
|
||||
|------|-------|---------|
|
||||
| **Complex** | Opus | debug, diagnose, architect, security audit, incident, refactor, migration |
|
||||
| **Routine** | Haiku | status check, format, rename, ping, summarize, translate, simple, minor |
|
||||
| **Default** | Agent's configured model | Everything else |
|
||||
|
||||
Critical priority tasks always get Opus. Low priority with routine signals get Haiku.
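The tiering above can be sketched as a simple classifier. This is a hedged sketch, not the scheduler's actual implementation: the real keyword lists live in the scheduler, and `opus`/`haiku`/`default` are placeholders for concrete model IDs.

```shell
# Simplified sketch of the model-routing heuristic.
route_model() {
  local title="$1" priority="$2"
  # Critical priority always gets the complex tier.
  [ "$priority" = "critical" ] && { echo opus; return; }
  case "$title" in
    *debug*|*diagnose*|*architect*|*security\ audit*|*incident*|*refactor*|*migration*)
      echo opus ;;      # complex signals
    *status\ check*|*format*|*rename*|*ping*|*summarize*|*translate*|*simple*|*minor*)
      echo haiku ;;     # routine signals
    *)
      echo default ;;   # fall back to the agent's configured model
  esac
}
```

For example, `route_model "debug the auth flow" medium` selects the complex tier, while an unmatched title falls through to the agent's configured model.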
|
||||
|
||||
Override per-agent by setting `config.dispatchModel`:
|
||||
|
||||
```bash
|
||||
curl -X PUT "$MC_URL/api/agents" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"id": 1, "config": {"dispatchModel": "9router/cc/claude-opus-4-6"}}'
|
||||
```
|
||||
|
||||
### Retry Handling
|
||||
|
||||
- Failed dispatches increment `dispatch_attempts` and revert to `assigned`
|
||||
- After 5 failed attempts, task moves to `failed`
|
||||
- Each failure is logged as a comment on the task
|
||||
|
||||
**When to use**: Fully autonomous operation with an OpenClaw gateway. Best for production agent fleets.
|
||||
|
||||
## Pattern 4: Quality Review (Aegis)
|
||||
|
||||
Aegis is MC's built-in quality gate. When a task reaches `review` status, the scheduler sends it to the Aegis reviewer agent for sign-off.
|
||||
|
||||
### Flow
|
||||
|
||||
```
|
||||
in_progress ──► review ──► Aegis reviews ──► APPROVED ──► done
                                         └──► REJECTED ──► assigned (with feedback)
|
||||
```
|
||||
|
||||
### How Aegis Reviews
|
||||
|
||||
1. Scheduler's `runAegisReviews` job picks up tasks in `review` status
|
||||
2. Builds a review prompt with the task description and agent's resolution
|
||||
3. Sends to the Aegis agent (configurable via `MC_COORDINATOR_AGENT`)
|
||||
4. Parses the verdict:
|
||||
- `VERDICT: APPROVED` → task moves to `done`
|
||||
- `VERDICT: REJECTED` → feedback is attached as a comment, task reverts to `assigned`
|
||||
5. Rejected tasks are re-dispatched with the feedback included in the prompt
|
||||
|
||||
### Retry Limits
|
||||
|
||||
- Up to 3 Aegis review cycles per task
|
||||
- After 3 rejections, task moves to `failed` with accumulated feedback
|
||||
- All review results are stored in the `quality_reviews` table
|
||||
|
||||
### Setting Up Aegis
|
||||
|
||||
Aegis is just a regular agent with a reviewer SOUL. Create it:
|
||||
|
||||
```bash
|
||||
# Register the Aegis agent
|
||||
curl -X POST "$MC_URL/api/agents/register" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"name": "aegis", "role": "reviewer"}'
|
||||
|
||||
# Set its SOUL
|
||||
curl -X PUT "$MC_URL/api/agents/1/soul" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"template_name": "reviewer"}'
|
||||
```
|
||||
|
||||
**When to use**: When you want automated quality checks before tasks are marked complete.
|
||||
|
||||
## Pattern 5: Recurring Tasks (Cron)
|
||||
|
||||
Schedule tasks to be created automatically on a recurring basis using natural language or cron expressions.
|
||||
|
||||
### CLI
|
||||
|
||||
```bash
|
||||
node scripts/mc-cli.cjs cron create --body '{
|
||||
"name": "daily-standup-report",
|
||||
"schedule": "0 9 * * 1-5",
|
||||
"task_template": {
|
||||
"title": "Generate daily standup report",
|
||||
"description": "Summarize all completed tasks from the past 24 hours.",
|
||||
"priority": "medium",
|
||||
"assigned_to": "iris"
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
### API
|
||||
|
||||
```bash
|
||||
curl -X POST "$MC_URL/api/cron" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"name": "weekly-security-scan",
|
||||
"schedule": "0 2 * * 0",
|
||||
"task_template": {
|
||||
"title": "Weekly security audit",
|
||||
"priority": "high",
|
||||
"assigned_to": "aegis"
|
||||
}
|
||||
}'
|
||||
```
|
||||
|
||||
The scheduler spawns dated child tasks from the template on each trigger. Manage cron jobs with `pause`, `resume`, and `remove` actions.
|
||||
|
||||
**When to use**: Reports, health checks, periodic audits, maintenance tasks.
|
||||
|
||||
## Pattern 6: Multi-Agent Handoff
|
||||
|
||||
Agent A completes a task, then creates a follow-up task assigned to Agent B. This chains agents into a pipeline.
|
||||
|
||||
### Example: Research → Implement → Review
|
||||
|
||||
```bash
|
||||
# Step 1: Research task for iris
|
||||
curl -X POST "$MC_URL/api/tasks" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"title": "Research caching strategies for API layer",
|
||||
"priority": "high",
|
||||
"assigned_to": "iris"
|
||||
}'
|
||||
```
|
||||
|
||||
When iris completes the research, create the implementation task:
|
||||
|
||||
```bash
|
||||
# Step 2: Implementation task for scout (after iris finishes)
|
||||
curl -X POST "$MC_URL/api/tasks" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"title": "Implement Redis caching for /api/products",
|
||||
"description": "Based on research in TASK-1: Use cache-aside pattern with 5min TTL...",
|
||||
"priority": "high",
|
||||
"assigned_to": "scout"
|
||||
}'
|
||||
```
|
||||
|
||||
After scout finishes, Aegis reviews automatically (if auto-dispatch is active), or you create a review task:
|
||||
|
||||
```bash
|
||||
# Step 3: Review task for aegis
|
||||
curl -X POST "$MC_URL/api/tasks" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"title": "Review caching implementation in TASK-2",
|
||||
"priority": "high",
|
||||
"assigned_to": "aegis"
|
||||
}'
|
||||
```
|
||||
|
||||
**When to use**: Complex workflows where different agents have different specializations.
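Handoffs can be scripted by waiting for the upstream task to reach a terminal status before creating the follow-up. A sketch, assuming the task detail endpoint mirrors the update endpoint's path shape (`GET /api/tasks/{id}`) and that `jq` is available:

```shell
# A task is terminal once it is done or failed.
is_terminal() {
  case "$1" in done|failed) echo yes ;; *) echo no ;; esac
}

# Block until the given task reaches a terminal status, then print it.
wait_for_task() {
  local id="$1" status
  while :; do
    status=$(curl -s "$MC_URL/api/tasks/$id" \
      -H "Authorization: Bearer $MC_API_KEY" | jq -r '.task.status')
    [ "$(is_terminal "$status")" = "yes" ] && { echo "$status"; return; }
    sleep 30
  done
}
```

Usage: `[ "$(wait_for_task 1)" = "done" ]` gates the `curl -X POST` that creates the implementation task for the next agent.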
|
||||
|
||||
## Pattern 7: Stale Task Recovery
|
||||
|
||||
MC automatically recovers from stuck agents. The `requeueStaleTasks` scheduler job:
|
||||
|
||||
1. Finds tasks stuck in `in_progress` for 10+ minutes with an offline agent
|
||||
2. Reverts them to `assigned` with a comment explaining the stall
|
||||
3. After 5 stale requeues, moves the task to `failed`
|
||||
|
||||
This happens automatically — no configuration needed.
|
||||
|
||||
## Combining Patterns
|
||||
|
||||
In practice, you'll combine these patterns. A typical production setup:
|
||||
|
||||
1. **Cron** creates recurring tasks (Pattern 5)
|
||||
2. **Queue-based dispatch** distributes tasks to available agents (Pattern 2)
|
||||
3. **Model routing** picks the right model per task (Pattern 3)
|
||||
4. **Aegis** reviews all completed work (Pattern 4)
|
||||
5. **Stale recovery** handles agent failures (Pattern 7)
|
||||
|
||||
```
|
||||
Cron ──► inbox ──► Queue assigns ──► Agent works ──► Aegis reviews ──► done
                         ▲                 │
                         └──── timeout ────┘  (stale task requeued)
|
||||
```
|
||||
|
||||
## Event Streaming
|
||||
|
||||
Monitor orchestration in real time with SSE:
|
||||
|
||||
```bash
|
||||
# Watch all task and agent events
|
||||
node scripts/mc-cli.cjs events watch --types task,agent --json
|
||||
```
|
||||
|
||||
Or via API:
|
||||
|
||||
```bash
|
||||
curl -N "$MC_URL/api/events" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Accept: text/event-stream"
|
||||
```
|
||||
|
||||
Events include: `task.created`, `task.updated`, `task.completed`, `agent.created`, `agent.status_changed`, and more.
|
||||
|
||||
## Reference
|
||||
|
||||
- **[Quickstart](quickstart.md)** — 5-minute first agent tutorial
|
||||
- **[Agent Setup](agent-setup.md)** — Registration, SOUL, configuration
|
||||
- **[CLI Reference](cli-agent-control.md)** — Full CLI command list
|
||||
- **[CLI Integration](cli-integration.md)** — Direct connections without a gateway
|
||||
|
|
@ -0,0 +1,259 @@
|
|||
# Mission Control Platform Hardening + Full Agent CLI/TUI PRD
|
||||
|
||||
> For Hermes: execute this plan in iterative vertical slices (contract parity -> CLI core -> TUI -> hardening), with tests at each slice.
|
||||
|
||||
Goal
|
||||
Build a production-grade Mission Control operator surface for autonomous agents via a first-party CLI (and optional lightweight TUI), while fixing platform inconsistencies discovered in the audit: API contract drift, uneven reliability controls, and incomplete automation ergonomics.
|
||||
|
||||
Architecture
|
||||
Mission Control remains the source of truth with REST + SSE endpoints. A first-party CLI consumes those APIs with profile-based auth and machine-friendly output. TUI is layered on top of CLI API client primitives for shared behavior. API contract reliability is enforced through route-to-spec parity checks in CI.
|
||||
|
||||
Tech Stack
|
||||
- Existing: Next.js app-router API, SQLite, Node runtime, SSE
|
||||
- New: Node CLI runtime (no heavy deps required for v1), optional TUI in terminal ANSI mode
|
||||
- Testing: existing Playwright/Vitest patterns + CLI smoke tests + OpenAPI parity checks
|
||||
|
||||
---
|
||||
|
||||
## 1) Problem statement
|
||||
|
||||
Current Mission Control backend has strong capabilities for agent orchestration, but external automation quality is constrained by:
|
||||
1. API surface drift between route handlers, openapi.json, and /api/index.
|
||||
2. No first-party comprehensive CLI for operators/agents.
|
||||
3. Uneven hardening around operational concerns (auth posture defaults, multi-instance rate limiting strategy, spawn history durability).
|
||||
4. Incomplete UX for non-interactive agent workflows (idempotent commands, stable JSON output, strict exit codes).
|
||||
|
||||
Result: agents can use Mission Control partially, but not yet with high confidence as a full control plane.
|
||||
|
||||
## 2) Product objectives
|
||||
|
||||
Primary objectives
|
||||
1. Deliver a first-party CLI with functional parity across core agent workflows.
|
||||
2. Add optional TUI for rapid situational awareness and interactive operations.
|
||||
3. Establish API contract parity as an enforceable quality gate.
|
||||
4. Improve reliability and security defaults for autonomous operation.
|
||||
|
||||
Success criteria
|
||||
- 95%+ of documented operator workflows executable via CLI without web UI.
|
||||
- Contract parity CI gate blocks drift between route handlers and OpenAPI.
|
||||
- CLI supports machine mode: stable JSON schemas and deterministic exit codes.
|
||||
- TUI can monitor and trigger core actions (agents/tasks/sessions/events).
|
||||
|
||||
Non-goals (v1)
|
||||
- Replacing the web UI.
|
||||
- Building an advanced ncurses framework dependency stack if not needed.
|
||||
- Supporting all historical/legacy endpoint aliases immediately.
|
||||
|
||||
## 3) Personas and workflows
|
||||
|
||||
Personas
|
||||
1. Autonomous agent runtime (headless, non-interactive).
|
||||
2. Human operator (terminal-first incident response).
|
||||
3. Platform maintainer (release and contract governance).
|
||||
|
||||
Critical workflows
|
||||
- Poll task queue and claim work.
|
||||
- Manage agents (register/update/diagnose/wake).
|
||||
- Manage sessions (list/control/continue/transcript).
|
||||
- Observe events in real time.
|
||||
- Track token usage and attribution.
|
||||
- Manage skills, cron jobs, and direct CLI connections.
|
||||
|
||||
## 4) Functional requirements
|
||||
|
||||
### A. API contract governance
|
||||
- FR-A1: A parity checker must compare discovered route handlers and OpenAPI paths/methods.
|
||||
- FR-A2: CI fails on non-ignored mismatches.
|
||||
- FR-A3: Ignore list must be explicit and reviewable.
|
||||
- FR-A4: /api/index should be validated or generated from same contract source.
|
||||
|
||||
### B. CLI v1 requirements
|
||||
- FR-B1: Profile-based configuration (URL + auth mode + key/cookie).
|
||||
- FR-B2: Commands must support --json output and strict exit codes.
|
||||
- FR-B3: Support key domains:
|
||||
- auth
|
||||
- agents
|
||||
- tasks
|
||||
- sessions
|
||||
- connect
|
||||
- tokens
|
||||
- skills
|
||||
- cron
|
||||
- events watch
|
||||
- raw request fallback
|
||||
- FR-B4: Non-interactive defaults suitable for autonomous agents.
|
||||
- FR-B5: Request timeout + retry controls for reliable automation.
|
||||
|
||||
### C. TUI v1 requirements (optional but included)
|
||||
- FR-C1: Dashboard with agents/tasks/sessions summary panels.
|
||||
- FR-C2: Keyboard-driven refresh/navigation.
|
||||
- FR-C3: Trigger key operations (wake agent, queue poll, session controls).
|
||||
- FR-C4: Clear degraded mode messaging if endpoints unavailable.
|
||||
|
||||
### D. Platform hardening requirements
|
||||
- FR-D1: Document and enforce least-privilege auth guidance for agent keys.
|
||||
- FR-D2: Expose explicit warning/controls for global admin API key usage.
|
||||
- FR-D3: Add durable spawn history persistence (DB-backed) replacing log scraping fallback.
|
||||
- FR-D4: Add scalable rate-limit strategy plan (in-memory now, pluggable backend next).
|
||||
|
||||
## 5) CLI command map (v1)
|
||||
|
||||
mc auth
|
||||
- login --username --password
|
||||
- logout
|
||||
- whoami
|
||||
|
||||
mc agents
|
||||
- list
|
||||
- get --id
|
||||
- create --name --role
|
||||
- update --id ...fields
|
||||
- delete --id
|
||||
- wake --id
|
||||
- diagnostics --id
|
||||
- heartbeat --id
|
||||
- memory get|set --id
|
||||
- soul get|set --id
|
||||
|
||||
mc tasks
|
||||
- list [filters]
|
||||
- get --id
|
||||
- create --title [--description --priority --assigned-to]
|
||||
- update --id ...fields
|
||||
- delete --id
|
||||
- queue --agent [--max-capacity]
|
||||
- comments list/add --id
|
||||
- broadcast --id
|
||||
|
||||
mc sessions
|
||||
- list
|
||||
- control --id --action monitor|pause|terminate
|
||||
- continue --kind claude-code|codex-cli --id --prompt
|
||||
- transcript --id [--source]
|
||||
|
||||
mc connect
|
||||
- register --tool-name --agent-name [...]
|
||||
- list
|
||||
- disconnect --connection-id
|
||||
|
||||
mc tokens
|
||||
- list
|
||||
- stats
|
||||
- by-agent [--days]
|
||||
- export --format json|csv
|
||||
|
||||
mc skills
|
||||
- list
|
||||
- content --source --name
|
||||
- upsert --source --name --file
|
||||
- delete --source --name
|
||||
- check --source --name
|
||||
|
||||
mc cron
|
||||
- list
|
||||
- create/update/pause/resume/remove/run
|
||||
|
||||
mc events
|
||||
- watch [--types]
|
||||
|
||||
mc raw
|
||||
- raw --method GET --path /api/... [--body '{}']
|
||||
|
||||
## 6) UX and interface requirements
|
||||
|
||||
- Default output must be concise and human-readable; `--json` returns a machine-stable payload.
|
||||
- All non-2xx responses include normalized error object and non-zero exit.
|
||||
- Exit code taxonomy:
|
||||
- 0 success
|
||||
- 2 usage error
|
||||
- 3 auth error
|
||||
- 4 permission error
|
||||
- 5 network/timeout
|
||||
- 6 server error
|
||||
- Pagination/filter flags normalized across list commands.
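The exit-code taxonomy above is what makes retry logic in automation safe: only transient failures should be retried. A hedged sketch of a wrapper an agent might use around `mc` invocations (the taxonomy is this PRD's proposal, not yet shipped behavior):

```shell
# Only network/timeout (5) and server errors (6) are worth retrying;
# usage (2), auth (3), and permission (4) errors will not fix themselves.
should_retry() {
  case "$1" in
    5|6) echo yes ;;
    *)   echo no ;;
  esac
}

# Bounded retry wrapper: with_retry mc tasks queue --agent scout --json
with_retry() {
  local attempt rc
  for attempt in 1 2 3; do
    "$@" && return 0
    rc=$?
    [ "$(should_retry "$rc")" = "yes" ] || return "$rc"
    sleep "$attempt"   # simple linear backoff between attempts
  done
  return "$rc"
}
```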
|
||||
|
||||
## 7) Security requirements
|
||||
|
||||
- Do not log raw API keys or cookies.
|
||||
- Redact sensitive headers in verbose/debug output.
|
||||
- Provide per-profile auth scope awareness (viewer/operator/admin implied risk labeling).
|
||||
- Strong guidance: prefer agent-scoped keys over the global admin key.
|
||||
|
||||
## 8) Reliability requirements
|
||||
|
||||
- Configurable timeout/retry/backoff.
|
||||
- Safe JSON parsing and clear error surfaces.
|
||||
- SSE reconnection strategy for watch mode.
|
||||
- Graceful handling for partial endpoint availability.
|
||||
|
||||
## 9) Testing strategy
|
||||
|
||||
Unit
|
||||
- CLI arg parsing and request mapping.
|
||||
- Output modes and exit codes.
|
||||
- API parity checker route extraction and mismatch detection.
|
||||
|
||||
Integration
|
||||
- CLI against local Mission Control test server.
|
||||
- Auth modes (API key, login session where enabled).
|
||||
- Session control, queue polling, skills CRUD.
|
||||
|
||||
E2E
|
||||
- Playwright/terminal-driven smoke for critical command paths.
|
||||
- TUI render and keyboard navigation smoke tests.
|
||||
|
||||
Contract tests
|
||||
- OpenAPI parity check in CI.
|
||||
- Optional index parity check in CI.
|
||||
|
||||
## 10) Rollout plan
|
||||
|
||||
Phase 0: Contract stabilization
|
||||
- Add parity checker and fail CI on drift.
|
||||
- Resolve existing mismatches.
|
||||
|
||||
Phase 1: CLI core
|
||||
- Ship profile/auth client + core command groups (auth/agents/tasks/sessions/connect).
|
||||
|
||||
Phase 2: CLI expansion
|
||||
- tokens/skills/cron/events/raw + transcript ergonomics.
|
||||
|
||||
Phase 3: TUI
|
||||
- Live dashboard + action shortcuts.
|
||||
|
||||
Phase 4: Hardening
|
||||
- durable spawn history
|
||||
- auth warnings and safeguards
|
||||
- scalable rate-limit backend abstraction
|
||||
|
||||
## 11) Risks and mitigations
|
||||
|
||||
Risk: Large API surface causes long-tail parity gaps.
|
||||
Mitigation: enforce parity checker + allowlist for temporary exceptions.
|
||||
|
||||
Risk: Auth complexity across cookie/key/proxy modes.
|
||||
Mitigation: profile abstraction + explicit mode selection and diagnostics.
|
||||
|
||||
Risk: CLI churn if endpoint contracts continue changing.
|
||||
Mitigation: typed response normalizers + compatibility layer + semver release notes.
|
||||
|
||||
## 12) Acceptance criteria
|
||||
|
||||
- PRD approved by maintainers.
|
||||
- CLI provides end-to-end control for core workflows.
|
||||
- Contract parity CI gate active and green.
|
||||
- TUI displays operational state and triggers key actions.
|
||||
- Security and reliability hardening changes documented and tested.
|
||||
|
||||
## 13) Immediate implementation tasks (next 1-2 PRs)
|
||||
|
||||
PR 1
|
||||
1. Add API parity checker script and CI command.
|
||||
2. Add first-party CLI scaffold with command routing and normalized request layer.
|
||||
3. Add docs for CLI profiles/auth/output contract.
|
||||
|
||||
PR 2
|
||||
1. Implement full command matrix.
|
||||
2. Add TUI dashboard shell.
|
||||
3. Add CLI integration tests.
|
||||
4. Introduce durable spawn history model and endpoint alignment.
|
||||
|
|
@ -0,0 +1,235 @@
|
|||
# Quickstart: Your First Agent in 5 Minutes
|
||||
|
||||
Get from zero to a working agent loop with nothing but Mission Control and `curl`. No gateway, no OpenClaw, no extra dependencies.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Mission Control running (`pnpm dev` or Docker)
|
||||
- An admin account (visit `/setup` on first run)
|
||||
- Your API key (auto-generated on first run, shown in Settings)
|
||||
|
||||
## Step 1: Start Mission Control
|
||||
|
||||
```bash
|
||||
pnpm dev
|
||||
```
|
||||
|
||||
Open http://localhost:3000 and log in. If this is your first run, visit http://localhost:3000/setup to create your admin account.
|
||||
|
||||
Your API key is displayed in **Settings > API Key**. Export it for the commands below:
|
||||
|
||||
```bash
|
||||
export MC_URL=http://localhost:3000
|
||||
export MC_API_KEY=your-api-key
|
||||
```
|
||||
|
||||
## Step 2: Register an Agent
|
||||
|
||||
Agents can self-register via the API. This is how autonomous agents announce themselves to Mission Control:
|
||||
|
||||
```bash
|
||||
curl -s -X POST "$MC_URL/api/agents/register" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{"name": "scout", "role": "researcher"}' | jq
|
||||
```
|
||||
|
||||
Expected response:
|
||||
|
||||
```json
|
||||
{
|
||||
"agent": {
|
||||
"id": 1,
|
||||
"name": "scout",
|
||||
"role": "researcher",
|
||||
"status": "idle",
|
||||
"created_at": 1711234567
|
||||
},
|
||||
"registered": true,
|
||||
"message": "Agent registered successfully"
|
||||
}
|
||||
```
|
||||
|
||||
Note the `id` — you'll need it for heartbeats. The registration is idempotent: calling it again with the same name just updates the agent's status to `idle`.
|
||||
|
||||
**Valid roles**: `coder`, `reviewer`, `tester`, `devops`, `researcher`, `assistant`, `agent`
|
||||
|
||||
## Step 3: Create a Task
|
||||
|
||||
```bash
|
||||
curl -s -X POST "$MC_URL/api/tasks" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"title": "Research competitor pricing",
|
||||
"description": "Find pricing pages for the top 3 competitors and summarize their tiers.",
|
||||
"priority": "medium",
|
||||
"assigned_to": "scout"
|
||||
}' | jq
|
||||
```
|
||||
|
||||
Expected response:
|
||||
|
||||
```json
|
||||
{
|
||||
"task": {
|
||||
"id": 1,
|
||||
"title": "Research competitor pricing",
|
||||
"status": "assigned",
|
||||
"priority": "medium",
|
||||
"assigned_to": "scout",
|
||||
"tags": [],
|
||||
"metadata": {}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The task starts in `assigned` status because you specified `assigned_to`. If you omit it, the task goes to `inbox` for manual triage.
|
||||
|
||||
## Step 4: Poll the Task Queue
|
||||
|
||||
This is how your agent picks up work. The queue endpoint atomically claims the highest-priority available task:
|
||||
|
||||
```bash
|
||||
curl -s "$MC_URL/api/tasks/queue?agent=scout" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" | jq
|
||||
```
|
||||
|
||||
Expected response:
|
||||
|
||||
```json
|
||||
{
|
||||
"task": {
|
||||
"id": 1,
|
||||
"title": "Research competitor pricing",
|
||||
"status": "in_progress",
|
||||
"assigned_to": "scout"
|
||||
},
|
||||
"reason": "assigned",
|
||||
"agent": "scout",
|
||||
"timestamp": 1711234600
|
||||
}
|
||||
```
|
||||
|
||||
The task status automatically moved from `assigned` to `in_progress`. The `reason` field tells you why this task was returned:
|
||||
|
||||
| Reason | Meaning |
|
||||
|--------|---------|
|
||||
| `assigned` | Claimed a new task from the queue |
|
||||
| `continue_current` | Agent already has a task in progress |
|
||||
| `at_capacity` | Agent is at max concurrent tasks |
|
||||
| `no_tasks_available` | Nothing in the queue for this agent |
|
||||
|
||||
## Step 5: Complete the Task
|
||||
|
||||
When your agent finishes work, update the task status and add a resolution:
|
||||
|
||||
```bash
|
||||
curl -s -X PUT "$MC_URL/api/tasks/1" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{
|
||||
"status": "done",
|
||||
"resolution": "Found pricing for Acme ($29/49/99), Widget Corp ($19/39/79), and Gadget Inc ($25/50/100). All use 3-tier SaaS model. Summary doc attached."
|
||||
}' | jq
|
||||
```
|
||||
|
||||
## Step 6: Send a Heartbeat
|
||||
|
||||
Heartbeats tell Mission Control your agent is alive. Without them, agents are marked offline after 10 minutes:
|
||||
|
||||
```bash
|
||||
curl -s -X POST "$MC_URL/api/agents/1/heartbeat" \
|
||||
-H "Authorization: Bearer $MC_API_KEY" \
|
||||
-H "Content-Type: application/json" \
|
||||
-d '{}' | jq
|
||||
```
|
||||
|
||||
Expected response:
|
||||
|
||||
```json
|
||||
{
|
||||
"success": true,
|
||||
"token_recorded": false,
|
||||
"work_items": [],
|
||||
"timestamp": 1711234700
|
||||
}
|
||||
```
|
||||
|
||||
In a real agent, you'd send heartbeats every 30 seconds in a background loop. The `work_items` array returns any pending tasks, mentions, or notifications.
|
||||
|
||||
## The Agent Loop
|
||||
|
||||
Here's the complete pattern your agent should follow:
|
||||
|
||||
```
|
||||
┌──────────────────────────────────────┐
│ 1. Register with MC                  │
│    POST /api/agents/register         │
└──────────────┬───────────────────────┘
               │
               v
┌──────────────────────────────────────┐
│ 2. Poll for work                     │◄──────┐
│    GET /api/tasks/queue              │       │
└──────────────┬───────────────────────┘       │
               │                               │
               v                               │
┌──────────────────────────────────────┐       │
│ 3. Do the work                       │       │
│    (your agent logic here)           │       │
└──────────────┬───────────────────────┘       │
               │                               │
               v                               │
┌──────────────────────────────────────┐       │
│ 4. Report result                     │       │
│    PUT /api/tasks/{id}               │       │
└──────────────┬───────────────────────┘       │
               │                               │
               v                               │
┌──────────────────────────────────────┐       │
│ 5. Heartbeat + repeat                │───────┘
│    POST /api/agents/{id}/heartbeat   │
└──────────────────────────────────────┘
|
||||
```
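The loop above can be sketched as a small script. This is a sketch under assumptions: `run_task` is a hypothetical placeholder for your agent's real work, `MC_URL` and `MC_API_KEY` are exported, and `jq` is installed. The functions are defined but not invoked.

```shell
# Register the agent and print its id (idempotent per the register endpoint).
register() {
  curl -s -X POST "$MC_URL/api/agents/register" \
    -H "Authorization: Bearer $MC_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"name": "scout", "role": "researcher"}' | jq -r '.agent.id'
}

# Heartbeat, poll, work, report, repeat.
agent_loop() {
  local agent_id="$1" task_id result
  while :; do
    curl -s -X POST "$MC_URL/api/agents/$agent_id/heartbeat" \
      -H "Authorization: Bearer $MC_API_KEY" \
      -H "Content-Type: application/json" -d '{}' > /dev/null
    task_id=$(curl -s "$MC_URL/api/tasks/queue?agent=scout" \
      -H "Authorization: Bearer $MC_API_KEY" | jq -r '.task.id // empty')
    if [ -n "$task_id" ]; then
      result=$(run_task "$task_id")   # your agent logic here (hypothetical)
      curl -s -X PUT "$MC_URL/api/tasks/$task_id" \
        -H "Authorization: Bearer $MC_API_KEY" \
        -H "Content-Type: application/json" \
        -d "{\"status\": \"done\", \"resolution\": $(jq -n --arg r "$result" '$r')}" \
        > /dev/null
    fi
    sleep 30
  done
}
```

Entry point would be `agent_loop "$(register)"`; the `jq -n --arg` call JSON-encodes the resolution string safely.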
|
||||
|
||||
## Using the CLI Instead
|
||||
|
||||
If you prefer the CLI over `curl`, the same flow works with `pnpm mc`:
|
||||
|
||||
```bash
|
||||
# List agents
|
||||
node scripts/mc-cli.cjs agents list --json
|
||||
|
||||
# Create an agent
|
||||
node scripts/mc-cli.cjs agents create --name scout --role researcher --json
|
||||
|
||||
# Create a task
|
||||
node scripts/mc-cli.cjs tasks create --title "Research competitors" --body '{"assigned_to":"scout","priority":"medium"}' --json
|
||||
|
||||
# Poll the queue
|
||||
node scripts/mc-cli.cjs tasks queue --agent scout --json
|
||||
|
||||
# Watch events in real time
|
||||
node scripts/mc-cli.cjs events watch --types task,agent
|
||||
```
|
||||
|
||||
See [CLI Reference](cli-agent-control.md) for the full command list.
|
||||
|
||||
## Using the MCP Server (for Claude Code agents)
|
||||
|
||||
For agents built with Claude Code, the MCP server is the recommended integration:
|
||||
|
||||
```bash
|
||||
claude mcp add mission-control -- node /path/to/mission-control/scripts/mc-mcp-server.cjs
|
||||
```
|
||||
|
||||
Set `MC_URL` and `MC_API_KEY` in your environment. The MCP server exposes 35+ tools for agents, tasks, sessions, memory, and more. See [CLI Integration](cli-integration.md) for details.
|
||||
|
||||
## What's Next?
|
||||
|
||||
- **[Agent Setup Guide](agent-setup.md)** — Configure SOUL personalities, agent sources, and heartbeat settings
|
||||
- **[Orchestration Patterns](orchestration.md)** — Multi-agent workflows, auto-dispatch, quality review gates
|
||||
- **[CLI Reference](cli-agent-control.md)** — Full CLI command reference
|
||||
- **[CLI Integration](cli-integration.md)** — Direct CLI and gateway-free connections
|
||||
- **[Deployment Guide](deployment.md)** — Production deployment options
|
||||
openapi.json: 4478 lines changed (diff suppressed because it is too large)
|
|
@ -4,6 +4,11 @@
|
|||
"description": "OpenClaw Mission Control — open-source agent orchestration dashboard",
|
||||
"scripts": {
|
||||
"verify:node": "node scripts/check-node-version.mjs",
|
||||
"api:parity": "node scripts/check-api-contract-parity.mjs --root . --openapi openapi.json --ignore-file scripts/api-contract-parity.ignore",
|
||||
"api:parity:json": "node scripts/check-api-contract-parity.mjs --root . --openapi openapi.json --ignore-file scripts/api-contract-parity.ignore --json",
|
||||
"mc": "node scripts/mc-cli.cjs",
|
||||
"mc:mcp": "node scripts/mc-mcp-server.cjs",
|
||||
"mc:tui": "node scripts/mc-tui.cjs",
|
||||
"dev": "pnpm run verify:node && next dev --hostname 127.0.0.1 --port ${PORT:-3000}",
|
||||
"build": "pnpm run verify:node && next build",
|
||||
"start": "pnpm run verify:node && next start --hostname 0.0.0.0 --port ${PORT:-3000}",
|
||||
|
|
|
|||
|
|
@ -0,0 +1,30 @@
|
|||
# Mission Control
|
||||
|
||||
> Open-source dashboard for AI agent orchestration.
|
||||
|
||||
Mission Control is a self-hosted dashboard for managing AI agent fleets. It provides task dispatch, cost tracking, quality review gates, recurring task scheduling, and multi-agent coordination — all powered by SQLite with zero external dependencies.
|
||||
|
||||
## Key Features
|
||||
- Agent management with full lifecycle (register, heartbeat, wake, retire)
|
||||
- Kanban task board with priorities, assignments, and comments
|
||||
- Task queue with atomic claiming and priority-based dispatch
|
||||
- Auto-dispatch with model routing (Opus/Sonnet/Haiku by task complexity)
|
||||
- Aegis quality review gates for task sign-off
|
||||
- Real-time monitoring via WebSocket + SSE
|
||||
- Token usage and cost tracking with per-model breakdowns
|
||||
- Natural language recurring tasks with cron scheduling
|
||||
- MCP server with 35+ tools for agent integration
|
||||
- CLI for headless/scripted usage
|
||||
- Role-based access control (viewer, operator, admin)
|
||||
- REST API with OpenAPI spec
|
||||
|
||||
## Stack
|
||||
Next.js 16, React 19, TypeScript 5, SQLite (better-sqlite3), Tailwind CSS
|
||||
|
||||
## Links
|
||||
- Source: https://github.com/builderz-labs/mission-control
|
||||
- Landing page: https://mc.builderz.dev
|
||||
- License: MIT
|
||||
|
||||
## llms-full.txt
|
||||
For the complete API reference and integration guide, see docs/cli-agent-control.md in the repository.
|
||||
|
|
@ -0,0 +1,9 @@
|
|||
# Mission Control — AI Agent Orchestration Dashboard
|
||||
# https://github.com/builderz-labs/mission-control
|
||||
|
||||
User-agent: *
|
||||
Allow: /
|
||||
Disallow: /api/
|
||||
Disallow: /setup
|
||||
Disallow: /login
|
||||
Disallow: /_next/
|
||||
|
|
@ -0,0 +1,3 @@
|
|||
# API contract parity baseline ignore list
|
||||
# One operation per line: METHOD /api/path
|
||||
# Keep this list shrinking over time; remove entries when route/spec parity is fixed.
|
||||
|
|
@ -0,0 +1,153 @@
|
|||
#!/usr/bin/env node
|
||||
import fs from 'node:fs'
|
||||
import path from 'node:path'
|
||||
|
||||
const HTTP_METHODS = ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS', 'HEAD']
|
||||
|
||||
function toPosix(input) {
|
||||
return input.split(path.sep).join('/')
|
||||
}
|
||||
|
||||
function normalizeSegment(segment) {
|
||||
if (segment.startsWith('[[...') && segment.endsWith(']]')) return `{${segment.slice(5, -2)}}`
|
||||
if (segment.startsWith('[...') && segment.endsWith(']')) return `{${segment.slice(4, -1)}}`
|
||||
if (segment.startsWith('[') && segment.endsWith(']')) return `{${segment.slice(1, -1)}}`
|
||||
return segment
|
||||
}
|
||||
|
||||
function routeFileToApiPath(projectRoot, fullPath) {
|
||||
const rel = toPosix(path.relative(projectRoot, fullPath))
|
||||
const withoutRoute = rel.replace(/\/route\.tsx?$/, '')
|
||||
const trimmed = withoutRoute.startsWith('src/app/api') ? withoutRoute.slice('src/app/api'.length) : withoutRoute
|
||||
const parts = trimmed.split('/').filter(Boolean).map(normalizeSegment)
|
||||
return `/api${parts.length ? `/${parts.join('/')}` : ''}`
|
||||
}
|
||||
|
||||
function extractHttpMethods(source) {
|
||||
const methods = []
|
||||
for (const method of HTTP_METHODS) {
|
||||
const constExport = new RegExp(`export\\s+const\\s+${method}\\s*=`, 'm')
|
||||
const fnExport = new RegExp(`export\\s+(?:async\\s+)?function\\s+${method}\\s*\\(`, 'm')
|
||||
if (constExport.test(source) || fnExport.test(source)) methods.push(method)
|
||||
}
|
||||
return methods
|
||||
}
|
||||
|
||||
function walkRouteFiles(dir, out = []) {
|
||||
if (!fs.existsSync(dir)) return out
|
||||
for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
|
||||
const full = path.join(dir, entry.name)
|
||||
if (entry.isDirectory()) walkRouteFiles(full, out)
|
||||
else if (entry.isFile() && /^route\.tsx?$/.test(entry.name)) out.push(full)
|
||||
}
|
||||
return out
|
||||
}
|
||||
|
||||
function normalizeOperation(operation) {
|
||||
const [method = '', ...pathParts] = String(operation || '').trim().split(' ')
|
||||
const normalizedMethod = method.toUpperCase()
|
||||
const normalizedPath = pathParts.join(' ').trim()
|
||||
return `${normalizedMethod} ${normalizedPath}`
|
||||
}
|
||||
|
||||
function parseIgnoreArg(ignoreArg) {
|
||||
if (!ignoreArg) return []
|
||||
return ignoreArg
|
||||
.split(',')
|
||||
.map((x) => x.trim())
|
||||
.filter(Boolean)
|
||||
.map((x) => normalizeOperation(x))
|
||||
}
|
||||
|
||||
function parseArgs(argv) {
|
||||
const flags = {}
|
||||
for (let i = 0; i < argv.length; i += 1) {
|
||||
const token = argv[i]
|
||||
if (!token.startsWith('--')) continue
|
||||
const key = token.slice(2)
|
||||
const next = argv[i + 1]
|
||||
if (!next || next.startsWith('--')) {
|
||||
flags[key] = true
|
||||
continue
|
||||
}
|
||||
flags[key] = next
|
||||
i += 1
|
||||
}
|
||||
return flags
|
||||
}
|
||||
|
||||
function run() {
|
||||
const flags = parseArgs(process.argv.slice(2))
|
||||
const projectRoot = path.resolve(String(flags.root || process.cwd()))
|
||||
const openapiPath = path.resolve(projectRoot, String(flags.openapi || 'openapi.json'))
|
||||
const ignoreFile = flags['ignore-file'] ? path.resolve(projectRoot, String(flags['ignore-file'])) : null
|
||||
const ignoreInline = parseIgnoreArg(flags.ignore)
|
||||
let ignore = new Set(ignoreInline)
|
||||
|
||||
if (ignoreFile && fs.existsSync(ignoreFile)) {
|
||||
const lines = fs
|
||||
.readFileSync(ignoreFile, 'utf8')
|
||||
.split('\n')
|
||||
.map((x) => x.trim())
|
||||
.filter((x) => x && !x.startsWith('#'))
|
||||
.map((x) => normalizeOperation(x))
|
||||
ignore = new Set([...ignore, ...lines])
|
||||
}
|
||||
|
||||
const openapi = JSON.parse(fs.readFileSync(openapiPath, 'utf8'))
|
||||
const openapiOps = new Set()
|
||||
for (const [rawPath, pathItem] of Object.entries(openapi.paths || {})) {
|
||||
for (const method of Object.keys(pathItem || {})) {
|
||||
const upper = method.toUpperCase()
|
||||
if (HTTP_METHODS.includes(upper)) {
|
||||
openapiOps.add(`${upper} ${rawPath}`)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
const routeFiles = walkRouteFiles(path.join(projectRoot, 'src/app/api'))
|
||||
const routeOps = new Set()
|
||||
for (const file of routeFiles) {
|
||||
const source = fs.readFileSync(file, 'utf8')
|
||||
const methods = extractHttpMethods(source)
|
||||
const apiPath = routeFileToApiPath(projectRoot, file)
|
||||
for (const method of methods) routeOps.add(`${method} ${apiPath}`)
|
||||
}
|
||||
|
||||
const missingInOpenApi = [...routeOps].filter((op) => !openapiOps.has(op) && !ignore.has(op)).sort()
|
||||
const missingInRoutes = [...openapiOps].filter((op) => !routeOps.has(op) && !ignore.has(op)).sort()
|
||||
|
||||
const summary = {
|
||||
ok: missingInOpenApi.length === 0 && missingInRoutes.length === 0,
|
||||
totals: {
|
||||
routeOperations: routeOps.size,
|
||||
openapiOperations: openapiOps.size,
|
||||
ignoredOperations: ignore.size,
|
||||
},
|
||||
missingInOpenApi,
|
||||
missingInRoutes,
|
||||
}
|
||||
|
||||
if (flags.json) {
|
||||
console.log(JSON.stringify(summary, null, 2))
|
||||
} else {
|
||||
console.log('API contract parity check')
|
||||
console.log(`- route operations: ${summary.totals.routeOperations}`)
|
||||
console.log(`- openapi operations: ${summary.totals.openapiOperations}`)
|
||||
console.log(`- ignored entries: ${summary.totals.ignoredOperations}`)
|
||||
if (missingInOpenApi.length) {
|
||||
console.log('\nMissing in OpenAPI:')
|
||||
for (const op of missingInOpenApi) console.log(` - ${op}`)
|
||||
}
|
||||
if (missingInRoutes.length) {
|
||||
console.log('\nMissing in routes:')
|
||||
for (const op of missingInRoutes) console.log(` - ${op}`)
|
||||
}
|
||||
if (!missingInOpenApi.length && !missingInRoutes.length) {
|
||||
console.log('\n✅ Contract parity OK')
|
||||
}
|
||||
}
|
||||
|
||||
process.exit(summary.ok ? 0 : 1)
|
||||
}
|
||||
|
||||
run()
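The parity script's segment normalization maps Next.js dynamic route folders onto OpenAPI path templates. A standalone sketch of that mapping (reimplemented here for illustration, not imported from the script):

```javascript
// Map a Next.js route folder segment to an OpenAPI path template segment:
//   [id]        -> {id}     (dynamic segment)
//   [...slug]   -> {slug}   (catch-all)
//   [[...slug]] -> {slug}   (optional catch-all)
function normalizeSegment(segment) {
  if (segment.startsWith('[[...') && segment.endsWith(']]')) return `{${segment.slice(5, -2)}}`;
  if (segment.startsWith('[...') && segment.endsWith(']')) return `{${segment.slice(4, -1)}}`;
  if (segment.startsWith('[') && segment.endsWith(']')) return `{${segment.slice(1, -1)}}`;
  return segment;
}

console.log(['agents', '[id]', 'memory'].map(normalizeSegment).join('/'));
// -> agents/{id}/memory
```

The ordering matters: the optional catch-all pattern must be tested before the plain catch-all, which must be tested before the single dynamic segment, since each later pattern is a prefix of the earlier ones.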
|
||||
|
|
@ -0,0 +1,733 @@
|
|||
#!/usr/bin/env node
|
||||
/*
|
||||
Mission Control CLI (v2)
|
||||
- Zero heavy dependencies
|
||||
- API-key first for agent automation
|
||||
- JSON mode + stable exit codes
|
||||
- Lazy command resolution (no eager require() calls)
|
||||
- SSE streaming for events watch
|
||||
*/
|
||||
|
||||
const fs = require('node:fs');
|
||||
const path = require('node:path');
|
||||
const os = require('node:os');
|
||||
|
||||
const EXIT = {
|
||||
OK: 0,
|
||||
USAGE: 2,
|
||||
AUTH: 3,
|
||||
FORBIDDEN: 4,
|
||||
NETWORK: 5,
|
||||
SERVER: 6,
|
||||
};
|
||||
|
||||
function parseArgs(argv) {
|
||||
const out = { _: [], flags: {} };
|
||||
for (let i = 0; i < argv.length; i += 1) {
|
||||
const token = argv[i];
|
||||
if (!token.startsWith('--')) {
|
||||
out._.push(token);
|
||||
continue;
|
||||
}
|
||||
const key = token.slice(2);
|
||||
const next = argv[i + 1];
|
||||
if (!next || next.startsWith('--')) {
|
||||
out.flags[key] = true;
|
||||
continue;
|
||||
}
|
||||
out.flags[key] = next;
|
||||
i += 1;
|
||||
}
|
||||
return out;
|
||||
}
|
||||
|
||||
function usage() {
|
||||
console.log(`Mission Control CLI
|
||||
|
||||
Usage:
|
||||
mc <group> <action> [--flags]
|
||||
|
||||
Groups:
|
||||
auth login/logout/whoami
|
||||
agents list/get/create/update/delete/wake/diagnostics/heartbeat
|
||||
agents memory get|set|clear / soul get|set|templates / attribution
|
||||
tasks list/get/create/update/delete/queue
|
||||
tasks comments list|add / broadcast
|
||||
sessions list/control/continue/transcript
|
||||
connect register/list/disconnect
|
||||
tokens list/stats/by-agent/agent-costs/task-costs/trends/export/rotate
|
||||
skills list/content/upsert/delete/check
|
||||
cron list/create/update/pause/resume/remove/run
|
||||
events watch
|
||||
status health/overview/dashboard/gateway/models/capabilities
|
||||
export audit/tasks/activities/pipelines
|
||||
raw request fallback
|
||||
|
||||
Common flags:
|
||||
--profile <name> profile name (default: default)
|
||||
--url <base_url> override profile URL
|
||||
--api-key <key> override profile API key
|
||||
--json JSON output
|
||||
--timeout-ms <n> request timeout (default 20000)
|
||||
--help show help
|
||||
|
||||
Examples:
|
||||
mc agents list --json
|
||||
mc agents memory get --id 5
|
||||
mc agents soul set --id 5 --template operator
|
||||
mc tasks queue --agent Aegis --max-capacity 2
|
||||
mc tasks comments list --id 42
|
||||
mc tasks comments add --id 42 --content "Looks good"
|
||||
mc sessions transcript --kind claude-code --id abc123
|
||||
mc tokens agent-costs --timeframe week
|
||||
mc tokens export --format csv
|
||||
mc status health
|
||||
mc events watch --types agent,task
|
||||
mc raw --method GET --path /api/status --json
|
||||
`);
|
||||
}
|
||||
|
||||
function profilePath(name) {
|
||||
return path.join(os.homedir(), '.mission-control', 'profiles', `${name}.json`);
|
||||
}
|
||||
|
||||
function ensureParentDir(filePath) {
|
||||
fs.mkdirSync(path.dirname(filePath), { recursive: true });
|
||||
}
|
||||
|
||||
function loadProfile(name) {
|
||||
const p = profilePath(name);
|
||||
if (!fs.existsSync(p)) {
|
||||
return {
|
||||
name,
|
||||
url: process.env.MC_URL || 'http://127.0.0.1:3000',
|
||||
apiKey: process.env.MC_API_KEY || '',
|
||||
cookie: process.env.MC_COOKIE || '',
|
||||
};
|
||||
}
|
||||
try {
|
||||
const parsed = JSON.parse(fs.readFileSync(p, 'utf8'));
|
||||
return {
|
||||
name,
|
||||
url: parsed.url || process.env.MC_URL || 'http://127.0.0.1:3000',
|
||||
apiKey: parsed.apiKey || process.env.MC_API_KEY || '',
|
||||
cookie: parsed.cookie || process.env.MC_COOKIE || '',
|
||||
};
|
||||
} catch {
|
||||
return {
|
||||
name,
|
||||
url: process.env.MC_URL || 'http://127.0.0.1:3000',
|
||||
apiKey: process.env.MC_API_KEY || '',
|
||||
cookie: process.env.MC_COOKIE || '',
|
||||
};
|
||||
}
|
||||
}
|
||||
|
||||
function saveProfile(profile) {
|
||||
const p = profilePath(profile.name);
|
||||
ensureParentDir(p);
|
||||
fs.writeFileSync(p, `${JSON.stringify(profile, null, 2)}\n`, 'utf8');
|
||||
}
|
||||
|
||||
function normalizeBaseUrl(url) {
|
||||
return String(url || '').replace(/\/+$/, '');
|
||||
}
|
||||
|
||||
function mapStatusToExit(status) {
|
||||
if (!status) return EXIT.NETWORK;
|
||||
if (status === 401) return EXIT.AUTH;
|
||||
if (status === 403) return EXIT.FORBIDDEN;
|
||||
if (status >= 500) return EXIT.SERVER;
|
||||
return EXIT.USAGE;
|
||||
}
|
||||
|
||||
function required(flags, key) {
|
||||
const value = flags[key];
|
||||
if (value === undefined || value === true || String(value).trim() === '') {
|
||||
throw new Error(`Missing required flag --${key}`);
|
||||
}
|
||||
return value;
|
||||
}
|
||||
|
||||
function optional(flags, key, fallback) {
|
||||
const value = flags[key];
|
||||
if (value === undefined || value === true) return fallback;
|
||||
return String(value);
|
||||
}
|
||||
|
||||
function bodyFromFlags(flags) {
|
||||
if (flags.body) return JSON.parse(String(flags.body));
|
||||
return undefined;
|
||||
}
|
||||
|
||||
async function httpRequest({ baseUrl, apiKey, cookie, method, route, body, timeoutMs = 20000 }) {
|
||||
const controller = new AbortController();
|
||||
const timer = setTimeout(() => controller.abort(), timeoutMs);
|
||||
const headers = { Accept: 'application/json' };
|
||||
if (apiKey) headers['x-api-key'] = apiKey;
|
||||
if (cookie) headers['Cookie'] = cookie;
|
||||
let payload;
|
||||
if (body !== undefined) {
|
||||
headers['Content-Type'] = 'application/json';
|
||||
payload = JSON.stringify(body);
|
||||
}
|
||||
const url = `${normalizeBaseUrl(baseUrl)}${route.startsWith('/') ? route : `/${route}`}`;
|
||||
|
||||
try {
|
||||
const res = await fetch(url, {
|
||||
method,
|
||||
headers,
|
||||
body: payload,
|
||||
signal: controller.signal,
|
||||
});
|
||||
clearTimeout(timer);
|
||||
const text = await res.text();
|
||||
let data;
|
||||
try {
|
||||
data = text ? JSON.parse(text) : {};
|
||||
} catch {
|
||||
data = { raw: text };
|
||||
}
|
||||
return {
|
||||
ok: res.ok,
|
||||
status: res.status,
|
||||
data,
|
||||
setCookie: res.headers.get('set-cookie') || '',
|
||||
url,
|
||||
method,
|
||||
};
|
||||
} catch (err) {
|
||||
clearTimeout(timer);
|
||||
if (String(err?.name || '') === 'AbortError') {
|
||||
return { ok: false, status: 0, data: { error: `Request timeout after ${timeoutMs}ms` }, timeout: true, url, method };
|
||||
}
|
||||
return { ok: false, status: 0, data: { error: err?.message || 'Network error' }, network: true, url, method };
|
||||
}
|
||||
}
|
||||
|
||||
async function sseStream({ baseUrl, apiKey, cookie, route, timeoutMs, onEvent, onError }) {
|
||||
const headers = { Accept: 'text/event-stream' };
|
||||
if (apiKey) headers['x-api-key'] = apiKey;
|
||||
if (cookie) headers['Cookie'] = cookie;
|
||||
const url = `${normalizeBaseUrl(baseUrl)}${route}`;
|
||||
|
||||
const controller = new AbortController();
|
||||
let timer;
|
||||
if (timeoutMs && timeoutMs < Infinity) {
|
||||
timer = setTimeout(() => controller.abort(), timeoutMs);
|
||||
}
|
||||
|
||||
// Graceful shutdown on SIGINT/SIGTERM
|
||||
const shutdown = () => { controller.abort(); };
|
||||
process.on('SIGINT', shutdown);
|
||||
process.on('SIGTERM', shutdown);
|
||||
|
||||
try {
|
||||
const res = await fetch(url, { headers, signal: controller.signal });
|
||||
if (!res.ok) {
|
||||
const text = await res.text();
|
||||
onError({ status: res.status, data: text });
|
||||
return;
|
||||
}
|
||||
|
||||
const reader = res.body.getReader();
|
||||
const decoder = new TextDecoder();
|
||||
let buffer = '';
|
||||
|
||||
while (true) {
|
||||
const { done, value } = await reader.read();
|
||||
if (done) break;
|
||||
buffer += decoder.decode(value, { stream: true });
|
||||
|
||||
// Parse SSE frames
|
||||
const lines = buffer.split('\n');
|
||||
buffer = lines.pop() || '';
|
||||
let currentData = '';
|
||||
|
||||
for (const line of lines) {
|
||||
if (line.startsWith('data: ')) {
|
||||
currentData += (currentData ? '\n' : '') + line.slice(6);
|
||||
} else if (line === '' && currentData) {
|
||||
try {
|
||||
const event = JSON.parse(currentData);
|
||||
onEvent(event);
|
||||
} catch {
|
||||
// Non-JSON data line, emit raw
|
||||
onEvent({ raw: currentData });
|
||||
}
|
||||
currentData = '';
|
||||
}
|
||||
}
|
||||
}
|
||||
} catch (err) {
|
||||
if (err?.name === 'AbortError') return; // clean shutdown
|
||||
onError({ error: err?.message || 'SSE connection error' });
|
||||
} finally {
|
||||
if (timer) clearTimeout(timer);
|
||||
process.removeListener('SIGINT', shutdown);
|
||||
process.removeListener('SIGTERM', shutdown);
|
||||
}
|
||||
}
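The streaming loop above buffers partial network chunks and emits one event per blank-line-terminated SSE frame. The frame-splitting step can be exercised in isolation; this is a sketch of the parsing logic only, with the network and shutdown handling stripped out:

```javascript
// Parse complete SSE frames out of a text buffer. Returns the parsed events
// plus the unconsumed remainder (a partial line awaiting more bytes).
function parseSseChunk(buffer) {
  const events = [];
  const lines = buffer.split('\n');
  const rest = lines.pop() || ''; // last element may be an incomplete line
  let currentData = '';
  for (const line of lines) {
    if (line.startsWith('data: ')) {
      // Multiple data: lines in one frame join with a newline (SSE spec).
      currentData += (currentData ? '\n' : '') + line.slice(6);
    } else if (line === '' && currentData) {
      // Blank line terminates a frame; non-JSON payloads are emitted raw.
      try {
        events.push(JSON.parse(currentData));
      } catch {
        events.push({ raw: currentData });
      }
      currentData = '';
    }
  }
  return { events, rest };
}

const { events, rest } = parseSseChunk('data: {"type":"task"}\n\ndata: {"ty');
console.log(events.length, rest); // one complete event; partial frame kept for the next chunk
```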
|
||||
|
||||
function printResult(result, asJson) {
|
||||
if (asJson) {
|
||||
console.log(JSON.stringify(result, null, 2));
|
||||
return;
|
||||
}
|
||||
if (result.ok) {
|
||||
console.log(`OK ${result.status} ${result.method} ${result.url}`);
|
||||
if (result.data && Object.keys(result.data).length > 0) {
|
||||
console.log(JSON.stringify(result.data, null, 2));
|
||||
}
|
||||
return;
|
||||
}
|
||||
console.error(`ERROR ${result.status || 'NETWORK'} ${result.method} ${result.url}`);
|
||||
console.error(JSON.stringify(result.data, null, 2));
|
||||
}
|
||||
|
||||
// --- Command handlers ---
|
||||
// Each returns { method, route, body? } or performs the request itself and returns the HTTP result.
|
||||
|
||||
const commands = {
|
||||
auth: {
|
||||
async login(flags, ctx) {
|
||||
const username = required(flags, 'username');
|
||||
const password = required(flags, 'password');
|
||||
const result = await httpRequest({
|
||||
baseUrl: ctx.baseUrl,
|
||||
method: 'POST',
|
||||
route: '/api/auth/login',
|
||||
body: { username, password },
|
||||
timeoutMs: ctx.timeoutMs,
|
||||
});
|
||||
if (result.ok && result.setCookie) {
|
||||
ctx.profile.url = ctx.baseUrl;
|
||||
ctx.profile.cookie = result.setCookie.split(';')[0];
|
||||
if (ctx.apiKey) ctx.profile.apiKey = ctx.apiKey;
|
||||
saveProfile(ctx.profile);
|
||||
result.data = { ...result.data, profile: ctx.profile.name, saved_cookie: true };
|
||||
}
|
||||
return result;
|
||||
},
|
||||
async logout(flags, ctx) {
|
||||
const result = await httpRequest({ baseUrl: ctx.baseUrl, apiKey: ctx.apiKey, cookie: ctx.profile.cookie, method: 'POST', route: '/api/auth/logout', timeoutMs: ctx.timeoutMs });
|
||||
if (result.ok) {
|
||||
ctx.profile.cookie = '';
|
||||
saveProfile(ctx.profile);
|
||||
}
|
||||
return result;
|
||||
},
|
||||
whoami: () => ({ method: 'GET', route: '/api/auth/me' }),
|
||||
},
|
||||
|
||||
agents: {
|
||||
list: () => ({ method: 'GET', route: '/api/agents' }),
|
||||
get: (flags) => ({ method: 'GET', route: `/api/agents/${required(flags, 'id')}` }),
|
||||
create: (flags) => ({
|
||||
method: 'POST',
|
||||
route: '/api/agents',
|
||||
body: bodyFromFlags(flags) || { name: required(flags, 'name'), role: required(flags, 'role') },
|
||||
}),
|
||||
update: (flags) => ({
|
||||
method: 'PUT',
|
||||
route: `/api/agents/${required(flags, 'id')}`,
|
||||
body: bodyFromFlags(flags) || {},
|
||||
}),
|
||||
delete: (flags) => ({ method: 'DELETE', route: `/api/agents/${required(flags, 'id')}` }),
|
||||
wake: (flags) => ({ method: 'POST', route: `/api/agents/${required(flags, 'id')}/wake` }),
|
||||
diagnostics: (flags) => ({ method: 'GET', route: `/api/agents/${required(flags, 'id')}/diagnostics` }),
|
||||
heartbeat: (flags) => ({ method: 'POST', route: `/api/agents/${required(flags, 'id')}/heartbeat` }),
|
||||
attribution: (flags) => {
|
||||
const id = required(flags, 'id');
|
||||
const hours = optional(flags, 'hours', '24');
|
||||
const section = optional(flags, 'section', undefined);
|
||||
let qs = `?hours=${encodeURIComponent(hours)}`;
|
||||
if (section) qs += `&section=${encodeURIComponent(section)}`;
|
||||
if (flags.privileged) qs += '&privileged=1';
|
||||
return { method: 'GET', route: `/api/agents/${id}/attribution${qs}` };
|
||||
},
|
||||
// Subcommand: agents memory get|set|clear --id <id>
|
||||
memory: (flags) => {
|
||||
const id = required(flags, 'id');
|
||||
const sub = flags._sub;
|
||||
if (sub === 'get' || !sub) return { method: 'GET', route: `/api/agents/${id}/memory` };
|
||||
if (sub === 'set') {
|
||||
const content = flags.file
|
||||
? fs.readFileSync(String(flags.file), 'utf8')
|
||||
: required(flags, 'content');
|
||||
return {
|
||||
method: 'PUT',
|
||||
route: `/api/agents/${id}/memory`,
|
||||
body: { working_memory: content, append: Boolean(flags.append) },
|
||||
};
|
||||
}
|
||||
if (sub === 'clear') return { method: 'DELETE', route: `/api/agents/${id}/memory` };
|
||||
throw new Error(`Unknown agents memory subcommand: ${sub}. Use get|set|clear`);
|
||||
},
|
||||
// Subcommand: agents soul get|set|templates --id <id>
|
||||
soul: (flags) => {
|
||||
const id = required(flags, 'id');
|
||||
const sub = flags._sub;
|
||||
if (sub === 'get' || !sub) return { method: 'GET', route: `/api/agents/${id}/soul` };
|
||||
if (sub === 'set') {
|
||||
const body = {};
|
||||
if (flags.template) body.template_name = flags.template;
|
||||
else if (flags.file) body.soul_content = fs.readFileSync(String(flags.file), 'utf8');
|
||||
else body.soul_content = required(flags, 'content');
|
||||
return { method: 'PUT', route: `/api/agents/${id}/soul`, body };
|
||||
}
|
||||
if (sub === 'templates') {
|
||||
const template = optional(flags, 'template', undefined);
|
||||
const qs = template ? `?template=${encodeURIComponent(template)}` : '';
|
||||
return { method: 'PATCH', route: `/api/agents/${id}/soul${qs}` };
|
||||
}
|
||||
throw new Error(`Unknown agents soul subcommand: ${sub}. Use get|set|templates`);
|
||||
},
|
||||
},
|
||||
|
||||
tasks: {
|
||||
list: () => ({ method: 'GET', route: '/api/tasks' }),
|
||||
get: (flags) => ({ method: 'GET', route: `/api/tasks/${required(flags, 'id')}` }),
|
||||
create: (flags) => ({
|
||||
method: 'POST',
|
||||
route: '/api/tasks',
|
||||
body: bodyFromFlags(flags) || { title: required(flags, 'title') },
|
||||
}),
|
||||
update: (flags) => ({
|
||||
method: 'PUT',
|
||||
route: `/api/tasks/${required(flags, 'id')}`,
|
||||
body: bodyFromFlags(flags) || {},
|
||||
}),
|
||||
delete: (flags) => ({ method: 'DELETE', route: `/api/tasks/${required(flags, 'id')}` }),
|
||||
queue: (flags) => {
|
||||
const agent = required(flags, 'agent');
|
||||
let qs = `?agent=${encodeURIComponent(agent)}`;
|
||||
if (flags['max-capacity']) qs += `&max_capacity=${encodeURIComponent(String(flags['max-capacity']))}`;
|
||||
return { method: 'GET', route: `/api/tasks/queue${qs}` };
|
||||
},
|
||||
broadcast: (flags) => ({
|
||||
method: 'POST',
|
||||
route: `/api/tasks/${required(flags, 'id')}/broadcast`,
|
||||
body: { message: required(flags, 'message') },
|
||||
}),
|
||||
// Subcommand: tasks comments list|add --id <id>
|
||||
comments: (flags) => {
|
||||
const id = required(flags, 'id');
|
||||
const sub = flags._sub;
|
||||
if (sub === 'list' || !sub) return { method: 'GET', route: `/api/tasks/${id}/comments` };
|
||||
if (sub === 'add') {
|
||||
const body = { content: required(flags, 'content') };
|
||||
if (flags['parent-id']) body.parent_id = Number(flags['parent-id']);
|
||||
return { method: 'POST', route: `/api/tasks/${id}/comments`, body };
|
||||
}
|
||||
throw new Error(`Unknown tasks comments subcommand: ${sub}. Use list|add`);
|
||||
},
|
||||
},
|
||||
|
||||
sessions: {
|
||||
list: () => ({ method: 'GET', route: '/api/sessions' }),
|
||||
control: (flags) => ({
|
||||
method: 'POST',
|
||||
route: `/api/sessions/${required(flags, 'id')}/control`,
|
||||
body: { action: required(flags, 'action') },
|
||||
}),
|
||||
continue: (flags) => ({
|
||||
method: 'POST',
|
||||
route: '/api/sessions/continue',
|
||||
body: {
|
||||
kind: required(flags, 'kind'),
|
||||
id: required(flags, 'id'),
|
||||
prompt: required(flags, 'prompt'),
|
||||
},
|
||||
}),
|
||||
transcript: (flags) => {
|
||||
const kind = required(flags, 'kind');
|
||||
const id = required(flags, 'id');
|
||||
let qs = `?kind=${encodeURIComponent(kind)}&id=${encodeURIComponent(id)}`;
|
||||
if (flags.limit) qs += `&limit=${encodeURIComponent(String(flags.limit))}`;
|
||||
if (flags.source) qs += `&source=${encodeURIComponent(String(flags.source))}`;
|
||||
return { method: 'GET', route: `/api/sessions/transcript${qs}` };
|
||||
},
|
||||
},
|
||||
|
||||
connect: {
|
||||
register: (flags) => ({
|
||||
method: 'POST',
|
||||
route: '/api/connect',
|
||||
body: bodyFromFlags(flags) || { tool_name: required(flags, 'tool-name'), agent_name: required(flags, 'agent-name') },
|
||||
}),
|
||||
list: () => ({ method: 'GET', route: '/api/connect' }),
|
||||
disconnect: (flags) => ({
|
||||
method: 'DELETE',
|
||||
route: '/api/connect',
|
||||
body: { connection_id: required(flags, 'connection-id') },
|
||||
}),
|
||||
},
|
||||
|
||||
tokens: {
|
||||
list: (flags) => {
|
||||
let qs = '?action=list';
|
||||
if (flags.timeframe) qs += `&timeframe=${encodeURIComponent(String(flags.timeframe))}`;
|
||||
return { method: 'GET', route: `/api/tokens${qs}` };
|
||||
},
|
||||
stats: (flags) => {
|
||||
let qs = '?action=stats';
|
||||
if (flags.timeframe) qs += `&timeframe=${encodeURIComponent(String(flags.timeframe))}`;
|
||||
return { method: 'GET', route: `/api/tokens${qs}` };
|
||||
},
|
||||
'by-agent': (flags) => ({
|
||||
method: 'GET',
|
||||
route: `/api/tokens/by-agent?days=${encodeURIComponent(String(flags.days || '30'))}`,
|
||||
}),
|
||||
'agent-costs': (flags) => {
|
||||
let qs = '?action=agent-costs';
|
||||
if (flags.timeframe) qs += `&timeframe=${encodeURIComponent(String(flags.timeframe))}`;
|
||||
return { method: 'GET', route: `/api/tokens${qs}` };
|
||||
},
|
||||
'task-costs': (flags) => {
|
||||
let qs = '?action=task-costs';
|
||||
if (flags.timeframe) qs += `&timeframe=${encodeURIComponent(String(flags.timeframe))}`;
|
||||
return { method: 'GET', route: `/api/tokens${qs}` };
|
||||
},
|
||||
trends: (flags) => {
|
||||
let qs = '?action=trends';
|
||||
if (flags.timeframe) qs += `&timeframe=${encodeURIComponent(String(flags.timeframe))}`;
|
||||
return { method: 'GET', route: `/api/tokens${qs}` };
|
||||
},
|
||||
export: (flags) => {
|
||||
const format = optional(flags, 'format', 'json');
|
||||
let qs = `?action=export&format=${encodeURIComponent(format)}`;
|
||||
if (flags.timeframe) qs += `&timeframe=${encodeURIComponent(String(flags.timeframe))}`;
|
||||
if (flags.limit) qs += `&limit=${encodeURIComponent(String(flags.limit))}`;
|
||||
return { method: 'GET', route: `/api/tokens${qs}` };
|
||||
},
|
||||
rotate: (flags) => {
|
||||
if (flags.confirm) return { method: 'POST', route: '/api/tokens/rotate' };
|
||||
return { method: 'GET', route: '/api/tokens/rotate' };
|
||||
},
|
||||
},
|
||||
|
||||
skills: {
|
||||
list: () => ({ method: 'GET', route: '/api/skills' }),
|
||||
content: (flags) => ({
|
||||
method: 'GET',
|
||||
route: `/api/skills?mode=content&source=${encodeURIComponent(required(flags, 'source'))}&name=${encodeURIComponent(required(flags, 'name'))}`,
|
||||
}),
|
||||
check: (flags) => ({
|
||||
method: 'GET',
|
||||
route: `/api/skills?mode=check&source=${encodeURIComponent(required(flags, 'source'))}&name=${encodeURIComponent(required(flags, 'name'))}`,
|
||||
}),
|
||||
upsert: (flags) => ({
|
||||
method: 'PUT',
|
||||
route: '/api/skills',
|
||||
body: {
|
||||
source: required(flags, 'source'),
|
||||
name: required(flags, 'name'),
|
||||
content: fs.readFileSync(required(flags, 'file'), 'utf8'),
|
||||
},
|
||||
}),
|
||||
delete: (flags) => ({
|
||||
method: 'DELETE',
|
||||
route: `/api/skills?source=${encodeURIComponent(required(flags, 'source'))}&name=${encodeURIComponent(required(flags, 'name'))}`,
|
||||
}),
|
||||
},
|
||||
|
||||
cron: {
|
||||
list: () => ({ method: 'GET', route: '/api/cron' }),
|
||||
create: (flags) => ({ method: 'POST', route: '/api/cron', body: bodyFromFlags(flags) || {} }),
|
||||
update: (flags) => ({ method: 'POST', route: '/api/cron', body: bodyFromFlags(flags) || {} }),
|
||||
pause: (flags) => ({ method: 'POST', route: '/api/cron', body: bodyFromFlags(flags) || {} }),
|
||||
resume: (flags) => ({ method: 'POST', route: '/api/cron', body: bodyFromFlags(flags) || {} }),
|
||||
remove: (flags) => ({ method: 'POST', route: '/api/cron', body: bodyFromFlags(flags) || {} }),
|
||||
run: (flags) => ({ method: 'POST', route: '/api/cron', body: bodyFromFlags(flags) || {} }),
|
||||
},
|
||||
|
||||
status: {
|
||||
health: () => ({ method: 'GET', route: '/api/status?action=health' }),
|
||||
overview: () => ({ method: 'GET', route: '/api/status?action=overview' }),
|
||||
dashboard: () => ({ method: 'GET', route: '/api/status?action=dashboard' }),
|
||||
gateway: () => ({ method: 'GET', route: '/api/status?action=gateway' }),
|
||||
models: () => ({ method: 'GET', route: '/api/status?action=models' }),
|
||||
capabilities: () => ({ method: 'GET', route: '/api/status?action=capabilities' }),
|
||||
},
|
||||
|
||||
export: {
|
||||
audit: (flags) => {
|
||||
const format = optional(flags, 'format', 'json');
|
||||
let qs = `?type=audit&format=${encodeURIComponent(format)}`;
|
||||
if (flags.since) qs += `&since=${encodeURIComponent(String(flags.since))}`;
|
||||
if (flags.until) qs += `&until=${encodeURIComponent(String(flags.until))}`;
|
||||
if (flags.limit) qs += `&limit=${encodeURIComponent(String(flags.limit))}`;
|
||||
return { method: 'GET', route: `/api/export${qs}` };
|
||||
},
|
||||
tasks: (flags) => {
|
||||
const format = optional(flags, 'format', 'json');
|
||||
let qs = `?type=tasks&format=${encodeURIComponent(format)}`;
|
||||
if (flags.since) qs += `&since=${encodeURIComponent(String(flags.since))}`;
|
||||
if (flags.until) qs += `&until=${encodeURIComponent(String(flags.until))}`;
|
||||
if (flags.limit) qs += `&limit=${encodeURIComponent(String(flags.limit))}`;
|
||||
return { method: 'GET', route: `/api/export${qs}` };
|
||||
},
|
||||
activities: (flags) => {
|
||||
const format = optional(flags, 'format', 'json');
|
||||
let qs = `?type=activities&format=${encodeURIComponent(format)}`;
|
||||
if (flags.since) qs += `&since=${encodeURIComponent(String(flags.since))}`;
|
||||
if (flags.until) qs += `&until=${encodeURIComponent(String(flags.until))}`;
|
||||
if (flags.limit) qs += `&limit=${encodeURIComponent(String(flags.limit))}`;
|
||||
return { method: 'GET', route: `/api/export${qs}` };
|
||||
},
|
||||
pipelines: (flags) => {
|
||||
const format = optional(flags, 'format', 'json');
|
||||
let qs = `?type=pipelines&format=${encodeURIComponent(format)}`;
|
||||
if (flags.since) qs += `&since=${encodeURIComponent(String(flags.since))}`;
|
||||
if (flags.until) qs += `&until=${encodeURIComponent(String(flags.until))}`;
|
||||
if (flags.limit) qs += `&limit=${encodeURIComponent(String(flags.limit))}`;
|
||||
return { method: 'GET', route: `/api/export${qs}` };
|
||||
},
|
||||
},
|
||||
};
|
||||
|
||||
// --- Events watch (SSE streaming) ---
|
||||
|
||||
async function handleEventsWatch(flags, ctx) {
|
||||
const types = optional(flags, 'types', undefined);
|
||||
let route = '/api/events';
|
||||
if (types) route += `?types=${encodeURIComponent(types)}`;
|
||||
|
||||
if (ctx.asJson) {
|
||||
// JSON mode: one JSON object per line (NDJSON)
|
||||
await sseStream({
|
||||
baseUrl: ctx.baseUrl,
|
||||
apiKey: ctx.apiKey,
|
||||
cookie: ctx.profile.cookie,
|
||||
route,
|
||||
timeoutMs: ctx.timeoutMs,
|
||||
onEvent: (event) => {
|
||||
if (event.type === 'heartbeat') return;
|
||||
console.log(JSON.stringify(event));
|
||||
},
|
||||
onError: (err) => {
|
||||
console.error(JSON.stringify({ ok: false, error: err }));
|
||||
process.exit(EXIT.SERVER);
|
||||
},
|
||||
});
|
||||
} else {
|
||||
console.log(`Watching events at ${normalizeBaseUrl(ctx.baseUrl)}${route}`);
|
||||
console.log('Press Ctrl+C to stop.\n');
|
||||
await sseStream({
|
||||
baseUrl: ctx.baseUrl,
|
||||
apiKey: ctx.apiKey,
|
||||
cookie: ctx.profile.cookie,
|
||||
route,
|
||||
timeoutMs: ctx.timeoutMs,
|
||||
onEvent: (event) => {
|
||||
if (event.type === 'heartbeat') return;
|
||||
const ts = event.timestamp ? new Date(event.timestamp).toISOString() : new Date().toISOString();
|
||||
const type = event.type || event.data?.mutation || 'event';
|
||||
console.log(`[${ts}] ${type}: ${JSON.stringify(event.data || event)}`);
|
||||
},
|
||||
onError: (err) => {
|
||||
console.error(`SSE error: ${JSON.stringify(err)}`);
|
||||
process.exit(EXIT.SERVER);
|
||||
},
|
||||
});
|
||||
}
|
||||
process.exit(EXIT.OK);
|
||||
}
|
||||
|
||||
// --- Main ---
|
||||
|
||||
async function run() {
|
||||
const parsed = parseArgs(process.argv.slice(2));
|
||||
if (parsed.flags.help || parsed._.length === 0) {
|
||||
usage();
|
||||
process.exit(EXIT.OK);
|
||||
}
|
||||
|
||||
const asJson = Boolean(parsed.flags.json);
|
||||
const profileName = String(parsed.flags.profile || 'default');
|
||||
const profile = loadProfile(profileName);
|
||||
const baseUrl = parsed.flags.url ? String(parsed.flags.url) : profile.url;
|
||||
const apiKey = parsed.flags['api-key'] ? String(parsed.flags['api-key']) : profile.apiKey;
|
||||
const timeoutMs = Number(parsed.flags['timeout-ms'] || 20000);
|
||||
|
||||
const group = parsed._[0];
|
||||
const action = parsed._[1];
|
||||
// For compound subcommands like: agents memory get / tasks comments add
|
||||
const sub = parsed._[2];
|
||||
|
||||
const ctx = { baseUrl, apiKey, profile, timeoutMs, asJson };
|
||||
|
||||
try {
|
||||
    // Raw passthrough
    if (group === 'raw') {
      const method = String(required(parsed.flags, 'method')).toUpperCase();
      const route = String(required(parsed.flags, 'path'));
      const body = bodyFromFlags(parsed.flags);
      const result = await httpRequest({ baseUrl, apiKey, cookie: profile.cookie, method, route, body, timeoutMs });
      printResult(result, asJson);
      process.exit(result.ok ? EXIT.OK : mapStatusToExit(result.status));
    }

    // Events watch (SSE)
    if (group === 'events' && action === 'watch') {
      await handleEventsWatch(parsed.flags, { ...ctx, timeoutMs: Number(parsed.flags['timeout-ms'] || 3600000) });
      return;
    }

    // Look up group and action in the commands map
    const groupMap = commands[group];
    if (!groupMap) {
      console.error(`Unknown group: ${group}`);
      usage();
      process.exit(EXIT.USAGE);
    }

    let handler = groupMap[action];
    if (!handler) {
      console.error(`Unknown action: ${group} ${action}`);
      usage();
      process.exit(EXIT.USAGE);
    }

    // Inject sub-command into flags for compound commands (memory, soul, comments)
    if (sub && typeof handler === 'function') {
      parsed.flags._sub = sub;
    }

    // Execute handler
    const result_or_config = await (typeof handler === 'function'
      ? handler(parsed.flags, ctx)
      : handler);

    // If handler returned an http result directly (auth login/logout)
    if (result_or_config && 'ok' in result_or_config && 'status' in result_or_config) {
      printResult(result_or_config, asJson);
      process.exit(result_or_config.ok ? EXIT.OK : mapStatusToExit(result_or_config.status));
    }

    // Otherwise it returned { method, route, body? } — execute the request
    const { method, route, body } = result_or_config;
    const result = await httpRequest({
      baseUrl,
      apiKey,
      cookie: profile.cookie,
      method,
      route,
      body,
      timeoutMs,
    });

    printResult(result, asJson);
    process.exit(result.ok ? EXIT.OK : mapStatusToExit(result.status));
  } catch (err) {
    const message = err?.message || String(err);
    if (asJson) {
      console.log(JSON.stringify({ ok: false, error: message }, null, 2));
    } else {
      console.error(`USAGE ERROR: ${message}`);
    }
    process.exit(EXIT.USAGE);
  }
}

run();

@@ -0,0 +1,637 @@
#!/usr/bin/env node
/*
  Mission Control MCP Server (stdio transport)
  - Zero dependencies (Node.js built-ins only)
  - JSON-RPC 2.0 over stdin/stdout
  - Wraps Mission Control REST API as MCP tools
  - Add with: claude mcp add mission-control -- node /path/to/mc-mcp-server.cjs

  Environment:
    MC_URL      Base URL (default: http://127.0.0.1:3000)
    MC_API_KEY  API key for auth
    MC_COOKIE   Session cookie (alternative auth)
*/

const fs = require('node:fs');
const path = require('node:path');
const os = require('node:os');

// ---------------------------------------------------------------------------
// Config
// ---------------------------------------------------------------------------

function loadConfig() {
  // Read the default profile if present; env vars take precedence below
  const profilePath = path.join(os.homedir(), '.mission-control', 'profiles', 'default.json');
  let profile = {};
  try {
    profile = JSON.parse(fs.readFileSync(profilePath, 'utf8'));
  } catch { /* no profile */ }

  return {
    baseUrl: (process.env.MC_URL || profile.url || 'http://127.0.0.1:3000').replace(/\/+$/, ''),
    apiKey: process.env.MC_API_KEY || profile.apiKey || '',
    cookie: process.env.MC_COOKIE || profile.cookie || '',
  };
}
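// Example handshake over stdio (illustrative; one JSON-RPC frame per line):
//   client → {"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}
//   server → {"jsonrpc":"2.0","id":1,"result":{"protocolVersion":"2024-11-05",...}}
//   client → {"jsonrpc":"2.0","method":"notifications/initialized"}
//   client → {"jsonrpc":"2.0","id":2,"method":"tools/list"}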

// ---------------------------------------------------------------------------
// HTTP client (same pattern as mc-cli.cjs)
// ---------------------------------------------------------------------------

async function api(method, route, body) {
  const config = loadConfig();
  const headers = { 'Accept': 'application/json' };
  if (config.apiKey) headers['x-api-key'] = config.apiKey;
  if (config.cookie) headers['Cookie'] = config.cookie;

  let payload;
  if (body !== undefined) {
    headers['Content-Type'] = 'application/json';
    payload = JSON.stringify(body);
  }

  const url = `${config.baseUrl}${route.startsWith('/') ? route : `/${route}`}`;
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 30000);

  try {
    const res = await fetch(url, { method, headers, body: payload, signal: controller.signal });
    clearTimeout(timer);
    const text = await res.text();
    let data;
    try { data = JSON.parse(text); } catch { data = { raw: text }; }
    if (!res.ok) throw new Error(data.error || data.message || `HTTP ${res.status}: ${text.slice(0, 200)}`);
    return data;
  } catch (err) {
    clearTimeout(timer);
    if (err?.name === 'AbortError') throw new Error('Request timeout (30s)');
    throw err;
  }
}

// ---------------------------------------------------------------------------
// Tool definitions
// ---------------------------------------------------------------------------

const TOOLS = [
  // --- Agents ---
  {
    name: 'mc_list_agents',
    description: 'List all agents registered in Mission Control',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/agents'),
  },
  {
    name: 'mc_get_agent',
    description: 'Get details of a specific agent by ID',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Agent ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('GET', `/api/agents/${id}`),
  },
  {
    name: 'mc_heartbeat',
    description: 'Send a heartbeat for an agent to indicate it is alive',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Agent ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('POST', `/api/agents/${id}/heartbeat`),
  },
  {
    name: 'mc_wake_agent',
    description: 'Wake a sleeping agent',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Agent ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('POST', `/api/agents/${id}/wake`),
  },
  {
    name: 'mc_agent_diagnostics',
    description: 'Get diagnostics info for an agent (health, config, recent activity)',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Agent ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('GET', `/api/agents/${id}/diagnostics`),
  },
  {
    name: 'mc_agent_attribution',
    description: 'Get cost attribution, audit trail, and mutation history for an agent',
    inputSchema: {
      type: 'object',
      properties: {
        id: { type: ['string', 'number'], description: 'Agent ID' },
        hours: { type: 'number', description: 'Lookback window in hours (default 24)' },
        section: { type: 'string', description: 'Comma-separated sections: identity,audit,mutations,cost' },
      },
      required: ['id'],
    },
    handler: async ({ id, hours, section }) => {
      let qs = `?hours=${hours || 24}`;
      if (section) qs += `&section=${encodeURIComponent(section)}`;
      return api('GET', `/api/agents/${id}/attribution${qs}`);
    },
  },

  // --- Agent Memory ---
  {
    name: 'mc_read_memory',
    description: 'Read an agent\'s working memory',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Agent ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('GET', `/api/agents/${id}/memory`),
  },
  {
    name: 'mc_write_memory',
    description: 'Write or append to an agent\'s working memory',
    inputSchema: {
      type: 'object',
      properties: {
        id: { type: ['string', 'number'], description: 'Agent ID' },
        working_memory: { type: 'string', description: 'Memory content to write' },
        append: { type: 'boolean', description: 'Append to existing memory instead of replacing (default false)' },
      },
      required: ['id', 'working_memory'],
    },
    handler: async ({ id, working_memory, append }) =>
      api('PUT', `/api/agents/${id}/memory`, { working_memory, append: append || false }),
  },
  {
    name: 'mc_clear_memory',
    description: 'Clear an agent\'s working memory',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Agent ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('DELETE', `/api/agents/${id}/memory`),
  },

  // --- Agent Soul ---
  {
    name: 'mc_read_soul',
    description: 'Read an agent\'s SOUL (System of Unified Logic) content — the agent\'s identity and behavioral directives',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Agent ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('GET', `/api/agents/${id}/soul`),
  },
  {
    name: 'mc_write_soul',
    description: 'Write an agent\'s SOUL content, or apply a named template',
    inputSchema: {
      type: 'object',
      properties: {
        id: { type: ['string', 'number'], description: 'Agent ID' },
        soul_content: { type: 'string', description: 'SOUL content to write (omit if using template_name)' },
        template_name: { type: 'string', description: 'Name of a SOUL template to apply (omit if providing soul_content)' },
      },
      required: ['id'],
    },
    handler: async ({ id, soul_content, template_name }) => {
      const body = {};
      if (template_name) body.template_name = template_name;
      else if (soul_content) body.soul_content = soul_content;
      return api('PUT', `/api/agents/${id}/soul`, body);
    },
  },
  {
    name: 'mc_list_soul_templates',
    description: 'List available SOUL templates, or retrieve a specific template\'s content',
    inputSchema: {
      type: 'object',
      properties: {
        id: { type: ['string', 'number'], description: 'Agent ID' },
        template: { type: 'string', description: 'Template name to retrieve (omit to list all)' },
      },
      required: ['id'],
    },
    handler: async ({ id, template }) => {
      const qs = template ? `?template=${encodeURIComponent(template)}` : '';
      return api('PATCH', `/api/agents/${id}/soul${qs}`);
    },
  },

  // --- Tasks ---
  {
    name: 'mc_list_tasks',
    description: 'List all tasks in Mission Control',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/tasks'),
  },
  {
    name: 'mc_get_task',
    description: 'Get a specific task by ID',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Task ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('GET', `/api/tasks/${id}`),
  },
  {
    name: 'mc_create_task',
    description: 'Create a new task',
    inputSchema: {
      type: 'object',
      properties: {
        title: { type: 'string', description: 'Task title' },
        description: { type: 'string', description: 'Task description' },
        priority: { type: 'string', description: 'Priority: low, medium, high, critical' },
        assigned_to: { type: 'string', description: 'Agent name to assign to' },
      },
      required: ['title'],
    },
    handler: async (args) => api('POST', '/api/tasks', args),
  },
  {
    name: 'mc_update_task',
    description: 'Update an existing task (status, priority, assigned_to, title, description, etc.)',
    inputSchema: {
      type: 'object',
      properties: {
        id: { type: ['string', 'number'], description: 'Task ID' },
        status: { type: 'string', description: 'New status' },
        priority: { type: 'string', description: 'New priority' },
        assigned_to: { type: 'string', description: 'New assignee agent name' },
        title: { type: 'string', description: 'New title' },
        description: { type: 'string', description: 'New description' },
      },
      required: ['id'],
    },
    handler: async ({ id, ...fields }) => api('PUT', `/api/tasks/${id}`, fields),
  },
  {
    name: 'mc_poll_task_queue',
    description: 'Poll the task queue for an agent — returns the next available task(s) to work on',
    inputSchema: {
      type: 'object',
      properties: {
        agent: { type: 'string', description: 'Agent name to poll for' },
        max_capacity: { type: 'number', description: 'Max tasks to return (default 1)' },
      },
      required: ['agent'],
    },
    handler: async ({ agent, max_capacity }) => {
      let qs = `?agent=${encodeURIComponent(agent)}`;
      if (max_capacity) qs += `&max_capacity=${max_capacity}`;
      return api('GET', `/api/tasks/queue${qs}`);
    },
  },
  {
    name: 'mc_broadcast_task',
    description: 'Broadcast a message to all subscribers of a task',
    inputSchema: {
      type: 'object',
      properties: {
        id: { type: ['string', 'number'], description: 'Task ID' },
        message: { type: 'string', description: 'Message to broadcast' },
      },
      required: ['id', 'message'],
    },
    handler: async ({ id, message }) => api('POST', `/api/tasks/${id}/broadcast`, { message }),
  },

  // --- Task Comments ---
  {
    name: 'mc_list_comments',
    description: 'List comments on a task',
    inputSchema: {
      type: 'object',
      properties: { id: { type: ['string', 'number'], description: 'Task ID' } },
      required: ['id'],
    },
    handler: async ({ id }) => api('GET', `/api/tasks/${id}/comments`),
  },
  {
    name: 'mc_add_comment',
    description: 'Add a comment to a task',
    inputSchema: {
      type: 'object',
      properties: {
        id: { type: ['string', 'number'], description: 'Task ID' },
        content: { type: 'string', description: 'Comment text (supports @mentions)' },
        parent_id: { type: 'number', description: 'Parent comment ID for threaded replies' },
      },
      required: ['id', 'content'],
    },
    handler: async ({ id, content, parent_id }) => {
      const body = { content };
      if (parent_id) body.parent_id = parent_id;
      return api('POST', `/api/tasks/${id}/comments`, body);
    },
  },

  // --- Sessions ---
  {
    name: 'mc_list_sessions',
    description: 'List all active sessions',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/sessions'),
  },
  {
    name: 'mc_control_session',
    description: 'Control a session (monitor, pause, or terminate)',
    inputSchema: {
      type: 'object',
      properties: {
        id: { type: 'string', description: 'Session ID' },
        action: { type: 'string', description: 'Action: monitor, pause, or terminate' },
      },
      required: ['id', 'action'],
    },
    handler: async ({ id, action }) => api('POST', `/api/sessions/${id}/control`, { action }),
  },
  {
    name: 'mc_continue_session',
    description: 'Send a follow-up prompt to an existing session',
    inputSchema: {
      type: 'object',
      properties: {
        kind: { type: 'string', description: 'Session kind: claude-code, codex-cli, hermes' },
        id: { type: 'string', description: 'Session ID' },
        prompt: { type: 'string', description: 'Follow-up prompt to send' },
      },
      required: ['kind', 'id', 'prompt'],
    },
    handler: async ({ kind, id, prompt }) =>
      api('POST', '/api/sessions/continue', { kind, id, prompt }),
  },
  {
    name: 'mc_session_transcript',
    description: 'Get the transcript of a session (messages, tool calls, reasoning)',
    inputSchema: {
      type: 'object',
      properties: {
        kind: { type: 'string', description: 'Session kind: claude-code, codex-cli, hermes' },
        id: { type: 'string', description: 'Session ID' },
        limit: { type: 'number', description: 'Max messages to return (default 40, max 200)' },
      },
      required: ['kind', 'id'],
    },
    handler: async ({ kind, id, limit }) => {
      let qs = `?kind=${encodeURIComponent(kind)}&id=${encodeURIComponent(id)}`;
      if (limit) qs += `&limit=${limit}`;
      return api('GET', `/api/sessions/transcript${qs}`);
    },
  },

  // --- Connections ---
  {
    name: 'mc_list_connections',
    description: 'List active agent connections (tool registrations)',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/connect'),
  },
  {
    name: 'mc_register_connection',
    description: 'Register a tool connection for an agent',
    inputSchema: {
      type: 'object',
      properties: {
        tool_name: { type: 'string', description: 'Tool name to register' },
        agent_name: { type: 'string', description: 'Agent name to connect' },
      },
      required: ['tool_name', 'agent_name'],
    },
    handler: async (args) => api('POST', '/api/connect', args),
  },

  // --- Tokens & Costs ---
  {
    name: 'mc_token_stats',
    description: 'Get aggregate token usage statistics (total tokens, cost, request count, per-model breakdown)',
    inputSchema: {
      type: 'object',
      properties: {
        timeframe: { type: 'string', description: 'Timeframe: hour, day, week, month, all (default: all)' },
      },
      required: [],
    },
    handler: async ({ timeframe }) => {
      let qs = '?action=stats';
      if (timeframe) qs += `&timeframe=${encodeURIComponent(timeframe)}`;
      return api('GET', `/api/tokens${qs}`);
    },
  },
  {
    name: 'mc_agent_costs',
    description: 'Get per-agent cost breakdown with timeline and model details',
    inputSchema: {
      type: 'object',
      properties: {
        timeframe: { type: 'string', description: 'Timeframe: hour, day, week, month, all' },
      },
      required: [],
    },
    handler: async ({ timeframe }) => {
      let qs = '?action=agent-costs';
      if (timeframe) qs += `&timeframe=${encodeURIComponent(timeframe)}`;
      return api('GET', `/api/tokens${qs}`);
    },
  },
  {
    name: 'mc_costs_by_agent',
    description: 'Get per-agent cost summary over a number of days',
    inputSchema: {
      type: 'object',
      properties: {
        days: { type: 'number', description: 'Lookback in days (default 30, max 365)' },
      },
      required: [],
    },
    handler: async ({ days }) =>
      api('GET', `/api/tokens/by-agent?days=${days || 30}`),
  },

  // --- Skills ---
  {
    name: 'mc_list_skills',
    description: 'List all skills available in the system',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/skills'),
  },
  {
    name: 'mc_read_skill',
    description: 'Read the content of a specific skill',
    inputSchema: {
      type: 'object',
      properties: {
        source: { type: 'string', description: 'Skill source (e.g. workspace, system)' },
        name: { type: 'string', description: 'Skill name' },
      },
      required: ['source', 'name'],
    },
    handler: async ({ source, name }) =>
      api('GET', `/api/skills?mode=content&source=${encodeURIComponent(source)}&name=${encodeURIComponent(name)}`),
  },

  // --- Cron ---
  {
    name: 'mc_list_cron',
    description: 'List all cron jobs',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/cron'),
  },

  // --- Status ---
  {
    name: 'mc_health',
    description: 'Check Mission Control health status (no auth required)',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/status?action=health'),
  },
  {
    name: 'mc_dashboard',
    description: 'Get a dashboard summary of the entire Mission Control system (agents, tasks, sessions, costs)',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/status?action=dashboard'),
  },
  {
    name: 'mc_status',
    description: 'Get system status overview (uptime, memory, disk, sessions, processes)',
    inputSchema: { type: 'object', properties: {}, required: [] },
    handler: async () => api('GET', '/api/status?action=overview'),
  },
];

// Build lookup map
const toolMap = new Map();
for (const tool of TOOLS) {
  toolMap.set(tool.name, tool);
}

// ---------------------------------------------------------------------------
// JSON-RPC 2.0 / MCP protocol handler
// ---------------------------------------------------------------------------

const SERVER_INFO = {
  name: 'mission-control',
  version: '2.0.1',
};

const CAPABILITIES = {
  tools: {},
};

function makeResponse(id, result) {
  return { jsonrpc: '2.0', id, result };
}

function makeError(id, code, message, data) {
  return { jsonrpc: '2.0', id, error: { code, message, ...(data ? { data } : {}) } };
}

async function handleMessage(msg) {
  const { id, method, params } = msg;

  // Notifications (no id) — just acknowledge
  if (id === undefined) {
    if (method === 'notifications/initialized') return null; // no response needed
    return null;
  }

  switch (method) {
    case 'initialize':
      return makeResponse(id, {
        protocolVersion: '2024-11-05',
        serverInfo: SERVER_INFO,
        capabilities: CAPABILITIES,
      });

    case 'tools/list':
      return makeResponse(id, {
        tools: TOOLS.map(t => ({
          name: t.name,
          description: t.description,
          inputSchema: t.inputSchema,
        })),
      });

    case 'tools/call': {
      const toolName = params?.name;
      const args = params?.arguments || {};
      const tool = toolMap.get(toolName);

      if (!tool) {
        return makeResponse(id, {
          content: [{ type: 'text', text: `Unknown tool: ${toolName}` }],
          isError: true,
        });
      }

      try {
        const result = await tool.handler(args);
        return makeResponse(id, {
          content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
        });
      } catch (err) {
        return makeResponse(id, {
          content: [{ type: 'text', text: `Error: ${err?.message || String(err)}` }],
          isError: true,
        });
      }
    }

    case 'ping':
      return makeResponse(id, {});

    default:
      return makeError(id, -32601, `Method not found: ${method}`);
  }
}

// ---------------------------------------------------------------------------
// Stdio transport
// ---------------------------------------------------------------------------

function send(msg) {
  if (!msg) return;
  const json = JSON.stringify(msg);
  process.stdout.write(json + '\n');
}

async function main() {
  // Make stdout writes blocking so JSON-RPC frames are never interleaved or truncated
  if (process.stdout._handle && process.stdout._handle.setBlocking) {
    process.stdout._handle.setBlocking(true);
  }

  const readline = require('node:readline');
  const rl = readline.createInterface({ input: process.stdin, terminal: false });

  rl.on('line', async (line) => {
    const trimmed = line.trim();
    if (!trimmed) return;

    try {
      const msg = JSON.parse(trimmed);
      const response = await handleMessage(msg);
      send(response);
    } catch (err) {
      send(makeError(null, -32700, `Parse error: ${err?.message || 'invalid JSON'}`));
    }
  });

  rl.on('close', () => {
    process.exit(0);
  });

  // Keep process alive
  process.stdin.resume();
}

main();

@@ -0,0 +1,876 @@
#!/usr/bin/env node
/*
  Mission Control TUI (v2)
  - Zero dependencies (ANSI escape codes)
  - Arrow key navigation between agents/tasks
  - Enter to drill into agent detail with sessions
  - Esc to go back, q to quit
  - Auto-refresh dashboard

  Usage:
    node scripts/mc-tui.cjs [--url <base>] [--api-key <key>] [--profile <name>] [--refresh <ms>]
*/

const fs = require('node:fs');
const path = require('node:path');
const os = require('node:os');
const readline = require('node:readline');

// ---------------------------------------------------------------------------
// Config
// ---------------------------------------------------------------------------

function parseArgs(argv) {
  const flags = {};
  for (let i = 0; i < argv.length; i++) {
    const t = argv[i];
    if (!t.startsWith('--')) continue;
    const key = t.slice(2);
    const next = argv[i + 1];
    if (!next || next.startsWith('--')) { flags[key] = true; continue; }
    flags[key] = next;
    i++;
  }
  return flags;
}

function loadProfile(name) {
  const p = path.join(os.homedir(), '.mission-control', 'profiles', `${name}.json`);
  try {
    const parsed = JSON.parse(fs.readFileSync(p, 'utf8'));
    return {
      url: parsed.url || process.env.MC_URL || 'http://127.0.0.1:3000',
      apiKey: parsed.apiKey || process.env.MC_API_KEY || '',
      cookie: parsed.cookie || process.env.MC_COOKIE || '',
    };
  } catch {
    return {
      url: process.env.MC_URL || 'http://127.0.0.1:3000',
      apiKey: process.env.MC_API_KEY || '',
      cookie: process.env.MC_COOKIE || '',
    };
  }
}

// ---------------------------------------------------------------------------
// HTTP client
// ---------------------------------------------------------------------------

async function api(baseUrl, apiKey, cookie, method, route) {
  const headers = { Accept: 'application/json' };
  if (apiKey) headers['x-api-key'] = apiKey;
  if (cookie) headers['Cookie'] = cookie;
  const url = `${baseUrl.replace(/\/+$/, '')}${route}`;
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 8000);
  try {
    const res = await fetch(url, { method, headers, signal: controller.signal });
    clearTimeout(timer);
    if (!res.ok) return { _error: `HTTP ${res.status}` };
    return await res.json();
  } catch (err) {
    clearTimeout(timer);
    return { _error: err?.name === 'AbortError' ? 'timeout' : (err?.message || 'network error') };
  }
}

// ---------------------------------------------------------------------------
// ANSI helpers
// ---------------------------------------------------------------------------

const ESC = '\x1b[';
const ansi = {
  clear: () => process.stdout.write(`${ESC}2J${ESC}H`),
  moveTo: (row, col) => process.stdout.write(`${ESC}${row};${col}H`),
  bold: (s) => `${ESC}1m${s}${ESC}0m`,
  dim: (s) => `${ESC}2m${s}${ESC}0m`,
  green: (s) => `${ESC}32m${s}${ESC}0m`,
  yellow: (s) => `${ESC}33m${s}${ESC}0m`,
  red: (s) => `${ESC}31m${s}${ESC}0m`,
  cyan: (s) => `${ESC}36m${s}${ESC}0m`,
  magenta: (s) => `${ESC}35m${s}${ESC}0m`,
  bgBlue: (s) => `${ESC}48;5;17m${ESC}97m${s}${ESC}0m`,
  bgCyan: (s) => `${ESC}46m${ESC}30m${s}${ESC}0m`,
  inverse: (s) => `${ESC}7m${s}${ESC}0m`,
  hideCursor: () => process.stdout.write(`${ESC}?25l`),
  showCursor: () => process.stdout.write(`${ESC}?25h`),
  clearLine: () => process.stdout.write(`${ESC}2K`),
  enterAltScreen: () => process.stdout.write(`${ESC}?1049h`),
  exitAltScreen: () => process.stdout.write(`${ESC}?1049l`),
};

function getTermSize() {
  return { cols: process.stdout.columns || 80, rows: process.stdout.rows || 24 };
}

function truncate(s, maxLen) {
  if (!s) return '';
  return s.length > maxLen ? s.slice(0, maxLen - 1) + '\u2026' : s;
}

function pad(s, len) {
  const str = String(s || '');
  return str.length >= len ? str.slice(0, len) : str + ' '.repeat(len - str.length);
}

function statusColor(status) {
  const s = String(status || '').toLowerCase();
  if (s === 'online' || s === 'active' || s === 'done' || s === 'healthy' || s === 'completed') return ansi.green(status);
  if (s === 'idle' || s === 'sleeping' || s === 'in_progress' || s === 'pending' || s === 'warning') return ansi.yellow(status);
  if (s === 'offline' || s === 'error' || s === 'failed' || s === 'critical' || s === 'unhealthy') return ansi.red(status);
  return status;
}

function timeSince(ts) {
  const now = Date.now();
  const then = typeof ts === 'number' ? (ts < 1e12 ? ts * 1000 : ts) : new Date(ts).getTime();
  const diff = Math.max(0, now - then);
  if (diff < 60000) return `${Math.floor(diff / 1000)}s ago`;
  if (diff < 3600000) return `${Math.floor(diff / 60000)}m ago`;
  if (diff < 86400000) return `${Math.floor(diff / 3600000)}h ago`;
  return `${Math.floor(diff / 86400000)}d ago`;
}

function formatNumber(n) {
  if (n >= 1e6) return `${(n / 1e6).toFixed(1)}M`;
  if (n >= 1e3) return `${(n / 1e3).toFixed(1)}K`;
  return String(n);
}

// Strip ANSI codes for length calculation
function stripAnsi(s) {
  return s.replace(/\x1b\[[0-9;]*m/g, '');
}
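// e.g. a timestamp 90 seconds in the past formats as '1m ago'; numeric
// epochs below 1e12 are treated as seconds and auto-scaled to milliseconds.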
|
||||
async function postJson(baseUrl, apiKey, cookie, route, data) {
|
||||
const headers = { Accept: 'application/json', 'Content-Type': 'application/json' };
|
||||
if (apiKey) headers['x-api-key'] = apiKey;
|
||||
if (cookie) headers['Cookie'] = cookie;
|
||||
const url = `${baseUrl.replace(/\/+$/, '')}${route}`;
|
||||
const controller = new AbortController();
|
||||
const timer = setTimeout(() => controller.abort(), 8000);
|
||||
try {
|
||||
const res = await fetch(url, { method: 'POST', headers, body: JSON.stringify(data), signal: controller.signal });
|
||||
clearTimeout(timer);
|
||||
if (!res.ok) return { _error: `HTTP ${res.status}` };
|
||||
return await res.json();
|
||||
} catch (err) {
|
||||
clearTimeout(timer);
|
||||
return { _error: err?.name === 'AbortError' ? 'timeout' : (err?.message || 'network error') };
|
||||
}
|
||||
}
|
||||
|
||||
async function putJson(baseUrl, apiKey, cookie, route, data) {
|
||||
const headers = { Accept: 'application/json', 'Content-Type': 'application/json' };
|
||||
if (apiKey) headers['x-api-key'] = apiKey;
|
||||
if (cookie) headers['Cookie'] = cookie;
|
||||
const url = `${baseUrl.replace(/\/+$/, '')}${route}`;
|
||||
const controller = new AbortController();
|
||||
const timer = setTimeout(() => controller.abort(), 8000);
|
||||
try {
|
||||
const res = await fetch(url, { method: 'PUT', headers, body: JSON.stringify(data), signal: controller.signal });
|
||||
clearTimeout(timer);
|
||||
if (!res.ok) return { _error: `HTTP ${res.status}` };
|
||||
return await res.json();
|
||||
} catch (err) {
|
||||
clearTimeout(timer);
|
||||
return { _error: err?.name === 'AbortError' ? 'timeout' : (err?.message || 'network error') };
|
||||
}
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Data fetching
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
async function fetchDashboardData(baseUrl, apiKey, cookie) {
|
||||
const [health, agents, tasks, tokens, sessions] = await Promise.all([
|
||||
api(baseUrl, apiKey, cookie, 'GET', '/api/status?action=health'),
|
||||
api(baseUrl, apiKey, cookie, 'GET', '/api/agents'),
|
||||
api(baseUrl, apiKey, cookie, 'GET', '/api/tasks?limit=30'),
|
||||
api(baseUrl, apiKey, cookie, 'GET', '/api/tokens?action=stats&timeframe=day'),
|
||||
api(baseUrl, apiKey, cookie, 'GET', '/api/sessions?limit=50'),
|
||||
]);
|
||||
return { health, agents, tasks, tokens, sessions };
|
||||
}
|
||||
|
||||
async function fetchAgentSessions(baseUrl, apiKey, cookie, agentName) {
  const sessions = await api(baseUrl, apiKey, cookie, 'GET', '/api/sessions');
  if (sessions?._error) return sessions;
  const all = sessions?.sessions || [];
  // Match sessions by agent name (sessions use the project path as the agent key)
  const matched = all.filter(s => {
    const key = s.agent || s.key || '';
    const name = key.split('/').pop() || key;
    return name === agentName || key.includes(agentName);
  });
  return { sessions: matched.length > 0 ? matched : all.slice(0, 10) };
}

async function fetchTranscript(baseUrl, apiKey, cookie, sessionId, limit) {
  return api(baseUrl, apiKey, cookie, 'GET',
    `/api/sessions/transcript?kind=claude-code&id=${encodeURIComponent(sessionId)}&limit=${limit}`);
}
// ---------------------------------------------------------------------------
// Views
// ---------------------------------------------------------------------------

// State
const state = {
  view: 'dashboard', // 'dashboard' | 'agent-detail'
  panel: 'agents', // 'agents' | 'tasks'
  cursorAgent: 0,
  cursorTask: 0,
  scrollOffset: 0,
  selectedAgent: null,
  agentSessions: null,
  agentTranscript: null,
  transcriptSessionIdx: 0,
  transcriptScroll: 0,
  data: { health: {}, agents: {}, tasks: {}, tokens: {} },
  actionMessage: '',
  // Input mode for task creation/editing
  inputMode: null, // null | 'new-task' | 'edit-title' | 'edit-status' | 'edit-assign' | 'confirm-delete'
  inputBuffer: '',
  inputLabel: '',
  editingTaskId: null,
};
function getAgentsList() {
  const raw = state.data.agents?.agents || state.data.agents || [];
  if (!Array.isArray(raw)) return [];
  return [...raw].sort((a, b) => {
    const order = { online: 0, active: 0, idle: 1, sleeping: 2, offline: 3 };
    return (order[a.status] ?? 4) - (order[b.status] ?? 4);
  });
}

function getTasksList() {
  const raw = state.data.tasks?.tasks || state.data.tasks || [];
  return Array.isArray(raw) ? raw : [];
}
// --- Dashboard View ---

function renderDashboard() {
  const { cols, rows } = getTermSize();
  ansi.clear();

  // Header
  const title = ' MISSION CONTROL ';
  process.stdout.write(ansi.bgBlue(pad(title, cols)) + '\n');

  const healthData = state.data.health;
  let status;
  if (healthData?._error) {
    status = ansi.red('UNREACHABLE');
  } else {
    const checks = healthData?.checks || [];
    const essentialNames = new Set(['Database', 'Disk Space']);
    const essentialChecks = checks.filter(c => essentialNames.has(c.name));
    const essentialOk = essentialChecks.length > 0 && essentialChecks.every(c => c.status === 'healthy');
    const warnings = checks.filter(c => !essentialNames.has(c.name) && c.status !== 'healthy');
    const warningNames = warnings.map(c => c.name.toLowerCase()).join(', ');
    if (essentialOk && warnings.length === 0) status = ansi.green('healthy');
    else if (essentialOk) status = ansi.yellow('operational') + ansi.dim(` (${warningNames})`);
    else status = statusColor(healthData?.status || 'unknown');
  }
  process.stdout.write(` ${status} ${ansi.dim(baseUrl)} ${ansi.dim(new Date().toLocaleTimeString())}\n`);

  // Panel tabs
  const agentTab = state.panel === 'agents' ? ansi.bgCyan(' AGENTS ') : ansi.dim(' AGENTS ');
  const taskTab = state.panel === 'tasks' ? ansi.bgCyan(' TASKS ') : ansi.dim(' TASKS ');
  process.stdout.write(`\n ${agentTab} ${taskTab}\n`);

  const headerRows = 5;
  const footerRows = 4;
  const panelRows = Math.max(4, rows - headerRows - footerRows);

  if (state.panel === 'agents') {
    renderAgentsList(cols, panelRows);
  } else {
    renderTasksList(cols, panelRows);
  }

  // Costs bar: prefer the token_usage table, fall back to session estimates
  const tokensData = state.data.tokens;
  const summary = tokensData?.summary || {};
  let costVal = summary.totalCost || 0;
  let tokenVal = summary.totalTokens || 0;
  // If the token_usage table is empty, sum from active sessions
  if (costVal === 0 && state.data.sessions?.sessions) {
    for (const s of state.data.sessions.sessions) {
      if (s.estimatedCost) costVal += s.estimatedCost;
    }
  }
  const cost = costVal > 0 ? `$${costVal.toFixed(2)}` : '$0.00';
  const tokens = tokenVal > 0 ? formatNumber(tokenVal) : '-';
  process.stdout.write(`\n ${ansi.dim('24h:')} ${ansi.bold(cost)} ${ansi.dim('tokens:')} ${tokens}\n`);

  // Input bar
  if (state.inputMode) {
    const label = state.inputLabel || 'Input';
    const cursor = state.inputBuffer + '\u2588'; // block cursor
    process.stdout.write(`\n ${ansi.bold(ansi.yellow(label + ':'))} ${cursor}\n`);
    if (state.inputMode === 'confirm-delete') {
      process.stdout.write(ansi.dim('  y/n to confirm') + '\n');
    } else if (state.inputMode === 'edit-status') {
      process.stdout.write(ansi.dim('  inbox/assigned/in_progress/review/done/failed  esc cancel') + '\n');
    } else {
      process.stdout.write(ansi.dim('  enter submit  esc cancel') + '\n');
    }
    return; // don't show the normal footer while in input mode
  }

  // Footer
  if (state.actionMessage) process.stdout.write(ansi.green(` ${state.actionMessage}\n`));
  const hint = state.panel === 'agents'
    ? ' \u2191\u2193 navigate  enter detail  tab switch  [r]efresh  [w]ake  [q]uit'
    : ' \u2191\u2193 navigate  [n]ew  enter edit  [s]tatus  [d]elete  tab switch  [r]efresh  [q]uit';
  process.stdout.write(ansi.dim(hint) + '\n');
}
function renderAgentsList(cols, maxRows) {
  const agents = getAgentsList();
  if (agents.length === 0) { process.stdout.write(ansi.dim(' (no agents)\n')); return; }

  const nameW = Math.min(22, Math.floor(cols * 0.25));
  const roleW = Math.min(16, Math.floor(cols * 0.15));
  const statusW = 12;
  process.stdout.write(ansi.dim(` ${pad('Name', nameW)} ${pad('Role', roleW)} ${pad('Status', statusW)} Last Seen\n`));

  // Ensure cursor is visible
  if (state.cursorAgent >= agents.length) state.cursorAgent = agents.length - 1;
  if (state.cursorAgent < 0) state.cursorAgent = 0;

  const listRows = maxRows - 1; // minus header
  // Scroll window
  let start = 0;
  if (state.cursorAgent >= start + listRows) start = state.cursorAgent - listRows + 1;
  if (state.cursorAgent < start) start = state.cursorAgent;

  for (let i = start; i < Math.min(agents.length, start + listRows); i++) {
    const a = agents[i];
    const selected = i === state.cursorAgent;
    const name = pad(truncate(a.name, nameW), nameW);
    const role = pad(truncate(a.role, roleW), roleW);
    const st = statusColor(a.status || 'unknown');
    const stPad = pad(st, statusW + 9); // +9 compensates for invisible ANSI color codes in the string length
    const lastSeen = a.last_seen ? ansi.dim(timeSince(a.last_seen)) : ansi.dim('\u2014');
    const line = ` ${name} ${role} ${stPad} ${lastSeen}`;
    process.stdout.write(selected ? ansi.inverse(stripAnsi(line).padEnd(cols)) + '\n' : line + '\n');
  }

  if (agents.length > listRows) {
    process.stdout.write(ansi.dim(` ${agents.length} total, showing ${start + 1}-${Math.min(agents.length, start + listRows)}\n`));
  }
}
function renderTasksList(cols, maxRows) {
  const tasks = getTasksList();
  if (tasks.length === 0) { process.stdout.write(ansi.dim(' (no tasks)\n')); return; }

  const idW = 5;
  const titleW = Math.min(35, Math.floor(cols * 0.35));
  const statusW = 14;
  const assignW = 16;
  process.stdout.write(ansi.dim(` ${pad('ID', idW)} ${pad('Title', titleW)} ${pad('Status', statusW)} ${pad('Assigned', assignW)}\n`));

  if (state.cursorTask >= tasks.length) state.cursorTask = tasks.length - 1;
  if (state.cursorTask < 0) state.cursorTask = 0;

  const listRows = maxRows - 1;
  let start = 0;
  if (state.cursorTask >= start + listRows) start = state.cursorTask - listRows + 1;
  if (state.cursorTask < start) start = state.cursorTask;

  for (let i = start; i < Math.min(tasks.length, start + listRows); i++) {
    const t = tasks[i];
    const selected = i === state.cursorTask;
    const id = pad(String(t.id || ''), idW);
    const title = pad(truncate(t.title, titleW), titleW);
    const st = statusColor(t.status || '');
    const stPad = pad(st, statusW + 9);
    const assigned = pad(truncate(t.assigned_to || '-', assignW), assignW);
    const line = ` ${id} ${title} ${stPad} ${assigned}`;
    process.stdout.write(selected ? ansi.inverse(stripAnsi(line).padEnd(cols)) + '\n' : line + '\n');
  }
}
// --- Agent Detail View ---

function renderAgentDetail() {
  const { cols, rows } = getTermSize();
  ansi.clear();

  const agent = state.selectedAgent;
  if (!agent) { state.view = 'dashboard'; renderDashboard(); return; }

  // Header
  process.stdout.write(ansi.bgBlue(pad(` ${agent.name} `, cols)) + '\n');
  process.stdout.write(` Role: ${ansi.cyan(agent.role || '-')}  Status: ${statusColor(agent.status || 'unknown')}  ${ansi.dim(agent.last_activity || '')}\n`);

  // Sessions
  process.stdout.write('\n' + ansi.bold(ansi.cyan(' SESSIONS')) + '\n');

  const sessions = state.agentSessions?.sessions || [];
  if (state.agentSessions?._error) {
    process.stdout.write(ansi.dim(` (unavailable: ${state.agentSessions._error})\n`));
  } else if (sessions.length === 0) {
    process.stdout.write(ansi.dim(' (no sessions found)\n'));
  } else {
    for (let i = 0; i < Math.min(sessions.length, 5); i++) {
      const s = sessions[i];
      const selected = i === state.transcriptSessionIdx;
      const active = s.active ? ansi.green('*') : ' ';
      const age = s.startTime ? timeSince(s.startTime) : '';
      const cost = s.estimatedCost != null ? `$${s.estimatedCost.toFixed(2)}` : '';
      const model = s.model || '';
      const branch = (s.flags || [])[0] || '';
      const prompt = truncate(s.lastUserPrompt || '', Math.max(20, cols - 70));
      const line = ` ${active} ${pad(truncate(s.id || '', 12), 12)} ${pad(model, 18)} ${pad(age, 8)} ${pad(cost, 8)} ${ansi.dim(branch)}`;
      process.stdout.write(selected ? ansi.inverse(stripAnsi(line).padEnd(cols)) + '\n' : line + '\n');
    }
  }

  // Transcript
  process.stdout.write('\n' + ansi.bold(ansi.magenta(' CHAT')) + '\n');

  const transcript = state.agentTranscript?.messages || [];
  if (state.agentTranscript?._error) {
    process.stdout.write(ansi.dim(` (unavailable: ${state.agentTranscript._error})\n`));
  } else if (transcript.length === 0) {
    process.stdout.write(ansi.dim(' (no messages: press enter on a session to load)\n'));
  } else {
    const chatRows = Math.max(4, rows - 16);
    const messages = [];
    for (const msg of transcript) {
      const role = msg.role || 'unknown';
      for (const part of (msg.parts || [])) {
        if (part.type === 'text' && part.text) {
          messages.push({ role, text: part.text });
        } else if (part.type === 'tool_use') {
          messages.push({ role, text: ansi.dim(`[tool: ${part.name || part.id || '?'}]`) });
        } else if (part.type === 'tool_result') {
          const preview = typeof part.content === 'string' ? truncate(part.content, 80) : '[result]';
          messages.push({ role, text: ansi.dim(`[result: ${preview}]`) });
        }
      }
    }

    // Scroll from the bottom
    const visible = messages.slice(-(chatRows + state.transcriptScroll), messages.length - state.transcriptScroll || undefined);
    for (const m of visible.slice(-chatRows)) {
      const roleLabel = m.role === 'user' ? ansi.green('you') : m.role === 'assistant' ? ansi.cyan('ai ') : ansi.dim(pad(m.role, 3));
      const lines = m.text.split('\n');
      const firstLine = truncate(lines[0], cols - 8);
      process.stdout.write(` ${roleLabel} ${firstLine}\n`);
      // Show continuation lines (up to 2 more)
      for (let j = 1; j < Math.min(lines.length, 3); j++) {
        process.stdout.write(`     ${truncate(lines[j], cols - 8)}\n`);
      }
    }
  }

  // Footer
  process.stdout.write('\n');
  if (state.actionMessage) process.stdout.write(ansi.green(` ${state.actionMessage}\n`));
  process.stdout.write(ansi.dim(' \u2191\u2193 sessions  enter load chat  pgup/pgdn scroll  esc back  [q]uit') + '\n');
}
// ---------------------------------------------------------------------------
// Main loop
// ---------------------------------------------------------------------------

let baseUrl, apiKey, cookie, refreshMs;

async function main() {
  const flags = parseArgs(process.argv.slice(2));

  if (flags.help) {
    console.log(`Mission Control TUI

Usage:
  node scripts/mc-tui.cjs [--url <base>] [--api-key <key>] [--profile <name>] [--refresh <ms>]

Keys (Dashboard):
  up/down    Navigate agents or tasks list
  enter      Open agent detail / edit task title
  tab        Switch between agents and tasks panels
  n          New task (tasks panel)
  s          Change task status (tasks panel)
  d          Delete task (tasks panel)
  r          Refresh now
  w          Wake first sleeping agent
  q/Esc      Quit

Keys (Agent Detail):
  up/down    Navigate sessions
  enter      Load chat transcript for selected session
  pgup/pgdn  Scroll chat
  esc        Back to dashboard
  q          Quit
`);
    process.exit(0);
  }

  const profile = loadProfile(String(flags.profile || 'default'));
  baseUrl = flags.url ? String(flags.url) : profile.url;
  apiKey = flags['api-key'] ? String(flags['api-key']) : profile.apiKey;
  cookie = profile.cookie;
  refreshMs = Number(flags.refresh || 5000);

  // Raw mode for keyboard input
  if (process.stdin.isTTY) {
    readline.emitKeypressEvents(process.stdin);
    process.stdin.setRawMode(true);
  }

  ansi.enterAltScreen();
  ansi.hideCursor();

  let running = true;

  function cleanup() {
    running = false;
    ansi.showCursor();
    ansi.exitAltScreen();
    process.exit(0);
  }
  process.on('SIGINT', cleanup);
  process.on('SIGTERM', cleanup);

  function render() {
    if (state.view === 'dashboard') renderDashboard();
    else if (state.view === 'agent-detail') renderAgentDetail();
  }

  // Keyboard handler
  process.stdin.on('keypress', async (str, key) => {
    if (!key) return;

    // Global keys
    if (key.name === 'q') { cleanup(); return; }
    if (key.name === 'c' && key.ctrl) { cleanup(); return; }

    if (state.view === 'dashboard') {
      await handleDashboardKey(key, str, render);
    } else if (state.view === 'agent-detail') {
      await handleAgentDetailKey(key, render);
    }
  });

  // Initial load
  state.actionMessage = 'Loading...';
  render();
  state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
  state.actionMessage = '';
  render();

  // Auto-refresh loop
  while (running) {
    await new Promise(resolve => setTimeout(resolve, refreshMs));
    if (!running) break;
    if (state.view === 'dashboard') {
      state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
      if (state.actionMessage === '') render();
    }
  }
}
async function handleInputKey(key, str, render) {
  if (key.name === 'escape') {
    state.inputMode = null;
    state.inputBuffer = '';
    state.editingTaskId = null;
    render();
    return;
  }

  if (state.inputMode === 'confirm-delete') {
    if (key.name === 'y') {
      const taskId = state.editingTaskId;
      state.inputMode = null;
      state.inputBuffer = '';
      state.editingTaskId = null;
      state.actionMessage = 'Deleting...';
      render();
      const result = await api(baseUrl, apiKey, cookie, 'DELETE', `/api/tasks/${taskId}`);
      state.actionMessage = result?._error ? `Delete failed: ${result._error}` : 'Task deleted';
      state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
      render();
      setTimeout(() => { state.actionMessage = ''; render(); }, 2000);
    } else {
      state.inputMode = null;
      state.inputBuffer = '';
      state.editingTaskId = null;
      state.actionMessage = 'Cancelled';
      render();
      setTimeout(() => { state.actionMessage = ''; render(); }, 1500);
    }
    return;
  }

  if (key.name === 'return') {
    const value = state.inputBuffer.trim();
    if (!value) { state.inputMode = null; state.inputBuffer = ''; render(); return; }

    if (state.inputMode === 'new-task') {
      state.inputMode = null;
      state.inputBuffer = '';
      state.actionMessage = 'Creating task...';
      render();
      const res = await postJson(baseUrl, apiKey, cookie, '/api/tasks', { title: value });
      state.actionMessage = res?._error ? `Create failed: ${res._error}` : `Created: ${value}`;
      state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
      render();
      setTimeout(() => { state.actionMessage = ''; render(); }, 2000);
    } else if (state.inputMode === 'edit-title') {
      const taskId = state.editingTaskId;
      state.inputMode = null;
      state.inputBuffer = '';
      state.editingTaskId = null;
      state.actionMessage = 'Updating...';
      render();
      const res = await putJson(baseUrl, apiKey, cookie, `/api/tasks/${taskId}`, { title: value });
      state.actionMessage = res?._error ? `Update failed: ${res._error}` : 'Title updated';
      state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
      render();
      setTimeout(() => { state.actionMessage = ''; render(); }, 2000);
    } else if (state.inputMode === 'edit-status') {
      const valid = ['inbox', 'assigned', 'in_progress', 'review', 'done', 'failed'];
      if (!valid.includes(value)) {
        state.actionMessage = `Invalid status. Use: ${valid.join(', ')}`;
        state.inputMode = null;
        state.inputBuffer = '';
        state.editingTaskId = null;
        render();
        setTimeout(() => { state.actionMessage = ''; render(); }, 2000);
        return;
      }
      const taskId = state.editingTaskId;
      state.inputMode = null;
      state.inputBuffer = '';
      state.editingTaskId = null;
      state.actionMessage = 'Updating status...';
      render();
      const res = await putJson(baseUrl, apiKey, cookie, `/api/tasks/${taskId}`, { status: value });
      state.actionMessage = res?._error ? `Update failed: ${res._error}` : `Status → ${value}`;
      state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
      render();
      setTimeout(() => { state.actionMessage = ''; render(); }, 2000);
    } else if (state.inputMode === 'edit-assign') {
      const taskId = state.editingTaskId;
      state.inputMode = null;
      state.inputBuffer = '';
      state.editingTaskId = null;
      state.actionMessage = 'Assigning...';
      render();
      const res = await putJson(baseUrl, apiKey, cookie, `/api/tasks/${taskId}`, { assigned_to: value, status: 'assigned' });
      state.actionMessage = res?._error ? `Assign failed: ${res._error}` : `Assigned to ${value}`;
      state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
      render();
      setTimeout(() => { state.actionMessage = ''; render(); }, 2000);
    }
    return;
  }

  if (key.name === 'backspace') {
    state.inputBuffer = state.inputBuffer.slice(0, -1);
    render();
    return;
  }

  // Printable character
  if (str && str.length === 1 && !key.ctrl && !key.meta) {
    state.inputBuffer += str;
    render();
  }
}
async function handleDashboardKey(key, str, render) {
  // If in input mode, route all keys there
  if (state.inputMode) {
    await handleInputKey(key, str, render);
    return;
  }

  if (key.name === 'escape') { cleanup(); return; }

  if (key.name === 'tab') {
    state.panel = state.panel === 'agents' ? 'tasks' : 'agents';
    render();
    return;
  }

  // Also support a/t to switch panels
  if (key.name === 'a') { state.panel = 'agents'; render(); return; }
  if (key.name === 't') { state.panel = 'tasks'; render(); return; }

  if (key.name === 'up') {
    if (state.panel === 'agents') state.cursorAgent = Math.max(0, state.cursorAgent - 1);
    else state.cursorTask = Math.max(0, state.cursorTask - 1);
    render();
    return;
  }

  if (key.name === 'down') {
    if (state.panel === 'agents') {
      const max = getAgentsList().length - 1;
      state.cursorAgent = Math.min(max, state.cursorAgent + 1);
    } else {
      const max = getTasksList().length - 1;
      state.cursorTask = Math.min(max, state.cursorTask + 1);
    }
    render();
    return;
  }

  // Task management keys (only in tasks panel)
  if (state.panel === 'tasks') {
    if (key.name === 'n') {
      state.inputMode = 'new-task';
      state.inputBuffer = '';
      state.inputLabel = 'New task title';
      render();
      return;
    }
    if (key.name === 'return') {
      const tasks = getTasksList();
      if (tasks.length === 0) return;
      const task = tasks[state.cursorTask];
      state.inputMode = 'edit-title';
      state.inputBuffer = task.title || '';
      state.inputLabel = `Edit title [#${task.id}]`;
      state.editingTaskId = task.id;
      render();
      return;
    }
    if (key.name === 's') {
      const tasks = getTasksList();
      if (tasks.length === 0) return;
      const task = tasks[state.cursorTask];
      state.inputMode = 'edit-status';
      state.inputBuffer = task.status || '';
      state.inputLabel = `Status [#${task.id}]`;
      state.editingTaskId = task.id;
      render();
      return;
    }
    if (key.name === 'd' || key.name === 'x') {
      const tasks = getTasksList();
      if (tasks.length === 0) return;
      const task = tasks[state.cursorTask];
      state.inputMode = 'confirm-delete';
      state.inputBuffer = '';
      state.inputLabel = `Delete "${truncate(task.title, 40)}"?`;
      state.editingTaskId = task.id;
      render();
      return;
    }
  }

  if (key.name === 'return' && state.panel === 'agents') {
    const agents = getAgentsList();
    if (agents.length === 0) return;
    state.selectedAgent = agents[state.cursorAgent];
    state.view = 'agent-detail';
    state.transcriptSessionIdx = 0;
    state.transcriptScroll = 0;
    state.agentTranscript = null;
    state.actionMessage = 'Loading sessions...';
    render();
    state.agentSessions = await fetchAgentSessions(baseUrl, apiKey, cookie, state.selectedAgent.name);
    state.actionMessage = '';
    render();
    return;
  }

  if (key.name === 'r') {
    state.actionMessage = 'Refreshing...';
    render();
    state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
    state.actionMessage = 'Refreshed';
    render();
    setTimeout(() => { state.actionMessage = ''; render(); }, 2000);
    return;
  }

  if (key.name === 'w') {
    const agents = state.data.agents?.agents || [];
    const sleeping = agents.filter(a => a.status === 'sleeping' || a.status === 'idle' || a.status === 'offline');
    if (sleeping.length === 0) { state.actionMessage = 'No agents to wake'; render(); return; }
    state.actionMessage = 'Waking agent...';
    render();
    const target = sleeping[0];
    const result = await api(baseUrl, apiKey, cookie, 'POST', `/api/agents/${target.id}/wake`);
    state.actionMessage = result?._error ? `Wake failed: ${result._error}` : `Woke agent: ${target.name}`;
    render();
    state.data = await fetchDashboardData(baseUrl, apiKey, cookie);
    render();
    setTimeout(() => { state.actionMessage = ''; render(); }, 3000);
  }
}
async function handleAgentDetailKey(key, render) {
  if (key.name === 'escape') {
    state.view = 'dashboard';
    state.selectedAgent = null;
    state.agentSessions = null;
    state.agentTranscript = null;
    render();
    return;
  }

  const sessions = state.agentSessions?.sessions || [];

  if (key.name === 'up') {
    state.transcriptSessionIdx = Math.max(0, state.transcriptSessionIdx - 1);
    render();
    return;
  }

  if (key.name === 'down') {
    state.transcriptSessionIdx = Math.min(Math.max(0, sessions.length - 1), state.transcriptSessionIdx + 1);
    render();
    return;
  }

  if (key.name === 'return') {
    if (sessions.length === 0) return;
    const session = sessions[state.transcriptSessionIdx];
    if (!session?.id) return;
    state.actionMessage = 'Loading chat...';
    state.transcriptScroll = 0;
    render();
    state.agentTranscript = await fetchTranscript(baseUrl, apiKey, cookie, session.id, 20);
    state.actionMessage = '';
    render();
    return;
  }

  // Page up/down for chat scroll
  if (key.name === 'pageup' || (key.shift && key.name === 'up')) {
    state.transcriptScroll = Math.min(state.transcriptScroll + 5, 100);
    render();
    return;
  }
  if (key.name === 'pagedown' || (key.shift && key.name === 'down')) {
    state.transcriptScroll = Math.max(0, state.transcriptScroll - 5);
    render();
    return;
  }
}
function cleanup() {
  ansi.showCursor();
  ansi.exitAltScreen();
  process.exit(0);
}

main().catch(err => {
  ansi.showCursor();
  ansi.exitAltScreen();
  console.error('TUI error:', err.message);
  process.exit(1);
});

@@ -30,4 +30,7 @@ if [[ -d "$SOURCE_PUBLIC_DIR" ]]; then
 fi
 
 cd "$STANDALONE_DIR"
+# Next.js standalone server reads HOSTNAME to decide bind address.
+# Default to 0.0.0.0 so the server is accessible from outside the host.
+export HOSTNAME="${HOSTNAME:-0.0.0.0}"
 exec node server.js

@@ -0,0 +1,54 @@
+import { NextRequest, NextResponse } from 'next/server'
+import { requireRole } from '@/lib/auth'
+import { listAdapters } from '@/lib/adapters'
+import {
+  listFrameworks,
+  getFrameworkInfo,
+  getTemplatesForFramework,
+  UNIVERSAL_TEMPLATES,
+} from '@/lib/framework-templates'
+
+/**
+ * GET /api/frameworks - List all supported frameworks with connection info and templates.
+ *
+ * Query params:
+ *   ?framework=langgraph - Get details for a specific framework
+ *   ?templates=true      - Include available templates in the response
+ */
+export async function GET(request: NextRequest) {
+  const auth = requireRole(request, 'viewer')
+  if ('error' in auth) return NextResponse.json({ error: auth.error }, { status: auth.status })
+
+  const { searchParams } = new URL(request.url)
+  const frameworkFilter = searchParams.get('framework')
+  const includeTemplates = searchParams.get('templates') === 'true'
+
+  // Single framework detail
+  if (frameworkFilter) {
+    const info = getFrameworkInfo(frameworkFilter)
+    if (!info) {
+      return NextResponse.json(
+        { error: `Unknown framework: ${frameworkFilter}. Available: ${listAdapters().join(', ')}` },
+        { status: 404 }
+      )
+    }
+
+    const response: Record<string, unknown> = { framework: info }
+    if (includeTemplates) {
+      response.templates = getTemplatesForFramework(frameworkFilter)
+    }
+    return NextResponse.json(response)
+  }
+
+  // List all frameworks
+  const frameworks = listFrameworks()
+  const response: Record<string, unknown> = { frameworks }
+
+  if (includeTemplates) {
+    response.templates = UNIVERSAL_TEMPLATES
+  }
+
+  return NextResponse.json(response)
+}
+
+export const dynamic = 'force-dynamic'

@@ -80,7 +80,7 @@ export async function POST(request: NextRequest) {
     ensureTable(db)
     const body = await request.json()
 
-    const { name, host, port, token, is_primary } = body
+    const { name, host, port, token, is_primary, agents } = body
 
     if (!name || !host || !port) {
      return NextResponse.json({ error: 'name, host, and port are required' }, { status: 400 })

@@ -96,14 +96,37 @@ export async function POST(request: NextRequest) {
       INSERT INTO gateways (name, host, port, token, is_primary) VALUES (?, ?, ?, ?, ?)
     `).run(name, host, port, token || '', is_primary ? 1 : 0)
 
+    // Auto-register agents reported by the gateway (k8s sidecar support)
+    let agentsRegistered = 0
+    if (Array.isArray(agents) && agents.length > 0) {
+      const workspaceId = auth.user?.workspace_id ?? 1
+      const now = Math.floor(Date.now() / 1000)
+      const upsertAgent = db.prepare(`
+        INSERT INTO agents (name, role, status, last_seen, source, workspace_id, updated_at)
+        VALUES (?, ?, 'idle', ?, 'gateway', ?, ?)
+        ON CONFLICT(name) DO UPDATE SET
+          status = 'idle',
+          last_seen = excluded.last_seen,
+          source = 'gateway',
+          updated_at = excluded.updated_at
+      `)
+      for (const agent of agents.slice(0, 50)) {
+        if (typeof agent?.name !== 'string' || !agent.name.trim()) continue
+        const agentName = agent.name.trim().substring(0, 100)
+        const agentRole = typeof agent?.role === 'string' ? agent.role.trim().substring(0, 100) : 'agent'
+        upsertAgent.run(agentName, agentRole, now, workspaceId, now)
+        agentsRegistered++
+      }
+    }
+
     try {
       db.prepare('INSERT INTO audit_log (action, actor, detail) VALUES (?, ?, ?)').run(
-        'gateway_added', auth.user?.username || 'system', `Added gateway: ${name} (${host}:${port})`
+        'gateway_added', auth.user?.username || 'system', `Added gateway: ${name} (${host}:${port})${agentsRegistered ? `, registered ${agentsRegistered} agent(s)` : ''}`
       )
     } catch { /* audit might not exist */ }
 
     const gw = db.prepare('SELECT * FROM gateways WHERE id = ?').get(result.lastInsertRowid) as GatewayEntry
-    return NextResponse.json({ gateway: redactToken(gw) }, { status: 201 })
+    return NextResponse.json({ gateway: redactToken(gw), agents_registered: agentsRegistered }, { status: 201 })
   } catch (err: any) {
     if (err.message?.includes('UNIQUE')) {
       return NextResponse.json({ error: 'A gateway with that name already exists' }, { status: 409 })

@@ -145,15 +168,39 @@ export async function PUT(request: NextRequest)
       }
     }

-    if (sets.length === 0) return NextResponse.json({ error: 'No valid fields to update' }, { status: 400 })
+    if (sets.length === 0 && !Array.isArray(updates.agents)) return NextResponse.json({ error: 'No valid fields to update' }, { status: 400 })

-    sets.push('updated_at = (unixepoch())')
-    values.push(id)
-
-    db.prepare(`UPDATE gateways SET ${sets.join(', ')} WHERE id = ?`).run(...values)
+    if (sets.length > 0) {
+      sets.push('updated_at = (unixepoch())')
+      values.push(id)
+      db.prepare(`UPDATE gateways SET ${sets.join(', ')} WHERE id = ?`).run(...values)
+    }
+
+    // Auto-register agents reported by the gateway (k8s sidecar support)
+    let agentsRegistered = 0
+    if (Array.isArray(updates.agents) && updates.agents.length > 0) {
+      const workspaceId = auth.user?.workspace_id ?? 1
+      const now = Math.floor(Date.now() / 1000)
+      const upsertAgent = db.prepare(`
+        INSERT INTO agents (name, role, status, last_seen, source, workspace_id, updated_at)
+        VALUES (?, ?, 'idle', ?, 'gateway', ?, ?)
+        ON CONFLICT(name, workspace_id) DO UPDATE SET
+          status = 'idle',
+          last_seen = excluded.last_seen,
+          source = 'gateway',
+          updated_at = excluded.updated_at
+      `)
+      for (const agent of updates.agents.slice(0, 50)) {
+        if (typeof agent?.name !== 'string' || !agent.name.trim()) continue
+        const agentName = agent.name.trim().substring(0, 100)
+        const agentRole = typeof agent?.role === 'string' ? agent.role.trim().substring(0, 100) : 'agent'
+        upsertAgent.run(agentName, agentRole, now, workspaceId, now)
+        agentsRegistered++
+      }
+    }

     const updated = db.prepare('SELECT * FROM gateways WHERE id = ?').get(id) as GatewayEntry
-    return NextResponse.json({ gateway: redactToken(updated) })
+    return NextResponse.json({ gateway: redactToken(updated), agents_registered: agentsRegistered })
   }

 /**
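The `ON CONFLICT … DO UPDATE` upsert above keeps one row per agent name and only refreshes the liveness fields on re-registration. A minimal in-memory sketch of those semantics, with a `Map` standing in for the `agents` table (names and types here are illustrative, not the route's actual helpers):

```typescript
interface AgentRow {
  name: string
  role: string
  status: string
  lastSeen: number
  source: string
}

const agents = new Map<string, AgentRow>()

function upsertAgent(name: string, role: string, now: number): void {
  const key = name.trim().substring(0, 100)
  if (!key) return
  const existing = agents.get(key)
  if (existing) {
    // ON CONFLICT branch: refresh status, last_seen, and source;
    // the original role is left untouched, matching the SQL above.
    existing.status = 'idle'
    existing.lastSeen = now
    existing.source = 'gateway'
  } else {
    agents.set(key, { name: key, role, status: 'idle', lastSeen: now, source: 'gateway' })
  }
}
```

Re-registering the same name is idempotent on row count but bumps `last_seen`, which is what lets the dashboard treat gateway-reported agents as alive.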
@@ -105,37 +105,28 @@ export async function GET(request: NextRequest)
       })
     }

-    // Best-effort atomic pickup loop for race safety.
-    for (let attempt = 0; attempt < 5; attempt += 1) {
-      const candidate = db.prepare(`
-        SELECT *
-        FROM tasks
+    // Atomic claim: single UPDATE with subquery to eliminate SELECT-UPDATE race condition.
+    const claimed = db.prepare(`
+      UPDATE tasks
+      SET status = 'in_progress', assigned_to = ?, updated_at = ?
+      WHERE id = (
+        SELECT id FROM tasks
         WHERE workspace_id = ?
           AND status IN ('assigned', 'inbox')
           AND (assigned_to IS NULL OR assigned_to = ?)
         ORDER BY ${priorityRankSql()} ASC, due_date ASC NULLS LAST, created_at ASC
         LIMIT 1
-      `).get(workspaceId, agent) as any | undefined
-
-      if (!candidate) break
-
-      const claimed = db.prepare(`
-        UPDATE tasks
-        SET status = 'in_progress', assigned_to = ?, updated_at = ?
-        WHERE id = ? AND workspace_id = ?
-          AND status IN ('assigned', 'inbox')
-          AND (assigned_to IS NULL OR assigned_to = ?)
-      `).run(agent, now, candidate.id, workspaceId, agent)
-
-      if (claimed.changes > 0) {
-        const task = db.prepare('SELECT * FROM tasks WHERE id = ? AND workspace_id = ?').get(candidate.id, workspaceId) as any
-        return NextResponse.json({
-          task: mapTaskRow(task),
-          reason: 'assigned' as QueueReason,
-          agent,
-          timestamp: now,
-        })
-      }
-    }
+      )
+      RETURNING *
+    `).get(agent, now, workspaceId, agent) as any | undefined
+
+    if (claimed) {
+      return NextResponse.json({
+        task: mapTaskRow(claimed),
+        reason: 'assigned' as QueueReason,
+        agent,
+        timestamp: now,
+      })
+    }

     return NextResponse.json({
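The loop this hunk removes existed because a separate SELECT and UPDATE can race: two agents may both SELECT the same candidate before either UPDATE lands, so the old code retried and re-checked `changes > 0`. Folding pick-and-mark into one statement makes the claim indivisible. A rough in-memory sketch of the claim semantics (the array stands in for the `tasks` table; names are illustrative, not the route's actual helpers):

```typescript
interface Task {
  id: number
  status: 'inbox' | 'assigned' | 'in_progress'
  assignedTo: string | null
  priority: number // lower rank = more urgent, like priorityRankSql() ASC
}

// Selecting the best candidate and marking it claimed happen in one
// synchronous operation, so no second caller can observe it unclaimed —
// the same guarantee the single UPDATE ... RETURNING gives in SQLite.
function claimNext(tasks: Task[], agent: string): Task | undefined {
  const candidate = tasks
    .filter(t => (t.status === 'inbox' || t.status === 'assigned')
      && (t.assignedTo === null || t.assignedTo === agent))
    .sort((a, b) => a.priority - b.priority)[0]
  if (!candidate) return undefined
  candidate.status = 'in_progress'
  candidate.assignedTo = agent
  return candidate
}
```

Each successive claim sees the previous claim's status change, so two agents can never pick up the same task.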
@@ -52,8 +52,8 @@ export const viewport: Viewport = {
 }

 export const metadata: Metadata = {
-  title: 'Mission Control',
-  description: 'OpenClaw Agent Orchestration Dashboard',
+  title: 'Mission Control — AI Agent Orchestration Dashboard',
+  description: 'Open-source dashboard for AI agent orchestration. Manage agent fleets, dispatch tasks, track costs, and coordinate multi-agent workflows. Self-hosted, zero dependencies, SQLite-powered.',
   metadataBase,
   icons: {
     icon: [
@@ -64,14 +64,16 @@ export const metadata: Metadata = {
     shortcut: ['/icon.png'],
   },
   openGraph: {
-    title: 'Mission Control',
-    description: 'OpenClaw Agent Orchestration Dashboard',
-    images: [{ url: '/brand/mc-logo-512.png', width: 512, height: 512, alt: 'Mission Control logo' }],
+    title: 'Mission Control — AI Agent Orchestration Dashboard',
+    description: 'Open-source dashboard for AI agent orchestration. Manage agent fleets, dispatch tasks, track costs, and coordinate multi-agent workflows.',
+    images: [{ url: '/brand/mc-logo-512.png', width: 512, height: 512, alt: 'Mission Control — open-source AI agent orchestration dashboard' }],
     type: 'website',
     siteName: 'Mission Control',
   },
   twitter: {
-    card: 'summary',
-    title: 'Mission Control',
-    description: 'OpenClaw Agent Orchestration Dashboard',
+    card: 'summary_large_image',
+    title: 'Mission Control — AI Agent Orchestration Dashboard',
+    description: 'Open-source dashboard for AI agent orchestration. Manage agent fleets, dispatch tasks, track costs, and coordinate multi-agent workflows.',
     images: ['/brand/mc-logo-512.png'],
   },
   appleWebApp: {
@@ -5,7 +5,6 @@ import { useTranslations } from 'next-intl'
 import { Button } from '@/components/ui/button'
 import { useMissionControl } from '@/store'
 import { useWebSocket } from '@/lib/websocket'
-import { buildGatewayWebSocketUrl } from '@/lib/gateway-url'

 interface Gateway {
   id: number
@@ -130,19 +129,11 @@ export function MultiGatewayPanel() {
     const normalizedConn = url.toLowerCase()
     const normalizedHost = String(gw.host || '').toLowerCase()

-    if (normalizedHost && normalizedConn.includes(normalizedHost)) return true
+    // Skip localhost matching — server rewrites localhost to browser hostname,
+    // so the connection URL won't contain "127.0.0.1". Port matching handles it.
+    if (normalizedHost && normalizedHost !== '127.0.0.1' && normalizedHost !== 'localhost' && normalizedConn.includes(normalizedHost)) return true
     if (normalizedConn.includes(`:${gw.port}`)) return true

-    try {
-      const derivedWs = buildGatewayWebSocketUrl({
-        host: gw.host,
-        port: gw.port,
-        browserProtocol: window.location.protocol,
-      }).toLowerCase()
-      return normalizedConn.includes(derivedWs)
-    } catch {
-      return false
-    }
+    return false
   }, [connection.url])

   const shouldShowConnectionSummary =
@@ -179,11 +170,10 @@ export function MultiGatewayPanel() {
       if (!res.ok) return
       const payload = await res.json()

-      const wsUrl = String(payload?.ws_url || buildGatewayWebSocketUrl({
-        host: gw.host,
-        port: gw.port,
-        browserProtocol: window.location.protocol,
-      }))
+      // Use server-resolved URL only — it respects NEXT_PUBLIC_GATEWAY_URL,
+      // Tailscale Serve, and reverse-proxy configurations.
+      const wsUrl = payload?.ws_url
+      if (!wsUrl) return
       const token = String(payload?.token || '')
       connect(wsUrl, token)
     } catch {
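The matching logic above deliberately skips hostname comparison for loopback gateways and falls back to port matching, since the server rewrites loopback hosts to the browser hostname. A self-contained sketch of that predicate (an assumed shape for illustration; the component's real version lives in the `useMemo` above and also reads the store):

```typescript
function matchesGateway(connectionUrl: string, host: string, port: number): boolean {
  const conn = connectionUrl.toLowerCase()
  const h = String(host || '').toLowerCase()
  // Loopback hosts are rewritten server-side to the browser hostname,
  // so only non-loopback hostnames are matched by name.
  if (h && h !== '127.0.0.1' && h !== 'localhost' && conn.includes(h)) return true
  // Port matching covers loopback gateways whose hostname was rewritten.
  return conn.includes(`:${port}`)
}
```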
@@ -0,0 +1,100 @@
import fs from 'node:fs'
import os from 'node:os'
import path from 'node:path'
import { afterEach, describe, expect, it } from 'vitest'

import {
  collectOpenApiOperations,
  compareApiContractParity,
  extractHttpMethods,
  routeFileToApiPath,
  runApiContractParityCheck,
} from '@/lib/api-contract-parity'

const tempDirs: string[] = []

afterEach(() => {
  for (const dir of tempDirs.splice(0)) {
    fs.rmSync(dir, { recursive: true, force: true })
  }
})

describe('api-contract-parity helpers', () => {
  it('maps Next.js route files to OpenAPI-style API paths', () => {
    expect(routeFileToApiPath('src/app/api/agents/route.ts')).toBe('/api/agents')
    expect(routeFileToApiPath('src/app/api/tasks/[id]/route.ts')).toBe('/api/tasks/{id}')
    expect(routeFileToApiPath('src/app/api/files/[...slug]/route.ts')).toBe('/api/files/{slug}')
    expect(routeFileToApiPath('src/app/api/optional/[[...tail]]/route.ts')).toBe('/api/optional/{tail}')
  })

  it('extracts exported HTTP methods from route modules', () => {
    const source = `
      export const GET = async () => {}
      export const POST = async () => {}
      const internal = 'ignore me'
    `
    expect(extractHttpMethods(source).sort()).toEqual(['GET', 'POST'])
  })

  it('normalizes OpenAPI operations', () => {
    const operations = collectOpenApiOperations({
      paths: {
        '/api/tasks': { get: {}, post: {} },
        '/api/tasks/{id}': { delete: {}, patch: {} },
      },
    })
    expect(operations).toEqual([
      'DELETE /api/tasks/{id}',
      'GET /api/tasks',
      'PATCH /api/tasks/{id}',
      'POST /api/tasks',
    ])
  })

  it('reports mismatches with optional ignore list', () => {
    const report = compareApiContractParity({
      routeOperations: [
        { method: 'GET', path: '/api/tasks', sourceFile: 'a' },
        { method: 'POST', path: '/api/tasks', sourceFile: 'a' },
        { method: 'DELETE', path: '/api/tasks/{id}', sourceFile: 'b' },
      ],
      openapiOperations: ['GET /api/tasks', 'PATCH /api/tasks/{id}', 'DELETE /api/tasks/{id}'],
      ignore: ['PATCH /api/tasks/{id}'],
    })

    expect(report.missingInOpenApi).toEqual(['POST /api/tasks'])
    expect(report.missingInRoutes).toEqual([])
    expect(report.ignoredOperations).toEqual(['PATCH /api/tasks/{id}'])
  })

  it('scans a project root and compares route operations to openapi', () => {
    const root = fs.mkdtempSync(path.join(os.tmpdir(), 'mc-contract-'))
    tempDirs.push(root)

    const routeDir = path.join(root, 'src/app/api/tasks/[id]')
    fs.mkdirSync(routeDir, { recursive: true })
    fs.writeFileSync(path.join(root, 'src/app/api/tasks/route.ts'), 'export const GET = async () => {};\n', 'utf8')
    fs.writeFileSync(path.join(routeDir, 'route.ts'), 'export const DELETE = async () => {};\n', 'utf8')

    fs.writeFileSync(
      path.join(root, 'openapi.json'),
      JSON.stringify({
        openapi: '3.0.0',
        paths: {
          '/api/tasks': { get: {} },
          '/api/tasks/{id}': { delete: {}, patch: {} },
        },
      }),
      'utf8',
    )

    const report = runApiContractParityCheck({
      projectRoot: root,
      ignore: ['PATCH /api/tasks/{id}'],
    })

    expect(report.missingInOpenApi).toEqual([])
    expect(report.missingInRoutes).toEqual([])
    expect(report.ignoredOperations).toEqual(['PATCH /api/tasks/{id}'])
  })
})
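The two string-level helpers exercised above can be sketched roughly like this. This is an assumed implementation written to match the test expectations, not the actual `@/lib/api-contract-parity` source:

```typescript
// Map a Next.js app-router file path to an OpenAPI-style path:
// [id] -> {id}, [...slug] -> {slug}, [[...tail]] -> {tail}.
function routeFileToApiPath(file: string): string {
  return '/' + file
    .replace(/^src\/app\//, '')
    .replace(/\/route\.tsx?$/, '')
    .replace(/\[\[\.\.\.([^\]]+)\]\]/g, '{$1}') // optional catch-all first
    .replace(/\[\.\.\.([^\]]+)\]/g, '{$1}')     // then catch-all
    .replace(/\[([^\]]+)\]/g, '{$1}')           // then plain dynamic segment
}

// Collect exported HTTP method handlers from a route module's source text.
function extractHttpMethods(source: string): string[] {
  const methods = new Set<string>()
  const re = /export\s+(?:const|async\s+function|function)\s+(GET|POST|PUT|PATCH|DELETE|HEAD|OPTIONS)\b/g
  let m: RegExpExecArray | null
  while ((m = re.exec(source)) !== null) methods.add(m[1])
  return [...methods]
}
```

The replace order matters: the `[[...tail]]` pattern must be rewritten before `[...slug]`, which in turn must run before `[id]`, or the inner brackets get consumed by the wrong rule.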
@@ -0,0 +1,146 @@
/**
 * Framework Templates Test Suite
 *
 * Tests the framework-agnostic template registry, ensuring:
 * - All adapters have corresponding framework info
 * - Universal templates map correctly to framework-specific configs
 * - Template resolution works for all framework/template combinations
 */

import { describe, it, expect } from 'vitest'
import {
  FRAMEWORK_REGISTRY,
  UNIVERSAL_TEMPLATES,
  listFrameworks,
  getFrameworkInfo,
  getTemplatesForFramework,
  getUniversalTemplate,
  resolveTemplateConfig,
} from '../framework-templates'
import { listAdapters } from '../adapters'
import { AGENT_TEMPLATES } from '../agent-templates'

describe('Framework Registry', () => {
  it('has an entry for every registered adapter', () => {
    const adapters = listAdapters()
    for (const adapter of adapters) {
      expect(FRAMEWORK_REGISTRY[adapter]).toBeDefined()
      expect(FRAMEWORK_REGISTRY[adapter].id).toBe(adapter)
    }
  })

  it('every framework has required connection config', () => {
    for (const fw of listFrameworks()) {
      expect(fw.connection).toBeDefined()
      expect(fw.connection.connectionMode).toMatch(/^(webhook|polling|websocket)$/)
      expect(fw.connection.heartbeatInterval).toBeGreaterThan(0)
      expect(fw.connection.setupHints.length).toBeGreaterThan(0)
      expect(fw.connection.exampleSnippet.length).toBeGreaterThan(0)
    }
  })

  it('every framework has a label and description', () => {
    for (const fw of listFrameworks()) {
      expect(fw.label).toBeTruthy()
      expect(fw.description).toBeTruthy()
    }
  })

  it('getFrameworkInfo returns correct framework', () => {
    const info = getFrameworkInfo('langgraph')
    expect(info?.id).toBe('langgraph')
    expect(info?.label).toBe('LangGraph')
  })

  it('getFrameworkInfo returns undefined for unknown', () => {
    expect(getFrameworkInfo('nonexistent')).toBeUndefined()
  })
})

describe('Universal Templates', () => {
  it('has at least 5 template archetypes', () => {
    expect(UNIVERSAL_TEMPLATES.length).toBeGreaterThanOrEqual(5)
  })

  it('every template has required fields', () => {
    for (const tpl of UNIVERSAL_TEMPLATES) {
      expect(tpl.type).toBeTruthy()
      expect(tpl.label).toBeTruthy()
      expect(tpl.description).toBeTruthy()
      expect(tpl.emoji).toBeTruthy()
      expect(tpl.frameworks.length).toBeGreaterThan(0)
      expect(tpl.capabilities.length).toBeGreaterThan(0)
    }
  })

  it('every template supports at least "generic" framework', () => {
    for (const tpl of UNIVERSAL_TEMPLATES) {
      expect(tpl.frameworks).toContain('generic')
    }
  })

  it('templates with openclawTemplateType reference valid OpenClaw templates', () => {
    for (const tpl of UNIVERSAL_TEMPLATES) {
      if (tpl.openclawTemplateType) {
        const ocTemplate = AGENT_TEMPLATES.find(t => t.type === tpl.openclawTemplateType)
        expect(ocTemplate).toBeDefined()
      }
    }
  })

  it('getUniversalTemplate returns correct template', () => {
    const tpl = getUniversalTemplate('developer')
    expect(tpl?.type).toBe('developer')
    expect(tpl?.label).toBe('Developer')
  })

  it('getUniversalTemplate returns undefined for unknown', () => {
    expect(getUniversalTemplate('nonexistent')).toBeUndefined()
  })
})

describe('Template-Framework Resolution', () => {
  it('getTemplatesForFramework returns templates for known frameworks', () => {
    for (const fw of listAdapters()) {
      const templates = getTemplatesForFramework(fw)
      expect(templates.length).toBeGreaterThan(0)
    }
  })

  it('getTemplatesForFramework returns empty for unknown framework', () => {
    expect(getTemplatesForFramework('nonexistent')).toEqual([])
  })

  it('resolveTemplateConfig returns OpenClaw template for openclaw framework', () => {
    const result = resolveTemplateConfig('developer', 'openclaw')
    expect(result).toBeDefined()
    expect(result?.template).toBeDefined()
    expect(result?.template?.type).toBe('developer')
    expect(result?.universal.type).toBe('developer')
  })

  it('resolveTemplateConfig returns universal-only for non-openclaw frameworks', () => {
    const result = resolveTemplateConfig('developer', 'langgraph')
    expect(result).toBeDefined()
    expect(result?.template).toBeUndefined()
    expect(result?.universal.type).toBe('developer')
  })

  it('resolveTemplateConfig returns undefined for unknown template', () => {
    expect(resolveTemplateConfig('nonexistent', 'generic')).toBeUndefined()
  })

  it('resolveTemplateConfig returns undefined for unsupported framework', () => {
    expect(resolveTemplateConfig('developer', 'nonexistent')).toBeUndefined()
  })

  it('all universal templates resolve for all their declared frameworks', () => {
    for (const tpl of UNIVERSAL_TEMPLATES) {
      for (const fw of tpl.frameworks) {
        const result = resolveTemplateConfig(tpl.type, fw)
        expect(result).toBeDefined()
        expect(result?.universal.type).toBe(tpl.type)
      }
    }
  })
})
@@ -0,0 +1,200 @@
/**
 * Adapter API Route Integration Tests
 *
 * Tests the POST /api/adapters dispatcher against all frameworks.
 * Simulates what an external agent would do to connect to Mission Control.
 *
 * This is the "Feynman test" — timing how long it takes a stranger's
 * agent to connect via the HTTP API.
 */

import { describe, it, expect, vi, beforeEach } from 'vitest'
import { getAdapter, listAdapters } from '../index'

// These tests verify the API contract from the external agent's perspective.
// They don't hit the HTTP layer (that's E2E) but verify the adapter dispatch
// logic matches what the API route does.

const mockBroadcast = vi.fn()
vi.mock('@/lib/event-bus', () => ({
  eventBus: { broadcast: (...args: unknown[]) => mockBroadcast(...args) },
}))

const mockQuery = vi.fn()
vi.mock('../adapter', async (importOriginal) => {
  const original = await importOriginal<typeof import('../adapter')>()
  return {
    ...original,
    queryPendingAssignments: (...args: unknown[]) => mockQuery(...args),
  }
})

// Simulate what POST /api/adapters does internally
async function simulateAdapterAction(
  framework: string,
  action: string,
  payload: Record<string, unknown>
): Promise<{ ok?: boolean; assignments?: unknown[]; error?: string }> {
  let adapter
  try {
    adapter = getAdapter(framework)
  } catch {
    return { error: `Unknown framework: ${framework}` }
  }

  switch (action) {
    case 'register': {
      const { agentId, name, metadata } = payload
      if (!agentId || !name) return { error: 'payload.agentId and payload.name required' }
      await adapter.register({
        agentId: agentId as string,
        name: name as string,
        framework,
        metadata: metadata as Record<string, unknown>,
      })
      return { ok: true }
    }
    case 'heartbeat': {
      const { agentId, status, metrics } = payload
      if (!agentId) return { error: 'payload.agentId required' }
      await adapter.heartbeat({
        agentId: agentId as string,
        status: (status as string) || 'online',
        metrics: metrics as Record<string, unknown>,
      })
      return { ok: true }
    }
    case 'report': {
      const { taskId, agentId, progress, status, output } = payload
      if (!taskId || !agentId) return { error: 'payload.taskId and payload.agentId required' }
      await adapter.reportTask({
        taskId: taskId as string,
        agentId: agentId as string,
        progress: (progress as number) ?? 0,
        status: (status as string) || 'in_progress',
        output,
      })
      return { ok: true }
    }
    case 'assignments': {
      const { agentId } = payload
      if (!agentId) return { error: 'payload.agentId required' }
      const assignments = await adapter.getAssignments(agentId as string)
      return { assignments }
    }
    case 'disconnect': {
      const { agentId } = payload
      if (!agentId) return { error: 'payload.agentId required' }
      await adapter.disconnect(agentId as string)
      return { ok: true }
    }
    default:
      return { error: `Unknown action: ${action}` }
  }
}

describe('Adapter API dispatch', () => {
  beforeEach(() => {
    mockBroadcast.mockClear()
    mockQuery.mockClear()
  })

  // Full lifecycle for every framework
  describe.each(listAdapters())('Full agent lifecycle: %s', (framework) => {
    it('register → heartbeat → report → assignments → disconnect', async () => {
      mockQuery.mockResolvedValue([{ taskId: '1', description: 'Do stuff', priority: 1 }])

      // 1. Register
      const reg = await simulateAdapterAction(framework, 'register', {
        agentId: `${framework}-agent-1`,
        name: `${framework} Test Agent`,
        metadata: { version: '2.0' },
      })
      expect(reg.ok).toBe(true)

      // 2. Heartbeat
      const hb = await simulateAdapterAction(framework, 'heartbeat', {
        agentId: `${framework}-agent-1`,
        status: 'busy',
        metrics: { tasksInProgress: 1 },
      })
      expect(hb.ok).toBe(true)

      // 3. Report task progress
      const rpt = await simulateAdapterAction(framework, 'report', {
        taskId: 'task-abc',
        agentId: `${framework}-agent-1`,
        progress: 50,
        status: 'in_progress',
        output: { log: 'halfway done' },
      })
      expect(rpt.ok).toBe(true)

      // 4. Get assignments
      const asgn = await simulateAdapterAction(framework, 'assignments', {
        agentId: `${framework}-agent-1`,
      })
      expect(asgn.assignments).toHaveLength(1)

      // 5. Disconnect
      const disc = await simulateAdapterAction(framework, 'disconnect', {
        agentId: `${framework}-agent-1`,
      })
      expect(disc.ok).toBe(true)

      // Verify event sequence
      const eventTypes = mockBroadcast.mock.calls.map(c => c[0])
      expect(eventTypes).toEqual([
        'agent.created',
        'agent.status_changed',
        'task.updated',
        'agent.status_changed',
      ])
    })
  })

  // Validation checks
  describe('input validation', () => {
    it('rejects unknown framework', async () => {
      const result = await simulateAdapterAction('totally-fake', 'register', {
        agentId: 'x', name: 'X',
      })
      expect(result.error).toContain('Unknown framework')
    })

    it('rejects unknown action', async () => {
      const result = await simulateAdapterAction('generic', 'explode', {})
      expect(result.error).toContain('Unknown action')
    })

    it('rejects register without agentId', async () => {
      const result = await simulateAdapterAction('generic', 'register', { name: 'No ID' })
      expect(result.error).toContain('agentId')
    })

    it('rejects register without name', async () => {
      const result = await simulateAdapterAction('generic', 'register', { agentId: 'no-name' })
      expect(result.error).toContain('name')
    })

    it('rejects heartbeat without agentId', async () => {
      const result = await simulateAdapterAction('generic', 'heartbeat', {})
      expect(result.error).toContain('agentId')
    })

    it('rejects report without taskId', async () => {
      const result = await simulateAdapterAction('generic', 'report', { agentId: 'x' })
      expect(result.error).toContain('taskId')
    })

    it('rejects assignments without agentId', async () => {
      const result = await simulateAdapterAction('generic', 'assignments', {})
      expect(result.error).toContain('agentId')
    })

    it('rejects disconnect without agentId', async () => {
      const result = await simulateAdapterAction('generic', 'disconnect', {})
      expect(result.error).toContain('agentId')
    })
  })
})
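The compliance suite that follows pins down the adapter contract: register broadcasts `agent.created` with metadata spread into the payload, heartbeats and disconnects broadcast `agent.status_changed`, and every method returns a promise. A minimal adapter satisfying that contract might look like the sketch below (the injected `broadcast` and the simplified types are stand-ins for `@/lib/event-bus` and `../adapter`, not the project's real implementations):

```typescript
type Broadcast = (event: string, payload: Record<string, unknown>) => void

interface Registration {
  agentId: string
  name: string
  framework: string
  metadata?: Record<string, unknown>
}

class GenericAdapter {
  readonly framework = 'generic'
  constructor(private broadcast: Broadcast) {}

  async register(reg: Registration): Promise<void> {
    // Metadata is spread into the event payload, as the suite expects.
    this.broadcast('agent.created', {
      id: reg.agentId,
      name: reg.name,
      status: 'online',
      framework: reg.framework || this.framework,
      ...(reg.metadata ?? {}),
    })
  }

  async heartbeat(hb: { agentId: string; status: string; metrics?: Record<string, unknown> }): Promise<void> {
    this.broadcast('agent.status_changed', { id: hb.agentId, status: hb.status, metrics: hb.metrics })
  }

  async disconnect(agentId: string): Promise<void> {
    this.broadcast('agent.status_changed', { id: agentId, status: 'offline', framework: this.framework })
  }
}
```

Injecting the broadcaster through the constructor keeps the sketch testable without module mocking; the real suite instead mocks `@/lib/event-bus` with `vi.mock`.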
@@ -0,0 +1,406 @@
/**
 * Adapter Compliance Test Suite
 *
 * Tests every FrameworkAdapter implementation against the contract.
 * This is the P0 gate — nothing ships until all adapters pass.
 *
 * Tests:
 * 1. Interface compliance (all 5 methods exist and are callable)
 * 2. Event emission (correct event types and payloads)
 * 3. Assignment retrieval (DB query works)
 * 4. Error resilience (bad inputs don't crash)
 * 5. Framework identity (each adapter tags events correctly)
 */

import { describe, it, expect, vi, beforeEach, afterEach } from 'vitest'
import type { FrameworkAdapter, AgentRegistration, HeartbeatPayload, TaskReport } from '../adapter'
import { getAdapter, listAdapters } from '../index'

// Mock event bus
const mockBroadcast = vi.fn()
vi.mock('@/lib/event-bus', () => ({
  eventBus: { broadcast: (...args: unknown[]) => mockBroadcast(...args) },
}))

// Mock DB query for getAssignments
const mockQuery = vi.fn()
vi.mock('../adapter', async (importOriginal) => {
  const original = await importOriginal<typeof import('../adapter')>()
  return {
    ...original,
    queryPendingAssignments: (...args: unknown[]) => mockQuery(...args),
  }
})

// ─── Test Data ───────────────────────────────────────────────────────────────

const testAgent: AgentRegistration = {
  agentId: 'test-agent-001',
  name: 'Test Agent',
  framework: 'test-framework',
  metadata: { version: '1.0', runtime: 'node' },
}

const testHeartbeat: HeartbeatPayload = {
  agentId: 'test-agent-001',
  status: 'busy',
  metrics: { cpu: 42, memory: 1024, tasksCompleted: 5 },
}

const testReport: TaskReport = {
  taskId: 'task-123',
  agentId: 'test-agent-001',
  progress: 75,
  status: 'in_progress',
  output: { summary: 'Processing step 3 of 4' },
}

// ─── Shared Compliance Tests ─────────────────────────────────────────────────

const ALL_FRAMEWORKS = ['openclaw', 'generic', 'crewai', 'langgraph', 'autogen', 'claude-sdk']

describe('Adapter Registry', () => {
  it('lists all registered adapters', () => {
    const adapters = listAdapters()
    expect(adapters).toEqual(expect.arrayContaining(ALL_FRAMEWORKS))
    expect(adapters.length).toBe(ALL_FRAMEWORKS.length)
  })

  it('returns an adapter for each registered framework', () => {
    for (const fw of ALL_FRAMEWORKS) {
      const adapter = getAdapter(fw)
      expect(adapter).toBeDefined()
      expect(adapter.framework).toBe(fw)
    }
  })

  it('throws for unknown framework', () => {
    expect(() => getAdapter('nonexistent')).toThrow('Unknown framework adapter')
  })
})

// Run the full compliance suite for EVERY adapter
describe.each(ALL_FRAMEWORKS)('FrameworkAdapter compliance: %s', (framework) => {
  let adapter: FrameworkAdapter

  beforeEach(() => {
    adapter = getAdapter(framework)
    mockBroadcast.mockClear()
    mockQuery.mockClear()
  })

  // ── 1. Interface Compliance ──────────────────────────────────────────────

  describe('interface compliance', () => {
    it('implements all 5 required methods', () => {
      expect(typeof adapter.register).toBe('function')
      expect(typeof adapter.heartbeat).toBe('function')
      expect(typeof adapter.reportTask).toBe('function')
      expect(typeof adapter.getAssignments).toBe('function')
      expect(typeof adapter.disconnect).toBe('function')
    })

    it('has a readonly framework property', () => {
      expect(adapter.framework).toBe(framework)
    })

    it('all methods return promises', async () => {
      mockQuery.mockResolvedValue([])

      const results = [
        adapter.register(testAgent),
        adapter.heartbeat(testHeartbeat),
        adapter.reportTask(testReport),
        adapter.getAssignments('any-id'),
        adapter.disconnect('any-id'),
      ]

      // All should be thenables
      for (const r of results) {
        expect(r).toBeInstanceOf(Promise)
      }

      await Promise.all(results)
    })
  })

  // ── 2. Event Emission ────────────────────────────────────────────────────

  describe('register()', () => {
    it('broadcasts agent.created with correct payload', async () => {
      await adapter.register(testAgent)

      expect(mockBroadcast).toHaveBeenCalledTimes(1)
      expect(mockBroadcast).toHaveBeenCalledWith(
        'agent.created',
        expect.objectContaining({
          id: 'test-agent-001',
          name: 'Test Agent',
          status: 'online',
        })
      )
    })

    it('includes framework tag in event', async () => {
      await adapter.register(testAgent)

      const payload = mockBroadcast.mock.calls[0][1]
      // Generic adapter may use agent.framework; others use this.framework
      expect(payload.framework).toBeTruthy()
    })

    it('passes through metadata', async () => {
      await adapter.register(testAgent)

      const payload = mockBroadcast.mock.calls[0][1]
      // Metadata is spread into the event payload
      expect(payload.version).toBe('1.0')
      expect(payload.runtime).toBe('node')
    })

    it('handles agent with no metadata', async () => {
      await adapter.register({
        agentId: 'minimal-agent',
        name: 'Minimal',
        framework,
      })

      expect(mockBroadcast).toHaveBeenCalledWith(
        'agent.created',
        expect.objectContaining({
          id: 'minimal-agent',
          name: 'Minimal',
          status: 'online',
        })
      )
    })
  })

  describe('heartbeat()', () => {
    it('broadcasts agent.status_changed with status and metrics', async () => {
      await adapter.heartbeat(testHeartbeat)

      expect(mockBroadcast).toHaveBeenCalledTimes(1)
      expect(mockBroadcast).toHaveBeenCalledWith(
        'agent.status_changed',
        expect.objectContaining({
          id: 'test-agent-001',
          status: 'busy',
        })
      )
    })

    it('includes metrics in event payload', async () => {
      await adapter.heartbeat(testHeartbeat)

      const payload = mockBroadcast.mock.calls[0][1]
      expect(payload.metrics).toBeDefined()
      expect(payload.metrics.cpu).toBe(42)
    })

    it('handles heartbeat with no metrics', async () => {
      await adapter.heartbeat({
        agentId: 'test-agent-001',
        status: 'idle',
      })

      expect(mockBroadcast).toHaveBeenCalledWith(
        'agent.status_changed',
        expect.objectContaining({
          id: 'test-agent-001',
          status: 'idle',
        })
      )
    })
  })

  describe('reportTask()', () => {
    it('broadcasts task.updated with progress and status', async () => {
      await adapter.reportTask(testReport)

      expect(mockBroadcast).toHaveBeenCalledTimes(1)
      expect(mockBroadcast).toHaveBeenCalledWith(
        'task.updated',
        expect.objectContaining({
          id: 'task-123',
          agentId: 'test-agent-001',
          progress: 75,
          status: 'in_progress',
        })
      )
    })

    it('passes through output data', async () => {
      await adapter.reportTask(testReport)

      const payload = mockBroadcast.mock.calls[0][1]
      expect(payload.output).toEqual({ summary: 'Processing step 3 of 4' })
    })

    it('handles report with no output', async () => {
      await adapter.reportTask({
        taskId: 'task-456',
        agentId: 'test-agent-001',
        progress: 100,
        status: 'completed',
      })

      expect(mockBroadcast).toHaveBeenCalledWith(
        'task.updated',
        expect.objectContaining({
          id: 'task-456',
          status: 'completed',
          progress: 100,
        })
      )
    })
  })

  describe('getAssignments()', () => {
    it('delegates to queryPendingAssignments', async () => {
      const mockAssignments = [
        { taskId: '1', description: 'Fix bug', priority: 1 },
        { taskId: '2', description: 'Write tests', priority: 2 },
      ]
      mockQuery.mockResolvedValue(mockAssignments)

      const result = await adapter.getAssignments('test-agent-001')

      expect(mockQuery).toHaveBeenCalledWith('test-agent-001')
      expect(result).toEqual(mockAssignments)
    })

    it('returns empty array when no assignments', async () => {
      mockQuery.mockResolvedValue([])

      const result = await adapter.getAssignments('idle-agent')

      expect(result).toEqual([])
    })

    it('does not broadcast events', async () => {
      mockQuery.mockResolvedValue([])

      await adapter.getAssignments('test-agent-001')

      expect(mockBroadcast).not.toHaveBeenCalled()
    })
  })

  describe('disconnect()', () => {
    it('broadcasts agent.status_changed with offline status', async () => {
      await adapter.disconnect('test-agent-001')

      expect(mockBroadcast).toHaveBeenCalledTimes(1)
      expect(mockBroadcast).toHaveBeenCalledWith(
        'agent.status_changed',
        expect.objectContaining({
          id: 'test-agent-001',
          status: 'offline',
        })
      )
    })

    it('tags disconnect event with framework', async () => {
|
||||
await adapter.disconnect('test-agent-001')
|
||||
|
||||
const payload = mockBroadcast.mock.calls[0][1]
|
||||
expect(payload.framework).toBe(framework)
|
||||
})
|
||||
})
|
||||
|
||||
// ── 3. Framework Identity ────────────────────────────────────────────────
|
||||
|
||||
describe('framework identity', () => {
|
||||
it('tags all emitted events with its framework name', async () => {
|
||||
mockQuery.mockResolvedValue([])
|
||||
|
||||
await adapter.register(testAgent)
|
||||
await adapter.heartbeat(testHeartbeat)
|
||||
await adapter.reportTask(testReport)
|
||||
await adapter.disconnect('test-agent-001')
|
||||
|
||||
// All 4 event-emitting calls should tag with framework
|
||||
for (const call of mockBroadcast.mock.calls) {
|
||||
const payload = call[1]
|
||||
expect(payload.framework).toBeTruthy()
|
||||
}
|
||||
})
|
||||
})
|
||||
})
|
||||
|
||||
// ── 4. Cross-Adapter Behavioral Consistency ────────────────────────────────
|
||||
|
||||
describe('Cross-adapter consistency', () => {
|
||||
beforeEach(() => {
|
||||
mockBroadcast.mockClear()
|
||||
mockQuery.mockClear()
|
||||
})
|
||||
|
||||
it('all adapters emit the same event types for the same actions', async () => {
|
||||
const eventsByFramework: Record<string, string[]> = {}
|
||||
|
||||
for (const fw of ALL_FRAMEWORKS) {
|
||||
mockBroadcast.mockClear()
|
||||
mockQuery.mockResolvedValue([])
|
||||
|
||||
const adapter = getAdapter(fw)
|
||||
await adapter.register(testAgent)
|
||||
await adapter.heartbeat(testHeartbeat)
|
||||
await adapter.reportTask(testReport)
|
||||
await adapter.disconnect('test-agent-001')
|
||||
|
||||
eventsByFramework[fw] = mockBroadcast.mock.calls.map(c => c[0])
|
||||
}
|
||||
|
||||
const expected = ['agent.created', 'agent.status_changed', 'task.updated', 'agent.status_changed']
|
||||
|
||||
for (const fw of ALL_FRAMEWORKS) {
|
||||
expect(eventsByFramework[fw]).toEqual(expected)
|
||||
}
|
||||
})
|
||||
|
||||
it('all adapters return the same assignment data for the same agent', async () => {
|
||||
const mockAssignments = [{ taskId: '99', description: 'Shared task', priority: 0 }]
|
||||
mockQuery.mockResolvedValue(mockAssignments)
|
||||
|
||||
for (const fw of ALL_FRAMEWORKS) {
|
||||
const adapter = getAdapter(fw)
|
||||
const result = await adapter.getAssignments('shared-agent')
|
||||
expect(result).toEqual(mockAssignments)
|
||||
}
|
||||
})
|
||||
})
|
||||
|
||||
// ── 5. Generic Adapter Specialization ──────────────────────────────────────
|
||||
|
||||
describe('GenericAdapter special behavior', () => {
|
||||
beforeEach(() => {
|
||||
mockBroadcast.mockClear()
|
||||
})
|
||||
|
||||
it('respects agent.framework from registration payload', async () => {
|
||||
const adapter = getAdapter('generic')
|
||||
await adapter.register({
|
||||
agentId: 'custom-agent',
|
||||
name: 'Custom Framework Agent',
|
||||
framework: 'my-custom-framework',
|
||||
})
|
||||
|
||||
const payload = mockBroadcast.mock.calls[0][1]
|
||||
expect(payload.framework).toBe('my-custom-framework')
|
||||
})
|
||||
|
||||
it('falls back to "generic" when no framework in payload', async () => {
|
||||
const adapter = getAdapter('generic')
|
||||
await adapter.register({
|
||||
agentId: 'unknown-agent',
|
||||
name: 'Unknown Agent',
|
||||
framework: '',
|
||||
})
|
||||
|
||||
const payload = mockBroadcast.mock.calls[0][1]
|
||||
// Empty string is falsy, should fall back to 'generic'
|
||||
expect(payload.framework).toBe('generic')
|
||||
})
|
||||
})
|
||||
|
|
@@ -0,0 +1,176 @@
import * as fs from 'node:fs'
import * as path from 'node:path'

export type ContractOperation = string

export interface RouteOperation {
  method: string
  path: string
  sourceFile: string
}

export interface ParityReport {
  routeOperations: ContractOperation[]
  openapiOperations: ContractOperation[]
  missingInOpenApi: ContractOperation[]
  missingInRoutes: ContractOperation[]
  ignoredOperations: ContractOperation[]
}

const HTTP_METHODS = ['GET', 'POST', 'PUT', 'PATCH', 'DELETE', 'OPTIONS', 'HEAD'] as const

function toPosix(input: string): string {
  return input.split(path.sep).join('/')
}

function normalizeSegment(segment: string): string {
  if (segment.startsWith('[[...') && segment.endsWith(']]')) {
    return `{${segment.slice(5, -2)}}`
  }
  if (segment.startsWith('[...') && segment.endsWith(']')) {
    return `{${segment.slice(4, -1)}}`
  }
  if (segment.startsWith('[') && segment.endsWith(']')) {
    return `{${segment.slice(1, -1)}}`
  }
  return segment
}

export function routeFileToApiPath(routeFile: string, apiRoot = 'src/app/api'): string {
  const normalizedFile = toPosix(routeFile)
  const normalizedRoot = toPosix(apiRoot)
  const routeWithoutExt = normalizedFile.replace(/\/route\.tsx?$/, '')
  const relative = routeWithoutExt.startsWith(normalizedRoot)
    ? routeWithoutExt.slice(normalizedRoot.length)
    : routeWithoutExt

  const segments = relative
    .split('/')
    .filter(Boolean)
    .map(normalizeSegment)

  return `/api${segments.length ? `/${segments.join('/')}` : ''}`
}

export function extractHttpMethods(source: string): string[] {
  const methods = new Set<string>()
  for (const method of HTTP_METHODS) {
    const constExport = new RegExp(`export\\s+const\\s+${method}\\s*=`, 'm')
    const fnExport = new RegExp(`export\\s+(?:async\\s+)?function\\s+${method}\\s*\\(`, 'm')
    if (constExport.test(source) || fnExport.test(source)) methods.add(method)
  }
  return Array.from(methods)
}
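The segment rules above map Next.js dynamic segments (`[id]`, `[...slug]`, `[[...slug]]`) to OpenAPI-style `{param}` templates. A condensed, standalone restatement of that mapping (illustrative sketch only, assuming POSIX separators; not the exported module):

```typescript
// Simplified restatement of normalizeSegment/routeFileToApiPath for illustration.
function normalizeSegment(segment: string): string {
  if (segment.startsWith('[[...') && segment.endsWith(']]')) return `{${segment.slice(5, -2)}}`
  if (segment.startsWith('[...') && segment.endsWith(']')) return `{${segment.slice(4, -1)}}`
  if (segment.startsWith('[') && segment.endsWith(']')) return `{${segment.slice(1, -1)}}`
  return segment
}

function routeFileToApiPath(routeFile: string, apiRoot = 'src/app/api'): string {
  // Drop the route.ts(x) filename, strip the api root, template each segment.
  const withoutExt = routeFile.replace(/\/route\.tsx?$/, '')
  const relative = withoutExt.startsWith(apiRoot) ? withoutExt.slice(apiRoot.length) : withoutExt
  const segments = relative.split('/').filter(Boolean).map(normalizeSegment)
  return `/api${segments.length ? `/${segments.join('/')}` : ''}`
}

// 'src/app/api/agents/[id]/keys/route.ts' → '/api/agents/{id}/keys'
// 'src/app/api/route.ts'                  → '/api'
```

This is what lets file-system routes and `openapi.json` paths be compared as plain strings later in the module.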
function walkRouteFiles(dir: string, found: string[] = []): string[] {
  if (!fs.existsSync(dir)) return found
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const fullPath = path.join(dir, entry.name)
    if (entry.isDirectory()) {
      walkRouteFiles(fullPath, found)
    } else if (entry.isFile() && /route\.tsx?$/.test(entry.name)) {
      found.push(fullPath)
    }
  }
  return found
}

export function collectRouteOperations(projectRoot: string): RouteOperation[] {
  const apiRoot = path.join(projectRoot, 'src', 'app', 'api')
  const routeFiles = walkRouteFiles(apiRoot)

  const operations: RouteOperation[] = []
  for (const file of routeFiles) {
    const source = fs.readFileSync(file, 'utf8')
    const methods = extractHttpMethods(source)
    const apiPath = routeFileToApiPath(toPosix(path.relative(projectRoot, file)))
    for (const method of methods) {
      operations.push({ method, path: apiPath, sourceFile: file })
    }
  }

  return operations
}

export function collectOpenApiOperations(openapi: any): ContractOperation[] {
  const operations = new Set<ContractOperation>()
  const paths = openapi?.paths ?? {}
  for (const [rawPath, pathItem] of Object.entries(paths)) {
    const normalizedPath = String(rawPath)
    for (const method of Object.keys(pathItem as Record<string, unknown>)) {
      const upper = method.toUpperCase()
      if ((HTTP_METHODS as readonly string[]).includes(upper)) {
        operations.add(`${upper} ${normalizedPath}`)
      }
    }
  }
  return Array.from(operations).sort()
}

function toContractOperation(method: string, apiPath: string): ContractOperation {
  return `${method.toUpperCase()} ${apiPath}`
}

function normalizeOperation(operation: string): ContractOperation {
  const [method = '', ...pathParts] = operation.trim().split(' ')
  const normalizedMethod = method.toUpperCase()
  const normalizedPath = pathParts.join(' ').trim()
  return `${normalizedMethod} ${normalizedPath}` as ContractOperation
}

export function compareApiContractParity(params: {
  routeOperations: RouteOperation[]
  openapiOperations: ContractOperation[]
  ignore?: string[]
}): ParityReport {
  const ignored = new Set((params.ignore ?? []).map((x) => normalizeOperation(x)))
  const routeOperations = Array.from(new Set(params.routeOperations.map((op) => toContractOperation(op.method, op.path)))).sort()
  const openapiOperations = Array.from(new Set(params.openapiOperations.map((op) => normalizeOperation(op)))).sort()

  const routeSet = new Set(routeOperations)
  const openapiSet = new Set(openapiOperations)

  const ignoredOperations: ContractOperation[] = []
  const missingInOpenApi: ContractOperation[] = []
  for (const op of routeOperations) {
    if (ignored.has(op)) {
      ignoredOperations.push(op)
      continue
    }
    if (!openapiSet.has(op)) missingInOpenApi.push(op)
  }

  const missingInRoutes: ContractOperation[] = []
  for (const op of openapiOperations) {
    if (ignored.has(op)) {
      if (!ignoredOperations.includes(op as ContractOperation)) ignoredOperations.push(op as ContractOperation)
      continue
    }
    if (!routeSet.has(op)) missingInRoutes.push(op as ContractOperation)
  }

  return {
    routeOperations: routeOperations as ContractOperation[],
    openapiOperations: openapiOperations as ContractOperation[],
    missingInOpenApi,
    missingInRoutes,
    ignoredOperations: ignoredOperations.sort(),
  }
}

export function loadOpenApiFile(projectRoot: string, openapiPath = 'openapi.json'): any {
  const filePath = path.join(projectRoot, openapiPath)
  return JSON.parse(fs.readFileSync(filePath, 'utf8'))
}

export function runApiContractParityCheck(params: {
  projectRoot: string
  openapiPath?: string
  ignore?: string[]
}): ParityReport {
  const projectRoot = path.resolve(params.projectRoot)
  const openapi = loadOpenApiFile(projectRoot, params.openapiPath)
  const routeOperations = collectRouteOperations(projectRoot)
  const openapiOperations = collectOpenApiOperations(openapi)
  return compareApiContractParity({ routeOperations, openapiOperations, ignore: params.ignore })
}
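At its core, the parity check is two set differences over normalized `METHOD /path` strings, with an ignore list subtracted from both sides. A reduced, self-contained model of that comparison (a sketch for illustration, not the exported function):

```typescript
// Reduced model of the parity diff: which operations exist only on one side.
type Op = string // e.g. 'GET /api/agents'

function parityDiff(routes: Op[], spec: Op[], ignore: Op[] = []) {
  const ignored = new Set(ignore)
  const routeSet = new Set(routes)
  const specSet = new Set(spec)
  return {
    // Implemented in code but absent from openapi.json
    missingInOpenApi: routes.filter(op => !ignored.has(op) && !specSet.has(op)).sort(),
    // Documented in openapi.json but with no route handler
    missingInRoutes: spec.filter(op => !ignored.has(op) && !routeSet.has(op)).sort(),
  }
}

const report = parityDiff(
  ['GET /api/agents', 'POST /api/tasks', 'GET /api/internal/debug'],
  ['GET /api/agents', 'DELETE /api/tasks/{id}'],
  ['GET /api/internal/debug'], // intentionally undocumented
)
// report.missingInOpenApi → ['POST /api/tasks']
// report.missingInRoutes  → ['DELETE /api/tasks/{id}']
```

A non-empty result in either direction is what fails the `pnpm api:parity` CI step added in this PR.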
@@ -459,6 +459,20 @@ export function getUserFromRequest(request: Request): User | null {
  const configuredApiKey = resolveActiveApiKey()

  if (configuredApiKey && apiKey && safeCompare(apiKey, configuredApiKey)) {
    // FR-D2: Log warning when global admin API key is used.
    // Prefer agent-scoped keys (POST /api/agents/{id}/keys) for least-privilege access.
    try {
      logSecurityEvent({
        event_type: 'global_api_key_used',
        severity: 'info',
        source: 'auth',
        agent_name: agentName || undefined,
        detail: JSON.stringify({ hint: 'Consider using agent-scoped API keys for least-privilege access' }),
        ip_address: request.headers.get('x-real-ip') || 'unknown',
        workspace_id: getDefaultWorkspaceContext().workspaceId,
        tenant_id: getDefaultWorkspaceContext().tenantId,
      })
    } catch { /* startup race */ }
    return {
      id: 0,
      username: 'api',
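The `safeCompare` helper used above is defined elsewhere in the codebase; a typical constant-time comparison, assuming Node's `crypto` module, looks like this (hypothetical sketch, names are our own):

```typescript
import { timingSafeEqual, createHash } from 'node:crypto'

// Hypothetical sketch of a constant-time string comparison like safeCompare.
// Hashing both sides to fixed-length digests means timingSafeEqual never
// throws on length mismatch, and the comparison time is input-independent.
function safeCompareSketch(a: string, b: string): boolean {
  const digestA = createHash('sha256').update(a).digest()
  const digestB = createHash('sha256').update(b).digest()
  return timingSafeEqual(digestA, digestB)
}
```

A plain `===` on API keys can leak timing information about how many leading characters match, which is why auth paths use a constant-time comparison.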
@@ -0,0 +1,427 @@
/**
 * Framework-Agnostic Template System
 *
 * Extends the existing OpenClaw templates with framework-neutral archetypes
 * that any adapter can use. Each framework template defines:
 * - What the agent does (role, capabilities)
 * - How it connects (framework-specific connection config)
 * - What permissions it needs (tool scopes)
 *
 * The existing AGENT_TEMPLATES in agent-templates.ts remain for OpenClaw-native
 * use. This module wraps them with a framework-aware registry.
 */

import { AGENT_TEMPLATES, type AgentTemplate } from './agent-templates'
import { listAdapters } from './adapters'

// ─── Framework Connection Config ────────────────────────────────────────────

export interface FrameworkConnectionConfig {
  /** How the agent connects to MC (webhook, polling, websocket) */
  connectionMode: 'webhook' | 'polling' | 'websocket'
  /** Default heartbeat interval in seconds */
  heartbeatInterval: number
  /** Framework-specific setup hints shown in the UI */
  setupHints: string[]
  /** Example connection code snippet */
  exampleSnippet: string
}

export interface FrameworkInfo {
  id: string
  label: string
  description: string
  docsUrl: string
  connection: FrameworkConnectionConfig
}

// ─── Framework Registry ─────────────────────────────────────────────────────

export const FRAMEWORK_REGISTRY: Record<string, FrameworkInfo> = {
  openclaw: {
    id: 'openclaw',
    label: 'OpenClaw',
    description: 'Native gateway-managed agents with full lifecycle control',
    docsUrl: 'https://github.com/openclaw/openclaw',
    connection: {
      connectionMode: 'websocket',
      heartbeatInterval: 30,
      setupHints: [
        'Agents are managed via the OpenClaw gateway',
        'Config syncs bidirectionally via openclaw.json',
        'Use "pnpm openclaw agents add" to provision',
      ],
      exampleSnippet: `# OpenClaw agents are auto-managed by the gateway.
# No manual registration needed — sync happens automatically.
# See: openclaw.json in your state directory.`,
    },
  },
  generic: {
    id: 'generic',
    label: 'Generic HTTP',
    description: 'Any agent that can make HTTP calls — the universal adapter',
    docsUrl: '',
    connection: {
      connectionMode: 'polling',
      heartbeatInterval: 60,
      setupHints: [
        'POST to /api/adapters with framework: "generic"',
        'Use any language — just call the REST API',
        'Poll /api/adapters for assignments or use SSE for push',
      ],
      exampleSnippet: `# Register your agent
curl -X POST http://localhost:3000/api/adapters \\
  -H "Content-Type: application/json" \\
  -H "x-api-key: YOUR_API_KEY" \\
  -d '{
    "framework": "generic",
    "action": "register",
    "payload": {
      "agentId": "my-agent-1",
      "name": "My Custom Agent",
      "metadata": { "version": "1.0" }
    }
  }'

# Send heartbeat
curl -X POST http://localhost:3000/api/adapters \\
  -H "Content-Type: application/json" \\
  -H "x-api-key: YOUR_API_KEY" \\
  -d '{
    "framework": "generic",
    "action": "heartbeat",
    "payload": { "agentId": "my-agent-1", "status": "online" }
  }'

# Get assignments
curl -X POST http://localhost:3000/api/adapters \\
  -H "Content-Type: application/json" \\
  -H "x-api-key: YOUR_API_KEY" \\
  -d '{
    "framework": "generic",
    "action": "assignments",
    "payload": { "agentId": "my-agent-1" }
  }'`,
    },
  },
  langgraph: {
    id: 'langgraph',
    label: 'LangGraph',
    description: 'LangChain\'s graph-based agent orchestration framework',
    docsUrl: 'https://langchain-ai.github.io/langgraph/',
    connection: {
      connectionMode: 'webhook',
      heartbeatInterval: 30,
      setupHints: [
        'Wrap your LangGraph graph with the MC adapter client',
        'Register nodes as capabilities for task routing',
        'Use checkpointers for durable state across MC task assignments',
      ],
      exampleSnippet: `import requests

MC_URL = "http://localhost:3000"
API_KEY = "YOUR_API_KEY"
HEADERS = {"Content-Type": "application/json", "x-api-key": API_KEY}

# Register your LangGraph agent
requests.post(f"{MC_URL}/api/adapters", headers=HEADERS, json={
    "framework": "langgraph",
    "action": "register",
    "payload": {
        "agentId": "langgraph-research-agent",
        "name": "Research Agent",
        "metadata": {
            "graph_type": "StateGraph",
            "nodes": ["research", "summarize", "review"],
            "checkpointer": "sqlite"
        }
    }
})

# After your graph completes a task:
requests.post(f"{MC_URL}/api/adapters", headers=HEADERS, json={
    "framework": "langgraph",
    "action": "report",
    "payload": {
        "taskId": "task-123",
        "agentId": "langgraph-research-agent",
        "progress": 100,
        "status": "completed",
        "output": {"summary": "Research complete", "sources": 12}
    }
})`,
    },
  },
  crewai: {
    id: 'crewai',
    label: 'CrewAI',
    description: 'Role-based multi-agent orchestration framework',
    docsUrl: 'https://docs.crewai.com/',
    connection: {
      connectionMode: 'webhook',
      heartbeatInterval: 30,
      setupHints: [
        'Register each CrewAI agent role as a separate MC agent',
        'Map Crew tasks to MC task assignments',
        'Use callbacks to report progress back to MC',
      ],
      exampleSnippet: `from crewai import Agent, Task, Crew
import requests

MC_URL = "http://localhost:3000"
HEADERS = {"Content-Type": "application/json", "x-api-key": "YOUR_API_KEY"}

def register_crew_agent(agent: Agent):
    """Register a CrewAI agent with Mission Control."""
    requests.post(f"{MC_URL}/api/adapters", headers=HEADERS, json={
        "framework": "crewai",
        "action": "register",
        "payload": {
            "agentId": f"crewai-{agent.role.lower().replace(' ', '-')}",
            "name": agent.role,
            "metadata": {
                "goal": agent.goal,
                "backstory": agent.backstory[:200],
                "tools": [t.name for t in (agent.tools or [])]
            }
        }
    })

def report_task_complete(agent_id: str, task_id: str, output: str):
    """Report task completion to Mission Control."""
    requests.post(f"{MC_URL}/api/adapters", headers=HEADERS, json={
        "framework": "crewai",
        "action": "report",
        "payload": {
            "taskId": task_id,
            "agentId": agent_id,
            "progress": 100,
            "status": "completed",
            "output": {"result": output}
        }
    })`,
    },
  },
  autogen: {
    id: 'autogen',
    label: 'AutoGen',
    description: 'Microsoft\'s multi-agent conversation framework',
    docsUrl: 'https://microsoft.github.io/autogen/',
    connection: {
      connectionMode: 'webhook',
      heartbeatInterval: 30,
      setupHints: [
        'Register each AutoGen AssistantAgent with MC',
        'Use message hooks to report conversation progress',
        'Map GroupChat rounds to MC task progress updates',
      ],
      exampleSnippet: `import requests
# AutoGen v0.4+ (ag2)
from autogen import AssistantAgent, UserProxyAgent

MC_URL = "http://localhost:3000"
HEADERS = {"Content-Type": "application/json", "x-api-key": "YOUR_API_KEY"}

def register_autogen_agent(agent_name: str, system_message: str):
    """Register an AutoGen agent with Mission Control."""
    requests.post(f"{MC_URL}/api/adapters", headers=HEADERS, json={
        "framework": "autogen",
        "action": "register",
        "payload": {
            "agentId": f"autogen-{agent_name.lower().replace(' ', '-')}",
            "name": agent_name,
            "metadata": {
                "type": "AssistantAgent",
                "system_message_preview": system_message[:200]
            }
        }
    })

# Register your agents
register_autogen_agent("Coder", "You are a coding assistant...")
register_autogen_agent("Reviewer", "You review code for bugs...")`,
    },
  },
  'claude-sdk': {
    id: 'claude-sdk',
    label: 'Claude Agent SDK',
    description: 'Anthropic\'s native agent SDK for building Claude-powered agents',
    docsUrl: 'https://docs.anthropic.com/en/docs/agents/agent-sdk',
    connection: {
      connectionMode: 'webhook',
      heartbeatInterval: 30,
      setupHints: [
        'Register your Claude Agent SDK agent after initialization',
        'Use tool callbacks to report progress to MC',
        'Map agent turns to MC task progress updates',
      ],
      exampleSnippet: `import Anthropic from "@anthropic-ai/sdk";

const MC_URL = "http://localhost:3000";
const HEADERS = { "Content-Type": "application/json", "x-api-key": "YOUR_API_KEY" };

// Register your Claude SDK agent
await fetch(\`\${MC_URL}/api/adapters\`, {
  method: "POST",
  headers: HEADERS,
  body: JSON.stringify({
    framework: "claude-sdk",
    action: "register",
    payload: {
      agentId: "claude-agent-1",
      name: "Claude Development Agent",
      metadata: {
        model: "claude-sonnet-4-20250514",
        tools: ["computer", "text_editor", "bash"]
      }
    }
  })
});

// Report task completion
await fetch(\`\${MC_URL}/api/adapters\`, {
  method: "POST",
  headers: HEADERS,
  body: JSON.stringify({
    framework: "claude-sdk",
    action: "report",
    payload: {
      taskId: "task-456",
      agentId: "claude-agent-1",
      progress: 100,
      status: "completed",
      output: { files_changed: 3, tests_passed: true }
    }
  })
});`,
    },
  },
}

// ─── Universal Template Archetypes ──────────────────────────────────────────

export interface UniversalTemplate {
  type: string
  label: string
  description: string
  emoji: string
  /** Which frameworks this template supports */
  frameworks: string[]
  /** Role-based capabilities (framework-agnostic) */
  capabilities: string[]
  /** The OpenClaw template to use when framework is openclaw */
  openclawTemplateType?: string
}

/**
 * Universal templates that work across all frameworks.
 * These describe WHAT the agent does, not HOW it's configured.
 * Framework-specific config is resolved at creation time.
 */
export const UNIVERSAL_TEMPLATES: UniversalTemplate[] = [
  {
    type: 'orchestrator',
    label: 'Orchestrator',
    description: 'Coordinates other agents, routes tasks, and manages workflows. Full access.',
    emoji: '\ud83e\udded',
    frameworks: ['openclaw', 'generic', 'langgraph', 'crewai', 'autogen', 'claude-sdk'],
    capabilities: ['task_routing', 'agent_management', 'workflow_control', 'full_access'],
    openclawTemplateType: 'orchestrator',
  },
  {
    type: 'developer',
    label: 'Developer',
    description: 'Writes and edits code, runs builds and tests. Read-write workspace access.',
    emoji: '\ud83d\udee0\ufe0f',
    frameworks: ['openclaw', 'generic', 'langgraph', 'crewai', 'autogen', 'claude-sdk'],
    capabilities: ['code_write', 'code_execute', 'testing', 'debugging'],
    openclawTemplateType: 'developer',
  },
  {
    type: 'reviewer',
    label: 'Reviewer / QA',
    description: 'Reviews code and validates quality. Read-only access, lightweight model.',
    emoji: '\ud83d\udd2c',
    frameworks: ['openclaw', 'generic', 'langgraph', 'crewai', 'autogen', 'claude-sdk'],
    capabilities: ['code_read', 'quality_review', 'security_audit'],
    openclawTemplateType: 'reviewer',
  },
  {
    type: 'researcher',
    label: 'Researcher',
    description: 'Browses the web and gathers information. No code execution.',
    emoji: '\ud83d\udd0d',
    frameworks: ['openclaw', 'generic', 'langgraph', 'crewai', 'autogen', 'claude-sdk'],
    capabilities: ['web_browse', 'data_gathering', 'summarization'],
    openclawTemplateType: 'researcher',
  },
  {
    type: 'content-creator',
    label: 'Content Creator',
    description: 'Generates and edits written content. No code execution or browsing.',
    emoji: '\u270f\ufe0f',
    frameworks: ['openclaw', 'generic', 'langgraph', 'crewai', 'autogen', 'claude-sdk'],
    capabilities: ['content_write', 'content_edit'],
    openclawTemplateType: 'content-creator',
  },
  {
    type: 'security-auditor',
    label: 'Security Auditor',
    description: 'Scans for vulnerabilities. Read-only with shell access for scanning tools.',
    emoji: '\ud83d\udee1\ufe0f',
    frameworks: ['openclaw', 'generic', 'langgraph', 'crewai', 'autogen', 'claude-sdk'],
    capabilities: ['code_read', 'shell_execute', 'security_scan'],
    openclawTemplateType: 'security-auditor',
  },
]

// ─── Template Resolution ────────────────────────────────────────────────────

/**
 * Get a universal template by type.
 */
export function getUniversalTemplate(type: string): UniversalTemplate | undefined {
  return UNIVERSAL_TEMPLATES.find(t => t.type === type)
}

/**
 * List templates available for a specific framework.
 */
export function getTemplatesForFramework(framework: string): UniversalTemplate[] {
  return UNIVERSAL_TEMPLATES.filter(t => t.frameworks.includes(framework))
}

/**
 * Get framework connection info.
 */
export function getFrameworkInfo(framework: string): FrameworkInfo | undefined {
  return FRAMEWORK_REGISTRY[framework]
}

/**
 * List all supported frameworks.
 */
export function listFrameworks(): FrameworkInfo[] {
  return Object.values(FRAMEWORK_REGISTRY)
}

/**
 * Resolve a universal template to its OpenClaw-specific config (if applicable).
 * For non-OpenClaw frameworks, returns the universal template metadata
 * since config is managed externally by the framework.
 */
export function resolveTemplateConfig(
  universalType: string,
  framework: string
): { template?: AgentTemplate; universal: UniversalTemplate } | undefined {
  const universal = getUniversalTemplate(universalType)
  if (!universal) return undefined
  if (!universal.frameworks.includes(framework)) return undefined

  if (framework === 'openclaw' && universal.openclawTemplateType) {
    const template = AGENT_TEMPLATES.find(t => t.type === universal.openclawTemplateType)
    return { template, universal }
  }

  return { universal }
}
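The resolution flow ties the two registries together: look up the archetype, reject unsupported frameworks, and only OpenClaw gets a concrete gateway config. A self-contained toy model of that decision (miniature stand-in types; the real code uses the exports above):

```typescript
// Miniature stand-in for UniversalTemplate, for illustration only.
interface MiniTemplate { type: string; frameworks: string[]; openclawTemplateType?: string }

const templates: MiniTemplate[] = [
  { type: 'developer', frameworks: ['openclaw', 'generic', 'langgraph'], openclawTemplateType: 'developer' },
  { type: 'reviewer', frameworks: ['openclaw', 'generic'] },
]

// Mirrors resolveTemplateConfig: unknown type or unsupported framework → undefined;
// only openclaw resolves a concrete template config.
function resolve(type: string, framework: string) {
  const universal = templates.find(t => t.type === type)
  if (!universal || !universal.frameworks.includes(framework)) return undefined
  return { universal, usesOpenclawConfig: framework === 'openclaw' && !!universal.openclawTemplateType }
}

const a = resolve('developer', 'langgraph') // supported: framework manages its own config
const b = resolve('reviewer', 'langgraph')  // undefined: reviewer not offered for langgraph here
```

The same archetype thus means the same role everywhere, while only the OpenClaw branch carries provisioning details.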
@@ -1284,6 +1284,42 @@ const migrations: Migration[] = [
        update.run(hashed, row.id)
      }
    }
  },
  {
    id: '044_spawn_history',
    up(db: Database.Database) {
      db.exec([
        `CREATE TABLE IF NOT EXISTS spawn_history (`,
        `  id INTEGER PRIMARY KEY AUTOINCREMENT,`,
        `  agent_id INTEGER,`,
        `  agent_name TEXT NOT NULL,`,
        `  spawn_type TEXT NOT NULL DEFAULT 'claude-code',`,
        `  session_id TEXT,`,
        `  trigger TEXT,`,
        `  status TEXT NOT NULL DEFAULT 'started',`,
        `  exit_code INTEGER,`,
        `  error TEXT,`,
        `  duration_ms INTEGER,`,
        `  workspace_id INTEGER NOT NULL DEFAULT 1,`,
        `  created_at INTEGER NOT NULL DEFAULT (unixepoch()),`,
        `  finished_at INTEGER,`,
        `  FOREIGN KEY (agent_id) REFERENCES agents(id) ON DELETE SET NULL`,
        `)`,
      ].join('\n'))
      db.exec(`CREATE INDEX IF NOT EXISTS idx_spawn_history_agent ON spawn_history(agent_name)`)
      db.exec(`CREATE INDEX IF NOT EXISTS idx_spawn_history_created ON spawn_history(created_at)`)
      db.exec(`CREATE INDEX IF NOT EXISTS idx_spawn_history_status ON spawn_history(status)`)
    }
  },
  {
    id: '045_task_dispatch_attempts',
    up(db: Database.Database) {
      const cols = db.prepare(`PRAGMA table_info(tasks)`).all() as Array<{ name: string }>
      if (!cols.some(c => c.name === 'dispatch_attempts')) {
        db.exec(`ALTER TABLE tasks ADD COLUMN dispatch_attempts INTEGER NOT NULL DEFAULT 0`)
      }
      db.exec(`CREATE INDEX IF NOT EXISTS idx_tasks_stale_inprogress ON tasks(status, updated_at) WHERE status = 'in_progress'`)
    }
  }
]
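Migration 045 backs the stale-task requeue pass added in this PR: the partial index makes "find `in_progress` tasks untouched since a cutoff" cheap, and `dispatch_attempts` records how often a task bounced back. A hypothetical in-memory model of that pass (the real logic lives in `task-dispatch.ts`; the threshold and names here are assumptions):

```typescript
interface TaskRow { id: number; status: string; updated_at: number; dispatch_attempts: number }

// Assumed staleness cutoff for illustration; the shipped value may differ.
const STALE_MS = 30 * 60 * 1000

// In-progress tasks untouched for longer than the cutoff go back to the queue,
// incrementing dispatch_attempts so repeat offenders can be spotted.
function requeueStale(tasks: TaskRow[], now: number): TaskRow[] {
  return tasks.map(t =>
    t.status === 'in_progress' && now - t.updated_at > STALE_MS
      ? { ...t, status: 'pending', dispatch_attempts: t.dispatch_attempts + 1 }
      : t
  )
}

const now = 10_000_000
const out = requeueStale([
  { id: 1, status: 'in_progress', updated_at: now - STALE_MS - 1, dispatch_attempts: 0 }, // stale
  { id: 2, status: 'in_progress', updated_at: now - 1_000, dispatch_attempts: 0 },        // fresh
], now)
```

In SQL terms this corresponds to an `UPDATE tasks SET status = 'pending', dispatch_attempts = dispatch_attempts + 1 WHERE status = 'in_progress' AND updated_at < ?`, which is exactly the shape `idx_tasks_stale_inprogress` accelerates.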
@@ -71,8 +71,9 @@ export function createRateLimiter(options: RateLimiterOptions) {
  if (cleanupInterval.unref) cleanupInterval.unref()

  return function checkRateLimit(request: Request): NextResponse | null {
    // Allow disabling non-critical rate limiting for E2E tests (never in production)
    if (process.env.MC_DISABLE_RATE_LIMIT === '1' && !options.critical && process.env.NODE_ENV !== 'production') return null
    // Allow disabling non-critical rate limiting for E2E tests
    // In CI, standalone server runs with NODE_ENV=production but needs rate limit bypass
    if (process.env.MC_DISABLE_RATE_LIMIT === '1' && !options.critical && (process.env.NODE_ENV !== 'production' || process.env.MISSION_CONTROL_TEST_MODE === '1')) return null
    const ip = extractClientIp(request)
    const now = Date.now()
    const entry = store.get(ip)
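The bypass condition in the new guard is identical in both limiters. Extracted as a pure predicate it reads as follows (a sketch for clarity, not the shipped code):

```typescript
// Bypass only when explicitly requested, never for critical limiters, and only
// outside production unless the CI test-mode flag is also set.
function shouldBypassRateLimit(env: Record<string, string | undefined>, critical: boolean): boolean {
  if (env.MC_DISABLE_RATE_LIMIT !== '1' || critical) return false
  return env.NODE_ENV !== 'production' || env.MISSION_CONTROL_TEST_MODE === '1'
}
```

The `MISSION_CONTROL_TEST_MODE === '1'` clause is the actual change in this hunk: CI runs the standalone server with `NODE_ENV=production`, so the old production-only check blocked E2E runs.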
@@ -143,7 +144,7 @@ export function createAgentRateLimiter(options: RateLimiterOptions) {
  if (cleanupInterval.unref) cleanupInterval.unref()

  return function checkAgentRateLimit(request: Request): NextResponse | null {
    if (process.env.MC_DISABLE_RATE_LIMIT === '1' && !options.critical && process.env.NODE_ENV !== 'production') return null
    if (process.env.MC_DISABLE_RATE_LIMIT === '1' && !options.critical && (process.env.NODE_ENV !== 'production' || process.env.MISSION_CONTROL_TEST_MODE === '1')) return null

    const agentName = (request.headers.get('x-agent-name') || '').trim()
    const key = agentName || `ip:${extractClientIp(request)}`
|
||||
|
|
|
|||
|
|
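The changed condition can be read as a pure predicate over three environment inputs plus the limiter's `critical` flag: the bypass only fires when explicitly requested, never for critical limiters, and in production only when the test-mode escape hatch is also set. A sketch with illustrative names (not the module's exports):

```typescript
// Environment inputs the bypass condition reads.
interface BypassEnv {
  MC_DISABLE_RATE_LIMIT?: string
  NODE_ENV?: string
  MISSION_CONTROL_TEST_MODE?: string
}

// Mirrors the condition in both checkRateLimit and checkAgentRateLimit:
// bypass requested, limiter not critical, and either non-production
// or the CI standalone server's explicit test-mode flag.
function shouldBypassRateLimit(env: BypassEnv, critical: boolean): boolean {
  return env.MC_DISABLE_RATE_LIMIT === '1'
    && !critical
    && (env.NODE_ENV !== 'production' || env.MISSION_CONTROL_TEST_MODE === '1')
}
```

The extra `MISSION_CONTROL_TEST_MODE` clause is what lets the CI standalone server (which runs with `NODE_ENV=production`) opt in to the bypass without weakening real production deployments.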
@@ -10,7 +10,7 @@ import { pruneGatewaySessionsOlderThan, getAgentLiveStatuses } from './sessions'
 import { eventBus } from './event-bus'
 import { syncSkillsFromDisk } from './skill-sync'
 import { syncLocalAgents } from './local-agent-sync'
-import { dispatchAssignedTasks, runAegisReviews } from './task-dispatch'
+import { dispatchAssignedTasks, runAegisReviews, requeueStaleTasks } from './task-dispatch'
 import { spawnRecurringTasks } from './recurring-tasks'

 const BACKUP_DIR = join(dirname(config.dbPath), 'backups')
@@ -389,6 +389,15 @@ export function initScheduler() {
     running: false,
   })

+  tasks.set('stale_task_requeue', {
+    name: 'Stale Task Requeue',
+    intervalMs: TICK_MS, // Every 60s — check for stale in_progress tasks
+    lastRun: null,
+    nextRun: now + 25_000, // First check 25s after startup
+    enabled: true,
+    running: false,
+  })
+
   // Start the tick loop
   tickInterval = setInterval(tick, TICK_MS)
   logger.info('Scheduler initialized - backup at ~3AM, cleanup at ~4AM, heartbeat every 5m, webhook/claude/skill/local-agent/gateway-agent sync every 60s')
@@ -423,8 +432,9 @@ async function tick() {
       : id === 'task_dispatch' ? 'general.task_dispatch'
       : id === 'aegis_review' ? 'general.aegis_review'
       : id === 'recurring_task_spawn' ? 'general.recurring_task_spawn'
+      : id === 'stale_task_requeue' ? 'general.stale_task_requeue'
       : 'general.agent_heartbeat'
-    const defaultEnabled = id === 'agent_heartbeat' || id === 'webhook_retry' || id === 'claude_session_scan' || id === 'skill_sync' || id === 'local_agent_sync' || id === 'gateway_agent_sync' || id === 'task_dispatch' || id === 'aegis_review' || id === 'recurring_task_spawn'
+    const defaultEnabled = id === 'agent_heartbeat' || id === 'webhook_retry' || id === 'claude_session_scan' || id === 'skill_sync' || id === 'local_agent_sync' || id === 'gateway_agent_sync' || id === 'task_dispatch' || id === 'aegis_review' || id === 'recurring_task_spawn' || id === 'stale_task_requeue'
     if (!isSettingEnabled(settingKey, defaultEnabled)) continue

     task.running = true
@@ -442,6 +452,7 @@ async function tick() {
         : id === 'task_dispatch' ? await dispatchAssignedTasks()
         : id === 'aegis_review' ? await runAegisReviews()
         : id === 'recurring_task_spawn' ? await spawnRecurringTasks()
+        : id === 'stale_task_requeue' ? await requeueStaleTasks()
         : await runCleanup()
       task.lastResult = { ...result, timestamp: now }
     } catch (err: any) {
@@ -477,8 +488,9 @@ export function getSchedulerStatus() {
       : id === 'task_dispatch' ? 'general.task_dispatch'
       : id === 'aegis_review' ? 'general.aegis_review'
       : id === 'recurring_task_spawn' ? 'general.recurring_task_spawn'
+      : id === 'stale_task_requeue' ? 'general.stale_task_requeue'
       : 'general.agent_heartbeat'
-    const defaultEnabled = id === 'agent_heartbeat' || id === 'webhook_retry' || id === 'claude_session_scan' || id === 'skill_sync' || id === 'local_agent_sync' || id === 'gateway_agent_sync' || id === 'task_dispatch' || id === 'aegis_review' || id === 'recurring_task_spawn'
+    const defaultEnabled = id === 'agent_heartbeat' || id === 'webhook_retry' || id === 'claude_session_scan' || id === 'skill_sync' || id === 'local_agent_sync' || id === 'gateway_agent_sync' || id === 'task_dispatch' || id === 'aegis_review' || id === 'recurring_task_spawn' || id === 'stale_task_requeue'
     result.push({
       id,
       name: task.name,
@@ -506,6 +518,7 @@ export async function triggerTask(taskId: string): Promise<{ ok: boolean; messag
   if (taskId === 'task_dispatch') return dispatchAssignedTasks()
   if (taskId === 'aegis_review') return runAegisReviews()
   if (taskId === 'recurring_task_spawn') return spawnRecurringTasks()
+  if (taskId === 'stale_task_requeue') return requeueStaleTasks()
   return { ok: false, message: `Unknown task: ${taskId}` }
 }
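The scheduler entries above all follow the same tick-loop model: every `TICK_MS`, collect tasks whose `nextRun` has passed, run them, and push `nextRun` forward by `intervalMs`. A minimal sketch of that selection logic, with illustrative names (not the actual module's API):

```typescript
// Per-task scheduling state, mirroring the fields registered via tasks.set().
interface ScheduledTask {
  intervalMs: number
  nextRun: number
  enabled: boolean
  running: boolean
}

// Which tasks are due at `now`: enabled, not already running, deadline passed.
function dueTasks(tasks: Map<string, ScheduledTask>, now: number): string[] {
  const due: string[] = []
  for (const [id, t] of tasks) {
    if (t.enabled && !t.running && now >= t.nextRun) due.push(id)
  }
  return due
}

// After a run, push the deadline forward by one interval.
function markRan(t: ScheduledTask, now: number): void {
  t.nextRun = now + t.intervalMs
}
```

With this model, the `nextRun: now + 25_000` registration means the stale-task check first becomes due 25 seconds after startup, then every tick thereafter once its 60-second interval elapses.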
@@ -0,0 +1,135 @@
/**
 * Spawn History — durable persistence for agent spawn events.
 *
 * Replaces log-scraping fallback with DB-backed spawn tracking.
 * Every agent session spawn (claude-code, codex-cli, hermes) is recorded
 * with status, duration, and error details for diagnostics and attribution.
 */

import { getDatabase } from '@/lib/db'

export interface SpawnRecord {
  id: number
  agent_id: number | null
  agent_name: string
  spawn_type: string
  session_id: string | null
  trigger: string | null
  status: string
  exit_code: number | null
  error: string | null
  duration_ms: number | null
  workspace_id: number
  created_at: number
  finished_at: number | null
}

export function recordSpawnStart(input: {
  agentName: string
  agentId?: number
  spawnType?: string
  sessionId?: string
  trigger?: string
  workspaceId?: number
}): number {
  const db = getDatabase()
  const result = db.prepare(`
    INSERT INTO spawn_history (agent_name, agent_id, spawn_type, session_id, trigger, status, workspace_id)
    VALUES (?, ?, ?, ?, ?, 'started', ?)
  `).run(
    input.agentName,
    input.agentId ?? null,
    input.spawnType ?? 'claude-code',
    input.sessionId ?? null,
    input.trigger ?? null,
    input.workspaceId ?? 1,
  )
  return result.lastInsertRowid as number
}

export function recordSpawnFinish(id: number, input: {
  status: 'completed' | 'failed' | 'terminated'
  exitCode?: number
  error?: string
  durationMs?: number
}): void {
  const db = getDatabase()
  db.prepare(`
    UPDATE spawn_history
    SET status = ?, exit_code = ?, error = ?, duration_ms = ?, finished_at = unixepoch()
    WHERE id = ?
  `).run(
    input.status,
    input.exitCode ?? null,
    input.error ?? null,
    input.durationMs ?? null,
    id,
  )
}

export function getSpawnHistory(agentName: string, opts?: {
  hours?: number
  limit?: number
  workspaceId?: number
}): SpawnRecord[] {
  const db = getDatabase()
  const hours = opts?.hours ?? 24
  const limit = opts?.limit ?? 50
  const since = Math.floor(Date.now() / 1000) - hours * 3600

  return db.prepare(`
    SELECT * FROM spawn_history
    WHERE agent_name = ? AND workspace_id = ? AND created_at > ?
    ORDER BY created_at DESC
    LIMIT ?
  `).all(agentName, opts?.workspaceId ?? 1, since, limit) as SpawnRecord[]
}

export function getSpawnStats(opts?: {
  hours?: number
  workspaceId?: number
}): {
  total: number
  completed: number
  failed: number
  avgDurationMs: number
  byAgent: Array<{ agent_name: string; count: number; failures: number }>
} {
  const db = getDatabase()
  const hours = opts?.hours ?? 24
  const since = Math.floor(Date.now() / 1000) - hours * 3600
  const wsId = opts?.workspaceId ?? 1

  const totals = db.prepare(`
    SELECT
      COUNT(*) as total,
      SUM(CASE WHEN status = 'completed' THEN 1 ELSE 0 END) as completed,
      SUM(CASE WHEN status = 'failed' THEN 1 ELSE 0 END) as failed,
      AVG(duration_ms) as avg_duration
    FROM spawn_history
    WHERE workspace_id = ? AND created_at > ?
  `).get(wsId, since) as any

  const byAgent = db.prepare(`
    SELECT
      agent_name,
      COUNT(*) as count,
      SUM(CASE WHEN status = 'failed' THEN 1 ELSE 0 END) as failures
    FROM spawn_history
    WHERE workspace_id = ? AND created_at > ?
    GROUP BY agent_name
    ORDER BY count DESC
  `).all(wsId, since) as any[]

  return {
    total: totals?.total ?? 0,
    completed: totals?.completed ?? 0,
    failed: totals?.failed ?? 0,
    avgDurationMs: Math.round(totals?.avg_duration ?? 0),
    byAgent: byAgent.map((row: any) => ({
      agent_name: row.agent_name,
      count: row.count,
      failures: row.failures,
    })),
  }
}
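`getSpawnStats` does its aggregation in SQL; the same rollup in plain TypeScript shows the output shape. This is a sketch over in-memory records, not the DB-backed implementation; note that SQL's `AVG(duration_ms)` skips NULL values, mirrored here by filtering nulls before averaging:

```typescript
// Minimal slice of SpawnRecord needed for the aggregation.
interface SpawnRow { agent_name: string; status: string; duration_ms: number | null }

// In-memory equivalent of getSpawnStats's two SQL queries.
function aggregateSpawns(rows: SpawnRow[]) {
  const total = rows.length
  const completed = rows.filter(r => r.status === 'completed').length
  const failed = rows.filter(r => r.status === 'failed').length
  // AVG(duration_ms) in SQL ignores NULLs, so filter before averaging.
  const durations = rows.map(r => r.duration_ms).filter((d): d is number => d !== null)
  const avgDurationMs = durations.length
    ? Math.round(durations.reduce((a, b) => a + b, 0) / durations.length)
    : 0
  // GROUP BY agent_name with per-agent failure counts.
  const byAgent = new Map<string, { count: number; failures: number }>()
  for (const r of rows) {
    const entry = byAgent.get(r.agent_name) ?? { count: 0, failures: 0 }
    entry.count++
    if (r.status === 'failed') entry.failures++
    byAgent.set(r.agent_name, entry)
  }
  return { total, completed, failed, avgDurationMs, byAgent }
}
```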
@@ -306,21 +306,43 @@ export async function runAegisReviews(): Promise<{ ok: boolean; message: string
           previous_status: 'quality_review',
         })
       } else {
-        // Rejected: push back to in_progress with feedback
-        db.prepare('UPDATE tasks SET status = ?, error_message = ?, updated_at = ? WHERE id = ?')
-          .run('in_progress', `Aegis rejected: ${verdict.notes}`, Math.floor(Date.now() / 1000), task.id)
-
-        eventBus.broadcast('task.status_changed', {
-          id: task.id,
-          status: 'in_progress',
-          previous_status: 'quality_review',
-        })
+        // Rejected: check dispatch_attempts to decide next status
+        const now = Math.floor(Date.now() / 1000)
+        const currentAttempts = (db.prepare('SELECT dispatch_attempts FROM tasks WHERE id = ?').get(task.id) as { dispatch_attempts: number } | undefined)?.dispatch_attempts ?? 0
+        const newAttempts = currentAttempts + 1
+        const maxAegisRetries = 3
+
+        if (newAttempts >= maxAegisRetries) {
+          // Too many rejections — move to failed
+          db.prepare('UPDATE tasks SET status = ?, error_message = ?, dispatch_attempts = ?, updated_at = ? WHERE id = ?')
+            .run('failed', `Aegis rejected ${newAttempts} times. Last: ${verdict.notes}`, newAttempts, now, task.id)
+
+          eventBus.broadcast('task.status_changed', {
+            id: task.id,
+            status: 'failed',
+            previous_status: 'quality_review',
+            error_message: `Aegis rejected ${newAttempts} times`,
+            reason: 'max_aegis_retries_exceeded',
+          })
+        } else {
+          // Requeue to assigned for re-dispatch with feedback
+          db.prepare('UPDATE tasks SET status = ?, error_message = ?, dispatch_attempts = ?, updated_at = ? WHERE id = ?')
+            .run('assigned', `Aegis rejected: ${verdict.notes}`, newAttempts, now, task.id)
+
+          eventBus.broadcast('task.status_changed', {
+            id: task.id,
+            status: 'assigned',
+            previous_status: 'quality_review',
+            error_message: `Aegis rejected: ${verdict.notes}`,
+            reason: 'aegis_rejection',
+          })
+        }

         // Add rejection as a comment so the agent sees it on next dispatch
         db.prepare(`
           INSERT INTO comments (task_id, author, content, created_at, workspace_id)
           VALUES (?, 'aegis', ?, ?, ?)
-        `).run(task.id, `Quality Review Rejected:\n${verdict.notes}`, Math.floor(Date.now() / 1000), task.workspace_id)
+        `).run(task.id, `Quality Review Rejected (attempt ${newAttempts}/${maxAegisRetries}):\n${verdict.notes}`, now, task.workspace_id)
       }

       db_helpers.logActivity(
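The rejection branch reduces to a bounded-retry decision: each rejection bumps the counter, the task is requeued to `assigned` while under the cap, and at `maxAegisRetries` it fails instead. Sketched as a pure function with illustrative names:

```typescript
// Next task status after an Aegis rejection, given the attempts already
// recorded. Mirrors the branch above: bump the counter, fail at the cap,
// otherwise requeue for re-dispatch.
function nextStatusAfterRejection(
  currentAttempts: number,
  maxRetries = 3,
): { status: 'assigned' | 'failed'; attempts: number } {
  const attempts = currentAttempts + 1
  return attempts >= maxRetries
    ? { status: 'failed', attempts }
    : { status: 'assigned', attempts }
}
```

Keeping the decision separate from the SQL makes the cap easy to reason about: with the default of 3, a task survives two rejections and fails on the third.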
@@ -363,6 +385,86 @@ export async function runAegisReviews(): Promise<{ ok: boolean; message: string
   }
 }

+/**
+ * Requeue stale tasks stuck in 'in_progress' whose assigned agent is offline.
+ * Prevents tasks from being permanently stuck when agents crash or disconnect.
+ */
+export async function requeueStaleTasks(): Promise<{ ok: boolean; message: string }> {
+  const db = getDatabase()
+  const now = Math.floor(Date.now() / 1000)
+  const staleThreshold = now - 10 * 60 // 10 minutes
+  const maxDispatchRetries = 5
+
+  const staleTasks = db.prepare(`
+    SELECT t.id, t.title, t.assigned_to, t.dispatch_attempts, t.workspace_id,
+           a.status as agent_status, a.last_seen as agent_last_seen
+    FROM tasks t
+    LEFT JOIN agents a ON a.name = t.assigned_to AND a.workspace_id = t.workspace_id
+    WHERE t.status = 'in_progress'
+      AND t.updated_at < ?
+  `).all(staleThreshold) as Array<{
+    id: number; title: string; assigned_to: string | null; dispatch_attempts: number
+    workspace_id: number; agent_status: string | null; agent_last_seen: number | null
+  }>
+
+  if (staleTasks.length === 0) {
+    return { ok: true, message: 'No stale tasks found' }
+  }
+
+  let requeued = 0
+  let failed = 0
+
+  for (const task of staleTasks) {
+    // Only requeue if the agent is offline or unknown
+    const agentOffline = !task.agent_status || task.agent_status === 'offline'
+    if (!agentOffline) continue
+
+    const newAttempts = (task.dispatch_attempts ?? 0) + 1
+
+    if (newAttempts >= maxDispatchRetries) {
+      db.prepare('UPDATE tasks SET status = ?, error_message = ?, dispatch_attempts = ?, updated_at = ? WHERE id = ?')
+        .run('failed', `Task stuck in_progress ${newAttempts} times — agent "${task.assigned_to}" offline. Moved to failed.`, newAttempts, now, task.id)
+
+      eventBus.broadcast('task.status_changed', {
+        id: task.id,
+        status: 'failed',
+        previous_status: 'in_progress',
+        error_message: `Stale task — agent offline after ${newAttempts} attempts`,
+        reason: 'stale_task_max_retries',
+      })
+
+      failed++
+    } else {
+      db.prepare('UPDATE tasks SET status = ?, error_message = ?, dispatch_attempts = ?, updated_at = ? WHERE id = ?')
+        .run('assigned', `Requeued: agent "${task.assigned_to}" went offline while task was in_progress`, newAttempts, now, task.id)
+
+      // Add a comment explaining the requeue
+      db.prepare(`
+        INSERT INTO comments (task_id, author, content, created_at, workspace_id)
+        VALUES (?, 'scheduler', ?, ?, ?)
+      `).run(task.id, `Task requeued (attempt ${newAttempts}/${maxDispatchRetries}): agent "${task.assigned_to}" went offline while task was in_progress.`, now, task.workspace_id)
+
+      eventBus.broadcast('task.status_changed', {
+        id: task.id,
+        status: 'assigned',
+        previous_status: 'in_progress',
+        error_message: `Agent "${task.assigned_to}" went offline`,
+        reason: 'stale_task_requeue',
+      })
+
+      requeued++
+    }
+  }
+
+  const total = requeued + failed
+  return {
+    ok: true,
+    message: total === 0
+      ? `Found ${staleTasks.length} stale task(s) but agents still online`
+      : `Requeued ${requeued}, failed ${failed} of ${staleTasks.length} stale task(s)`,
+  }
+}
+
 export async function dispatchAssignedTasks(): Promise<{ ok: boolean; message: string }> {
   const db = getDatabase()
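The requeue criteria in `requeueStaleTasks` combine three checks: the task is `in_progress`, it has not been updated within the 10-minute threshold, and its agent is offline or unknown (the `LEFT JOIN` yields a null `agent_status` when no agent row matches). As a pure predicate with illustrative names:

```typescript
// Minimal slice of the joined row the staleness check needs.
interface TaskRow { status: string; updated_at: number }

// True when the task should be considered stuck: in_progress, untouched
// for `staleSecs`, and assigned to an offline or unknown agent.
function isStaleInProgress(
  task: TaskRow,
  agentStatus: string | null,
  now: number,
  staleSecs = 600, // 10 minutes, matching staleThreshold above
): boolean {
  const agentOffline = !agentStatus || agentStatus === 'offline'
  return task.status === 'in_progress'
    && task.updated_at < now - staleSecs
    && agentOffline
}
```

Treating an unknown agent as offline is deliberate: a task assigned to a deleted or never-registered agent would otherwise sit in `in_progress` forever.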
@@ -559,15 +661,36 @@ export async function dispatchAssignedTasks(): Promise<{ ok: boolean; message: s
       const errorMsg = err.message || 'Unknown error'
       logger.error({ taskId: task.id, agent: task.agent_name, err }, 'Task dispatch failed')

-      // Revert to assigned so it can be retried on the next tick
-      db.prepare('UPDATE tasks SET status = ?, error_message = ?, updated_at = ? WHERE id = ?')
-        .run('assigned', errorMsg.substring(0, 5000), Math.floor(Date.now() / 1000), task.id)
-
-      eventBus.broadcast('task.status_changed', {
-        id: task.id,
-        status: 'assigned',
-        previous_status: 'in_progress',
-      })
+      // Increment dispatch_attempts and decide next status
+      const currentAttempts = (db.prepare('SELECT dispatch_attempts FROM tasks WHERE id = ?').get(task.id) as { dispatch_attempts: number } | undefined)?.dispatch_attempts ?? 0
+      const newAttempts = currentAttempts + 1
+      const maxDispatchRetries = 5
+
+      if (newAttempts >= maxDispatchRetries) {
+        // Too many failures — move to failed
+        db.prepare('UPDATE tasks SET status = ?, error_message = ?, dispatch_attempts = ?, updated_at = ? WHERE id = ?')
+          .run('failed', `Dispatch failed ${newAttempts} times. Last: ${errorMsg.substring(0, 5000)}`, newAttempts, Math.floor(Date.now() / 1000), task.id)
+
+        eventBus.broadcast('task.status_changed', {
+          id: task.id,
+          status: 'failed',
+          previous_status: 'in_progress',
+          error_message: `Dispatch failed ${newAttempts} times`,
+          reason: 'max_dispatch_retries_exceeded',
+        })
+      } else {
+        // Revert to assigned so it can be retried on the next tick
+        db.prepare('UPDATE tasks SET status = ?, error_message = ?, dispatch_attempts = ?, updated_at = ? WHERE id = ?')
+          .run('assigned', errorMsg.substring(0, 5000), newAttempts, Math.floor(Date.now() / 1000), task.id)
+
+        eventBus.broadcast('task.status_changed', {
+          id: task.id,
+          status: 'assigned',
+          previous_status: 'in_progress',
+          error_message: errorMsg.substring(0, 500),
+          reason: 'dispatch_failed',
+        })
+      }

       db_helpers.logActivity(
         'task_dispatch_failed',
@@ -0,0 +1,207 @@
import { expect, test } from '@playwright/test'
import { execFile } from 'node:child_process'
import { promisify } from 'node:util'
import path from 'node:path'
import { API_KEY_HEADER, createTestAgent, deleteTestAgent, createTestTask, deleteTestTask } from './helpers'

const execFileAsync = promisify(execFile)

const CLI = path.resolve('scripts/mc-cli.cjs')
const BASE_URL = process.env.E2E_BASE_URL || 'http://127.0.0.1:3005'
const API_KEY = 'test-api-key-e2e-12345'

/** Run mc-cli command via execFile (no shell) and return parsed JSON output */
async function mc(...args: string[]): Promise<{ stdout: string; parsed: any; exitCode: number }> {
  try {
    const { stdout } = await execFileAsync('node', [CLI, ...args, '--json', '--url', BASE_URL, '--api-key', API_KEY], {
      timeout: 15000,
      env: { ...process.env, MC_URL: BASE_URL, MC_API_KEY: API_KEY },
    })
    let parsed: any
    try { parsed = JSON.parse(stdout) } catch { parsed = { raw: stdout } }
    return { stdout, parsed, exitCode: 0 }
  } catch (err: any) {
    const stdout = err.stdout || ''
    let parsed: any
    try { parsed = JSON.parse(stdout) } catch { parsed = { raw: stdout, stderr: err.stderr } }
    return { stdout, parsed, exitCode: err.code ?? 1 }
  }
}

test.describe('CLI Integration', () => {
  // --- Help & Usage ---

  test('--help shows usage and exits 0', async () => {
    const { stdout, exitCode } = await mc('--help')
    expect(exitCode).toBe(0)
    expect(stdout).toContain('Mission Control CLI')
    expect(stdout).toContain('agents')
    expect(stdout).toContain('tasks')
  })

  test('unknown group exits 2 with error', async () => {
    const { exitCode } = await mc('nonexistent', 'action')
    expect(exitCode).toBe(2)
  })

  test('missing required flag exits 2 with error message', async () => {
    const { exitCode, parsed } = await mc('agents', 'get')
    expect(exitCode).toBe(2)
    expect(parsed.error).toContain('--id')
  })

  // --- Status ---

  test('status health returns healthy', async () => {
    const { parsed, exitCode } = await mc('status', 'health')
    expect(exitCode).toBe(0)
    expect(parsed.data?.status || parsed.status).toBeDefined()
  })

  test('status overview returns system info', async () => {
    const { parsed, exitCode } = await mc('status', 'overview')
    expect(exitCode).toBe(0)
  })

  // --- Agents CRUD ---

  test.describe('agents', () => {
    const agentIds: number[] = []

    test.afterEach(async ({ request }) => {
      for (const id of agentIds.splice(0)) {
        await deleteTestAgent(request, id).catch(() => {})
      }
    })

    test('list returns array', async () => {
      const { parsed, exitCode } = await mc('agents', 'list')
      expect(exitCode).toBe(0)
      const data = parsed.data || parsed
      expect(data).toBeDefined()
    })

    test('get + heartbeat lifecycle', async ({ request }) => {
      const agent = await createTestAgent(request)
      agentIds.push(agent.id)

      // Get via CLI
      const { parsed: getResult, exitCode: getCode } = await mc('agents', 'get', '--id', String(agent.id))
      expect(getCode).toBe(0)
      const agentData = getResult.data?.agent || getResult.data || getResult
      expect(agentData).toBeDefined()

      // Heartbeat via CLI
      const { exitCode: hbCode } = await mc('agents', 'heartbeat', '--id', String(agent.id))
      expect(hbCode).toBe(0)
    })

    test('memory set and get work', async ({ request }) => {
      const agent = await createTestAgent(request)
      agentIds.push(agent.id)

      // Set memory — may succeed or fail depending on workspace state
      const { exitCode: setCode } = await mc('agents', 'memory', 'set', '--id', String(agent.id), '--content', 'CLI test memory')
      expect([0, 2, 6]).toContain(setCode)

      // Get memory
      const { exitCode: getCode } = await mc('agents', 'memory', 'get', '--id', String(agent.id))
      expect([0, 2, 6]).toContain(getCode)
    })

    test('attribution returns response', async ({ request }) => {
      const agent = await createTestAgent(request)
      agentIds.push(agent.id)

      // Attribution may return 403 for test API key depending on auth scope — accept 0 or 4
      const { exitCode } = await mc('agents', 'attribution', '--id', String(agent.id), '--hours', '1')
      expect([0, 4]).toContain(exitCode)
    })
  })

  // --- Tasks ---

  test.describe('tasks', () => {
    const taskIds: number[] = []

    test.afterEach(async ({ request }) => {
      for (const id of taskIds.splice(0)) {
        await deleteTestTask(request, id).catch(() => {})
      }
    })

    test('list returns data', async () => {
      const { exitCode } = await mc('tasks', 'list')
      expect(exitCode).toBe(0)
    })

    test('queue returns response', async () => {
      const { exitCode } = await mc('tasks', 'queue', '--agent', 'e2e-test-agent')
      expect(exitCode).toBe(0)
    })

    test('comments list/add lifecycle', async ({ request }) => {
      const task = await createTestTask(request)
      taskIds.push(task.id)

      // Add comment via CLI
      const { exitCode: addCode } = await mc('tasks', 'comments', 'add', '--id', String(task.id), '--content', 'CLI comment test')
      expect(addCode).toBe(0)

      // List comments via CLI
      const { parsed, exitCode: listCode } = await mc('tasks', 'comments', 'list', '--id', String(task.id))
      expect(listCode).toBe(0)
      const comments = parsed.data?.comments || parsed.comments || []
      expect(comments.length).toBeGreaterThanOrEqual(1)
    })
  })

  // --- Sessions ---

  test('sessions list returns response', async () => {
    // Sessions endpoint behavior depends on gateway availability
    const { exitCode } = await mc('sessions', 'list')
    expect(exitCode).toBeLessThanOrEqual(6)
  })

  // --- Tokens ---

  test('tokens stats returns data', async () => {
    const { exitCode } = await mc('tokens', 'stats')
    expect(exitCode).toBe(0)
  })

  test('tokens by-agent returns data', async () => {
    const { exitCode } = await mc('tokens', 'by-agent', '--days', '7')
    expect(exitCode).toBe(0)
  })

  // --- Skills ---

  test('skills list returns data', async () => {
    const { exitCode } = await mc('skills', 'list')
    expect(exitCode).toBe(0)
  })

  // --- Cron ---

  test('cron list returns response', async () => {
    // Cron may return error in test mode — accept 0, 2, or 6
    const { exitCode } = await mc('cron', 'list')
    expect([0, 2, 6]).toContain(exitCode)
  })

  // --- Connect ---

  test('connect list returns data', async () => {
    const { exitCode } = await mc('connect', 'list')
    expect(exitCode).toBe(0)
  })

  // --- Raw passthrough ---

  test('raw GET /api/status works', async () => {
    const { exitCode } = await mc('raw', '--method', 'GET', '--path', '/api/status?action=health')
    expect(exitCode).toBe(0)
  })
})
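The `mc()` helper above normalizes a rejected `execFile` call into the same `{ parsed, exitCode }` shape as a success, relying on Node attaching the exit status (`err.code`) and captured output to the rejection error. That normalization, factored out as a sketch (`ExecErr` is an illustrative subset of Node's error shape, not an exported type):

```typescript
// Illustrative subset of the error execFile rejects with when the child
// exits non-zero: exit status plus whatever it wrote to stdout/stderr.
interface ExecErr { code?: number; stdout?: string; stderr?: string }

// Mirror of the catch branch in mc(): prefer the CLI's JSON error envelope
// on stdout, fall back to a raw wrapper, default the exit code to 1.
function normalizeFailure(err: ExecErr): { parsed: any; exitCode: number } {
  const stdout = err.stdout || ''
  let parsed: any
  try { parsed = JSON.parse(stdout) } catch { parsed = { raw: stdout, stderr: err.stderr } }
  return { parsed, exitCode: err.code ?? 1 }
}
```

This is what lets the tests assert on specific exit codes (0, 2, 4, 6) and on `parsed.error` without caring whether the promise resolved or rejected.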
@@ -0,0 +1,253 @@
import { expect, test } from '@playwright/test'
import { execFile, spawn } from 'node:child_process'
import { promisify } from 'node:util'
import path from 'node:path'
import { createTestAgent, deleteTestAgent, createTestTask, deleteTestTask } from './helpers'

const MCP = path.resolve('scripts/mc-mcp-server.cjs')
const BASE_URL = process.env.E2E_BASE_URL || 'http://127.0.0.1:3005'
const API_KEY = 'test-api-key-e2e-12345'

/** Send JSON-RPC messages to the MCP server and collect responses */
async function mcpCall(messages: object[]): Promise<any[]> {
  return new Promise((resolve, reject) => {
    const child = spawn('node', [MCP], {
      env: { ...process.env, MC_URL: BASE_URL, MC_API_KEY: API_KEY },
      stdio: ['pipe', 'pipe', 'pipe'],
    })

    let stdout = ''
    child.stdout.on('data', (data: Buffer) => { stdout += data.toString() })

    let stderr = ''
    child.stderr.on('data', (data: Buffer) => { stderr += data.toString() })

    // Write all messages
    for (const msg of messages) {
      child.stdin.write(JSON.stringify(msg) + '\n')
    }
    child.stdin.end()

    const timer = setTimeout(() => {
      child.kill()
      reject(new Error(`MCP server timeout. stdout: ${stdout}, stderr: ${stderr}`))
    }, 15000)

    child.on('close', () => {
      clearTimeout(timer)
      const responses = stdout
        .split('\n')
        .filter(line => line.trim())
        .map(line => {
          try { return JSON.parse(line) } catch { return { raw: line } }
        })
      resolve(responses)
    })
  })
}

/** Send a single MCP JSON-RPC request and return the response */
async function mcpRequest(method: string, params: object = {}, id = 1): Promise<any> {
  const responses = await mcpCall([
    { jsonrpc: '2.0', id: 0, method: 'initialize', params: { protocolVersion: '2024-11-05', clientInfo: { name: 'test', version: '1.0' }, capabilities: {} } },
    { jsonrpc: '2.0', method: 'notifications/initialized' },
    { jsonrpc: '2.0', id, method, params },
  ])
  // Return the response matching our request id (skip initialize response)
  return responses.find(r => r.id === id) || responses[responses.length - 1]
}

/** Call an MCP tool and return the parsed content */
async function mcpTool(name: string, args: object = {}): Promise<{ content: any; isError?: boolean }> {
  const response = await mcpRequest('tools/call', { name, arguments: args }, 99)
  const text = response?.result?.content?.[0]?.text || ''
  let parsed: any
  try { parsed = JSON.parse(text) } catch { parsed = text }
  return {
    content: parsed,
    isError: response?.result?.isError || false,
  }
}
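The transport exercised here is newline-delimited JSON-RPC over stdio: `mcpCall` splits the server's stdout into frames and parses each one, falling back to a raw wrapper for non-JSON lines (startup logs, partial writes). That framing step in isolation, with an illustrative name:

```typescript
// Split newline-delimited JSON-RPC output into parsed frames, as the
// mcpCall helper does. Blank lines are dropped; unparseable lines are
// kept as { raw } so the caller can still inspect them.
function parseNdjsonFrames(stdout: string): any[] {
  return stdout
    .split('\n')
    .filter(line => line.trim())
    .map(line => {
      try { return JSON.parse(line) } catch { return { raw: line } }
    })
}
```

Because frames are matched back to requests by `id` (as `mcpRequest` does with `responses.find(r => r.id === id)`), order and interleaving on stdout do not matter as long as each response occupies its own line.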
test.describe('MCP Server Integration', () => {
  // --- Protocol ---

  test('initialize returns server info and capabilities', async () => {
    const responses = await mcpCall([
      { jsonrpc: '2.0', id: 1, method: 'initialize', params: { protocolVersion: '2024-11-05', clientInfo: { name: 'test', version: '1.0' }, capabilities: {} } },
    ])
    expect(responses).toHaveLength(1)
    expect(responses[0].result.serverInfo.name).toBe('mission-control')
    expect(responses[0].result.capabilities.tools).toBeDefined()
  })

  test('tools/list returns all tools with schemas', async () => {
    const response = await mcpRequest('tools/list')
    const tools = response.result.tools
    expect(tools.length).toBeGreaterThan(30)

    // Every tool should have name, description, and inputSchema
    for (const tool of tools) {
      expect(tool.name).toBeTruthy()
      expect(tool.description).toBeTruthy()
      expect(tool.inputSchema).toBeDefined()
      expect(tool.inputSchema.type).toBe('object')
    }

    // Check key tools exist
    const names = tools.map((t: any) => t.name)
    expect(names).toContain('mc_list_agents')
    expect(names).toContain('mc_poll_task_queue')
    expect(names).toContain('mc_heartbeat')
    expect(names).toContain('mc_read_memory')
    expect(names).toContain('mc_write_memory')
    expect(names).toContain('mc_add_comment')
    expect(names).toContain('mc_health')
  })

  test('unknown tool returns isError', async () => {
    const result = await mcpTool('mc_nonexistent', {})
    expect(result.isError).toBe(true)
  })

  test('ping responds', async () => {
    const response = await mcpRequest('ping')
    expect(response.result).toBeDefined()
  })

  test('unknown method returns error code', async () => {
    const response = await mcpRequest('foo/bar')
    expect(response.error).toBeDefined()
    expect(response.error.code).toBe(-32601)
  })

  // --- Status tools ---

  test('mc_health returns status', async () => {
    const { content, isError } = await mcpTool('mc_health')
    expect(isError).toBe(false)
    expect(content).toBeDefined()
  })

  test('mc_dashboard returns system summary', async () => {
    const { content, isError } = await mcpTool('mc_dashboard')
    expect(isError).toBe(false)
  })

  // --- Agent tools ---

  test.describe('agent tools', () => {
    const agentIds: number[] = []

    test.afterEach(async ({ request }) => {
      for (const id of agentIds.splice(0)) {
        await deleteTestAgent(request, id).catch(() => {})
      }
    })

    test('mc_list_agents returns agents', async () => {
      const { content, isError } = await mcpTool('mc_list_agents')
      expect(isError).toBe(false)
    })

    test('mc_heartbeat sends heartbeat', async ({ request }) => {
      const agent = await createTestAgent(request)
      agentIds.push(agent.id)

      const { isError } = await mcpTool('mc_heartbeat', { id: agent.id })
      expect(isError).toBe(false)
    })

    test('mc_write_memory writes and mc_read_memory reads', async ({ request }) => {
      const agent = await createTestAgent(request)
      agentIds.push(agent.id)

      // Write
      const { isError: writeErr } = await mcpTool('mc_write_memory', {
        id: agent.id,
        working_memory: 'MCP test memory content',
      })
      expect(writeErr).toBe(false)

      // Read back
      const { isError: readErr } = await mcpTool('mc_read_memory', { id: agent.id })
      expect(readErr).toBe(false)
    })

    test('mc_clear_memory clears', async ({ request }) => {
      const agent = await createTestAgent(request)
      agentIds.push(agent.id)

      const { isError } = await mcpTool('mc_clear_memory', { id: agent.id })
      expect(isError).toBe(false)
    })
  })

  // --- Task tools ---

  test.describe('task tools', () => {
    const taskIds: number[] = []

    test.afterEach(async ({ request }) => {
      for (const id of taskIds.splice(0)) {
        await deleteTestTask(request, id).catch(() => {})
      }
    })

    test('mc_list_tasks returns tasks', async () => {
      const { isError } = await mcpTool('mc_list_tasks')
      expect(isError).toBe(false)
    })

    test('mc_poll_task_queue returns response', async () => {
      const { isError } = await mcpTool('mc_poll_task_queue', { agent: 'e2e-mcp-agent' })
      expect(isError).toBe(false)
    })

    test('mc_create_task creates a task', async ({ request }) => {
      const { content, isError } = await mcpTool('mc_create_task', { title: 'MCP e2e test task' })
      expect(isError).toBe(false)
      if ((content as any)?.task?.id) taskIds.push((content as any).task.id)
    })

    test('mc_add_comment succeeds', async ({ request }) => {
      const task = await createTestTask(request)
      taskIds.push(task.id)

      const { isError } = await mcpTool('mc_add_comment', {
        id: task.id,
        content: 'MCP comment test',
      })
      expect(isError).toBe(false)
    })

    test('mc_list_comments returns array', async ({ request }) => {
      const task = await createTestTask(request)
      taskIds.push(task.id)

      const { isError } = await mcpTool('mc_list_comments', { id: task.id })
      expect(isError).toBe(false)
    })
  })

  // --- Token tools ---

  test('mc_token_stats returns stats', async () => {
    const { isError } = await mcpTool('mc_token_stats', { timeframe: 'all' })
    expect(isError).toBe(false)
  })

  // --- Skill tools ---

  test('mc_list_skills returns data', async () => {
    const { isError } = await mcpTool('mc_list_skills')
    expect(isError).toBe(false)
  })

  // --- Cron tools ---

  test('mc_list_cron returns data', async () => {
|
||||
const { isError } = await mcpTool('mc_list_cron')
|
||||
expect(isError).toBe(false)
|
||||
})
|
||||
})
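The `mcpTool` helper used throughout is defined earlier in the suite and is not part of this hunk. As a hedged sketch of the shape these tests appear to rely on (names here are illustrative, not the actual helper), it likely unwraps the MCP `CallToolResult` envelope, a list of content blocks plus an optional `isError` flag, and parses the first text block as JSON. That parsing step is what would make `(content as any)?.task?.id` work in the `mc_create_task` test:

```typescript
// Illustrative sketch only: `CallToolResult` mirrors the MCP spec's tool-call
// result shape; `unwrapToolResult` is a hypothetical name, not the real helper.
interface CallToolResult {
  content: Array<{ type: string; text?: string }>
  isError?: boolean
}

function unwrapToolResult(result: CallToolResult): { content: unknown; isError: boolean } {
  // Take the first text content block; tools that return structured data
  // typically serialize it as JSON inside a text block.
  const text = result.content.find((block) => block.type === 'text')?.text ?? ''
  let content: unknown = text
  try {
    content = JSON.parse(text)
  } catch {
    // Non-JSON payloads (plain status strings) fall through as raw text.
  }
  return { content, isError: result.isError === true }
}
```

Under this assumption, a successful `mc_create_task` call would yield a parsed object (`content.task.id`), while a plain-text health response would come back as a string with `isError` defaulting to `false`.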