L2 Agent Encryption — Design Document
Problem
The database on the server should be worthless to steal.
Today, anything an AI agent can read is encrypted with a server-held key. Steal the database, derive the vault key (it comes from the first WebAuthn credential's public key), and you can read everything agents can read. That makes the server interesting to steal.
Solution: Three-Tier Encryption
L1 — Server-readable (metadata)
What it protects: entry title, type, URLs, username labels. Knowing someone has a Coinbase account isn't an attack.
What exists today: titles are already plaintext (for search). The rest is inside the L1-encrypted blob. Some metadata fields should move out of encryption into plaintext or remain L1-encrypted — acceptable either way since the server can read L1 by design.
No changes needed. L1 stays as-is.
L2 — Agent-readable, server-opaque (secrets)
What it protects: passwords, API keys, TOTP seeds, SSH private keys — anything an AI agent needs to act on.
The server stores L2 ciphertext. It cannot decrypt it. Agents decrypt locally using the L2 private key embedded in their token.
This is the new tier. This is what we're building.
L3 — Hardware-only (high-value secrets)
What it protects: card numbers, CVV, passport numbers, government IDs, bank accounts, seed phrases.
Encrypted with a symmetric key derived from WebAuthn PRF. Requires physical authenticator. Even a fully compromised agent with L2 access cannot reach L3.
This is the current "L2" implementation (client-side PRF encryption). Rename and keep.
Key Derivation
Single root of trust: the hardware authenticator's PRF output.
Hardware authenticator (Touch ID / YubiKey / Titan Key)
→ WebAuthn PRF output (32 bytes)
│
├─ HKDF-SHA256(salt="vault1984-l2-seed", info=empty)
│ → 32 bytes
│ → X25519 keypair (asymmetric)
│ ├─ public key → stored on server (for browser encryption)
│ └─ private key → NEVER stored on server
│ ├─ browser has it during PRF session
│ └─ baked into agent tokens at creation time
│
└─ HKDF-SHA256(salt="vault1984-l3", info=empty)
→ 32 bytes → AES-256 key (symmetric)
→ browser-only, never leaves the client
Properties:
- L2 private key cannot be used to derive L3 key (independent HKDF branches)
- Compromised agent = L2 exposed, L3 untouched
- Both derived from same PRF tap — one authentication unlocks both in browser
Combined Agent Token
Agent credentials are a single opaque string containing both the MCP auth token and the L2 private key.
Format
base64url(mcp_token_bytes || AES-256-GCM(per_token_key, l2_private_key))
Where:
per_token_key = HKDF-SHA256(ikm=mcp_token_bytes, salt="vault1984-token-wrap", info=empty)
Properties
- Each token looks completely different (L2 key wrapped with token-specific key)
- Two tokens side by side reveal no shared material
- Agent splits locally: auth half → server, key half → local decryption only
- Server never sees the L2 private key: the browser wraps it locally at creation time, and only the auth half ever reaches the server afterward
Agent-side flow
1. Read combined token from config file
2. Decode base64url
3. Split at known offset (first 32 bytes = MCP token)
4. Derive per_token_key from MCP token bytes
5. Unwrap L2 private key via AES-256-GCM
6. Auth: send MCP token in Authorization header
7. Decrypt: use L2 private key locally on L2 ciphertext
Token Creation Flow
- User clicks "Create MCP token" in browser UI
- Browser triggers WebAuthn authentication (user taps hardware key)
- PRF output → derive L2 private key via HKDF
- Server creates MCP token record (label, scope, expiry)
- Browser receives MCP token bytes from server
- Browser wraps L2 private key with per-token key
- Browser concatenates and base64url-encodes
- Combined token displayed once for user to copy
Requires WebAuthn tap — this is desirable, not a limitation. Creating agent credentials should require physical authentication.
Entry Save Flow (Browser)
When saving an entry with L2 fields:
- User has active PRF session (already tapped hardware key)
- Browser derives L2 keypair from PRF output
- For each L2 field:
- Generate ephemeral X25519 keypair
- ECDH(ephemeral_private, l2_public_key) → shared secret
- HKDF(shared_secret) → AES-256-GCM key
- Encrypt field value
- Store: ephemeral_public_key || nonce || ciphertext
- L2 field values in VaultData are replaced with the ciphertext blob
- Entry saved normally (L1 encryption wraps the whole thing, L2 fields are ciphertext-within-ciphertext)
Alternative (simpler): use NaCl sealed boxes (crypto_box_seal: X25519 + XSalsa20-Poly1305). One function call and well understood; available as box.SealAnonymous in Go's x/crypto/nacl/box and via libsodium.js in the browser (plain tweetnacl-js does not ship sealed boxes).
MCP Read Flow (Agent)
- Agent sends request with MCP token (auth half only)
- Server decrypts entry with L1 key (as today)
- Server returns entry — L2 field values are opaque ciphertext blobs
- L3 field values are "[L3 — requires hardware key]" (as with today's L2 redaction)
- Agent decrypts L2 fields locally with its L2 private key
Import Flow
Import already requires a browser session (LLM-powered import UI). User has already authenticated with WebAuthn. PRF is available.
- Import parses incoming data, auto-detects L2 fields via l2labels.go
- Browser encrypts L2 fields with the L2 public key before sending to server
- Server stores encrypted blobs. Never sees plaintext.
Database Schema Changes
Modified: vault config
Since all agents share one L2 keypair, the public key is stored once at the vault level, not per-token:
-- New vault-level config (or add to existing config mechanism)
-- Store the L2 public key once
ALTER TABLE ... ADD COLUMN l2_public_key BLOB; -- 32 bytes, X25519
A per-token l2_public_key column on mcp_tokens is not strictly needed (every agent uses the same vault-level key), but it would be the natural hook if we ever want per-agent L2 keys.
Modified: VaultField
type VaultField struct {
Label string `json:"label"`
Value string `json:"value"` // plaintext (L1) or ciphertext blob (L2)
Kind string `json:"kind"`
Section string `json:"section,omitempty"`
Tier int `json:"tier,omitempty"` // 1=L1 (default), 2=agent-encrypted, 3=hardware-only
}
The L2 bool field becomes Tier int. Migration: L2=false → Tier=1, L2=true → Tier=3 (current L2 maps to new L3).
No new tables
No l2_field_envelopes. No l2_key_wraps. L2 ciphertext lives inline in the VaultField value. Clean.
Migration Path
Existing entries
All existing fields are either L1 (server-encrypted) or flagged L2 (which maps to new L3, hardware-only).
No existing fields need to become new-L2 today. The migration is:
- Rename L2 bool to Tier int in types
- Existing L2=true → Tier=3
- Existing L2=false → Tier=1
- New L2 tier is opt-in per field going forward
Fields that should be L2 (passwords, API keys, TOTP) can be upgraded by the user through the UI. A "security upgrade" flow in the browser could batch-convert selected L1 fields to L2 (requires PRF session to encrypt).
What Breaks
- MCP response format — L2 fields return ciphertext instead of plaintext. Agents must decrypt. Breaking change for any existing MCP client.
- stripL2Fields() function — replaced with tier-aware logic: L2 returns ciphertext, L3 returns the redaction string.
- MCP token format — combined token is longer and contains the wrapped key. Existing tokens remain valid but can't decrypt L2 (they don't have the key half). Backward compatible for L1 access.
- Token creation UI — now requires WebAuthn tap.
- Field model — L2 bool → Tier int. All serialization, tests, and l2labels.go detection must update.
What Doesn't Break
- L1 encryption (unchanged)
- L3/WebAuthn PRF flow (unchanged, just renamed)
- Entry CRUD (L2 ciphertext is just a string value from the server's perspective)
- Blind indexing, search (operates on titles, which are L1)
- Audit logging (unchanged)
- Scoped tokens, read-only, expiry (unchanged)
- Import detection (l2labels.go still detects sensitive fields, just flags them as Tier 2 or 3)
Security Properties
| Scenario | L1 | L2 | L3 |
|---|---|---|---|
| Database stolen | Readable (vault key derivable from the first credential's public key) | Worthless ciphertext | Worthless ciphertext |
| Server process compromised | Plaintext (during request handling) | Ciphertext only | Not present |
| Agent compromised | Readable (via MCP) | Readable (has the L2 private key) | Not present |
| Agent + server compromised | Readable | Readable | Worthless ciphertext |
| Hardware authenticator stolen (with PIN/biometric) | Everything | Everything (L2 key derivable) | Everything (L3 key derivable) |
Why a compromised server still can't read L2: decrypting the L1 blob only exposes L2 field values as ciphertext under the L2 public key. The L2 private key is never stored or sent server-side, so the server can only pass the opaque blobs along.
Implementation Plan
Phase 1: Foundation (day 1)
- Rename L2 bool → Tier int in VaultField (types.go)
- Update all references: l2labels.go (now assigns Tier 2 or 3), handlers, tests
- Add l2_public_key BLOB column to vault config storage
- Add L2 HKDF derivation branch in webauthn.js (alongside existing L3 derivation)
- Generate and store L2 public key on first passkey registration
- Tests for key derivation (L2 and L3 from same PRF output are independent)
Phase 2: L2 Encryption (day 2)
- Implement L2 field encryption in browser (sealed box or X25519+AES-GCM via tweetnacl-js)
- Entry save: browser encrypts Tier=2 fields with L2 public key before packing
- Entry read (browser): decrypt Tier=2 fields with L2 private key (from PRF session)
- Entry read (MCP): return Tier=2 ciphertext as-is, Tier=3 as redacted string
- Import flow: encrypt detected L2 fields during import
Phase 3: Combined Token (day 3)
- Modify token creation: require WebAuthn auth, derive L2 private key
- Implement token wrapping: mcp_token || AES-GCM(HKDF(mcp_token), l2_private_key)
- Token display: show combined base64url string
- Agent-side: split combined token, unwrap L2 key, use for decryption
- Update MCP client code to decrypt L2 fields after receiving response
Phase 4: Migration & Polish (day 4)
- Data migration: existing L2=true → Tier=3, L2=false → Tier=1
- UI: field tier selector (L1/L2/L3) replacing the L2 toggle
- UI: "upgrade to L2" batch flow for existing L1 passwords/API keys
- Update all tests
- Update extension to handle L2 ciphertext
Total: ~4 days of focused agent work
Not 2-3 weeks. The crypto is straightforward (X25519 + AES-GCM, libraries exist for both Go and JS). The schema change is a rename. The hardest part is the browser-side encryption/decryption wiring and the combined token format.