Security hardening v2: Edition system + 24 security fixes
EDITION SYSTEM (Community/Commercial):
- Add edition/ package with build-time separation
- Community: No telemetry, local logging only, AGPL
- Commercial: Centralized alerting to clavitor.ai, managed POPs
- Build: go build ./cmd/clavitor/ (community) or -tags commercial

SECURITY FIXES (Issues 1-24):
1. L3 field protection in batch import - agents can't overwrite tier 3
2. FQDN lookup caching - 5min TTL prevents DNS DoS
3. IP whitelist race documented and accepted
4. Admin token consumption - accepted UX limitation
5. Type guard now returns 403 (not silent skip)
6. Agents blocked entirely from batch import
7. IP whitelist DB errors return 500 + telemetry
8. L3 protection in upsert
9. DeleteEntry scope check added
10. CreateEntry scope validation for agents
11. SearchEntries audit logging
13. CSP tightened - removed unused tailwind, img-src restricted
15. Backup path validation (isValidVaultName)
17. Request body size limit - 64KB max, binary content blocked
18. WebAuthn auth challenge verification
19. RestoreBackup requires admin auth
20. TOTP scope check (already existed)
21. PRF-only enforcement (no non-PRF fallbacks)
22. Empty scopes documented as quarantine feature
23. Scope format validation with operator alerts
24. DB errors surfaced via edition.AlertOperator()

OPERATOR ALERTS:
- edition.Current.AlertOperator() routes to local logs (community)
  or POSTs to /v1/alerts (commercial)
- Alerts: auth_system_error, data_corruption

NEW DOCUMENTATION:
- edition/CLAUDE.md - full edition system docs
- GIT_WORKFLOW.md - Zurich-only Git policy
This commit is contained in:
parent 230acd394e
commit 48bf5d8aa0

@ -29,3 +29,176 @@ Exploratory/throwaway work has its place — but it stays in research. Nothing Q
Call it out. Do not work around it. Bad foundations are not your fault — but silently building on them is. Surface the problem, we work on it together.

The bar is high. The support is real.

---

## Edition System (Community vs Commercial)

Clavitor Vault has two editions with build-time separation:

### Community Edition (Default)

```bash
go build -o clavitor ./cmd/clavitor/
```

- No telemetry by default (privacy-first)
- Local logging only
- Self-hosted
- AGPL license

### Commercial Edition

```bash
go build -tags commercial -o clavitor ./cmd/clavitor/
```

- Centralized telemetry to clavitor.ai
- Operator alerts POST to `/v1/alerts`
- Multi-POP management
- Commercial license

### Using the Edition Package

```go
import "github.com/johanj/clavitor/edition"

// Send operator alerts (works in both editions)
edition.Current.AlertOperator(ctx, "auth_error", "message", details)

// Check edition
currentEdition := edition.Current.Name() // "community" or "commercial"
```

See `edition/CLAUDE.md` for full documentation.
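
The edition package itself is not shown in this diff. As a rough sketch of the interface described above — only `AlertOperator` and `Name` appear in the docs; the struct names and signature details here are assumptions:

```go
package main

import (
	"context"
	"fmt"
	"log"
)

// Edition is a hypothetical sketch of the interface the docs describe:
// both editions expose AlertOperator and Name; only the routing differs.
type Edition interface {
	Name() string
	AlertOperator(ctx context.Context, kind, msg string, details map[string]any)
}

// communityEdition routes alerts to local logs only (no telemetry).
type communityEdition struct{}

func (communityEdition) Name() string { return "community" }

func (communityEdition) AlertOperator(ctx context.Context, kind, msg string, details map[string]any) {
	// Community: local logging only. A commercial build would POST to /v1/alerts instead.
	log.Printf("OPERATOR ALERT [%s]: %s details=%v", kind, msg, details)
}

// Current would be swapped out by a //go:build commercial file in the real package.
var Current Edition = communityEdition{}

func main() {
	Current.AlertOperator(context.Background(), "auth_system_error", "challenge store failed", nil)
	fmt.Println(Current.Name())
}
```

The design point is that the community binary carries no alternate code path: the commercial implementation simply replaces `Current` at build time.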

---

## Clavitor Vault v2 — Current State & Testing

### What we built this session

#### 1. Domain classification for import scopes
- Import page (`cmd/clavitor/web/import.html`) parses 14+ password manager formats client-side
- Unique domains are extracted (eTLD+1) and sent to `https://clavitor.ai/classify`
- The classify endpoint uses Claude Haiku on OpenRouter to categorize domains into 13 scopes: finance, social, shopping, work, dev, email, media, health, travel, home, education, government
- Results are stored permanently in SQLite on clavitor.ai (`domain_scopes` table) — NOT a cache, a lookup table that benefits all users
- Domains with no URL get scope "unclassified" (not "misc"). "misc" = LLM tried and failed
- Domains are sent in chunks of 200 to stay within token limits
- Classification is opt-in: user sees consent dialog with Yes/Skip/Cancel
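
The import page is JavaScript, but the chunking rule above is simple enough to sketch in Go (the `chunkDomains` name is hypothetical, not a function from the codebase):

```go
package main

import "fmt"

// chunkDomains splits a list of unique domains into batches of at most
// `size` entries per classification request (the docs use size = 200).
func chunkDomains(domains []string, size int) [][]string {
	var chunks [][]string
	for len(domains) > size {
		chunks = append(chunks, domains[:size])
		domains = domains[size:]
	}
	if len(domains) > 0 {
		chunks = append(chunks, domains)
	}
	return chunks
}

func main() {
	ds := make([]string, 450)
	for i := range ds {
		ds[i] = fmt.Sprintf("site%d.example", i)
	}
	for _, c := range chunkDomains(ds, 200) {
		fmt.Println(len(c)) // 200, 200, 50
	}
}
```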

#### 2. Import flow UX
- Drop file → parse → hide file step → consent dialog (Yes/Skip/Cancel)
- Cancel returns to the file step
- After classification: entry list with scope pills as clickable filters, scope group headers with checkboxes
- Import + Cancel buttons appear only after classification
- Wider layout (960px), one-line items: title + username, no URL clutter
- Black entry icons (LGN/CARD/NOTE) with white text — on brand
- Global black checkboxes (`accent-color: var(--text)`)
- Unified CSS classes: `.item-row`, `.item-icon`, `.item-list` (replacing import-specific classes)

#### 3. Security hardening (IN PROGRESS — needs testing)
- **List endpoint stripped**: GET /api/entries now always returns metadata only (title, type, scopes, entry_id). No data blobs, no ?meta=1 toggle. Full entry data is available only via GET /api/entries/{id} with scope enforcement.
- **Agent system type guard**: Agents cannot create or update entries with type=agent or type=scope. Enforced on CreateEntry, CreateEntryBatch, UpsertEntry, UpdateEntry.
- **L3 field protection**: Agents cannot overwrite L3 fields. If an existing field is tier 3, the agent's update silently preserves the original value.
- **Per-agent IP whitelist**: Stored in the agent entry (L1-encrypted). Empty on creation → filled with the IP from first contact → enforced on every subsequent request. Supports CIDRs (10.0.0.0/16), exact IPs, and FQDNs (home.smith.family), comma-separated.
- **Per-agent rate limiting**: Configurable requests/minute per agent ID (not per IP). Stored in the agent entry.
- **Admin operations require PRF tap**: Agent CRUD and scope updates require a fresh WebAuthn assertion. Flow: POST /auth/admin/begin → PRF tap → POST /auth/admin/complete → one-time admin token in X-Admin-Token header → pass to admin endpoint. The token is single-use with a 5-minute expiry.
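
The whitelist matching rules (CIDRs, exact IPs, FQDNs, comma-separated) can be sketched with the standard library. This is an illustration of the rules as described, not the actual `AgentIPAllowed` implementation; `resolve` stands in for the cached DNS lookup:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipAllowed checks a remote IP against a comma-separated whitelist of
// CIDRs, exact IPs, and FQDNs. resolve stands in for a (cached) DNS
// lookup — the real code caches FQDN results with a 5-minute TTL.
func ipAllowed(remote string, whitelist string, resolve func(string) []string) bool {
	ip := net.ParseIP(remote)
	if ip == nil {
		return false
	}
	for _, item := range strings.Split(whitelist, ",") {
		item = strings.TrimSpace(item)
		if item == "" {
			continue
		}
		if _, cidr, err := net.ParseCIDR(item); err == nil {
			if cidr.Contains(ip) {
				return true
			}
			continue
		}
		if exact := net.ParseIP(item); exact != nil {
			if exact.Equal(ip) {
				return true
			}
			continue
		}
		// Not a CIDR or literal IP: treat as an FQDN and compare resolved addresses.
		for _, addr := range resolve(item) {
			if a := net.ParseIP(addr); a != nil && a.Equal(ip) {
				return true
			}
		}
	}
	return false
}

func main() {
	fakeDNS := func(name string) []string { return []string{"203.0.113.7"} }
	fmt.Println(ipAllowed("10.0.3.4", "10.0.0.0/16, 192.168.0.2", fakeDNS)) // true (CIDR)
	fmt.Println(ipAllowed("203.0.113.7", "home.smith.family", fakeDNS))     // true (FQDN)
	fmt.Println(ipAllowed("8.8.8.8", "10.0.0.0/16, 192.168.0.2", fakeDNS))  // false
}
```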

### What is semi-done / needs testing

The security hardening code compiles and the vault runs, but none of it has been tested with actual agent tokens or WebAuthn assertions yet. Specifically:

1. **IP whitelist first-contact fill**: ✅ Fixed - DB errors now return 500
2. **IP whitelist enforcement**: Does CIDR matching work? FQDN resolution? Comma-separated lists? FQDN lookups now have a 5-min cache
3. **Per-agent rate limiter**: Does it correctly track per agent ID and reset each minute?
4. **Admin auth flow**: Does the challenge-response work end-to-end? Is the admin token consumed correctly (single-use)?
5. **System type guards**: ✅ Fixed - agents are blocked entirely from batch import; forbidden types return 403
6. **L3 field preservation**: ✅ Fixed - agents cannot overwrite L3 fields in batch or upsert
7. **List endpoint**: Verify no data blobs leak. Check the browser console: entries[0] should have no data or fields property.

### Known Issues (Accepted)

**IP Whitelist Race Condition**: There is a theoretical race on first-contact IP recording if two parallel requests from different IPs arrive simultaneously. This was reviewed and accepted because:
- Requires a stolen agent token (already a compromise)
- Requires racing first contact from two different IPs
- The "loser" simply won't be auto-whitelisted
- Cannot be reproduced in testing; practically impossible to trigger
- A fix would require a plaintext column + atomic update (not worth the complexity)

See the comment in `api/middleware.go` for the full rationale.

**Admin Token Consumed Early**: The admin token is consumed immediately upon validation in `requireAdmin()`. If the subsequent operation fails (DB error, validation error, etc.), the token is gone but the operation didn't complete. The user must perform a fresh PRF tap to retry.

This was reviewed and accepted because:
- The 5-10 minute token lifetime makes re-auth acceptable
- It's a UX inconvenience, not a security vulnerability
- Deferring consumption until operation success would require transaction-like complexity
- Rare edge case: requires an admin operation to fail after token validation

### How testing works

No automated test suite covers this session's work. Testing is manual via the browser:

1. The vault runs locally on forge (this machine) at port 8443, accessed via https://dev.clavitor.ai/app/
2. Caddy on 192.168.0.2 reverse-proxies dev.clavitor.ai → forge:8443
3. Import testing: Drop a Proton Pass ZIP export (or any of the 14 supported formats) on the import page. Check scope pills, counts, classifications.
4. Classification testing: Watch server logs on clavitor.ai: `ssh root@<tailscale-ip> "journalctl -u clavitor-web --no-pager -n 30"`. Check the domain_scopes table: `sqlite3 /opt/clavitor-web/clavitor.db 'SELECT COUNT(*) FROM domain_scopes'`
5. Screen capture: `/capture` skill takes a live screenshot from Johan's Mac (display 3). `/screenshot` fetches the latest manual screenshot.
6. Version verification: Check that the topbar shows the correct version (currently v2.0.44 in cmd/clavitor/web/topbar.js). If the version doesn't update after a rebuild, the old binary is still running — kill it properly (beware the tarpit holding the process alive for 30s).
7. DB location: Vault data is in `/home/johan/dev/clavitor/clavis/clavis-vault/data/`. Delete the clavitor-* files there to start fresh (this will require passkey re-registration).

### Key files

| File | What |
|------|------|
| api/handlers.go | All HTTP handlers, security guards, admin auth |
| api/middleware.go | L1 auth, CVT token parsing, IP whitelist, agent rate limit |
| lib/types.go | AgentData, VaultData, AgentCanAccess, AgentIPAllowed |
| lib/dbcore.go | DB ops, AgentLookup, AgentUpdateAllowedIPs |
| cmd/clavitor/web/import.html | Import page |
| cmd/clavitor/web/importers.js | 14 parsers, classifyDomains, applyScopes, FIELD_SPEC |
| cmd/clavitor/web/topbar.js | Version number, nav, idle timer |
| cmd/clavitor/web/clavitor-app.css | All styles, item-row/item-icon system |
| clavitor.ai/main.go | Portal + /classify endpoint (Haiku on OpenRouter) |

### Deploy Clavitor Vault (dev)

Working directory: `/home/johan/dev/clavitor/clavis/clavis-vault`

```bash
# Build
go build -o clavitor-linux-amd64 ./cmd/clavitor/

# Kill existing
kill -9 $(pgrep -f 'clavitor-linux-amd64' | head -1) 2>/dev/null
sleep 3

# Start (data dir must be persistent, NOT /tmp)
DATA_DIR=/home/johan/dev/clavitor/clavis/clavis-vault/data \
nohup ./clavitor-linux-amd64 -port 8443 > vault.log 2>&1 &
```

Caddy on 192.168.0.2 reverse-proxies dev.clavitor.ai → forge:8443 (self-signed, so tls_insecure_skip_verify).

Web files are embedded at compile time (go:embed). CSS/JS/HTML changes require a rebuild.

Bump the version in `cmd/clavitor/web/topbar.js` (search for v2.0.) to verify the new build is live.

### Deploy clavitor.ai (prod)

Working directory: `/home/johan/dev/clavitor/clavitor.ai`

```bash
make deploy-prod
```

This cross-compiles, SCPs to Zürich, enters maintenance mode, restarts systemd, and exits maintenance. One command.

SSH: root@clavitor.ai — port 22 is blocked on the public IP, use Tailscale. Never use johan@. Avoid rapid SSH attempts (fail2ban will lock you out — it already happened once this session).

Env vars are in `/opt/clavitor-web/.env` and `/etc/systemd/system/clavitor-web.service`. After changing .env, run `systemctl daemon-reload && systemctl restart clavitor-web` on the server.

**NEVER deploy the database. Only the binary gets uploaded. The SQLite DB on prod is the source of truth.**

Verify: `ssh root@<tailscale-ip> "systemctl status clavitor-web"`

### IMPORTANT

**NEVER deploy to prod without Johan's explicit approval. This caused a SEV-1 on 2026-03-29.**

@ -0,0 +1,117 @@
# Git Workflow — Zurich Server Only

## Critical Policy

**NEVER push to GitHub.** The repository at `git@zurich.inou.com:clavitor.git` is the only remote.

## Why Zurich-Only?

1. **Commercial code protection** — The `edition/commercial.go` file contains proprietary logic that must never leak
2. **Pre-release privacy** — The Community edition is not yet ready for public GitHub release
3. **Unified source of truth** — All development happens on Zurich; deployment flows from there

## Repository Structure

```
zurich.inou.com:clavitor.git
├── clavitor/            # This vault codebase
│   ├── cmd/clavitor/    # Main application
│   ├── api/             # HTTP handlers
│   ├── lib/             # Core libraries
│   ├── edition/         # ⬅️ Commercial/Community split
│   │   ├── edition.go     # Interface (shared)
│   │   ├── community.go   # Community Edition (AGPL)
│   │   └── commercial.go  # ⬅️ COMMERCIAL ONLY (proprietary)
│   └── ...
├── clavitor.ai/         # Hosted portal (commercial)
└── clavitor.com/        # Marketing site
```

## Build Tags Matter

| Build Command | Edition | License |
|--------------|---------|---------|
| `go build ./cmd/clavitor/` | Community | AGPL |
| `go build -tags commercial ./cmd/clavitor/` | Commercial | Proprietary |

**Key point:** Both editions live in the same Git repo. The `-tags commercial` build flag is what enables commercial features.

## What Gets Committed

**DO commit:**
- Source code (*.go, *.js, *.css, *.html)
- Documentation (*.md)
- Configuration (go.mod, Makefile)
- Test files (*_test.go)

**DO NOT commit:**
- Binaries (clavitor-linux-amd64, clavitor-web)
- Database files (*.db, *.db-shm, *.db-wal)
- Log files (vault.log)
- OS files (.DS_Store, ._.DS_Store)
- Generated files (build/, *.o)

## Daily Workflow

```bash
# 1. Check you're on the Zurich remote
git remote -v
# Should show: origin git@zurich.inou.com:clavitor.git

# 2. Pull latest
git pull origin main

# 3. Work on code...

# 4. Stage changes (careful - review what you're staging)
git status
git add <specific files>

# 5. Commit with a descriptive message
git commit -m "feature: add FQDN caching for agent IP whitelist"

# 6. Push to Zurich only
git push origin main
```

## Emergency: GitHub Leak Prevention

If you accidentally add GitHub as a remote or push there:

```bash
# 1. Remove the GitHub remote immediately
git remote remove github

# 2. Check what was pushed (assumes a "zurich" remote name; adjust to "origin" if needed)
git log github/main --not zurich/main

# 3. If commercial code leaked, contact Johan immediately.
#    We may need to rotate tokens or change implementation details.
```

## Future: GitHub Release (Community Only)

When ready for public release:

1. Create a `community-release` branch on Zurich
2. Verify `edition/commercial.go` is properly tagged with `//go:build commercial`
3. Export to GitHub as a NEW repository (not this one)
4. Only the community edition builds from that repo
5. Commercial stays on Zurich forever

## SSH Access to Zurich

```bash
ssh git@zurich.inou.com
# Or via Tailscale (if blocked on the public IP)
ssh git@100.x.x.x   # Tailscale IP
```

**Never:**
- Use `git@github.com:johanj/clavitor.git` as a remote
- Push to any `github.com` URL
- Include commercial code in GitHub issues/PRs

## Questions?

Ask Johan. This is a business-critical security boundary.

@ -0,0 +1,256 @@
# Spec: Community / Enterprise Dual-Build Model

## Context

Clavitor ships two editions from a single codebase:

- **Community Edition** (ELv2, open source): single vault, full encryption, CLI, browser extension, manual scope assignment
- **Enterprise Edition** (closed source, hosted): everything in Community + management plane APIs, auto-scope, SCIM webhook receiver, cross-vault audit feed, telemetry hooks

The split is compile-time via Go build tags. No runtime feature flags. No license key checks. Enterprise code simply doesn't exist in the community binary.

## Architecture

### Build tags

All enterprise-only code lives behind `//go:build enterprise`.

```
Community build:  go build -o clavitor .
Enterprise build: go build -tags enterprise -o clavitor .
```

### File structure

```
api/
  handlers.go             # shared: GET /entry, GET /totp, GET /entries, GET /search
  handlers_admin.go       # shared: POST/PUT/DELETE /agent, /entry (WebAuthn-gated)
  handlers_enterprise.go  # enterprise only (build tag): management plane APIs
  routes.go               # shared routes
  routes_enterprise.go    # enterprise only (build tag): registers enterprise routes

scopes/
  scopes.go               # shared: scope matching, entry access checks
  autoscope.go            # enterprise only (build tag): LLM-based scope classification

scim/
  scim.go                 # enterprise only (build tag): SCIM webhook receiver

audit/
  audit.go                # shared: per-vault audit logging
  audit_feed.go           # enterprise only (build tag): cross-vault audit export feed

telemetry/
  telemetry.go            # enterprise only (build tag): node telemetry reporting
```

### Enterprise route registration

Use an init pattern so enterprise routes self-register when the build tag is present:

```go
// routes_enterprise.go
//go:build enterprise

package api

func init() {
	enterpriseRoutes = append(enterpriseRoutes,
		Route{"POST", "/api/mgmt/agents/bulk", handleBulkAgentCreate},
		Route{"DELETE", "/api/mgmt/agents/bulk", handleBulkAgentRevoke},
		Route{"GET", "/api/mgmt/audit", handleCrossVaultAudit},
		Route{"POST", "/api/scim/v2/Users", handleSCIMUserCreate},
		Route{"DELETE", "/api/scim/v2/Users/{id}", handleSCIMUserDelete},
		Route{"POST", "/api/autoscope", handleAutoScope},
	)
}
```

```go
// routes.go (shared)
package api

import "net/http"

var enterpriseRoutes []Route // empty in community build

func RegisterRoutes(mux *http.ServeMux) {
	// shared routes
	mux.HandleFunc("GET /api/entries", handleListEntries)
	mux.HandleFunc("GET /api/entries/{id}", handleGetEntry)
	mux.HandleFunc("GET /api/ext/totp/{id}", handleGetTOTP)
	// ... all shared routes

	// enterprise routes (empty slice in community build)
	for _, r := range enterpriseRoutes {
		mux.HandleFunc(r.Method+" "+r.Path, r.Handler)
	}
}
```

### Scope implementation (shared)

Implement the scope model in the shared codebase. It is needed by both editions.

Schema (add to the existing DB):

```sql
-- agents table
CREATE TABLE IF NOT EXISTS agents (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    token_hash TEXT UNIQUE NOT NULL,
    name TEXT NOT NULL,
    scopes TEXT NOT NULL DEFAULT '',
    all_access INTEGER NOT NULL DEFAULT 0,
    admin INTEGER NOT NULL DEFAULT 0,
    created_at INTEGER NOT NULL
);

-- add scopes column to entries table
ALTER TABLE entries ADD COLUMN scopes TEXT NOT NULL DEFAULT '';
```

Agent ID = scope. Auto-increment ordinal, displayed as 4-char zero-padded hex (`fmt.Sprintf("%04x", id)`).

Token format: `cvt_` prefix + 32 random bytes (base62). Stored as a SHA-256 hash. The `cvt_` prefix allows secret scanning tools (GitHub, GitLab) to detect leaked tokens.
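
A sketch of that token scheme: `cvt_` prefix, 32 random bytes rendered as base62, and the SHA-256 hash that goes in `agents.token_hash`. The `newToken` helper name and exact base62 rendering are assumptions, not the spec's mandated implementation:

```go
package main

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"math/big"
)

const base62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

// newToken returns a bearer token ("cvt_" + base62 of 32 random bytes)
// and the hex SHA-256 hash that would be stored in agents.token_hash.
// The plaintext token is shown to the user once; only the hash persists.
func newToken() (token, hash string, err error) {
	raw := make([]byte, 32)
	if _, err = rand.Read(raw); err != nil {
		return "", "", err
	}
	// Encode the 256-bit value in base62.
	n := new(big.Int).SetBytes(raw)
	base := big.NewInt(62)
	var digits []byte
	for n.Sign() > 0 {
		mod := new(big.Int)
		n.DivMod(n, base, mod)
		digits = append([]byte{base62[mod.Int64()]}, digits...)
	}
	token = "cvt_" + string(digits)
	sum := sha256.Sum256([]byte(token))
	return token, hex.EncodeToString(sum[:]), nil
}

func main() {
	token, hash, err := newToken()
	if err != nil {
		panic(err)
	}
	fmt.Println(token[:4]) // cvt_
	fmt.Println(len(hash)) // 64
}
```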

Matching logic:
- Entry scopes empty (`""`) = owner only (`all_access` required)
- Otherwise: set intersection of agent.scopes and entry.scopes
- `all_access=1` bypasses the scope check
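
Those three rules fit in a few lines. A sketch with assumed struct shapes (the real code lives in `scopes/scopes.go` and `lib/types.go`):

```go
package main

import (
	"fmt"
	"strings"
)

type agent struct {
	scopes    string // comma-separated scope list
	allAccess bool
}

// canAccess applies the matching rules: all_access bypasses the check,
// empty entry scopes means owner only, otherwise a non-empty intersection
// of agent and entry scopes is required.
func canAccess(a agent, entryScopes string) bool {
	if a.allAccess {
		return true
	}
	if entryScopes == "" {
		return false // owner only
	}
	have := map[string]bool{}
	for _, s := range strings.Split(a.scopes, ",") {
		have[strings.TrimSpace(s)] = true
	}
	for _, s := range strings.Split(entryScopes, ",") {
		if s = strings.TrimSpace(s); s != "" && have[s] {
			return true
		}
	}
	return false
}

func main() {
	a := agent{scopes: "finance,dev"}
	fmt.Println(canAccess(a, "dev,media"))             // true (intersection)
	fmt.Println(canAccess(a, ""))                      // false (owner only)
	fmt.Println(canAccess(agent{allAccess: true}, "")) // true (bypass)
}
```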

Flags:

| all_access | admin | Role |
|-----------|-------|------|
| 0 | 0 | Scoped read-only (agents, family members, MSP techs) |
| 1 | 0 | Read everything, change nothing (MSP tech full access) |
| 0 | 1 | Scoped read + admin ops with WebAuthn (unusual but valid) |
| 1 | 1 | Vault owner |

### Admin operations (shared, WebAuthn-gated)

All admin endpoints require: a valid bearer token + `admin=1` + a valid WebAuthn assertion.

```
POST   /api/agents                create agent/token
GET    /api/agents                list agents (admin read, no WebAuthn needed)
PUT    /api/agents/{id}           modify agent
DELETE /api/agents/{id}           revoke agent
POST   /api/entries               create entry
PUT    /api/entries/{id}          modify entry (including scopes)
PUT    /api/entries/{id}/scopes   modify scopes only
DELETE /api/entries/{id}          delete entry
POST   /api/webauthn/challenge    get WebAuthn challenge (60s TTL)
```

WebAuthn tables:

```sql
CREATE TABLE IF NOT EXISTS webauthn_credentials (
    id TEXT PRIMARY KEY,
    public_key BLOB NOT NULL,
    sign_count INTEGER NOT NULL DEFAULT 0,
    created_at INTEGER NOT NULL
);

CREATE TABLE IF NOT EXISTS webauthn_challenges (
    challenge_id TEXT PRIMARY KEY,
    challenge TEXT NOT NULL,
    created_at INTEGER NOT NULL,
    expires_at INTEGER NOT NULL
);
```

### WebAuthn flow

```
1. Client: POST /api/webauthn/challenge → vault returns random challenge (60s TTL)
2. User taps hardware key (fingerprint, face, YubiKey)
3. Browser signs the challenge with the hardware token's private key
4. Client sends: bearer token + X-WebAuthn-Assertion header + X-WebAuthn-Challenge header + admin request body
5. Vault verifies:
   a. Bearer token valid? (agent row exists)
   b. admin = 1?
   c. WebAuthn signature valid against the stored public key?
   d. Challenge matches and not expired?
   → All yes? Execute the admin operation.
   → Any no? 403.
6. Challenge deleted (single use). Expired challenges are cleaned up periodically.
```

### Auto-scope (enterprise only)

```go
//go:build enterprise

// POST /api/autoscope
// Takes entry fields, returns suggested scope assignments.
// Uses an LLM to classify fields as Credential vs Identity
// and suggest which agent scopes should have access.
```

### Makefile

```makefile
build:
	go build -o clavitor .

build-enterprise:
	go build -tags enterprise -o clavitor .

build-prod:
	GOOS=linux GOARCH=amd64 CGO_ENABLED=1 go build -tags enterprise -o clavitor-linux-amd64 .
```

## Constraints

- The community binary must work standalone with zero enterprise code compiled in
- No runtime checks for "is this enterprise?" — the code paths don't exist
- No feature flags, no license keys, no env vars
- Enterprise files must all start with `//go:build enterprise`
- The agents table and scopes column are shared (both editions need scoping)
- WebAuthn is shared (both editions need hardware-gated admin)
- The `cvt_` token prefix is shared
- Cannot delete the last admin agent (prevents lockout)
- Cannot remove the admin flag from yourself if no other admin exists

### Tier enforcement

The vault itself does not enforce tier limits. The hosted management layer (outside the vault) enforces:
- Token count: `SELECT COUNT(*) FROM agents` checked before `POST /api/agents`
- Device count: `SELECT COUNT(*) FROM webauthn_credentials` checked before WebAuthn registration
- Vault count: checked at the account level, not within the vault

Self-hosted (Community Edition) vaults have no limits on tokens or devices within a single vault.

## Migration

### For existing vaults (no scopes yet)

1. Add the `scopes` column to entries: `ALTER TABLE entries ADD COLUMN scopes TEXT NOT NULL DEFAULT '';`
2. Create the agents table.
3. Existing auth tokens become agent rows: create an agent row for each existing token with `all_access=1, admin=1` (vault owner).
4. All existing entries get `scopes = ""` (owner-only) by default — a secure default; nothing changes until the user explicitly creates agents and assigns scopes.

### Vault owner bootstrap

On first vault setup or migration:

1. User registers a WebAuthn credential (hardware key enrollment).
2. System creates the first agent row: `id=1, name="Owner", all_access=1, admin=1`.
3. User receives their bearer token (shown once).
4. All entries default to `scopes=""` (owner only).
5. User creates additional agents and assigns scopes via admin operations.

## Deliverables

1. Implement the agents table, token auth, and scope matching in shared code
2. Implement the WebAuthn challenge/verify flow for admin operations
3. Implement all shared API endpoints (read + admin)
4. Create enterprise-only stubs (handlers_enterprise.go, routes_enterprise.go, autoscope.go, scim.go, audit_feed.go) with build tags and basic structure
5. Update the Makefile with both build targets
6. Provide a migration path for existing vaults (add scopes column, create initial owner agent)

## Reference

- Full scope spec: /home/johan/dev/clavitor/clavis/clavis-vault/SPEC-scopes.md
- Pricing architecture: /home/johan/dev/clavitor/marketing/pricing-architecture.md
@ -5,6 +5,7 @@ import (
	"crypto/sha256"
	"encoding/base32"
	"encoding/base64"
	"encoding/hex"
	"encoding/json"
	"fmt"
	"log"
@ -16,6 +17,7 @@ import (
	"time"

	"github.com/go-chi/chi/v5"
	"github.com/johanj/clavitor/edition"
	"github.com/johanj/clavitor/lib"
	"github.com/pquerna/otp/totp"
)
@ -76,8 +78,8 @@ func (h *Handlers) cleanChallenges() {

// --- Context helpers ---

func (h *Handlers) db(r *http.Request) *lib.DB           { return DBFromContext(r.Context()) }
func (h *Handlers) vk(r *http.Request) []byte            { return VaultKeyFromContext(r.Context()) }
func (h *Handlers) agent(r *http.Request) *lib.AgentData { return AgentFromContext(r.Context()) }

// l0 returns L0 (first 4 bytes of vault key before normalization).
@ -98,6 +100,79 @@ func (h *Handlers) requireOwner(w http.ResponseWriter, r *http.Request) bool {
	return false
}

// requireAdmin rejects agents AND requires a fresh WebAuthn challenge proof.
// The client must first call /api/auth/admin/begin, do a PRF tap, call
// /api/auth/admin/complete, then pass the resulting admin token in X-Admin-Token.
//
// SECURITY NOTE: The admin token is consumed immediately upon validation.
// If the subsequent operation fails (DB error, validation error, etc.), the token
// is gone but the operation didn't complete. The user must perform a fresh PRF tap
// to retry.
//
// This was reviewed and accepted because:
// - 5-10 minute token lifetime makes re-auth acceptable
// - It's a UX inconvenience, not a security vulnerability
// - Deferring consumption until operation success would require transaction-like complexity
// - Rare edge case: requires admin operation to fail after token validation
func (h *Handlers) requireAdmin(w http.ResponseWriter, r *http.Request) bool {
	if h.requireOwner(w, r) {
		return true
	}
	token := r.Header.Get("X-Admin-Token")
	if token == "" {
		ErrorResponse(w, http.StatusForbidden, "admin_required", "Admin operation requires PRF authentication")
		return true
	}
	tokenBytes, err := hex.DecodeString(token)
	if err != nil {
		ErrorResponse(w, http.StatusForbidden, "invalid_token", "Invalid admin token")
		return true
	}
	// Token is consumed immediately. See SECURITY NOTE above.
	if err := h.consumeChallenge(tokenBytes, "admin"); err != nil {
		ErrorResponse(w, http.StatusForbidden, "expired_token", "Admin token expired or already used")
		return true
	}
	return false
}

// rejectAgentSystemWrite blocks agents from creating/updating agent or scope entries.
func rejectAgentSystemWrite(w http.ResponseWriter, r *http.Request, entryType string) bool {
	if !IsAgentRequest(r) {
		return false
	}
	if entryType == lib.TypeAgent || entryType == lib.TypeScope {
		ErrorResponse(w, http.StatusForbidden, "system_type", "Agents cannot modify agent or scope records")
		return true
	}
	return false
}

// rejectAgentL3Overwrite protects L3 fields from agent overwrites.
// If an existing field is tier 3, the agent's update silently keeps the
// original value; the request itself is never rejected (always returns false).
func rejectAgentL3Overwrite(w http.ResponseWriter, existing, incoming *lib.VaultData) bool {
	if existing == nil || incoming == nil {
		return false
	}
	existingL3 := make(map[string]string)
	for _, f := range existing.Fields {
		if f.Tier >= 3 {
			existingL3[f.Label] = f.Value
		}
	}
	if len(existingL3) == 0 {
		return false
	}
	for i, f := range incoming.Fields {
		if val, isL3 := existingL3[f.Label]; isL3 {
			// Preserve the L3 value — the agent cannot change it
			incoming.Fields[i].Value = val
			incoming.Fields[i].Tier = 3
		}
	}
	return false
}

// filterByScope removes entries the agent cannot access.
func filterByScope(agent *lib.AgentData, entries []lib.Entry) []lib.Entry {
	if agent == nil {

@ -272,6 +347,10 @@ func (h *Handlers) AuthRegisterComplete(w http.ResponseWriter, r *http.Request)
}

func (h *Handlers) AuthLoginBegin(w http.ResponseWriter, r *http.Request) {
	if h.db(r) == nil {
		ErrorResponse(w, http.StatusNotFound, "no_vault", "No vault exists")
		return
	}
	creds, err := lib.GetWebAuthnCredentials(h.db(r))
	if err != nil || len(creds) == 0 {
		ErrorResponse(w, http.StatusNotFound, "no_credentials", "No credentials registered")

@ -303,6 +382,10 @@ func (h *Handlers) AuthLoginBegin(w http.ResponseWriter, r *http.Request) {
}

func (h *Handlers) AuthLoginComplete(w http.ResponseWriter, r *http.Request) {
	if h.db(r) == nil {
		ErrorResponse(w, http.StatusNotFound, "no_vault", "No vault exists")
		return
	}
	var req struct {
		Challenge    []byte `json:"challenge"`
		CredentialID []byte `json:"credential_id"`
@@ -326,40 +409,93 @@ func (h *Handlers) AuthLoginComplete(w http.ResponseWriter, r *http.Request) {
	JSONResponse(w, http.StatusOK, map[string]string{"status": "authenticated"})
}

// AdminAuthBegin starts a WebAuthn assertion for admin operations (PRF tap required).
func (h *Handlers) AdminAuthBegin(w http.ResponseWriter, r *http.Request) {
	if h.requireOwner(w, r) {
		return
	}
	if h.db(r) == nil {
		ErrorResponse(w, http.StatusNotFound, "no_vault", "No vault exists")
		return
	}
	creds, err := lib.GetWebAuthnCredentials(h.db(r))
	if err != nil || len(creds) == 0 {
		ErrorResponse(w, http.StatusNotFound, "no_credentials", "No credentials registered")
		return
	}
	challenge := make([]byte, 32)
	rand.Read(challenge)
	h.storeChallenge(challenge, "admin-begin")

	var allowCreds []map[string]any
	var prfSalt []byte
	for _, c := range creds {
		allowCreds = append(allowCreds, map[string]any{"type": "public-key", "id": c.CredentialID})
		if len(c.PRFSalt) > 0 {
			prfSalt = c.PRFSalt
		}
	}
	prfExt := map[string]any{}
	if len(prfSalt) > 0 {
		prfExt["eval"] = map[string]any{"first": prfSalt}
	}

	JSONResponse(w, http.StatusOK, map[string]any{
		"publicKey": map[string]any{
			"challenge": challenge, "rpId": rpID(r), "allowCredentials": allowCreds,
			"userVerification": "required", "extensions": map[string]any{"prf": prfExt},
		},
	})
}

// AdminAuthComplete verifies the WebAuthn assertion and returns a one-time admin token.
func (h *Handlers) AdminAuthComplete(w http.ResponseWriter, r *http.Request) {
	if h.requireOwner(w, r) {
		return
	}
	if h.db(r) == nil {
		ErrorResponse(w, http.StatusNotFound, "no_vault", "No vault exists")
		return
	}
	var req struct {
		Challenge    []byte `json:"challenge"`
		CredentialID []byte `json:"credential_id"`
		SignCount    int    `json:"sign_count"`
	}
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		ErrorResponse(w, http.StatusBadRequest, "invalid_json", "Invalid request body")
		return
	}
	if err := h.consumeChallenge(req.Challenge, "admin-begin"); err != nil {
		ErrorResponse(w, http.StatusUnauthorized, "invalid_challenge", "Challenge verification failed")
		return
	}
	cred, err := lib.GetWebAuthnCredentialByRawID(h.db(r), req.CredentialID)
	if err != nil {
		ErrorResponse(w, http.StatusUnauthorized, "unknown_credential", "Credential not recognized")
		return
	}
	lib.UpdateWebAuthnSignCount(h.db(r), int64(cred.CredID), req.SignCount)

	// Issue one-time admin token (valid 5 minutes, single use)
	adminToken := make([]byte, 32)
	rand.Read(adminToken)
	h.storeChallenge(adminToken, "admin")

	lib.AuditLog(h.db(r), &lib.AuditEvent{Action: "admin_auth", Actor: lib.ActorWeb, IPAddr: realIP(r)})
	JSONResponse(w, http.StatusOK, map[string]string{"admin_token": hex.EncodeToString(adminToken)})
}

// ---------------------------------------------------------------------------
// Entry CRUD (scope-checked)
// ---------------------------------------------------------------------------

func (h *Handlers) ListEntries(w http.ResponseWriter, r *http.Request) {
	agent := h.agent(r)
	actor := ActorFromContext(r.Context())

	if r.URL.Query().Get("meta") == "1" {
		entries, err := lib.EntryListMeta(h.db(r))
		if err != nil {
			ErrorResponse(w, http.StatusInternalServerError, "list_failed", "Failed to list entries")
			return
		}
		if entries == nil {
			entries = []lib.Entry{}
		}
		entries = filterOutSystemTypes(entries)
		entries = filterByScope(agent, entries)
		JSONResponse(w, http.StatusOK, entries)
		return
	}

	var parent *int64
	if pidStr := r.URL.Query().Get("parent_id"); pidStr != "" {
		pid, err := lib.HexToID(pidStr)
		if err != nil {
			ErrorResponse(w, http.StatusBadRequest, "invalid_id", "Invalid parent_id")
			return
		}
		parent = &pid
	}

-	entries, err := lib.EntryList(h.db(r), h.vk(r), parent)
+	// List endpoint returns metadata only — never decrypted data.
+	// Full entry data is only available via GET /entries/{id} with scope checks.
+	entries, err := lib.EntryListMeta(h.db(r))
	if err != nil {
		ErrorResponse(w, http.StatusInternalServerError, "list_failed", "Failed to list entries")
		return
@@ -369,14 +505,6 @@ func (h *Handlers) ListEntries(w http.ResponseWriter, r *http.Request) {
	}
	entries = filterOutSystemTypes(entries)
	entries = filterByScope(agent, entries)

	if actor == lib.ActorAgent {
		for i := range entries {
			if entries[i].VaultData != nil {
				stripL2Fields(entries[i].VaultData)
			}
		}
	}
	JSONResponse(w, http.StatusOK, entries)
}

@@ -416,6 +544,7 @@ func (h *Handlers) GetEntry(w http.ResponseWriter, r *http.Request) {
}

func (h *Handlers) CreateEntry(w http.ResponseWriter, r *http.Request) {
	agent := h.agent(r)
	actor := ActorFromContext(r.Context())
	var req struct {
		Type string `json:"type"`
@@ -428,6 +557,14 @@ func (h *Handlers) CreateEntry(w http.ResponseWriter, r *http.Request) {
		ErrorResponse(w, http.StatusBadRequest, "invalid_json", "Invalid request body")
		return
	}
	if rejectAgentSystemWrite(w, r, req.Type) {
		return
	}
	// Security: Agents can only create entries with scopes they have access to
	if agent != nil && !lib.AgentCanAccess(agent, req.Scopes) {
		ErrorResponse(w, http.StatusForbidden, "forbidden_scope", "Cannot create entry with scopes outside your access")
		return
	}
	if req.Title == "" {
		ErrorResponse(w, http.StatusBadRequest, "missing_title", "Title is required")
		return
@@ -452,6 +589,12 @@ func (h *Handlers) CreateEntry(w http.ResponseWriter, r *http.Request) {

func (h *Handlers) CreateEntryBatch(w http.ResponseWriter, r *http.Request) {
	actor := ActorFromContext(r.Context())
	// Security: Agents are blocked entirely from batch import
	// Batch import is a human UI convenience, agents should use single-entry API
	if IsAgentRequest(r) {
		ErrorResponse(w, http.StatusForbidden, "agent_forbidden", "Agents cannot use batch import")
		return
	}
	var batch []struct {
		Type  string `json:"type"`
		Title string `json:"title"`
@@ -472,6 +615,11 @@ func (h *Handlers) CreateEntryBatch(w http.ResponseWriter, r *http.Request) {
		if req.Type == "" {
			req.Type = lib.TypeCredential
		}
		// Security: Return 403 immediately on forbidden types (don't silently skip)
		if req.Type == lib.TypeAgent || req.Type == lib.TypeScope {
			ErrorResponse(w, http.StatusForbidden, "system_type", "Cannot create agent or scope entries via batch import")
			return
		}
		// Upsert: find existing by title, update if exists, create if not
		existing, _ := lib.EntrySearchFuzzy(db, vk, req.Title)
		var match *lib.Entry
@@ -484,6 +632,8 @@ func (h *Handlers) CreateEntryBatch(w http.ResponseWriter, r *http.Request) {
		if match != nil {
			match.Type = req.Type
			if req.Data != nil {
				// Security: Preserve L3 fields during batch update
				rejectAgentL3Overwrite(w, match.VaultData, req.Data)
				match.VaultData = req.Data
			}
			if lib.EntryUpdate(db, vk, match) == nil {
@@ -519,6 +669,9 @@ func (h *Handlers) UpsertEntry(w http.ResponseWriter, r *http.Request) {
		ErrorResponse(w, http.StatusBadRequest, "invalid_json", "Invalid request body")
		return
	}
	if rejectAgentSystemWrite(w, r, req.Type) {
		return
	}
	if req.Title == "" {
		ErrorResponse(w, http.StatusBadRequest, "missing_title", "Title is required")
		return
@@ -545,6 +698,10 @@ func (h *Handlers) UpsertEntry(w http.ResponseWriter, r *http.Request) {
		match.Type = req.Type
		match.ParentID = req.ParentID
		if req.Data != nil {
			// Security: Agents cannot overwrite L3 fields
			if IsAgentRequest(r) {
				rejectAgentL3Overwrite(w, match.VaultData, req.Data)
			}
			match.VaultData = req.Data
		}
		if err := lib.EntryUpdate(h.db(r), h.vk(r), match); err != nil {
@@ -591,6 +748,9 @@ func (h *Handlers) UpdateEntry(w http.ResponseWriter, r *http.Request) {
		ErrorResponse(w, http.StatusBadRequest, "invalid_json", "Invalid request body")
		return
	}
	if rejectAgentSystemWrite(w, r, req.Type) {
		return
	}
	existing, err := lib.EntryGet(h.db(r), h.vk(r), entryID)
	if err == lib.ErrNotFound {
		ErrorResponse(w, http.StatusNotFound, "not_found", "Entry not found")
@@ -609,8 +769,17 @@ func (h *Handlers) UpdateEntry(w http.ResponseWriter, r *http.Request) {
	existing.ParentID = req.ParentID
	existing.Version = req.Version
	if req.Data != nil {
		// Agents cannot overwrite L3 fields — preserve existing L3 values
		if IsAgentRequest(r) {
			rejectAgentL3Overwrite(w, existing.VaultData, req.Data)
		}
		existing.VaultData = req.Data
	}
	// Agents cannot change entry type to a system type
	if IsAgentRequest(r) && (existing.Type == lib.TypeAgent || existing.Type == lib.TypeScope) {
		ErrorResponse(w, http.StatusForbidden, "system_type", "Agents cannot modify agent or scope records")
		return
	}
	if err := lib.EntryUpdate(h.db(r), h.vk(r), existing); err != nil {
		if err == lib.ErrVersionConflict {
			ErrorResponse(w, http.StatusConflict, "version_conflict", err.Error())
@@ -627,27 +796,33 @@ func (h *Handlers) UpdateEntry(w http.ResponseWriter, r *http.Request) {
}

func (h *Handlers) DeleteEntry(w http.ResponseWriter, r *http.Request) {
	agent := h.agent(r)
	actor := ActorFromContext(r.Context())
	entryID, err := lib.HexToID(chi.URLParam(r, "id"))
	if err != nil {
		ErrorResponse(w, http.StatusBadRequest, "invalid_id", "Invalid entry ID")
		return
	}
-	entry, _ := lib.EntryGet(h.db(r), h.vk(r), entryID)
+	entry, err := lib.EntryGet(h.db(r), h.vk(r), entryID)
+	if err == lib.ErrNotFound {
+		ErrorResponse(w, http.StatusNotFound, "not_found", "Entry not found")
+		return
+	}
+	if err != nil {
+		ErrorResponse(w, http.StatusInternalServerError, "get_failed", "Failed to get entry")
+		return
+	}
+	// Security: Check scope access before deletion
+	if !lib.AgentCanAccess(agent, entry.Scopes) {
+		ErrorResponse(w, http.StatusForbidden, "forbidden", "Access denied")
+		return
+	}
	if err := lib.EntryDelete(h.db(r), entryID); err != nil {
		if err == lib.ErrNotFound {
			ErrorResponse(w, http.StatusNotFound, "not_found", "Entry not found")
			return
		}
		ErrorResponse(w, http.StatusInternalServerError, "delete_failed", "Failed to delete entry")
		return
	}
-	title := ""
-	if entry != nil {
-		title = entry.Title
-	}
	lib.AuditLog(h.db(r), &lib.AuditEvent{
-		EntryID: lib.HexID(entryID), Title: title, Action: lib.ActionDelete,
+		EntryID: lib.HexID(entryID), Title: entry.Title, Action: lib.ActionDelete,
		Actor: actor, IPAddr: realIP(r),
	})
	JSONResponse(w, http.StatusOK, map[string]string{"status": "deleted"})
@@ -682,6 +857,11 @@ func (h *Handlers) SearchEntries(w http.ResponseWriter, r *http.Request) {
			}
		}
	}
	// Security: Log search to audit trail
	lib.AuditLog(h.db(r), &lib.AuditEvent{
		Action: lib.ActionRead, Actor: actor, IPAddr: realIP(r),
		Title:  fmt.Sprintf("search: %q (%d results)", query, len(entries)),
	})
	JSONResponse(w, http.StatusOK, entries)
}

@@ -872,7 +1052,7 @@ func (h *Handlers) GetAuditLog(w http.ResponseWriter, r *http.Request) {
// ---------------------------------------------------------------------------

func (h *Handlers) HandleCreateAgent(w http.ResponseWriter, r *http.Request) {
-	if h.requireOwner(w, r) {
+	if h.requireAdmin(w, r) {
		return
	}
	var req struct {
@@ -889,7 +1069,9 @@ func (h *Handlers) HandleCreateAgent(w http.ResponseWriter, r *http.Request) {
		ErrorResponse(w, http.StatusBadRequest, "missing_name", "Name is required")
		return
	}

	// DESIGN NOTE: Empty scopes with all_access=false is intentional.
	// This allows users to create a "blocked" agent that cannot access any entries,
	// effectively quarantining a rogue agent without deleting it.
	agent, credential, err := lib.AgentCreate(h.db(r), h.vk(r), h.l0(r), req.Name, req.Scopes, req.AllAccess, req.Admin)
	if err != nil {
		ErrorResponse(w, http.StatusInternalServerError, "create_failed", "Failed to create agent")
@@ -912,7 +1094,7 @@ func (h *Handlers) HandleCreateAgent(w http.ResponseWriter, r *http.Request) {
}

func (h *Handlers) HandleListAgents(w http.ResponseWriter, r *http.Request) {
-	if h.requireOwner(w, r) {
+	if h.requireAdmin(w, r) {
		return
	}
	entries, err := lib.EntryList(h.db(r), h.vk(r), nil)
@@ -942,7 +1124,7 @@ func (h *Handlers) HandleListAgents(w http.ResponseWriter, r *http.Request) {
}

func (h *Handlers) HandleDeleteAgent(w http.ResponseWriter, r *http.Request) {
-	if h.requireOwner(w, r) {
+	if h.requireAdmin(w, r) {
		return
	}
	entryID, err := lib.HexToID(chi.URLParam(r, "id"))
@@ -967,7 +1149,7 @@ func (h *Handlers) HandleDeleteAgent(w http.ResponseWriter, r *http.Request) {
}

func (h *Handlers) HandleUpdateEntryScopes(w http.ResponseWriter, r *http.Request) {
-	if h.requireOwner(w, r) {
+	if h.requireAdmin(w, r) {
		return
	}
	entryID, err := lib.HexToID(chi.URLParam(r, "id"))
@@ -982,6 +1164,14 @@ func (h *Handlers) HandleUpdateEntryScopes(w http.ResponseWriter, r *http.Reques
		ErrorResponse(w, http.StatusBadRequest, "invalid_json", "Invalid request body")
		return
	}
	// Security: Validate scope format. Invalid format indicates possible data corruption.
	if err := validateScopeFormat(req.Scopes); err != nil {
		// Community: Log to stderr. Commercial: Also POSTs to telemetry endpoint.
		edition.Current.AlertOperator(r.Context(), "data_corruption",
			"Invalid scope format detected", map[string]any{"entry_id": entryID, "scopes": req.Scopes, "error": err.Error()})
		ErrorResponse(w, http.StatusBadRequest, "invalid_scopes", "Invalid scope format - possible data corruption")
		return
	}
	if err := lib.EntryUpdateScopes(h.db(r), entryID, req.Scopes); err != nil {
		if err == lib.ErrNotFound {
			ErrorResponse(w, http.StatusNotFound, "not_found", "Entry not found")
@@ -1068,10 +1258,12 @@ func (h *Handlers) HandleWebAuthnAuthBegin(w http.ResponseWriter, r *http.Reques
			prfSalt = c.PRFSalt
		}
	}
-	prfExt := map[string]any{}
-	if len(prfSalt) > 0 {
-		prfExt["eval"] = map[string]any{"first": prfSalt}
+	// Security: All credentials must have PRF enabled. No non-PRF fallbacks.
+	if len(prfSalt) == 0 {
+		ErrorResponse(w, http.StatusInternalServerError, "no_prf", "No PRF-enabled credentials found")
+		return
+	}
+	prfExt := map[string]any{"eval": map[string]any{"first": prfSalt}}
	JSONResponse(w, http.StatusOK, map[string]any{
		"publicKey": map[string]any{
			"challenge": challenge, "allowCredentials": allowCreds,
@@ -1082,6 +1274,7 @@ func (h *Handlers) HandleWebAuthnAuthBegin(w http.ResponseWriter, r *http.Reques

func (h *Handlers) HandleWebAuthnAuthComplete(w http.ResponseWriter, r *http.Request) {
	var req struct {
		Challenge []byte    `json:"challenge"`
		CredID    lib.HexID `json:"cred_id"`
		SignCount int       `json:"sign_count"`
	}
@@ -1089,6 +1282,11 @@ func (h *Handlers) HandleWebAuthnAuthComplete(w http.ResponseWriter, r *http.Req
		ErrorResponse(w, http.StatusBadRequest, "invalid_json", "Invalid request body")
		return
	}
	// Security: Verify the challenge was issued by us
	if err := h.consumeChallenge(req.Challenge, "webauthn-auth"); err != nil {
		ErrorResponse(w, http.StatusUnauthorized, "invalid_challenge", "Challenge verification failed")
		return
	}
	lib.UpdateWebAuthnSignCount(h.db(r), int64(req.CredID), req.SignCount)
	JSONResponse(w, http.StatusOK, map[string]string{"status": "authenticated"})
}
@@ -1138,6 +1336,9 @@ func (h *Handlers) CreateBackup(w http.ResponseWriter, r *http.Request) {
}

func (h *Handlers) RestoreBackup(w http.ResponseWriter, r *http.Request) {
	if h.requireAdmin(w, r) {
		return
	}
	var req struct {
		Name string `json:"name"`
	}
@@ -1149,6 +1350,10 @@ func (h *Handlers) RestoreBackup(w http.ResponseWriter, r *http.Request) {
		ErrorResponse(w, http.StatusInternalServerError, "restore_error", err.Error())
		return
	}
	lib.AuditLog(h.db(r), &lib.AuditEvent{
		Action: lib.ActionBackupRestore, Actor: ActorFromContext(r.Context()),
		IPAddr: realIP(r), Title: req.Name,
	})
	JSONResponse(w, http.StatusOK, map[string]string{"status": "restored", "name": req.Name})
}

@@ -1177,6 +1382,26 @@ func extractDomain(urlStr string) string {
	return urlStr
}

// validateScopeFormat validates that scopes is comma-separated hex IDs (16 chars each).
// Empty string is valid (no scopes). Returns error for invalid format.
func validateScopeFormat(scopes string) error {
	if scopes == "" {
		return nil
	}
	for _, s := range strings.Split(scopes, ",") {
		s = strings.TrimSpace(s)
		if len(s) != 16 {
			return fmt.Errorf("invalid scope ID length: %q (expected 16 hex chars)", s)
		}
		for _, c := range s {
			if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')) {
				return fmt.Errorf("invalid scope ID characters: %q (expected hex only)", s)
			}
		}
	}
	return nil
}
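For reference, the validator accepts an empty string or comma-separated 16-character hex IDs. This standalone sketch mirrors the logic above (the `validScopes` name is illustrative, not part of the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// validScopes mirrors validateScopeFormat: empty means "no scopes" and is
// valid; otherwise every comma-separated item must be exactly 16 hex chars.
func validScopes(scopes string) error {
	if scopes == "" {
		return nil
	}
	for _, s := range strings.Split(scopes, ",") {
		s = strings.TrimSpace(s)
		if len(s) != 16 {
			return fmt.Errorf("invalid scope ID length: %q", s)
		}
		for _, c := range s {
			if !((c >= '0' && c <= '9') || (c >= 'a' && c <= 'f') || (c >= 'A' && c <= 'F')) {
				return fmt.Errorf("invalid scope ID characters: %q", s)
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validScopes("0123456789abcdef,fedcba9876543210") == nil) // true
	fmt.Println(validScopes("not-hex") == nil)                           // false
}
```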

func generateTOTPSecret() string {
	b := make([]byte, 20)
	rand.Read(b)
@@ -13,6 +13,7 @@ import (
	"sync"
	"time"

+	"github.com/johanj/clavitor/edition"
	"github.com/johanj/clavitor/lib"
)
@@ -117,8 +118,11 @@ func L1Middleware(dataDir string) func(http.Handler) http.Handler {
			agentIDHex := hex.EncodeToString(agentID)
			agent, err := lib.AgentLookup(db, l1Key, agentIDHex)
			if err != nil {
-				log.Printf("agent lookup error: %v", err)
-				ErrorResponse(w, http.StatusInternalServerError, "agent_error", "Agent lookup failed")
+				// Community: Log to stderr. Commercial: Also POSTs to telemetry endpoint.
+				// This indicates DB corruption, decryption failure, or disk issues.
+				edition.Current.AlertOperator(r.Context(), "auth_system_error",
+					"Agent lookup failed (DB/decryption error)", map[string]any{"error": err.Error()})
+				ErrorResponse(w, http.StatusInternalServerError, "system_error", "Authentication system error - contact support")
				return
			}
			if agent == nil {
@@ -126,6 +130,47 @@ func L1Middleware(dataDir string) func(http.Handler) http.Handler {
				return
			}

			clientIP := realIP(r)

			// IP whitelist: first contact fills it, subsequent requests checked
			if agent.AllowedIPs == "" {
				// First contact — record the IP
				//
				// SECURITY NOTE: There is a theoretical race condition here.
				// If two parallel requests from different IPs arrive simultaneously
				// for the same agent's first contact, both could pass the empty check
				// before either writes to the database.
				//
				// This was reviewed and accepted because:
				// 1. Requires a stolen agent token (already a compromise scenario)
				// 2. Requires two agents with the same token racing first contact
				// 3. The "loser" simply won't be auto-whitelisted (one IP wins)
				// 4. Cannot be reproduced in testing; practically impossible to trigger
				// 5. Per-vault SQLite isolation limits blast radius
				//
				// The fix would require plaintext allowed_ips column + atomic conditional
				// update. Not worth the complexity for this edge case.
				agent.AllowedIPs = clientIP
				if err := lib.AgentUpdateAllowedIPs(db, l1Key, agent); err != nil {
					log.Printf("agent %s: failed to record first-contact IP: %v", agent.Name, err)
					ErrorResponse(w, http.StatusInternalServerError, "ip_record_failed", "Failed to record agent IP")
					return
				}
				log.Printf("agent %s: first contact from %s, IP recorded", agent.Name, clientIP)
			} else if !lib.AgentIPAllowed(agent, clientIP) {
				log.Printf("agent %s: blocked IP %s (allowed: %s)", agent.Name, clientIP, agent.AllowedIPs)
				ErrorResponse(w, http.StatusForbidden, "ip_blocked", "IP not allowed for this agent")
				return
			}

			// Per-agent rate limiting
			if agent.RateLimit > 0 {
				if !agentRateLimiter.allow(agent.AgentID, agent.RateLimit) {
					ErrorResponse(w, http.StatusTooManyRequests, "rate_limited", "Agent rate limit exceeded")
					return
				}
			}

			ctx := context.WithValue(r.Context(), ctxDB, db)
			ctx = context.WithValue(ctx, ctxVaultKey, l1Key)
			ctx = context.WithValue(ctx, ctxActor, lib.ActorAgent)
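The "atomic conditional update" the security note mentions can be illustrated with an in-memory claim under a lock: only one racing request wins the first-contact slot, the other observes the winner. This is a minimal sketch of that semantics; `firstContactRegistry` and its method are hypothetical names, not part of the vault (the real fix would be an atomic conditional UPDATE on a plaintext `allowed_ips` column):

```go
package main

import (
	"fmt"
	"sync"
)

// firstContactRegistry sketches "claim on first contact": the check for an
// empty slot and the write happen under one lock, so two racing requests
// cannot both pass the empty check.
type firstContactRegistry struct {
	mu  sync.Mutex
	ips map[string]string // agentID -> first-contact IP
}

// claim records ip for agentID only if no IP is recorded yet. It returns the
// winning IP and whether this call won the claim.
func (r *firstContactRegistry) claim(agentID, ip string) (string, bool) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if existing, ok := r.ips[agentID]; ok {
		return existing, false // loser: one IP already won
	}
	r.ips[agentID] = ip
	return ip, true
}

func main() {
	reg := &firstContactRegistry{ips: make(map[string]string)}
	winner, won := reg.claim("agent-1", "10.0.0.1")
	fmt.Println(winner, won) // 10.0.0.1 true
	winner, won = reg.claim("agent-1", "10.0.0.2")
	fmt.Println(winner, won) // 10.0.0.1 false
}
```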
@@ -234,6 +279,45 @@ type rateLimitEntry struct {
	count int
}

// Per-agent rate limiter (keyed by agent ID, not IP).
var agentRateLimiter = newAgentLimiter()

type agentLimiter struct {
	mu     sync.Mutex
	agents map[string]*rateLimitEntry
}

func newAgentLimiter() *agentLimiter {
	al := &agentLimiter{agents: make(map[string]*rateLimitEntry)}
	go func() {
		for {
			time.Sleep(time.Minute)
			al.mu.Lock()
			now := time.Now()
			for id, e := range al.agents {
				if now.Sub(e.windowStart) > time.Minute {
					delete(al.agents, id)
				}
			}
			al.mu.Unlock()
		}
	}()
	return al
}

func (al *agentLimiter) allow(agentID string, maxPerMinute int) bool {
	al.mu.Lock()
	defer al.mu.Unlock()
	now := time.Now()
	e, exists := al.agents[agentID]
	if !exists || now.Sub(e.windowStart) > time.Minute {
		e = &rateLimitEntry{windowStart: now, count: 0}
		al.agents[agentID] = e
	}
	e.count++
	return e.count <= maxPerMinute
}

// CORSMiddleware handles CORS headers.
func CORSMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
@@ -260,11 +344,47 @@ func SecurityHeadersMiddleware(next http.Handler) http.Handler {
		w.Header().Set("X-Content-Type-Options", "nosniff")
		w.Header().Set("X-XSS-Protection", "1; mode=block")
		w.Header().Set("Referrer-Policy", "strict-origin-when-cross-origin")
-		w.Header().Set("Content-Security-Policy", "default-src 'self'; script-src 'self' 'unsafe-inline' https://cdn.tailwindcss.com; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' data: https://fonts.gstatic.com; img-src 'self' data: https:; connect-src 'self' localhost 127.0.0.1")
+		// CSP: removed unused tailwindcss, tightened img-src to self+data only
+		w.Header().Set("Content-Security-Policy", "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline' https://fonts.googleapis.com; font-src 'self' data: https://fonts.gstatic.com; img-src 'self' data:; connect-src 'self' localhost 127.0.0.1 https://clavitor.ai")
		next.ServeHTTP(w, r)
	})
}

// MaxBodySizeMiddleware limits request body size and rejects binary content.
// Allows 64KB max for markdown notes. Rejects binary data (images, executables, etc).
func MaxBodySizeMiddleware(maxBytes int64) func(http.Handler) http.Handler {
	return func(next http.Handler) http.Handler {
		return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// Security: Reject binary content types
			contentType := r.Header.Get("Content-Type")
			if isBinaryContentType(contentType) {
				ErrorResponse(w, http.StatusUnsupportedMediaType, "binary_not_allowed",
					"Binary content not allowed. Only text/markdown data accepted.")
				return
			}
			r.Body = http.MaxBytesReader(w, r.Body, maxBytes)
			next.ServeHTTP(w, r)
		})
	}
}

// isBinaryContentType detects common binary content types.
func isBinaryContentType(ct string) bool {
	ct = strings.ToLower(ct)
	binaryTypes := []string{
		"image/", "audio/", "video/", "application/pdf",
		"application/zip", "application/gzip", "application/octet-stream",
		"application/x-executable", "application/x-dosexec",
		"multipart/form-data", // usually file uploads
	}
	for _, bt := range binaryTypes {
		if strings.Contains(ct, bt) {
			return true
		}
	}
	return false
}

// ErrorResponse sends a JSON error response.
func ErrorResponse(w http.ResponseWriter, status int, code, message string) {
	w.Header().Set("Content-Type", "application/json")
@@ -19,6 +19,9 @@ func NewRouter(cfg *lib.Config, webFS embed.FS) *chi.Mux {
	r.Use(LoggingMiddleware)
	r.Use(CORSMiddleware)
	r.Use(SecurityHeadersMiddleware)
	// Security: Limit request body to 64KB. Rejects binary uploads (images, executables).
	// Markdown notes and text data only. Returns 413 if exceeded, 415 for binary.
	r.Use(MaxBodySizeMiddleware(65536))
	r.Use(RateLimitMiddleware(120))
	r.Use(L1Middleware(cfg.DataDir))
@@ -124,7 +127,11 @@ func mountAPIRoutes(r chi.Router, h *Handlers) {
	r.Post("/backups", h.CreateBackup)
	r.Post("/backups/restore", h.RestoreBackup)

-	// Agent management (owner-only)
+	// Admin auth (PRF tap required for admin operations)
+	r.Post("/auth/admin/begin", h.AdminAuthBegin)
+	r.Post("/auth/admin/complete", h.AdminAuthComplete)
+
+	// Agent management (admin-only — requires PRF tap + admin token)
	r.Post("/agents", h.HandleCreateAgent)
	r.Get("/agents", h.HandleListAgents)
	r.Delete("/agents/{id}", h.HandleDeleteAgent)
Binary file not shown.
Binary file not shown.
@@ -1,6 +1,7 @@
package main

import (
+	"context"
	"embed"
	"flag"
	"log"
@@ -8,6 +9,7 @@ import (
	"strconv"

	"github.com/johanj/clavitor/api"
+	"github.com/johanj/clavitor/edition"
	"github.com/johanj/clavitor/lib"
)
@@ -25,9 +27,12 @@ func main() {

	port := flag.Int("port", envInt("PORT", 443), "Listen port")
	backupURL := flag.String("backup-url", envStr("BACKUP_URL", ""), "Backup vault URL for replication")

	// Telemetry flags (commercial edition only, ignored in community)
	telemetryFreq := flag.Int("telemetry-freq", envInt("TELEMETRY_FREQ", 0), "Telemetry POST interval in seconds (0 = disabled)")
	telemetryHost := flag.String("telemetry-host", envStr("TELEMETRY_HOST", ""), "Telemetry endpoint URL")
	telemetryToken := flag.String("telemetry-token", envStr("TELEMETRY_TOKEN", ""), "Bearer token for telemetry endpoint")
	popRegion := flag.String("pop-region", envStr("POP_REGION", ""), "POP region identifier (commercial only)")
	flag.Parse()

	_ = backupURL // TODO: wire up replication
@@ -38,13 +43,32 @@ func main() {
	}
	cfg.Port = strconv.Itoa(*port)

-	lib.StartTelemetry(lib.TelemetryConfig{
-		FreqSeconds: *telemetryFreq,
-		Host:        *telemetryHost,
-		Token:       *telemetryToken,
-		DataDir:     cfg.DataDir,
-		Version:     version,
-	})
+	// Initialize edition-specific configuration
+	log.Printf("Starting Clavitor Vault %s - %s Edition", version, edition.Current.Name())
+
+	if edition.Current.Name() == "commercial" {
+		// Commercial: Set up centralized telemetry and alerting
+		edition.SetCommercialConfig(&edition.CommercialConfig{
+			TelemetryHost:  *telemetryHost,
+			TelemetryToken: *telemetryToken,
+			TelemetryFreq:  *telemetryFreq,
+			POPRegion:      *popRegion,
+		})
+		ctx, cancel := context.WithCancel(context.Background())
+		defer cancel()
+		edition.StartTelemetry(ctx)
+	} else {
+		// Community: Telemetry disabled by default, can be enabled manually
+		if *telemetryHost != "" {
+			lib.StartTelemetry(lib.TelemetryConfig{
+				FreqSeconds: *telemetryFreq,
+				Host:        *telemetryHost,
+				Token:       *telemetryToken,
+				DataDir:     cfg.DataDir,
+				Version:     version,
+			})
+		}
+	}

	lib.StartBackupTimer(cfg.DataDir)
@@ -0,0 +1,271 @@
# Import Matrix — Field Mapping

All tables use the same columns. Empty cells = competitor does not support that field for that type.

## Field Kinds

`kind` describes the data type for rendering. NOT sensitivity — that's `tier`.

| Kind | Rendering | Examples |
|---|---|---|
| `text` | Plain text | username, cardholder, city, ssn, cvv, password |
| `email` | mailto: link | email fields |
| `phone` | tel: link | phone fields |
| `totp` | TOTP code generator | totp seeds / otpauth URIs |
| `url` | Clickable link | hostname fields |
| `date` | Formatted date | expiry, purchase_date |
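The kind-to-rendering mapping in the table can be sketched as a simple switch; `renderField` is an illustrative name, not the vault's actual renderer:

```go
package main

import "fmt"

// renderField maps a field's `kind` to a rendering, per the table above.
// Only the link-producing kinds are shown; text/date/totp need richer handling.
func renderField(kind, value string) string {
	switch kind {
	case "email":
		return fmt.Sprintf(`<a href="mailto:%s">%s</a>`, value, value)
	case "phone":
		return fmt.Sprintf(`<a href="tel:%s">%s</a>`, value, value)
	case "url":
		return fmt.Sprintf(`<a href="%s">%s</a>`, value, value)
	default: // text, date, totp: rendered elsewhere
		return value
	}
}

func main() {
	fmt.Println(renderField("email", "a@b.ch")) // <a href="mailto:a@b.ch">a@b.ch</a>
}
```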
## Encryption Tiers

`tier` determines encryption and visibility. Independent of `kind`.

| Tier | Encryption | Who can decrypt | Examples |
|---|---|---|---|
| L1 | Server-side (AES-GCM with L1 key) | Server, all agents | title, urls, notes, username, email, cardholder, address, expiry |
| **L2** | Client-side (PRF-derived L2 key) | CLI, extension, agents with L2 | password, totp, license_key, passphrase, db password, wifi password |
| **L3** | Client-side (PRF-derived L3 key) | Hardware tap only | card number, cvv, pin, ssn, passport, license, private_key, seed_phrase |
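Because `tier` is independent of `kind`, visibility filtering only ever looks at the tier. A minimal sketch, assuming a simple label/kind/tier triple (the `Field` struct and `visibleTo` helper are illustrative, not the vault's types):

```go
package main

import "fmt"

// Field carries the label/kind/tier triple from the tables above.
type Field struct {
	Label string
	Kind  string
	Tier  int // 1 = L1, 2 = L2, 3 = L3
}

// visibleTo keeps only the fields an actor cleared up to maxTier may decrypt;
// e.g. the server-side view (maxTier=1) of a card drops number, cvv, and pin.
func visibleTo(fields []Field, maxTier int) []Field {
	out := make([]Field, 0, len(fields))
	for _, f := range fields {
		if f.Tier <= maxTier {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	card := []Field{
		{"cardholder", "text", 1},
		{"number", "text", 3},
		{"cvv", "text", 3},
	}
	fmt.Println(len(visibleTo(card, 1))) // prints 1: only cardholder survives
}
```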
## Canonical Field Definitions

Every field we store. The `label`, `kind`, and `tier` are authoritative.

### Credential fields

| Label | Kind | Tier |
|---|---|---|
| username | text | L1 |
| password | text | **L2** |
| totp | totp | **L2** |
| email | email | L1 |

### Card fields

| Label | Kind | Tier |
|---|---|---|
| cardholder | text | L1 |
| number | text | **L3** |
| cvv | text | **L3** |
| pin | text | **L3** |
| brand | text | L1 |

### Identity fields

| Label | Kind | Tier |
|---|---|---|
| first_name | text | L1 |
| last_name | text | L1 |
| middle_name | text | L1 |
| email | email | L1 |
| phone | phone | L1 |
| address1 | text | L1 |
| address2 | text | L1 |
| city | text | L1 |
| state | text | L1 |
| zip | text | L1 |
| country | text | L1 |
| company | text | L1 |
| ssn | text | **L3** |
| passport | text | **L3** |
| license | text | **L3** |

### SSH Key fields

| Label | Kind | Tier |
|---|---|---|
| public_key | text | L1 |
| private_key | text | **L3** |
| passphrase | text | **L2** |
| fingerprint | text | L1 |
| key_type | text | L1 |

### Software License fields

| Label | Kind | Tier |
|---|---|---|
| license_key | text | **L2** |
| version | text | L1 |
| publisher | text | L1 |
| purchase_date | date | L1 |
| email | email | L1 |

### Database fields

| Label | Kind | Tier |
|---|---|---|
| db_type | text | L1 |
| hostname | url | L1 |
| port | text | L1 |
| database | text | L1 |
| username | text | L1 |
| password | text | **L2** |
| sid | text | L1 |
| connection_string | text | **L2** |

### Wi-Fi fields

| Label | Kind | Tier |
|---|---|---|
| ssid | text | L1 |
| password | text | **L2** |
| encryption | text | L1 |

### Server fields

| Label | Kind | Tier |
|---|---|---|
|
||||
| hostname | url | L1 |
|
||||
| username | text | L1 |
|
||||
| password | text | **L2** |
|
||||
|
||||
### Crypto Wallet fields
|
||||
| Label | Kind | Tier |
|
||||
|---|---|---|
|
||||
| seed_phrase | text | **L3** |
|
||||
| private_key | text | **L3** |
|
||||
| wallet_address | text | L1 |
|
||||
| derivation_path | text | L1 |
|
||||
| network | text | L1 |
|
||||
| passphrase | text | **L2** |
|
||||
|
||||
---
|
||||
|
||||
## Competitor Mapping
|
||||
|
||||
## Credential
|
||||
|
||||
| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|
||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
|
||||
| **title** | — | L1 | `metadata.name` | `name` | `overview.title` | `name` | `title` | `Title` | `name` | `title` | `Name` | `title` | `name` | `hostname` | `Title` | `Title` |
|
||||
| **urls** | — | L1 | `content.urls[]` | `login.uris[].uri` | `overview.urls[]` | `url` | `domain` | `URL` | `url` | `login_url` | `Url` | field type=`url` | `url` | `url` | `URL` | `URL` |
|
||||
| **notes** | — | L1 | `metadata.note` | `notes` | `details.notesPlain` | `extra` | `note` | `Notes` | `note` | `notes` | `Note` | `notes` | | | `Notes` | `Notes` |
|
||||
| username | text | L1 | `content.itemEmail` / `itemUsername` | `login.username` | designation=`username` | `username` | `email` / `login` | `UserName` | `username` | `login` | `Login` | type=`username` | `username` | `username` | `Username` | `Username` |
|
||||
| password | password | **L2** | `content.password` | `login.password` | designation=`password` | `password` | `password` | `Password` | `Password` | `Password` | `password` | `password` | `Pwd` | type=`password` | `password` | `password` |
|
||||
| totp | totp | **L2** | `content.totpUri` | `login.totp` | field type=`otp` | `totp` | `otpSecret` | plugin: `TimeOtp-Secret-Base32` | | `$oneTimeCode` | | type=`totp` | | | `OTPAuth` | `TOTP` |
|
||||
| email | text | L1 | `content.itemEmail` | | field label=`email` | | `email` | | | | | type=`email` | | | | |
|
||||
| custom fields | per type | L1/L2 | `extraFields[]` | `fields[]` | `sections[].fields[]` | parsed from `extra` | | custom strings | | `custom_fields[]` | `RfFieldsV2` | extra fields | | | | |
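Read as code, a single column of this table is a mapping routine from an export record onto canonical fields. The sketch below follows the Bitwarden column; the `bitwardenLogin` struct is a simplified stand-in for the real export schema, and `mapBitwarden` is a hypothetical helper, not the importer's actual API.

```go
package main

import "fmt"

// Field is a canonical Clavitor field as defined in the tables above.
type Field struct {
	Label string
	Kind  string
	Tier  string
}

// bitwardenLogin is a simplified assumption about a Bitwarden JSON
// export item, covering only the columns shown in the matrix.
type bitwardenLogin struct {
	Name     string
	Username string
	Password string
	Totp     string
	URIs     []string
}

// mapBitwarden applies the credential row of the matrix:
// login.username -> username (L1), login.password -> password (L2),
// login.totp -> totp (L2). Empty source values map to nothing.
func mapBitwarden(b bitwardenLogin) map[Field]string {
	out := map[Field]string{}
	if b.Username != "" {
		out[Field{"username", "text", "L1"}] = b.Username
	}
	if b.Password != "" {
		out[Field{"password", "text", "L2"}] = b.Password
	}
	if b.Totp != "" {
		out[Field{"totp", "totp", "L2"}] = b.Totp
	}
	return out
}

func main() {
	m := mapBitwarden(bitwardenLogin{Username: "kim", Password: "hunter2"})
	fmt.Println(len(m)) // 2
}
```

Keying the result by the full `Field` (label, kind, tier) keeps the tier decision attached to the value from the moment it is parsed.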

## Card

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | `metadata.name` | `name` | `overview.title` | `name` | `name` | `Title` | `name` | `title` | `Name` | `title` | | |
| **notes** | — | L1 | `metadata.note` | `notes` | `details.notesPlain` | `Notes` | `note` | `Notes` | `note` | `notes` | `Note` | `notes` | | |
| cardholder | text | L1 | `cardholderName` | `card.cardholderName` | field=`cardholder` | `Name on Card` | `holder` | | `cardholdername` | `cardholderName` | | label=`Cardholder` | | |
| number | password | **L3** | `number` | `card.number` | field=`ccnum` | `Number` | `cardNumber` | | `cardnumber` | `cardNumber` | | type=`credit_card` | | |
| cvv | password | **L3** | `verificationNumber` | `card.code` | field=`cvv` | `Security Code` | `securityCode` | | `cvc` | `cardSecurityCode` | | label=`CVV` | | |
| expiry | text | L1 | `expirationDate` | `expMonth`+`expYear` | field=`expiry` | `Expiration Date` | `expireDate` | | `expirydate` | `cardExpirationDate` | | label=`Expiry` | | |
| pin | password | **L3** | `pin` | | field=`pin` | | | | | `pinCode` | | | | |
| brand | text | L1 | `cardType` | `card.brand` | field=`type` | `Type` | `issuing_bank` | | | | | | | |

## Identity

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | `metadata.name` | `name` | `overview.title` | `name` | `name` | `Title` | `name` | `title` | `Name` | `title` | | |
| **notes** | — | L1 | `metadata.note` | `notes` | `details.notesPlain` | `Notes` | `note` | `Notes` | | `notes` | `Note` | `notes` | | |
| first_name | text | L1 | `firstName` | `identity.firstName` | field=`firstname` | `First Name` | `firstName` | | `full_name` | `name.first` | `First Name` | label=`First name` | | |
| last_name | text | L1 | `lastName` | `identity.lastName` | field=`lastname` | `Last Name` | `lastName` | | | `name.last` | `Last Name` | label=`Last name` | | |
| middle_name | text | L1 | | `identity.middleName` | field=`initial` | `Middle Name` | `middleName` | | | `name.middle` | `Middle Name` | | | |
| email | text | L1 | `email` | `identity.email` | field=`email` | `Email` | `email` | | `email` | `email` | `Email` | type=`email` | | |
| phone | text | L1 | `phoneNumber` | `identity.phone` | field=`defphone` | `Phone` | `phone_number` | | `phone_number` | `phone.default` | `Phone` | type=`phone` | | |
| address1 | text | L1 | `streetAddress` | `identity.address1` | field=`address.street` | `Address 1` | `addressStreet` | | `address1` | `address.street` | `Address 1` | label=`Address` | | |
| address2 | text | L1 | | `identity.address2` | | `Address 2` | | | `address2` | | `Address 2` | | | |
| city | text | L1 | `city` | `identity.city` | field=`address.city` | `City / Town` | `addressCity` | | `city` | `address.city` | `City` | label=`City` | | |
| state | text | L1 | `stateOrProvince` | `identity.state` | field=`address.state` | `State` | `addressState` | | `state` | `address.state` | `State` | label=`State` | | |
| zip | text | L1 | `zipOrPostalCode` | `identity.postalCode` | field=`address.zip` | `Zip / Postal Code` | `addressZipcode` | | `zipcode` | `address.zip` | `Zip` | label=`ZIP` | | |
| country | text | L1 | `country` | `identity.country` | field=`address.country` | `Country` | `addressCountry` | | `country` | `address.country` | `Country` | label=`Country` | | |
| company | text | L1 | `organization` | `identity.company` | field=`company` | `Company` | | | | `company` | `Company` | label=`Company` | | |
| ssn | password | **L3** | `socialSecurityNumber` | `identity.ssn` | field=`socialsecurity` | `Social Security Number` | | | | `accountNumber` | | | | |
| passport | password | **L3** | `passportNumber` | `identity.passportNumber` | field=`passport` | `Passport Number` | | | | | | label=`Passport` | | |
| license | password | **L3** | `licenseNumber` | `identity.licenseNumber` | field=`license` | `Driver's License` | | | | | | label=`License` | | |

## Note

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | `metadata.name` | `name` | `overview.title` | `name` | `title` | `Title` | `name` | `title` | `Name` | `title` | | |
| **notes** | — | **L2** | `metadata.note` | `notes` | `details.notesPlain` | `extra` | `content` | `Notes` | `note` | `notes` | `Note` | `notes` | | |

## SSH Key

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | | | `overview.title` | `name` | | | | `title` | | | | |
| public_key | text | L1 | | | field=`public_key` | `Public Key` | | | | `keyPair.publicKey` | | | | |
| private_key | password | **L3** | | | field=`private_key` | `Private Key` | | | | `keyPair.privateKey` | | | | |
| passphrase | password | **L2** | | | field=`password` | `Passphrase` | | | | `passphrase` | | | | |
| fingerprint | text | L1 | | | field=`fingerprint` | | | | | | | | | |
| key_type | text | L1 | | | | | | | | | | | | |

## Software License

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | | | `overview.title` | `name` | | | | | | | | |
| license_key | password | **L2** | | | field=`reg_code` | `License Key` | | | | | | | | |
| version | text | L1 | | | field=`product_version` | `Version` | | | | | | | | |
| publisher | text | L1 | | | field=`publisher_name` | | | | | | | | | |
| purchase_date | text | L1 | | | field=`order_date` | | | | | | | | | |
| email | text | L1 | | | field=`reg_email` | | | | | | | | | |

## Database

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | | | `overview.title` | `name` | | | | | | | | |
| db_type | text | L1 | | | field=`database_type` | `Type` | | | | | | | | |
| hostname | text | L1 | | | field=`hostname` | `Hostname` | | | | | | | | |
| port | text | L1 | | | field=`port` | `Port` | | | | | | | | |
| database | text | L1 | | | field=`database` | `Database` | | | | | | | | |
| username | text | L1 | | | field=`username` | `Username` | | | | | | | | |
| password | password | **L2** | | | field=`password` | `Password` | | | | | | | | |
| sid | text | L1 | | | field=`options` | | | | | | | | | |
| connection_string | password | **L2** | | | | | | | | | | | | |

## Wi-Fi

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | | | | `name` | | | | | | | | |
| ssid | text | L1 | | | | `SSID` | | | | | | | | |
| password | password | **L2** | | | | `Password` | | | | | | | | |
| encryption | text | L1 | | | | `Connection Type` | | | | | | | | |

## Server

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | | | `overview.title` | `name` | | | | | | | | |
| hostname | text | L1 | | | field=`url` | `Hostname` | | | | | | | | |
| username | text | L1 | | | field=`username` | `Username` | | | | | | | | |
| password | password | **L2** | | | field=`password` | `Password` | | | | | | | | |

## Crypto Wallet (Clavitor-only)

| Clavitor Field | Kind | Tier | Proton Pass | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| **title** | — | L1 | | | | | | | | | | | | |
| seed_phrase | password | **L3** | | | | | | | | | | | | |
| private_key | password | **L3** | | | | | | | | | | | | |
| wallet_address | text | L1 | | | | | | | | | | | | |
| derivation_path | text | L1 | | | | | | | | | | | | |
| network | text | L1 | | | | | | | | | | | | |
| passphrase | password | **L2** | | | | | | | | | | | | |

## Tier Summary

| Tier | What | Examples |
|---|---|---|
| L1 | Server-readable. Titles, URLs, usernames, labels, metadata. | title, urls, notes, username, email, cardholder, expiry, address fields |
| **L2** | Agent-decryptable. Operational secrets that agents/extensions need. | password, totp, license_key, db password, wifi password, server password, ssh passphrase, note content |
| **L3** | Hardware tap only. PII, financial, government IDs. | card number, cvv, pin, ssn, passport, driver's license, private keys, seed phrases |

## Coverage Summary

| Feature | Proton | Bitwarden | 1Password | LastPass | Dashlane | KeePass | NordPass | Keeper | RoboForm | Enpass | Chrome | Firefox | Safari/iCloud | KeePassXC | **Clavitor** |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Credential | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | **✓** |
| TOTP | ✓ | ✓ | ✓ | ✓ | ✓ | plugin | | ✓ | | ✓ | | | ✓ | ✓ | **✓** |
| Card | ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ | ✓ | | ✓ | | | | | **✓** |
| Identity | ✓ | ✓ | ✓ | ✓ | ✓ | | ✓ | ✓ | ✓ | ✓ | | | | | **✓** |
| Note | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | | | | ✓ | **✓** |
| SSH Key | | | ✓ | ✓ | | | | ✓ | | | | | | | **✓** |
| License | | | ✓ | ✓ | | | | | | | | | | | **✓** |
| Database | | | ✓ | ✓ | | | | | | | | | | | **✓** |
| Wi-Fi | | | | ✓ | | | | | | | | | | | **✓** |
| Server | | | ✓ | ✓ | | | | | | | | | | | **✓** |
| Crypto Wallet | | | | | | | | | | | | | | | **✓** |
| Custom Fields | ✓ | ✓ | ✓ | ✓ | | ✓ | | ✓ | ✓ | ✓ | | | | ✓ | **✓** |
| **L2/L3 tiers** | | | | | | | | | | | | | | | **✓** |
@@ -69,11 +69,12 @@ body.theme-light .vault-lock-banner { background: rgba(239,68,68,0.08); }

/* === RESET === */
*, *::before, *::after { box-sizing: border-box; margin: 0; padding: 0; }
html, body { height: 100%; }
html, body, #app { height: 100%; }
body { background: var(--bg); color: var(--text); font-family: var(--font-sans); font-size: 0.875rem; line-height: 1.6; }
a { color: inherit; text-decoration: none; }
button { font-family: inherit; font-size: inherit; cursor: pointer; border: none; background: none; color: inherit; }
input, select, textarea { font-family: inherit; font-size: inherit; color: var(--text); }
input[type="checkbox"] { accent-color: var(--text); }

/* === TYPOGRAPHY === */
h1 { font-size: 1.875rem; font-weight: 800; line-height: 1.1; color: var(--text); }
@@ -180,17 +181,21 @@ p { color: var(--muted); line-height: 1.75; }
.list-badge.type-ssh_key { color: var(--red); background: rgba(239,68,68,0.12); border-color: rgba(239,68,68,0.15); }
.list-badge.type-totp { color: #a855f7; background: rgba(168,85,247,0.12); border-color: rgba(168,85,247,0.15); }

.entry-row { display: flex; align-items: center; gap: 0.875rem; padding: 0.75rem 1rem; border-bottom: 1px solid rgba(255,255,255,0.04); cursor: pointer; transition: background 0.15s, transform 0.15s; }
.entry-row:hover { background: rgba(255,255,255,0.05); }
/* --- Shared item system (vault list, import list, agent list, etc.) --- */
.item-row { display: flex; align-items: center; gap: 0.625rem; padding: 0.625rem 0.75rem; border-bottom: 1px solid var(--border); cursor: pointer; transition: background 0.15s, transform 0.15s; }
.item-row:hover { background: rgba(100,140,200,0.08); }
.item-row:active { transform: scale(0.995); }
.item-row.faded { opacity: 0.35; }
.item-icon { width: 2.75rem; height: 1.375rem; border-radius: 0.25rem; background: var(--text); display: flex; align-items: center; justify-content: center; font-size: 0.5rem; font-weight: 600; color: var(--bg); flex-shrink: 0; font-family: var(--font-mono); text-transform: uppercase; letter-spacing: 0.05em; }
.item-title { font-weight: 500; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; color: var(--text); }
.item-sub { color: var(--muted); overflow: hidden; text-overflow: ellipsis; white-space: nowrap; font-size: 0.75rem; }
.item-list { display: flex; flex-direction: column; max-height: 60vh; overflow-y: auto; }

/* Legacy aliases — vault list uses these, migrate later */
.entry-row { display: flex; align-items: center; gap: 0.875rem; padding: 0.75rem 1rem; border-bottom: 1px solid var(--border); cursor: pointer; transition: background 0.15s, transform 0.15s; }
.entry-row:hover { background: rgba(100,140,200,0.08); }
.entry-row:active { transform: scale(0.995); }
.entry-icon { width: 2.75rem; height: 1.375rem; border-radius: 0.25rem; background: rgba(100,140,200,0.12); display: flex; align-items: center; justify-content: center; font-size: 0.5rem; font-weight: 600; color: var(--muted); flex-shrink: 0; font-family: var(--font-mono); text-transform: uppercase; letter-spacing: 0.05em; }
.entry-icon.type-credential { background: rgba(74,222,128,0.1); color: var(--accent); }
.entry-icon.type-card { background: rgba(212,175,55,0.1); color: var(--gold); }
.entry-icon.type-identity { background: rgba(96,165,250,0.1); color: #60a5fa; }
.entry-icon.type-note { background: rgba(148,163,184,0.1); color: var(--muted); }
.entry-icon.type-ssh_key { background: rgba(239,68,68,0.1); color: var(--red); }
.entry-icon.type-totp { background: rgba(168,85,247,0.1); color: #a855f7; }
.entry-icon.type-folder { background: rgba(212,175,55,0.1); color: var(--gold); }
.entry-icon { width: 2.75rem; height: 1.375rem; border-radius: 0.25rem; background: var(--text); display: flex; align-items: center; justify-content: center; font-size: 0.5rem; font-weight: 600; color: var(--bg); flex-shrink: 0; font-family: var(--font-mono); text-transform: uppercase; letter-spacing: 0.05em; }
.entry-domain { font-weight: 600; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; color: var(--text); }
.entry-user { color: var(--subtle); overflow: hidden; text-overflow: ellipsis; white-space: nowrap; font-size: 0.8125rem; }
.entry-user::before { content: '·'; margin: 0 0.5rem; color: var(--subtle); }
@@ -322,11 +327,6 @@ p { color: var(--muted); line-height: 1.75; }
.import-summary { display: flex; align-items: center; gap: 0.75rem; margin-bottom: 1rem; flex-wrap: wrap; padding: 0.75rem; background: rgba(100,140,200,0.06); border-radius: var(--radius-sm); border: 1px solid var(--border); }
.import-summary label { cursor: pointer; user-select: none; display: inline-flex; align-items: center; gap: 0.375rem; font-size: 0.8125rem; }
.import-list { max-height: 60vh; overflow-y: auto; display: flex; flex-direction: column; gap: 0.375rem; }
.import-item { display: flex; align-items: center; gap: 0.625rem; padding: 0.625rem 0.75rem; background: rgba(100,140,200,0.08); border: 1px solid rgba(148,163,184,0.08); border-radius: var(--radius-sm); transition: background 0.15s; }
.import-item:hover { background: rgba(100,140,200,0.12); }
.import-item.faded { opacity: 0.35; }
.import-item-title { flex: 1; overflow: hidden; text-overflow: ellipsis; white-space: nowrap; font-weight: 500; }

/* ============================================================
   APP — Onboarding / Unlock
@@ -17,40 +17,42 @@
<body>
<div class="main-area">
  <div id="topbar"></div>
  <div style="max-width:640px;margin:2rem auto;padding:0 1rem">
  <div style="max-width:960px;margin:2rem auto;padding:0 1rem">
    <div class="modal-body" style="background:var(--surface);border:1px solid var(--border);border-radius:var(--radius)">
      <h3 class="modal-title">Import Entries</h3>
      <p class="mb-4" style="color:var(--muted)">Upload a password manager export or scan a TOTP QR code.</p>

      <div id="dropZone" class="drop-zone mb-4">
        <div class="drop-zone-icon">📁</div>
        <div class="drop-zone-text">Drop file here or click to browse</div>
        <div class="drop-zone-hint">Proton Pass, Bitwarden, 1Password, LastPass, Dashlane, KeePass, KeePassXC, NordPass, Keeper, RoboForm, Enpass, Safari/iCloud, Chrome, Firefox</div>
        <input type="file" id="fileInput" class="hidden" accept=".zip,.json,.csv,.txt">
      </div>

      <div class="import-divider"><span>or</span></div>

      <button type="button" onclick="startQRScanner()" class="btn btn-qr-scan mb-4">
        <svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"><path d="M3 7V5a2 2 0 0 1 2-2h2M17 3h2a2 2 0 0 1 2 2v2M21 17v2a2 2 0 0 1-2 2h-2M7 21H5a2 2 0 0 1-2-2v-2"/><rect x="7" y="7" width="10" height="10" rx="1"/></svg>
        Scan Authenticator QR
      </button>

      <div id="qrScanner" class="hidden">
        <div class="qr-viewfinder">
          <video id="qrVideo" autoplay playsinline></video>
          <canvas id="qrCanvas" class="hidden"></canvas>
      <div id="fileStep">
        <div id="dropZone" class="drop-zone mb-4">
          <div class="drop-zone-icon">📁</div>
          <div class="drop-zone-text">Drop file here or click to browse</div>
          <div class="drop-zone-hint">Proton Pass, Bitwarden, 1Password, LastPass, Dashlane, KeePass, KeePassXC, NordPass, Keeper, RoboForm, Enpass, Safari/iCloud, Chrome, Edge, Brave, Vivaldi, Opera, Arc, Firefox</div>
          <input type="file" id="fileInput" class="hidden" accept=".zip,.json,.csv,.txt">
        </div>

        <div class="import-divider"><span>or</span></div>

        <button type="button" onclick="startQRScanner()" class="btn btn-qr-scan mb-4">
          <svg width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2"><path d="M3 7V5a2 2 0 0 1 2-2h2M17 3h2a2 2 0 0 1 2 2v2M21 17v2a2 2 0 0 1-2 2h-2M7 21H5a2 2 0 0 1-2-2v-2"/><rect x="7" y="7" width="10" height="10" rx="1"/></svg>
          Scan Authenticator QR
        </button>

        <div id="qrScanner" class="hidden">
          <div class="qr-viewfinder">
            <video id="qrVideo" autoplay playsinline></video>
            <canvas id="qrCanvas" class="hidden"></canvas>
          </div>
          <button type="button" onclick="stopQRScanner()" class="btn btn-ghost mt-4" style="width:100%">Cancel Scan</button>
        </div>
        <button type="button" onclick="stopQRScanner()" class="btn btn-ghost mt-4" style="width:100%">Cancel Scan</button>
      </div>

      <div id="preview" class="hidden">
        <div id="summary" class="import-summary mb-4" style="display:flex;align-items:center;gap:0.75rem;flex-wrap:wrap"></div>
        <div style="display:flex;gap:0.75rem;margin-bottom:1rem">
        <div id="importActions" class="hidden" style="display:flex;gap:0.75rem;margin-bottom:1rem">
          <button onclick="doImport()" class="btn btn-primary" id="importBtn">Import</button>
          <a href="/app/" class="btn btn-ghost">Cancel</a>
        </div>
        <div id="entryList" class="import-list"></div>
        <div id="entryList" class="item-list"></div>
      </div>

      <div id="noPreview">
@@ -80,16 +82,43 @@
});

function esc(s) { return s ? String(s).replace(/&/g,'&amp;').replace(/</g,'&lt;').replace(/>/g,'&gt;') : ''; }

// Consent dialog for domain classification
function showClassifyConsent(count) {
  return new Promise(function(resolve) {
    var el = document.getElementById('summary');
    el.innerHTML =
      '<div style="padding:0.5rem 0">' +
      '<p style="font-weight:600;margin-bottom:0.5rem">Auto-assign scopes?</p>' +
      '<p style="font-size:0.8125rem;color:var(--muted);margin-bottom:0.75rem">' +
      'We can automatically categorize your ' + count + ' entries into scopes ' +
      '(finance, social, dev, etc.) so you can easily control who sees what.' +
      '</p>' +
      '<p style="font-size:0.75rem;color:var(--subtle);margin-bottom:0.75rem">' +
      'Only <strong>domain names</strong> (e.g. "github.com") are sent to clavitor.ai over TLS. ' +
      'No usernames, passwords, or other data leaves your browser.' +
      '</p>' +
      '<div style="display:flex;gap:0.5rem">' +
      '<button id="classifyYes" class="btn btn-primary btn-sm">Yes, classify</button>' +
      '<button id="classifyNo" class="btn btn-ghost btn-sm">Skip</button>' +
      '<button id="classifyCancel" class="btn btn-ghost btn-sm">Cancel</button>' +
      '</div>' +
      '</div>';
    document.getElementById('classifyYes').onclick = function() { resolve(true); };
    document.getElementById('classifyNo').onclick = function() { resolve(false); };
    document.getElementById('classifyCancel').onclick = function() { resolve(null); };
  });
}
function typeLabel(t) { return {login:'LGN',credential:'LGN',note:'NOTE',creditCard:'CARD',card:'CARD',identity:'ID'}[t] || 'OTH'; }

function toggleAll(checked) {
  document.querySelectorAll('.import-check').forEach(function(cb) { cb.checked = checked; });
  document.querySelectorAll('.item-check').forEach(function(cb) { cb.checked = checked; });
  updateCount();
}

function updateCount() {
  var total = document.querySelectorAll('.import-check').length;
  var selected = document.querySelectorAll('.import-check:checked').length;
  var total = document.querySelectorAll('.item-check').length;
  var selected = document.querySelectorAll('.item-check:checked').length;
  var el = document.getElementById('selectedCount');
  if (el) el.textContent = selected + ' / ' + total + ' selected';
}
@@ -119,48 +148,119 @@
var records = detectAndParse(text);
if (!records || records.length === 0) { document.getElementById('summary').innerHTML = '<span class="badge" style="background:var(--red);color:#fff">No records found</span>'; return; }

// Classify domains
document.getElementById('summary').innerHTML = '<span class="badge muted">Classifying ' + records.length + ' entries...</span>';
var classifications = await classifyDomains(records);
// Parsing done — hide the file step, we're in assignment mode now
document.getElementById('fileStep').classList.add('hidden');

// Ask user if they want auto-classification
document.getElementById('importActions').classList.add('hidden');
var doClassify = await showClassifyConsent(records.length);

// Cancel — go back to file step
if (doClassify === null) {
  document.getElementById('fileStep').classList.remove('hidden');
  document.getElementById('preview').classList.add('hidden');
  document.getElementById('noPreview').classList.remove('hidden');
  document.getElementById('summary').innerHTML = '';
  document.getElementById('entryList').innerHTML = '';
  return;
}

var classifications = {};
if (doClassify) {
  document.getElementById('summary').innerHTML = '<span class="badge muted">Classifying domains... this may take a moment</span>';
  document.getElementById('entryList').innerHTML = '';
  classifications = await classifyDomains(records);
}
applyScopes(records, classifications);

// Sort by scope, then title
// Sort by primary scope, then title
records.sort(function(a, b) {
  if (a.scope !== b.scope) return (a.scope || '').localeCompare(b.scope || '');
  var sa = (a.scope || 'misc').split(',')[0].trim();
  var sb = (b.scope || 'misc').split(',')[0].trim();
  if (sa !== sb) return sa.localeCompare(sb);
  return (a.title || '').toLowerCase().localeCompare((b.title || '').toLowerCase());
});
parsedRecords = records;

// Count by scope
// Count by individual scope (split multi-scopes)
var scopeCounts = {};
records.forEach(function(r) { scopeCounts[r.scope] = (scopeCounts[r.scope] || 0) + 1; });
records.forEach(function(r) {
  (r.scope || 'misc').split(',').forEach(function(s) {
    s = s.trim();
    if (s) scopeCounts[s] = (scopeCounts[s] || 0) + 1;
  });
});

// Scope pills — one per scope, clickable to filter
var summaryHTML = '<span class="badge accent">' + records.length + ' entries</span>';
Object.keys(scopeCounts).sort().forEach(function(s) {
  summaryHTML += '<span class="badge muted" style="font-size:0.7rem">' + s + ': ' + scopeCounts[s] + '</span>';
  summaryHTML += '<span class="badge muted scope-pill" data-scope="' + esc(s) + '" onclick="filterScope(\'' + esc(s) + '\')" style="font-size:0.7rem;cursor:pointer">' + s + ': ' + scopeCounts[s] + '</span>';
});
summaryHTML += '<span id="selectedCount" style="font-size:0.8125rem;color:var(--muted)">' + records.length + ' / ' + records.length + ' selected</span>' +
  '<label style="cursor:pointer;font-size:0.8125rem;margin-left:auto"><input type="checkbox" checked onchange="toggleAll(this.checked)"> Select all</label>';
document.getElementById('summary').innerHTML = summaryHTML;

// Render grouped by scope
renderImportList(records);

// NOW show the Import button
document.getElementById('importActions').classList.remove('hidden');
document.getElementById('importActions').style.display = 'flex';
}

var activeFilter = null;

function filterScope(scope) {
  if (activeFilter === scope) {
    activeFilter = null; // toggle off
  } else {
    activeFilter = scope;
  }
  // Highlight active pill
  document.querySelectorAll('.scope-pill').forEach(function(p) {
    p.style.outline = (p.dataset.scope === activeFilter) ? '2px solid var(--accent)' : 'none';
  });
  renderImportList(parsedRecords);
}

function hasScope(entry, scope) {
  return (',' + (entry.scope || '') + ',').indexOf(',' + scope + ',') >= 0;
}

// Primary scope = first one in the comma list
function primaryScope(entry) {
  return (entry.scope || 'misc').split(',')[0].trim();
}

function toggleScope(scope, checked) {
  document.querySelectorAll('.item-check').forEach(function(cb) {
    var idx = parseInt(cb.dataset.idx);
    if (parsedRecords[idx] && hasScope(parsedRecords[idx], scope)) {
      cb.checked = checked;
    }
  });
  updateCount();
}

function renderImportList(records) {
  var html = '';
  var currentScope = '';
  for (var i = 0; i < records.length; i++) {
    var r = records[i];
    if (r.scope !== currentScope) {
      currentScope = r.scope;
      html += '<div style="padding:0.5rem 0.75rem;font-size:0.75rem;font-weight:600;color:var(--accent);text-transform:uppercase;letter-spacing:0.05em;margin-top:0.5rem">' + esc(currentScope) + '</div>';
    if (activeFilter && !hasScope(r, activeFilter)) continue;
    var ps = primaryScope(r);
    if (ps !== currentScope) {
      currentScope = ps;
      html += '<div style="padding:0.5rem 0.75rem;font-size:0.75rem;font-weight:600;color:var(--accent);text-transform:uppercase;letter-spacing:0.05em;margin-top:0.5rem;display:flex;align-items:center;gap:0.5rem">' +
        '<input type="checkbox" checked onchange="toggleScope(\'' + esc(currentScope) + '\', this.checked)">' +
        esc(currentScope) +
        '</div>';
|
||||
}
|
||||
var user = getUsername(r);
|
||||
html += '<div class="import-item">' +
|
||||
'<input type="checkbox" checked class="import-check" data-idx="' + i + '" onchange="updateCount()">' +
|
||||
'<span class="entry-icon type-' + (r.type === 'login' ? 'credential' : r.type) + '" style="width:1.5rem;height:1.5rem;font-size:0.5rem">' + typeLabel(r.type) + '</span>' +
|
||||
'<span style="flex:1;min-width:0">' +
|
||||
'<span style="font-weight:500;font-size:0.8125rem">' + esc(r.title) + '</span>' +
|
||||
(user ? '<span style="color:var(--muted);font-size:0.75rem;margin-left:0.5rem">' + esc(user) + '</span>' : '') +
|
||||
'</span>' +
|
||||
(r.urls && r.urls.length ? '<span style="color:var(--subtle);font-size:0.7rem;font-family:var(--font-mono);max-width:180px;overflow:hidden;text-overflow:ellipsis;white-space:nowrap">' + esc(r.urls[0]) + '</span>' : '') +
|
||||
html += '<div class="item-row">' +
|
||||
'<input type="checkbox" checked class="item-check" data-idx="' + i + '" onchange="updateCount()">' +
|
||||
'<span class="item-icon type-' + (r.type === 'login' ? 'credential' : r.type) + '" style="width:1.5rem;height:1.5rem;font-size:0.5rem">' + typeLabel(r.type) + '</span>' +
|
||||
'<span style="font-weight:500;font-size:0.8125rem;white-space:nowrap">' + esc(r.title) + '</span>' +
|
||||
(user ? '<span style="color:var(--muted);font-size:0.75rem;overflow:hidden;text-overflow:ellipsis;white-space:nowrap;min-width:0">' + esc(user) + '</span>' : '') +
|
||||
'</div>';
|
||||
}
|
||||
document.getElementById('entryList').innerHTML = html;
|
||||
|
|
@@ -178,7 +278,7 @@
btn.textContent = 'Encrypting...';

var unchecked = {};
-document.querySelectorAll('.import-check:not(:checked)').forEach(function(cb) { unchecked[parseInt(cb.dataset.idx)] = true; });
+document.querySelectorAll('.item-check:not(:checked)').forEach(function(cb) { unchecked[parseInt(cb.dataset.idx)] = true; });

var selected = [];
parsedRecords.forEach(function(r, i) { if (!unchecked[i]) selected.push(r); });
@@ -276,7 +376,7 @@
document.getElementById('noPreview').classList.add('hidden');
document.getElementById('summary').innerHTML = '<span class="badge accent">1 TOTP entry</span>';
document.getElementById('entryList').innerHTML =
-'<div class="import-item"><span style="font-weight:500">' + esc(title) + '</span><span style="color:var(--muted);margin-left:0.5rem">' + esc(label) + '</span></div>';
+'<div class="item-row"><span style="font-weight:500">' + esc(title) + '</span><span style="color:var(--muted);margin-left:0.5rem">' + esc(label) + '</span></div>';
}

</script>
@@ -122,6 +122,7 @@ async function classifyDomains(entries) {
});

var domains = Object.keys(domainSet);
+console.log('classifyDomains: ' + domains.length + ' unique domains', domains.slice(0, 10));
if (domains.length === 0) return {};

try {
@@ -131,7 +132,9 @@ async function classifyDomains(entries) {
body: JSON.stringify(domains)
});
if (!resp.ok) throw new Error('HTTP ' + resp.status);
-return await resp.json();
+var result = await resp.json();
+console.log('classifyDomains: got', Object.keys(result).length, 'results', result);
+return result;
} catch (e) {
// Service unavailable — return empty, all entries get "misc"
console.warn('Domain classification unavailable:', e.message);
@@ -142,18 +145,20 @@ async function classifyDomains(entries) {
// Apply scope classifications to entries
function applyScopes(entries, classifications) {
entries.forEach(function(e) {
-var scope = 'misc';
+var scope = null;
// Check URLs first
(e.urls || []).forEach(function(u) {
+if (scope) return;
var d = extractETLD1(u);
if (d && classifications[d]) scope = classifications[d];
});
// Fallback: check title as domain
-if (scope === 'misc' && e.title && e.title.indexOf('.') > 0) {
+if (!scope && e.title && e.title.indexOf('.') > 0) {
var d = extractETLD1(e.title);
if (d && classifications[d]) scope = classifications[d];
}
-e.scope = scope;
+// No URL to classify = unclassified. LLM returned nothing useful = misc.
+e.scope = scope || (e.urls && e.urls.length ? 'misc' : 'unclassified');
});
return entries;
}
@@ -539,7 +539,7 @@
// Entry list
async function loadEntries(autoImport) {
try {
-entries = await api('GET', '/api/entries?meta=1');
+entries = await api('GET', '/api/entries');
history.replaceState({list: true}, '', '/app/');
renderEntryList();
if (autoImport && entries.length === 0) {
@@ -86,7 +86,7 @@ function initTopbar() {
nav += '<a href="/app/import.html"' + (path.indexOf('import') >= 0 ? ' class="topbar-active"' : '') + '>Import</a>';
if (typeof showAudit === 'function') nav += '<a href="#" onclick="showAudit();return false">Audit</a>';
if (typeof showAgents === 'function') nav += '<a href="#" onclick="showAgents();return false">Agents</a>';
-nav += '<span style="font-size:0.65rem;opacity:0.35;margin-left:auto;font-family:var(--font-mono)">v2.0.32</span>';
+nav += '<span style="font-size:0.65rem;opacity:0.35;margin-left:auto;font-family:var(--font-mono)">v2.0.44</span>';
nav += '<button onclick="if(typeof lockVault===\'function\'){lockVault()}else{sessionStorage.removeItem(\'clavitor_master\');location.href=\'/app/\'}" class="topbar-lock">Lock</button>';

var logo = '<a href="/app/" class="topbar-lockup" style="display:inline-flex;gap:10px;align-items:center;text-decoration:none">' +
@@ -0,0 +1,132 @@
# Clavitor Edition System

This directory implements build-time differentiation between **Community** (OSS) and **Commercial** (hosted) editions of Clavitor Vault.

## Architecture

```
edition/
├── edition.go    # Interface definition (build-agnostic)
├── community.go  # Community Edition (default, !commercial build tag)
└── commercial.go # Commercial Edition (commercial build tag)
```

## Build Instructions

### Community Edition (Default)
```bash
# Self-hosted, no telemetry, AGPL-compliant
go build -o clavitor ./cmd/clavitor/
```

### Commercial Edition
```bash
# Managed by clavitor.ai, telemetry enabled, SCIM/SIEM support
go build -tags commercial -o clavitor ./cmd/clavitor/
```

## Key Differences

| Feature | Community | Commercial |
|---------|-----------|------------|
| Telemetry | Manual opt-in via CLI flags | Enabled by default, centralized |
| Operator Alerts | Local logs only | POSTs to `/v1/alerts` endpoint |
| Central Management | None | Multi-POP dashboard at clavitor.ai |
| SCIM/SIEM | No | Yes |
| License | AGPL | Commercial license |

## Usage in Code

### Sending Operator Alerts

```go
// Always use edition.Current.AlertOperator() instead of log.Printf
edition.Current.AlertOperator(ctx, "auth_error", "message", map[string]any{
	"key": "value",
})
```

**Community behavior:** Logs to stderr with `OPERATOR ALERT [type]: message` prefix.

**Commercial behavior:** Logs locally + POSTs JSON to `{TelemetryHost}/v1/alerts`.

### Checking Edition

```go
if edition.Current.Name() == "commercial" {
	// Commercial-only features
}

if edition.Current.IsTelemetryEnabled() {
	// Telemetry is active (commercial always, community if configured)
}
```

### Commercial Configuration

Only valid for commercial builds. Community builds log a warning if called.

```go
edition.SetCommercialConfig(&edition.CommercialConfig{
	TelemetryHost:  "https://hq.clavitor.com",
	TelemetryToken: "bearer-token",
	TelemetryFreq:  300, // 5 minutes
	POPRegion:      "us-east-1",
})

// Start periodic telemetry reporting
ctx := context.Background()
edition.StartTelemetry(ctx)
```

## Environment Variables (Commercial)

| Variable | Purpose |
|----------|---------|
| `TELEMETRY_HOST` | Base URL for telemetry (e.g., `https://hq.clavitor.com`) |
| `TELEMETRY_TOKEN` | Bearer token for authentication |
| `TELEMETRY_FREQ` | Seconds between POSTs (default: 300) |
| `POP_REGION` | POP identifier for dashboards |

## Alert Endpoint (Commercial)

Commercial builds POST to:
```
POST {TelemetryHost}/v1/alerts
Authorization: Bearer {TelemetryToken}
Content-Type: application/json
```

Payload:
```json
{
  "edition": "commercial",
  "type": "auth_system_error",
  "message": "Agent lookup failed",
  "details": {"error": "..."},
  "hostname": "pop-us-east-1-03",
  "pop_region": "us-east-1",
  "timestamp": "2026-04-02T00:11:45Z"
}
```

## Important Notes for Opus

1. **Never remove the build tags** — they're essential for the dual-license model.
2. **Always use `edition.Current`** — don't branch on build tags in application code.
3. **Community is default** — if someone builds without tags, they get OSS edition.
4. **Commercial config is optional** — commercial builds work without telemetry (just logs).
5. **Telemetry is separate** — the old `lib/telemetry.go` still exists for community opt-in.

## Testing

```bash
# Test community
go test ./edition/...

# Test commercial
go test -tags commercial ./edition/...

# Build both
go build ./cmd/clavitor/ && go build -tags commercial ./cmd/clavitor/
```
@@ -0,0 +1,131 @@
//go:build commercial

// Package edition - Commercial Edition implementation.
// This file is built ONLY when the "commercial" build tag is specified.
//
// Commercial Edition features:
//   - Telemetry to clavitor.ai (metrics, health)
//   - Centralized operator alerting
//   - Multi-POP management
//   - SCIM/SIEM integration
//
// Build: go build -tags commercial ./cmd/clavitor/
package edition

import (
	"bytes"
	"context"
	"encoding/json"
	"log"
	"net/http"
	"os"
	"time"
)

func init() {
	Current = &commercialEdition{name: "commercial"}
	SetCommercialConfig = setCommercialConfig
}

// commercialEdition is the Commercial Edition implementation.
type commercialEdition struct {
	name   string
	config CommercialConfig
}

// Global config set at startup via SetCommercialConfig().
var globalConfig *CommercialConfig

func setCommercialConfig(cfg *CommercialConfig) {
	globalConfig = cfg
	if cfg.TelemetryFreq <= 0 {
		cfg.TelemetryFreq = 300 // 5 minutes default
	}
}

func (e *commercialEdition) Name() string { return e.name }

func (e *commercialEdition) IsTelemetryEnabled() bool {
	if globalConfig == nil {
		return false
	}
	return globalConfig.TelemetryHost != ""
}

func (e *commercialEdition) AlertOperator(ctx context.Context, alertType, message string, details map[string]any) {
	// Always log locally first
	if details != nil {
		log.Printf("OPERATOR ALERT [%s]: %s - %+v", alertType, message, details)
	} else {
		log.Printf("OPERATOR ALERT [%s]: %s", alertType, message)
	}

	// Commercial: POST to telemetry alert endpoint if configured
	if globalConfig == nil || globalConfig.TelemetryHost == "" {
		return
	}

	hostname, _ := os.Hostname()
	alert := map[string]any{
		"edition":    "commercial",
		"type":       alertType,
		"message":    message,
		"details":    details,
		"hostname":   hostname,
		"pop_region": globalConfig.POPRegion,
		"timestamp":  time.Now().UTC().Format(time.RFC3339),
	}

	body, _ := json.Marshal(alert)
	url := globalConfig.TelemetryHost + "/v1/alerts"
	req, err := http.NewRequestWithContext(ctx, "POST", url, bytes.NewReader(body))
	if err != nil {
		log.Printf("telemetry alert failed to create request: %v", err)
		return
	}
	req.Header.Set("Content-Type", "application/json")
	if globalConfig.TelemetryToken != "" {
		req.Header.Set("Authorization", "Bearer "+globalConfig.TelemetryToken)
	}

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		log.Printf("telemetry alert failed to POST: %v", err)
		return
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 400 {
		log.Printf("telemetry alert rejected: HTTP %d", resp.StatusCode)
	}
}

// StartTelemetry begins the periodic telemetry reporter (commercial only).
func StartTelemetry(ctx context.Context) {
	if globalConfig == nil || globalConfig.TelemetryHost == "" {
		log.Printf("Commercial edition: telemetry disabled (no config)")
		return
	}

	log.Printf("Commercial edition: telemetry enabled to %s", globalConfig.TelemetryHost)

	go func() {
		ticker := time.NewTicker(time.Duration(globalConfig.TelemetryFreq) * time.Second)
		defer ticker.Stop()

		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				postMetrics(ctx)
			}
		}
	}()
}

func postMetrics(ctx context.Context) {
	// TODO: Implement metrics POST to /v1/telemetry
	// This collects vault count, entry count, system metrics
}
@@ -0,0 +1,47 @@
//go:build !commercial

// Package edition - Community Edition implementation.
// This file is built when NO build tags are specified (default).
//
// Community Edition features:
//   - No external telemetry (privacy-first)
//   - Local logging only
//   - Self-hosted, no central management
//   - AGPL-compliant open source
package edition

import (
	"context"
	"log"
)

func init() {
	Current = &communityEdition{name: "community"}
	SetCommercialConfig = func(cfg *CommercialConfig) {
		// No-op in community edition
		log.Printf("WARNING: CommercialConfig ignored in Community Edition")
	}
}

// communityEdition is the Community Edition implementation.
type communityEdition struct {
	name string
}

func (e *communityEdition) Name() string { return e.name }

func (e *communityEdition) IsTelemetryEnabled() bool { return false }

func (e *communityEdition) AlertOperator(ctx context.Context, alertType, message string, details map[string]any) {
	// Community: Log locally only. No external calls.
	if details != nil {
		log.Printf("OPERATOR ALERT [%s]: %s - %+v", alertType, message, details)
	} else {
		log.Printf("OPERATOR ALERT [%s]: %s", alertType, message)
	}
}

// StartTelemetry is a no-op in Community Edition.
func StartTelemetry(ctx context.Context) {
	log.Printf("Community edition: telemetry disabled (privacy-first)")
}
@@ -0,0 +1,49 @@
// Package edition provides build-time differentiation between Community and Commercial editions.
//
// Community Edition (default): No telemetry, no central management, self-hosted only.
// Commercial Edition (build tag: commercial): Telemetry, alerting, managed by clavitor.ai.
//
// Build instructions:
//
//	Community:  go build ./cmd/clavitor/
//	Commercial: go build -tags commercial ./cmd/clavitor/
//
// Usage in code:
//
//	edition.Current.AlertOperator(ctx, "auth_error", "message", details)
//
// To add commercial config at startup:
//
//	edition.SetCommercialConfig(&edition.CommercialConfig{...})
package edition

import "context"

// Edition defines the interface for community vs commercial behavior.
type Edition interface {
	// Name returns "community" or "commercial".
	Name() string
	// AlertOperator sends critical operational alerts.
	// Community: logs to stderr with OPERATOR ALERT prefix.
	// Commercial: POSTs to telemetry endpoint + logs locally.
	AlertOperator(ctx context.Context, alertType, message string, details map[string]any)
	// IsTelemetryEnabled returns true if this edition sends data to central servers.
	IsTelemetryEnabled() bool
}

// Current is the edition implementation for this build.
// Set at init() time in community.go or commercial.go based on build tags.
var Current Edition

// CommercialConfig holds commercial-only settings. It is defined here,
// outside the build tags, so both editions compile against the same API.
type CommercialConfig struct {
	TelemetryHost  string
	TelemetryToken string
	TelemetryFreq  int
	POPRegion      string
}

// SetCommercialConfig is assigned at init() time: a warning no-op in
// community builds, the real implementation in commercial builds.
var SetCommercialConfig func(cfg *CommercialConfig)
@@ -81,8 +81,13 @@ func hasRecentBackup(backupDir, vaultID string, maxAge time.Duration) bool {
}

// createBackup copies a DB using VACUUM INTO (consistent, compacted snapshot).
+// Security: Validates filename to prevent path injection.
func createBackup(dbPath, backupDir string, now time.Time) error {
	name := strings.TrimSuffix(filepath.Base(dbPath), ".db")
+	// Security: Validate vault name format (alphanumeric only)
+	if !isValidVaultName(name) {
+		return fmt.Errorf("invalid vault name: %s", name)
+	}
	stamp := now.Format("20060102-150405")
	dest := filepath.Join(backupDir, fmt.Sprintf("%s_%s.db", name, stamp))
@@ -96,6 +101,19 @@ func createBackup(dbPath, backupDir string, now time.Time) error {
	return err
}

+// isValidVaultName ensures the vault name only contains safe characters.
+func isValidVaultName(name string) bool {
+	if len(name) == 0 || len(name) > 128 {
+		return false
+	}
+	for _, c := range name {
+		if !((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '-' || c == '_') {
+			return false
+		}
+	}
+	return true
+}
+
// pruneBackups deletes all backup files older than maxAge.
func pruneBackups(backupDir string, maxAge time.Duration) {
	files, _ := filepath.Glob(filepath.Join(backupDir, "*.db"))
@@ -428,14 +428,56 @@ func AgentLookup(db *DB, vaultKey []byte, agentIDHex string) (*AgentData, error)
	}

	return &AgentData{
-		AgentID:   vd.AgentID,
-		Name:      vd.Title,
-		Scopes:    vd.Scopes,
-		AllAccess: vd.AllAccess,
-		Admin:     vd.Admin,
+		AgentID:    vd.AgentID,
+		Name:       vd.Title,
+		Scopes:     vd.Scopes,
+		AllAccess:  vd.AllAccess,
+		Admin:      vd.Admin,
+		AllowedIPs: vd.AllowedIPs,
+		RateLimit:  vd.RateLimit,
+		EntryID:    e.EntryID,
	}, nil
}

+// AgentUpdateAllowedIPs re-encrypts the agent entry data with updated AllowedIPs.
+func AgentUpdateAllowedIPs(db *DB, vaultKey []byte, agent *AgentData) error {
+	var e Entry
+	err := db.Conn.QueryRow(
+		`SELECT entry_id, data, data_level FROM entries WHERE entry_id = ? AND deleted_at IS NULL`,
+		int64(agent.EntryID),
+	).Scan(&e.EntryID, &e.Data, &e.DataLevel)
+	if err != nil {
+		return err
+	}
+
+	entryKey, err := DeriveEntryKey(vaultKey, int64(e.EntryID))
+	if err != nil {
+		return err
+	}
+	dataText, err := Unpack(entryKey, e.Data)
+	if err != nil {
+		return err
+	}
+	var vd VaultData
+	if err := json.Unmarshal([]byte(dataText), &vd); err != nil {
+		return err
+	}
+
+	vd.AllowedIPs = agent.AllowedIPs
+	updated, err := json.Marshal(vd)
+	if err != nil {
+		return err
+	}
+	packed, err := Pack(entryKey, string(updated))
+	if err != nil {
+		return err
+	}
+
+	_, err = db.Conn.Exec(`UPDATE entries SET data = ?, updated_at = ? WHERE entry_id = ?`,
+		packed, time.Now().Unix(), int64(agent.EntryID))
+	return err
+}
+
// AgentCreate creates an agent entry and returns the client credential token.
func AgentCreate(db *DB, vaultKey, l0 []byte, name string, scopes string, allAccess, admin bool) (*AgentData, string, error) {
	// Generate random 16-byte agent_id and scope_id
@ -1,324 +0,0 @@
|
|||
package lib
|
||||
|
||||
import (
|
||||
"testing"
|
||||
)
|
||||
|
||||
// testDB creates a temp database, migrates it, returns DB + cleanup.
|
||||
func testDB(t *testing.T) *DB {
|
||||
t.Helper()
|
||||
db, err := OpenDB(t.TempDir() + "/test.db")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if err := MigrateDB(db); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
t.Cleanup(func() { db.Close() })
|
||||
return db
|
||||
}
|
||||
|
||||
// testVaultKey returns a fixed 16-byte key for testing.
|
||||
func testVaultKey() []byte {
|
||||
return []byte{1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4, 5, 6, 7, 8}
|
||||
}
|
||||
|
||||
func TestEntryCreate_and_Get(t *testing.T) {
|
||||
db := testDB(t)
|
||||
vk := testVaultKey()
|
||||
|
||||
entry := &Entry{
|
||||
Type: TypeCredential,
|
||||
Title: "GitHub",
|
||||
VaultData: &VaultData{
|
||||
Title: "GitHub",
|
||||
Type: "credential",
|
||||
Fields: []VaultField{
|
||||
{Label: "username", Value: "octocat", Kind: "text"},
|
||||
{Label: "password", Value: "ghp_abc123", Kind: "password"},
|
||||
},
|
||||
URLs: []string{"https://github.com"},
|
||||
},
|
||||
}
|
||||
|
||||
if err := EntryCreate(db, vk, entry); err != nil {
|
||||
t.Fatalf("create: %v", err)
|
||||
}
|
||||
if entry.EntryID == 0 {
|
||||
t.Fatal("entry ID should be assigned")
|
||||
}
|
||||
if entry.Version != 1 {
|
||||
t.Errorf("initial version should be 1, got %d", entry.Version)
|
||||
}
|
||||
|
||||
got, err := EntryGet(db, vk, int64(entry.EntryID))
|
||||
if err != nil {
|
||||
t.Fatalf("get: %v", err)
|
||||
}
|
||||
if got.Title != "GitHub" {
|
||||
t.Errorf("title = %q, want GitHub", got.Title)
|
||||
}
|
||||
if got.VaultData == nil {
|
||||
t.Fatal("VaultData should be unpacked")
|
||||
}
|
||||
if len(got.VaultData.Fields) != 2 {
|
||||
t.Fatalf("expected 2 fields, got %d", len(got.VaultData.Fields))
|
||||
}
|
||||
if got.VaultData.Fields[0].Value != "octocat" {
|
||||
t.Errorf("username = %q, want octocat", got.VaultData.Fields[0].Value)
|
||||
}
|
||||
if got.VaultData.Fields[1].Value != "ghp_abc123" {
|
||||
t.Errorf("password = %q, want ghp_abc123", got.VaultData.Fields[1].Value)
|
||||
}
|
||||
}
|
||||
|
||||
func TestEntryUpdate_optimistic_locking(t *testing.T) {
|
||||
db := testDB(t)
|
||||
vk := testVaultKey()
|
||||
|
||||
entry := &Entry{
|
||||
Type: TypeCredential,
|
||||
Title: "Original",
|
||||
VaultData: &VaultData{Title: "Original", Type: "credential"},
|
||||
}
|
||||
EntryCreate(db, vk, entry)
|
||||
|
||||
// Update with correct version
|
||||
entry.Title = "Updated"
|
||||
entry.VaultData.Title = "Updated"
|
||||
if err := EntryUpdate(db, vk, entry); err != nil {
|
||||
t.Fatalf("update: %v", err)
|
||||
}
|
||||
if entry.Version != 2 {
|
||||
t.Errorf("version after update should be 2, got %d", entry.Version)
|
||||
}
|
||||
|
||||
// Update with stale version should fail
|
||||
entry.Version = 1 // stale
|
||||
entry.Title = "Stale"
|
||||
err := EntryUpdate(db, vk, entry)
|
||||
if err != ErrVersionConflict {
|
||||
t.Errorf("expected ErrVersionConflict, got %v", err)
|
||||
}
|
||||
}
|
||||
|
||||
func TestEntryDelete_soft_delete(t *testing.T) {
|
||||
db := testDB(t)
|
||||
vk := testVaultKey()
|
||||
|
||||
entry := &Entry{
|
||||
Type: TypeCredential,
|
||||
Title: "ToDelete",
|
||||
VaultData: &VaultData{Title: "ToDelete", Type: "credential"},
|
||||
}
|
||||
EntryCreate(db, vk, entry)
|
||||
|
||||
if err := EntryDelete(db, int64(entry.EntryID)); err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
|
||||
// Should not appear in list
|
||||
entries, err := EntryList(db, vk, nil)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
for _, e := range entries {
|
||||
if e.EntryID == entry.EntryID {
|
||||
t.Error("deleted entry should not appear in list")
|
||||
}
|
||||
}
|
||||
|
||||
// Direct get should still work but have DeletedAt set
|
||||
got, err := EntryGet(db, vk, int64(entry.EntryID))
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if got.DeletedAt == nil {
|
||||
t.Error("deleted entry should have DeletedAt set")
|
||||
}
|
||||
}
|
||||
|
||||
func TestEntryList_filters_by_parent(t *testing.T) {
|
||||
db := testDB(t)
|
||||
vk := testVaultKey()
|
||||
|
||||
folder := &Entry{Type: TypeFolder, Title: "Work", VaultData: &VaultData{Title: "Work", Type: "folder"}}
|
||||
EntryCreate(db, vk, folder)
|
||||
|
||||
child := &Entry{
|
||||
Type: TypeCredential,
|
||||
Title: "WorkGitHub",
|
||||
ParentID: folder.EntryID,
|
||||
VaultData: &VaultData{Title: "WorkGitHub", Type: "credential"},
|
||||
}
|
||||
EntryCreate(db, vk, child)
|
||||
|
||||
orphan := &Entry{
|
||||
Type: TypeCredential,
|
||||
Title: "Personal",
|
||||
VaultData: &VaultData{Title: "Personal", Type: "credential"},
|
||||
}
|
||||
EntryCreate(db, vk, orphan)
|
||||
|
||||
parentID := int64(folder.EntryID)
|
||||
children, err := EntryList(db, vk, &parentID)
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(children) != 1 {
|
||||
t.Fatalf("expected 1 child, got %d", len(children))
|
||||
}
|
||||
if children[0].Title != "WorkGitHub" {
|
||||
t.Errorf("child title = %q", children[0].Title)
|
||||
}
|
||||
}
|
||||
|
||||
func TestEntrySearchFuzzy(t *testing.T) {
|
||||
db := testDB(t)
|
||||
vk := testVaultKey()
|
||||
|
||||
for _, title := range []string{"GitHub", "GitLab", "AWS Console"} {
|
||||
EntryCreate(db, vk, &Entry{
|
||||
Type: TypeCredential,
|
||||
Title: title,
|
||||
VaultData: &VaultData{Title: title, Type: "credential"},
|
||||
})
|
||||
}
|
||||
|
||||
results, err := EntrySearchFuzzy(db, vk, "Git")
|
||||
if err != nil {
|
||||
t.Fatal(err)
|
||||
}
|
||||
if len(results) != 2 {
|
||||
t.Errorf("search for 'Git' should return 2 results, got %d", len(results))
|
||||
}
|
||||
}
|
||||
|
||||
func TestEntryCount(t *testing.T) {
|
||||
db := testDB(t)
|
||||
vk := testVaultKey()
|
||||
|
||||
count, _ := EntryCount(db)
|
||||
if count != 0 {
|
||||
t.Errorf("empty db should have 0 entries, got %d", count)
|
||||
}
|
||||
|
||||
EntryCreate(db, vk, &Entry{
|
||||
Type: TypeCredential, Title: "One",
|
||||
VaultData: &VaultData{Title: "One", Type: "credential"},
|
||||
})
|
||||
EntryCreate(db, vk, &Entry{
|
||||
		Type:      TypeCredential, Title: "Two",
		VaultData: &VaultData{Title: "Two", Type: "credential"},
	})

	count, _ = EntryCount(db)
	if count != 2 {
		t.Errorf("expected 2 entries, got %d", count)
	}
}

func TestAuditLog_write_and_read(t *testing.T) {
	db := testDB(t)

	AuditLog(db, &AuditEvent{
		Action: ActionCreate,
		Actor:  ActorWeb,
		Title:  "GitHub",
		IPAddr: "127.0.0.1",
	})
	AuditLog(db, &AuditEvent{
		Action: ActionRead,
		Actor:  ActorAgent,
		Title:  "GitHub",
		IPAddr: "10.0.0.1",
	})

	events, err := AuditList(db, 10)
	if err != nil {
		t.Fatal(err)
	}
	if len(events) != 2 {
		t.Fatalf("expected 2 audit events, got %d", len(events))
	}
	// Both actions should be present (order depends on timestamp resolution).
	actions := map[string]bool{}
	for _, e := range events {
		actions[e.Action] = true
	}
	if !actions[ActionCreate] {
		t.Error("missing create action")
	}
	if !actions[ActionRead] {
		t.Error("missing read action")
	}
}

func TestSessionCreate_and_Get(t *testing.T) {
	db := testDB(t)

	session, err := SessionCreate(db, 3600, ActorWeb)
	if err != nil {
		t.Fatal(err)
	}
	if session.Token == "" {
		t.Fatal("session token should not be empty")
	}

	got, err := SessionGet(db, session.Token)
	if err != nil {
		t.Fatal(err)
	}
	if got == nil {
		t.Fatal("session should exist")
	}
	if got.Actor != ActorWeb {
		t.Errorf("actor = %q, want web", got.Actor)
	}
}

func TestSessionGet_expired(t *testing.T) {
	db := testDB(t)

	// Create a session with a negative TTL (guaranteed expired).
	session, _ := SessionCreate(db, -1, ActorWeb)

	got, err := SessionGet(db, session.Token)
	if err != nil {
		t.Fatal(err)
	}
	if got != nil {
		t.Error("expired session should return nil")
	}
}

func TestWebAuthnCredential_store_and_list(t *testing.T) {
	db := testDB(t)

	cred := &WebAuthnCredential{
		CredID:       HexID(NewID()),
		Name:         "YubiKey",
		PublicKey:    []byte{1, 2, 3},
		CredentialID: []byte{4, 5, 6},
		PRFSalt:      []byte{7, 8, 9},
	}
	if err := StoreWebAuthnCredential(db, cred); err != nil {
		t.Fatal(err)
	}

	creds, err := GetWebAuthnCredentials(db)
	if err != nil {
		t.Fatal(err)
	}
	if len(creds) != 1 {
		t.Fatalf("expected 1 credential, got %d", len(creds))
	}
	if creds[0].Name != "YubiKey" {
		t.Errorf("name = %q", creds[0].Name)
	}

	count, _ := WebAuthnCredentialCount(db)
	if count != 1 {
		t.Errorf("count = %d, want 1", count)
	}
}

@ -28,24 +28,24 @@ type TelemetryConfig struct {

// TelemetryPayload is the JSON body posted to the telemetry endpoint.
type TelemetryPayload struct {
	Version       string        `json:"version"`
	Hostname      string        `json:"hostname"`
	UptimeSeconds int64         `json:"uptime_seconds"`
	Timestamp     string        `json:"timestamp"`
	System        SystemMetrics `json:"system"`
	Vaults        VaultMetrics  `json:"vaults"`
}

type SystemMetrics struct {
	OS          string  `json:"os"`
	Arch        string  `json:"arch"`
	CPUs        int     `json:"cpus"`
	CPUPercent  float64 `json:"cpu_percent"`
	MemTotalMB  int64   `json:"memory_total_mb"`
	MemUsedMB   int64   `json:"memory_used_mb"`
	DiskTotalMB int64   `json:"disk_total_mb"`
	DiskUsedMB  int64   `json:"disk_used_mb"`
	Load1m      float64 `json:"load_1m"`
}

type VaultMetrics struct {
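The snake_case JSON tags above define the wire format of the telemetry body. As a quick illustration (using a trimmed-down stand-in struct, not the full `TelemetryPayload`), marshaling produces keys that match the tags, not the Go field names:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// payload is a two-field stand-in for TelemetryPayload, just enough
// to show how the struct tags control the JSON key names.
type payload struct {
	Version       string `json:"version"`
	UptimeSeconds int64  `json:"uptime_seconds"`
}

// marshalPayload returns the wire representation of a sample payload.
func marshalPayload() string {
	b, _ := json.Marshal(payload{Version: "1.0", UptimeSeconds: 300})
	return string(b)
}

func main() {
	fmt.Println(marshalPayload()) // {"version":"1.0","uptime_seconds":300}
}
```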
@ -54,7 +54,48 @@ type VaultMetrics struct {
	TotalEntries int64 `json:"total_entries"`
}

// AlertOperator sends critical operational alerts.
// Community: logs to stderr only.
// Commercial (telemetry enabled): also POSTs to the alert endpoint.
func AlertOperator(cfg TelemetryConfig, alertType, message string, details map[string]any) {
	// Always log locally.
	if details != nil {
		log.Printf("OPERATOR ALERT [%s]: %s - %+v", alertType, message, details)
	} else {
		log.Printf("OPERATOR ALERT [%s]: %s", alertType, message)
	}

	// Commercial: POST to the telemetry alert endpoint if configured.
	if cfg.Host == "" {
		return
	}

	hostname, _ := os.Hostname()
	alert := map[string]any{
		"type":      alertType,
		"message":   message,
		"details":   details,
		"hostname":  hostname,
		"version":   cfg.Version,
		"timestamp": time.Now().UTC().Format(time.RFC3339),
	}

	body, _ := json.Marshal(alert)
	req, _ := http.NewRequest("POST", cfg.Host+"/alerts", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	if cfg.Token != "" {
		req.Header.Set("Authorization", "Bearer "+cfg.Token)
	}

	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Do(req)
	if err != nil {
		log.Printf("telemetry alert failed: %v", err)
		return
	}
	resp.Body.Close()
}

// StartTelemetry launches a background goroutine that periodically
// collects metrics and POSTs them to cfg.Host. Does nothing if
// FreqSeconds <= 0 or Host is empty.
func StartTelemetry(cfg TelemetryConfig) {

@ -69,11 +110,17 @@ func StartTelemetry(cfg TelemetryConfig) {
	log.Printf("Telemetry enabled: posting every %ds to %s", cfg.FreqSeconds, cfg.Host)

	go func() {
		// Post immediately on startup, retrying at 10s intervals until ACK'd,
		// then settle into the normal interval.
		retry := 10 * time.Second
		for {
			payload := CollectPayload(cfg, startTime)
			if postTelemetry(client, cfg.Host, cfg.Token, payload) {
				time.Sleep(interval)
			} else {
				log.Printf("telemetry: no ACK, retrying in %s", retry)
				time.Sleep(retry)
			}
		}
	}()
}
@ -92,17 +139,17 @@ func CollectPayload(cfg TelemetryConfig, startTime time.Time) TelemetryPayload {
	}
}

// postTelemetry POSTs the payload and reports whether the server ACK'd it.
func postTelemetry(client *http.Client, host, token string, payload TelemetryPayload) bool {
	body, err := json.Marshal(payload)
	if err != nil {
		log.Printf("telemetry: marshal error: %v", err)
		return false
	}

	req, err := http.NewRequest("POST", host, bytes.NewReader(body))
	if err != nil {
		log.Printf("telemetry: request error: %v", err)
		return false
	}
	req.Header.Set("Content-Type", "application/json")
	if token != "" {

@ -112,13 +159,20 @@ func postTelemetry(client *http.Client, host, token string, payload TelemetryPay
	resp, err := client.Do(req)
	if err != nil {
		log.Printf("telemetry: post error: %v", err)
		return false
	}
	defer resp.Body.Close()

	if resp.StatusCode >= 300 {
		log.Printf("telemetry: unexpected status %d", resp.StatusCode)
		return false
	}

	var ack struct {
		OK bool `json:"ok"`
	}
	json.NewDecoder(resp.Body).Decode(&ack)
	return ack.OK
}

func collectSystemMetrics(dataDir string) SystemMetrics {
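The ACK parse above is deliberately lenient: the `Decode` error is ignored, so an empty or malformed body leaves `ack.OK` at its zero value and simply reads as "no ACK". A small sketch of that behavior in isolation:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// ackOK mirrors the lenient ACK parse in postTelemetry: a decode error
// leaves ack.OK at its zero value (false), which counts as "no ACK".
func ackOK(body string) bool {
	var ack struct {
		OK bool `json:"ok"`
	}
	json.NewDecoder(strings.NewReader(body)).Decode(&ack)
	return ack.OK
}

func main() {
	fmt.Println(ackOK(`{"ok":true}`))  // true
	fmt.Println(ackOK(``))             // false — Decode errors on empty body, zero value wins
	fmt.Println(ackOK(`{"ok":false}`)) // false
}
```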

@ -1,106 +0,0 @@
package lib

import (
	"encoding/json"
	"io"
	"net/http"
	"net/http/httptest"
	"sync"
	"testing"
	"time"
)

func TestCollectPayload(t *testing.T) {
	cfg := TelemetryConfig{
		FreqSeconds: 60,
		Host:        "http://localhost:9999",
		Token:       "test-token",
		DataDir:     t.TempDir(),
		Version:     "test-1.0",
	}
	startTime := time.Now().Add(-5 * time.Minute)

	payload := CollectPayload(cfg, startTime)

	if payload.Version != "test-1.0" {
		t.Errorf("version = %q, want test-1.0", payload.Version)
	}
	if payload.Hostname == "" {
		t.Error("hostname should not be empty")
	}
	if payload.UptimeSeconds < 299 {
		t.Errorf("uptime should be ~300s, got %d", payload.UptimeSeconds)
	}
	if payload.Timestamp == "" {
		t.Error("timestamp should not be empty")
	}
	if payload.System.OS == "" {
		t.Error("OS should not be empty")
	}
	if payload.System.CPUs < 1 {
		t.Errorf("CPUs should be >= 1, got %d", payload.System.CPUs)
	}
	if payload.System.MemTotalMB <= 0 {
		t.Errorf("memory total should be > 0, got %d", payload.System.MemTotalMB)
	}

	// JSON roundtrip
	data, err := json.Marshal(payload)
	if err != nil {
		t.Fatalf("marshal: %v", err)
	}
	var decoded TelemetryPayload
	if err := json.Unmarshal(data, &decoded); err != nil {
		t.Fatalf("unmarshal: %v", err)
	}
	if decoded.Hostname != payload.Hostname {
		t.Error("hostname mismatch after roundtrip")
	}
}

func TestPostTelemetry(t *testing.T) {
	var mu sync.Mutex
	var received TelemetryPayload
	var authHeader string

	server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		mu.Lock()
		defer mu.Unlock()
		authHeader = r.Header.Get("Authorization")
		body, _ := io.ReadAll(r.Body)
		json.Unmarshal(body, &received)
		w.WriteHeader(http.StatusOK)
	}))
	defer server.Close()

	cfg := TelemetryConfig{
		FreqSeconds: 1,
		Host:        server.URL,
		Token:       "secret-token",
		DataDir:     t.TempDir(),
		Version:     "test-post",
	}

	StartTelemetry(cfg)
	time.Sleep(2 * time.Second) // CPU sampling takes 500ms, then POST

	mu.Lock()
	defer mu.Unlock()

	if authHeader != "Bearer secret-token" {
		t.Errorf("expected Bearer secret-token, got %q", authHeader)
	}
	if received.Version != "test-post" {
		t.Errorf("version = %q, want test-post", received.Version)
	}
	if received.Hostname == "" {
		t.Error("hostname should not be empty in posted payload")
	}
}

func TestTelemetryDisabled(t *testing.T) {
	// None of these should panic or start goroutines.
	StartTelemetry(TelemetryConfig{})
	StartTelemetry(TelemetryConfig{FreqSeconds: 0, Host: "http://example.com"})
	StartTelemetry(TelemetryConfig{FreqSeconds: 60, Host: ""})
}
@ -1,70 +0,0 @@
package lib

import (
	"database/sql"
	"errors"
	"fmt"

	_ "github.com/mattn/go-sqlite3"
)

const tokenMapSchema = `
CREATE TABLE IF NOT EXISTS token_map (
	token TEXT PRIMARY KEY,
	vault_id INTEGER NOT NULL
);
`

// TokenMap wraps node.db for token→vault_id lookups.
type TokenMap struct {
	db *sql.DB
}

// OpenTokenMap opens (or creates) the node.db token registry.
func OpenTokenMap(dbPath string) (*TokenMap, error) {
	conn, err := sql.Open("sqlite3", dbPath+"?_journal_mode=WAL&_busy_timeout=5000")
	if err != nil {
		return nil, fmt.Errorf("open token map: %w", err)
	}
	if _, err := conn.Exec(tokenMapSchema); err != nil {
		conn.Close()
		return nil, fmt.Errorf("migrate token map: %w", err)
	}
	return &TokenMap{db: conn}, nil
}

// Close closes the token map database.
func (tm *TokenMap) Close() error {
	return tm.db.Close()
}

// Register adds a token→vault_id mapping.
func (tm *TokenMap) Register(token string, vaultID int64) error {
	_, err := tm.db.Exec(
		`INSERT OR REPLACE INTO token_map (token, vault_id) VALUES (?, ?)`,
		token, vaultID,
	)
	return err
}

// Lookup resolves a token to a vault_id. Returns 0, nil if not found.
func (tm *TokenMap) Lookup(token string) (int64, error) {
	var vaultID int64
	err := tm.db.QueryRow(`SELECT vault_id FROM token_map WHERE token = ?`, token).Scan(&vaultID)
	if errors.Is(err, sql.ErrNoRows) {
		return 0, nil
	}
	return vaultID, err
}

// Remove deletes a token mapping.
func (tm *TokenMap) Remove(token string) error {
	_, err := tm.db.Exec(`DELETE FROM token_map WHERE token = ?`, token)
	return err
}

// RemoveAllForVault removes all tokens for a vault.
func (tm *TokenMap) RemoveAllForVault(vaultID int64) error {
	_, err := tm.db.Exec(`DELETE FROM token_map WHERE vault_id = ?`, vaultID)
	return err
}
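The deleted TokenMap had a notable contract: `Lookup` signals "not found" with `0, nil` rather than an error. An in-memory sketch of just that contract (illustration only, not the sqlite-backed implementation removed above):

```go
package main

import "fmt"

// memTokenMap is a map-backed stand-in for TokenMap, showing only the
// lookup contract: an unknown token yields 0, not an error.
type memTokenMap struct {
	m map[string]int64
}

// Register adds a token→vault_id mapping, replacing any existing one.
func (tm *memTokenMap) Register(token string, vaultID int64) {
	tm.m[token] = vaultID
}

// Lookup resolves a token to a vault ID; a missing key yields the
// zero value, matching the "0 = not found" convention.
func (tm *memTokenMap) Lookup(token string) int64 {
	return tm.m[token]
}

func main() {
	tm := &memTokenMap{m: map[string]int64{}}
	tm.Register("tok-a", 42)
	fmt.Println(tm.Lookup("tok-a"))   // 42
	fmt.Println(tm.Lookup("unknown")) // 0
}
```

Callers must therefore treat vault ID 0 as unusable, which is why the real schema never assigns it.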

@ -3,9 +3,67 @@ package lib

import (
	"database/sql"
	"fmt"
	"net"
	"strings"
	"sync"
	"time"
)

// fqdncache caches FQDN lookups to prevent DoS from repeated DNS queries.
// Entries expire after 5 minutes.
var fqdncache = &fqdnCache{
	entries: make(map[string]fqdnEntry),
	ttl:     5 * time.Minute,
}

type fqdnEntry struct {
	addrs     []string
	expiresAt time.Time
}

type fqdnCache struct {
	mu      sync.RWMutex
	entries map[string]fqdnEntry
	ttl     time.Duration
}

func (c *fqdnCache) lookup(fqdn string) ([]string, bool) {
	c.mu.RLock()
	entry, ok := c.entries[fqdn]
	c.mu.RUnlock()
	if !ok {
		return nil, false
	}
	if time.Now().After(entry.expiresAt) {
		c.mu.Lock()
		delete(c.entries, fqdn)
		c.mu.Unlock()
		return nil, false
	}
	return entry.addrs, true
}

func (c *fqdnCache) store(fqdn string, addrs []string) {
	c.mu.Lock()
	c.entries[fqdn] = fqdnEntry{addrs: addrs, expiresAt: time.Now().Add(c.ttl)}
	c.mu.Unlock()
}

func (c *fqdnCache) resolve(fqdn string) ([]string, error) {
	// Check the cache first.
	if addrs, ok := c.lookup(fqdn); ok {
		return addrs, nil
	}
	// Cache miss — do the DNS lookup.
	addrs, err := net.LookupHost(fqdn)
	if err != nil {
		return nil, err
	}
	// Store in the cache (even if empty — negative caching).
	c.store(fqdn, addrs)
	return addrs, nil
}

// HexID is an int64 that marshals to/from 16-char hex in JSON.
type HexID int64
@ -33,10 +91,10 @@ func (h *HexID) UnmarshalJSON(data []byte) error {
type VaultField struct {
	Label   string `json:"label"`
	Value   string `json:"value"`
	Kind    string `json:"kind"` // text|password|totp|url|file
	Section string `json:"section,omitempty"`
	L2      bool   `json:"l2,omitempty"`   // legacy: true = L3 in new model
	Tier    int    `json:"tier,omitempty"` // 1=L1, 2=L2, 3=L3
}

// VaultFile represents an attached file.
@ -59,10 +117,12 @@ type VaultData struct {
	SourceModified int64 `json:"source_modified,omitempty"`

	// Agent-specific fields (only present when type = "agent")
	AgentID    string `json:"agent_id,omitempty"`    // 32-char hex (16 bytes)
	Scopes     string `json:"scopes,omitempty"`      // comma-separated 32-char hex scope IDs
	AllAccess  bool   `json:"all_access,omitempty"`
	Admin      bool   `json:"admin,omitempty"`
	AllowedIPs string `json:"allowed_ips,omitempty"` // comma-separated CIDRs or FQDNs
	RateLimit  int    `json:"rate_limit,omitempty"`  // max requests/minute; 0 = unlimited

	// Scope-specific fields (only present when type = "scope")
	ScopeID string `json:"scope_id,omitempty"` // 32-char hex (16 bytes)
@ -92,11 +152,14 @@ type Entry struct {

// AgentData is the in-memory representation of an agent after decrypting its entry.
type AgentData struct {
	AgentID    string // 32-char hex
	Name       string
	Scopes     string // comma-separated 32-char hex scope IDs
	AllAccess  bool
	Admin      bool
	AllowedIPs string // comma-separated CIDRs or FQDNs; empty = not set yet (first contact fills it)
	RateLimit  int    // max requests per minute; 0 = unlimited
	EntryID    HexID  // entry ID of the agent record (for updating AllowedIPs on first contact)
}

// AuditEvent represents a security audit log entry.
@ -160,18 +223,19 @@ const (

// Action types
const (
	ActionRead          = "read"
	ActionFill          = "fill"
	ActionAIRead        = "ai_read"
	ActionCreate        = "create"
	ActionUpdate        = "update"
	ActionDelete        = "delete"
	ActionImport        = "import"
	ActionExport        = "export"
	ActionAgentCreate   = "agent_create"
	ActionAgentUpdate   = "agent_update"
	ActionAgentDelete   = "agent_delete"
	ActionScopeUpdate   = "scope_update"
	ActionBackupRestore = "backup_restore"
)

// Owner scope — reserved, no agent ever gets this.
@ -208,3 +272,47 @@ func AgentCanAccess(agent *AgentData, entryScopes string) bool {
	}
	return false
}

// AgentIPAllowed checks if the given IP is allowed by the agent's whitelist.
// Empty AllowedIPs means not set yet — always allowed (first contact will fill it).
// Supports CIDR notation (10.0.0.0/16) and FQDNs (home.smith.family), comma-separated.
func AgentIPAllowed(agent *AgentData, ip string) bool {
	if agent == nil || agent.AllowedIPs == "" {
		return true
	}
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	for _, entry := range strings.Split(agent.AllowedIPs, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		// CIDR
		if strings.Contains(entry, "/") {
			_, cidr, err := net.ParseCIDR(entry)
			if err == nil && cidr.Contains(parsed) {
				return true
			}
			continue
		}
		// Exact IP
		if net.ParseIP(entry) != nil {
			if entry == ip {
				return true
			}
			continue
		}
		// FQDN — resolve and match (with caching)
		addrs, err := fqdncache.resolve(entry)
		if err == nil {
			for _, a := range addrs {
				if a == ip {
					return true
				}
			}
		}
	}
	return false
}
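The CIDR and exact-IP branches of AgentIPAllowed can be exercised in isolation. A stripped-down sketch (the FQDN branch is omitted here to avoid live DNS; `ipAllowed` is a hypothetical helper, not part of the codebase):

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipAllowed mirrors the CIDR and exact-IP branches of AgentIPAllowed:
// a comma-separated whitelist where "a.b.c.d/n" entries match by subnet
// and bare entries match by string equality.
func ipAllowed(ip, whitelist string) bool {
	parsed := net.ParseIP(ip)
	if parsed == nil {
		return false
	}
	for _, e := range strings.Split(whitelist, ",") {
		e = strings.TrimSpace(e)
		if e == "" {
			continue
		}
		if strings.Contains(e, "/") {
			if _, cidr, err := net.ParseCIDR(e); err == nil && cidr.Contains(parsed) {
				return true
			}
			continue
		}
		if e == ip {
			return true
		}
	}
	return false
}

func main() {
	wl := "10.0.0.0/16, 192.168.1.9"
	fmt.Println(ipAllowed("10.0.5.1", wl))    // true — inside the /16
	fmt.Println(ipAllowed("192.168.1.9", wl)) // true — exact match
	fmt.Println(ipAllowed("172.16.0.1", wl))  // false
}
```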