chore: auto-commit uncommitted changes

parent d2402007cf
commit c30087ce99

@@ -0,0 +1,71 @@
# Memory — 2026-03-22

## Johan's Working Style (05:32 AM — explicit correction)

**No symlinks. No rsync pipelines. No "clever" file plumbing.**

When something needs to be in two places, copy it explicitly. Simple, obvious, traceable.

"That's not how I roll" — figure it out, don't ask, don't add infrastructure for file movement.
## Clavitor Project Setup (03:55–04:21 AM)

### Project Structure (decided)

Single workspace on forge: `/home/johan/dev/clavitor/`

```
clavitor/
├── docs/            # SHARED docs for both OSS and commercial
├── oss/             # PUBLIC — goes to GitHub
│   ├── server/
│   ├── cli/
│   ├── extension/
│   └── mobile/      # Flutter (iOS + Android)
└── commercial/      # PRIVATE — never on GitHub
    ├── website/
    ├── admin/
    ├── billing/
    └── infrastructure/
```
### Repo strategy

- **Monorepo** under `github.com/clavitor/clavitor`
- OSS half goes to GitHub. Commercial stays on forge/Zurich only.
- `scripts/sync-to-github.sh` will push `oss/` to GitHub
- vault1984 source stays intact at `/home/johan/dev/vault1984/` as backup
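The sync script is still unwritten (see Pending). A minimal sketch of what `scripts/sync-to-github.sh` might do, honoring the no-symlinks/no-rsync rule with one explicit copy into a second clone. The mirror path, env var names, and branch are assumptions, not decided facts:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of scripts/sync-to-github.sh (not yet written).
# Assumptions: a pre-cloned public mirror at $MIRROR, branch "main".
set -euo pipefail

# sync_oss: wipe the mirror (keeping its .git) and copy oss/ in explicitly.
sync_oss() {
    local src="$1" mirror="$2"
    find "$mirror" -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +
    cp -a "$src"/. "$mirror"/
}

SRC="${CLAVITOR_ROOT:-/home/johan/dev/clavitor}/oss"
MIRROR="${GITHUB_MIRROR:-/home/johan/dev/clavitor-github}"

# Run the full sync only when both trees exist.
if [[ -d "$SRC" && -d "$MIRROR/.git" ]]; then
    sync_oss "$SRC" "$MIRROR"
    cd "$MIRROR"
    git add -A
    git commit -m "sync oss/ from forge monorepo" || true  # no-op when clean
    git push origin main
fi
```

A plain copy into a separate clone keeps the flow traceable: nothing from `commercial/` can ever enter the mirror's history.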
### Migration status (as of 04:21 AM)

- Structure created at `/home/johan/dev/clavitor/`
- vault1984 files COPIED (not moved) to `clavitor/oss/` and `clavitor/commercial/`
- Makefile updated: binary output names changed vault1984 → clavitor
- Go module names / import paths: LEFT UNCHANGED (internal plumbing, no need to rename)
- Claude Code subagent running (pid 1363913, session gentle-shell) to:
  - Finish user-facing renames (README, web UI titles, CLI help text)
  - Attempt compile
  - Report results
### Key decisions

- Do NOT rename Go import paths or module names — internal plumbing, code compiles fine as-is
- Only rename user-facing strings: binary names, README, `<title>` tags, CLI `--help` text
- vault1984 stays intact. clavitor is a separate copy.
- No MCP integration for credential access — MCP can't hold decryption keys (L2/L3 access impossible via MCP)
- Viral angle: "the vault agents can query but can't steal from" — security architecture is the feature
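To double-check the subagent's renames, a grep over just the user-facing surfaces is enough. A hypothetical helper, assuming Markdown and HTML are the doc surfaces worth scanning (the function name and file globs are illustrative, not part of the repo):

```bash
# user_facing_hits: report occurrences of a string in docs/HTML only,
# deliberately ignoring Go sources so unchanged import paths stay out of the report.
user_facing_hits() {
    local root="$1" needle="$2"
    grep -rn --include='*.md' --include='*.html' -- "$needle" "$root" || true
}
```

Run as `user_facing_hits /home/johan/dev/clavitor vault1984`; an empty report means only internal plumbing still carries the old name.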
### Pending (still needed)

- [x] Domain DNS: clavitor.ai + clavitor.com — **both in Cloudflare** (not Openprovider). A records → 82.22.36.202 (Zurich). Placeholder live.
- [ ] GitHub org creation: needs token with admin:org scope — Johan action
- [ ] Cloudflare Browser Rendering token: current token in cloudflare.env is invalid (401) — Johan action
- [ ] Compile result from Claude Code subagent — pending
- [ ] OSS sync script: `scripts/sync-to-github.sh` — not yet written
### Product vision

- Positioning: FIPS 140-3 vault, post-quantum (CRYSTALS-Kyber / ML-KEM), credential issuance for agents
- Pricing: $12/year (personal), Pro tier (AgentPass), Business, Enterprise
- OSS + hosted (GitLab model): same codebase, hosted service adds infrastructure layer
- Go wide after OSS: consumer → SMB → MME → MSP → Enterprise
- AgentPass = feature tier inside Clavitor, not a separate product
### Fireworks Developer Pass

- Model: `accounts/fireworks/routers/kimi-k2p5-turbo`
- Expires: March 28 trial (then $20/week opt-in)
- All agents switched to this as default model
- OpenCode configured at `~/.config/opencode/opencode.json`
Binary file not shown.
@@ -1,9 +1,9 @@
 {
-  "last_updated": "2026-03-22T04:00:01.956714Z",
+  "last_updated": "2026-03-22T10:00:01.920282Z",
   "source": "api",
-  "session_percent": 2,
-  "session_resets": "2026-03-22T06:00:00.900200+00:00",
-  "weekly_percent": 33,
-  "weekly_resets": "2026-03-27T02:59:59.900224+00:00",
-  "sonnet_percent": 46
+  "session_percent": 10,
+  "session_resets": "2026-03-22T10:59:59.864556+00:00",
+  "weekly_percent": 35,
+  "weekly_resets": "2026-03-27T02:59:59.864574+00:00",
+  "sonnet_percent": 48
 }
@@ -0,0 +1,57 @@
---
name: cf-browser
description: Fetch any webpage as markdown, JSON, or screenshot using Cloudflare Browser Rendering. Use when web_fetch fails on JS-heavy pages (SPAs, Zillow, Redfin, PCPAO, Cloudflare-protected sites). Handles full JS rendering on Cloudflare's global network.
---

# Cloudflare Browser Rendering

Use this skill when you need to fetch a webpage that WebFetch can't handle — JS-heavy SPAs, sites behind Cloudflare protection, or pages that need full browser rendering to load content.

## When to use

- WebFetch returns empty/incomplete content
- The site is a SPA (React, Vue, etc.) that requires JS execution
- The site is behind Cloudflare bot protection
- You need a visual screenshot of a page
- You need to scrape specific elements via CSS selectors
## How to call

The script is at `/home/johan/clawd/skills/cf-browser/scripts/cf-fetch.sh`.

### Get page as markdown (most common)

```bash
/home/johan/clawd/skills/cf-browser/scripts/cf-fetch.sh markdown https://example.com
```

Returns clean markdown of the rendered page. This is the best option for reading page content.
### Take a screenshot

```bash
/home/johan/clawd/skills/cf-browser/scripts/cf-fetch.sh screenshot https://example.com [output.png]
```

Saves a PNG screenshot to the specified path (defaults to `/tmp/screenshot.png`). Use this when you need to see what a page looks like visually.
### Scrape structured data with CSS selectors

```bash
/home/johan/clawd/skills/cf-browser/scripts/cf-fetch.sh scrape https://example.com ".price, .title"
```

Returns JSON with the text content of elements matching the given CSS selectors. Use this when you need specific data points from a page.
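To pull a single field out of a scrape response, `jq` works well. A hedged example, assuming the array-of-objects shape described under "Interpreting output" below, with a `text` field inside `results`; the sample document here is made up, real responses come from the script:

```bash
# Hypothetical sample response in the documented shape.
sample='[{"selector":".price","results":[{"text":"$12/yr"}]},{"selector":".title","results":[{"text":"Clavitor"}]}]'
# Select one selector's matches and print just the text.
echo "$sample" | jq -r '.[] | select(.selector == ".price") | .results[].text'
# prints: $12/yr
```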
## Interpreting output

- **markdown**: Raw markdown text printed to stdout. Pipe or capture as needed.
- **screenshot**: Prints the output file path on success. View the PNG with your screenshot/image tools.
- **scrape**: JSON array printed to stdout. Each element has `selector` and `results` fields with the matched text content.

## Error handling

If the API returns an error, the script prints the error message to stderr and exits with code 1. Common issues:

- Invalid URL (must include `https://`)
- Cloudflare API rate limits
- Page timeout (very slow sites)
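Rate limits and timeouts are usually transient, so a caller can wrap the script in a small retry loop. This wrapper is a sketch, not part of the skill; it retries any command with exponential backoff starting at one second:

```bash
# retry_fetch: run a command up to N times, doubling the sleep between tries.
retry_fetch() {
    local tries="$1"; shift
    local delay=1 n
    for (( n = 1; n <= tries; n++ )); do
        if "$@"; then
            return 0
        fi
        if (( n < tries )); then
            sleep "$delay"
            delay=$(( delay * 2 ))
        fi
    done
    return 1
}

# Example:
# retry_fetch 3 /home/johan/clawd/skills/cf-browser/scripts/cf-fetch.sh markdown https://example.com
```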
@@ -0,0 +1,138 @@
#!/usr/bin/env bash
set -euo pipefail

# Load Cloudflare credentials
CONFIG_FILE="/home/johan/.config/cloudflare.env"
if [[ ! -f "$CONFIG_FILE" ]]; then
    echo "Error: Config file not found at $CONFIG_FILE" >&2
    exit 1
fi
source "$CONFIG_FILE"

if [[ -z "${CF_API_TOKEN:-}" || -z "${CF_ACCOUNT_ID:-}" ]]; then
    echo "Error: CF_API_TOKEN and CF_ACCOUNT_ID must be set in $CONFIG_FILE" >&2
    exit 1
fi

BASE_URL="https://api.cloudflare.com/client/v4/accounts/${CF_ACCOUNT_ID}/browser-rendering"

usage() {
    echo "Usage: cf-fetch.sh <command> <url> [options]" >&2
    echo "" >&2
    echo "Commands:" >&2
    echo "  markdown   <url>                Get page as markdown" >&2
    echo "  screenshot <url> [output.png]   Save screenshot (default: /tmp/screenshot.png)" >&2
    echo "  scrape     <url> \"<selectors>\"  Scrape elements by CSS selector" >&2
    exit 1
}
cmd_markdown() {
    local url="$1"
    local tmpfile
    tmpfile=$(mktemp)
    local http_code
    http_code=$(curl -s -o "$tmpfile" -w "%{http_code}" \
        -H "Authorization: Bearer ${CF_API_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "{\"url\": \"${url}\"}" \
        "${BASE_URL}/markdown") || {
            echo "Error: API request failed" >&2
            rm -f "$tmpfile"
            exit 1
        }
    if [[ "$http_code" -ge 400 ]]; then
        echo "Error: API returned HTTP ${http_code}" >&2
        cat "$tmpfile" >&2
        rm -f "$tmpfile"
        exit 1
    fi
    cat "$tmpfile"
    rm -f "$tmpfile"
}
cmd_screenshot() {
    local url="$1"
    local output="${2:-/tmp/screenshot.png}"
    local http_code
    http_code=$(curl -s -o "$output" -w "%{http_code}" \
        -H "Authorization: Bearer ${CF_API_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "{\"url\": \"${url}\"}" \
        "${BASE_URL}/screenshot") || {
            echo "Error: API request failed" >&2
            exit 1
        }
    if [[ "$http_code" -ge 400 ]]; then
        echo "Error: API returned HTTP ${http_code}" >&2
        cat "$output" >&2
        rm -f "$output"
        exit 1
    fi
    echo "Screenshot saved to ${output}"
}
cmd_scrape() {
    local url="$1"
    local selectors="$2"
    # Build the JSON elements array from comma-separated selectors
    local elements="[]"
    IFS=',' read -ra SELS <<< "$selectors"
    local items=()
    for sel in "${SELS[@]}"; do
        sel=$(echo "$sel" | xargs)  # trim whitespace
        items+=("{\"selector\": \"${sel}\"}")
    done
    local joined
    joined=$(IFS=','; echo "${items[*]}")
    elements="[${joined}]"

    local tmpfile
    tmpfile=$(mktemp)
    local http_code
    http_code=$(curl -s -o "$tmpfile" -w "%{http_code}" \
        -H "Authorization: Bearer ${CF_API_TOKEN}" \
        -H "Content-Type: application/json" \
        -d "{\"url\": \"${url}\", \"elements\": ${elements}}" \
        "${BASE_URL}/scrape") || {
            echo "Error: API request failed" >&2
            rm -f "$tmpfile"
            exit 1
        }
    if [[ "$http_code" -ge 400 ]]; then
        echo "Error: API returned HTTP ${http_code}" >&2
        cat "$tmpfile" >&2
        rm -f "$tmpfile"
        exit 1
    fi
    cat "$tmpfile"
    rm -f "$tmpfile"
}
# Main
if [[ $# -lt 2 ]]; then
    usage
fi

command="$1"
url="$2"
shift 2

case "$command" in
    markdown)
        cmd_markdown "$url"
        ;;
    screenshot)
        cmd_screenshot "$url" "${1:-}"
        ;;
    scrape)
        if [[ $# -lt 1 ]]; then
            echo "Error: scrape requires CSS selectors argument" >&2
            usage
        fi
        cmd_scrape "$url" "$1"
        ;;
    *)
        echo "Error: Unknown command '${command}'" >&2
        usage
        ;;
esac