# Vault1984 POP Deployment — Handoff for Hans
**From:** Johan / James  
**Date:** March 7, 2026  
**Status:** Binaries built, download endpoints added, ready for your rollout

---
## What's ready on HQ (noc.vault1984.com)

Two new binaries in `/home/johan/vault1984-dashboard/`:

| File | Arch | Size |
|------|------|------|
| `vault1984` | linux/amd64 | ~18MB |
| `vault1984-arm64` | linux/arm64 | ~18MB |

Download endpoints added to the dashboard (need rebuild):

- `http://185.218.204.47:8080/download/vault1984`
- `http://185.218.204.47:8080/download/vault1984-arm64`

To activate: rebuild `dashboard-go` and restart the service.
## What's new in the binary

Built-in telemetry. When launched with these flags, the vault POSTs system + vault metrics to HQ every N seconds:

```
--telemetry-freq=60
--telemetry-host=http://185.218.204.47:8080/telemetry
--telemetry-token=<your-choice>
```

Also works via env vars: `TELEMETRY_FREQ`, `TELEMETRY_HOST`, `TELEMETRY_TOKEN`. Without flags, telemetry is off — no behavior change for self-hosters.

**Payload** (JSON POST):

```json
{
  "version": "0.1.0",
  "hostname": "virginia",
  "uptime_seconds": 3600,
  "timestamp": "2026-03-06T10:00:00Z",
  "system": {
    "os": "linux", "arch": "arm64", "cpus": 2,
    "cpu_percent": 12.5,
    "memory_total_mb": 1024, "memory_used_mb": 340,
    "disk_total_mb": 8000, "disk_used_mb": 1200,
    "load_1m": 0.3
  },
  "vaults": {
    "count": 0, "total_size_mb": 0, "total_entries": 0
  },
  "mode": "hosted"
}
```

## What needs doing

### 1. Telemetry inbox on the dashboard

The dashboard doesn't have a `/telemetry` handler yet. You'll want to add one that:

- Accepts the JSON payload above
- Stores it (SQLite, or just update the existing nodes table)
- Feeds into the status page

This is your call on how to wire it in — you know the dashboard code best.

### 2. Wipe the status DB

Johan wants the status.db wiped clean and rebuilt with only the three live nodes:

| Node ID | Name | Region | IP |
|---------|------|--------|----|
| `hq-zurich` | HQ — Zürich | Hostkey / CH | 185.218.204.47 |
| `virginia` | Virginia | **us-east-1** | ? |
| `singapore` | Singapore | ap-southeast-1 | 47.129.4.217 |

**Important:** The current "virginia" POP is tagged `us-east-2` with IP `3.145.131.247` — that's **Ohio**, not Virginia. Johan does NOT want Ohio. Please confirm:

- Was this already moved to us-east-1 (actual Virginia)?
- If not, we need to spin down Ohio and deploy in us-east-1.

The planned nodes (london, frankfurt, tokyo, etc.) can stay in the seed data as "planned" but shouldn't be in the live status rotation until deployed.

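If you rebuild status.db by hand, a seed along these lines may save typing. The `nodes` table name, columns, and status values are assumptions; match the actual schema:

```sql
-- Hypothetical schema names; adjust to the real status.db layout.
DELETE FROM nodes;
INSERT INTO nodes (node_id, name, region, ip, status) VALUES
  ('hq-zurich', 'HQ - Zurich', 'hostkey-ch',     '185.218.204.47', 'live'),
  ('virginia',  'Virginia',    'us-east-1',      NULL,             'live'), -- IP pending region confirmation
  ('singapore', 'Singapore',   'ap-southeast-1', '47.129.4.217',   'live');
```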
### 3. Deploy vault1984 to the two AWS POPs

Each POP needs:

- The vault1984 binary (arm64 for t4g.micro)
- A systemd service with telemetry flags pointing to HQ
- Port 1984 open
- `DATA_DIR` for vault storage

You already have `deploy-pop.sh` and SSM access — adapt as you see fit. The vault1984 binary replaces nothing; it runs alongside the existing v1984-agent (or you can consolidate, since vault1984 now reports its own metrics).

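A unit file along these lines covers the list above. The binary path, user, and data directory are assumptions; the flags and endpoint come from this note:

```ini
[Unit]
Description=Vault1984 hosted node
After=network-online.target
Wants=network-online.target

[Service]
User=vault1984
Environment=DATA_DIR=/var/lib/vault1984
ExecStart=/usr/local/bin/vault1984 \
  --telemetry-freq=60 \
  --telemetry-host=http://185.218.204.47:8080/telemetry \
  --telemetry-token=<your-choice>
Restart=on-failure

[Install]
WantedBy=multi-user.target
```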
### 4. Infrastructure overview (for clarity)

| Server | Role | Location |
|--------|------|----------|
| zurich.inou.com | Kuma, security checks, shared git | Hostkey Zürich |
| noc.vault1984.com | Dashboard, status page, marketing site, HQ | Hostkey Zürich |
| virginia POP | Vault1984 hosted node | AWS us-east-1 (confirm!) |
| singapore POP | Vault1984 hosted node | AWS ap-southeast-1 |

---

Questions? Ping Johan or ask James in the next session.