# Annotations & Findings — Design Spec
*Status: Draft — March 15, 2026*
*Authors: James (Chief of Staff), Johan Jongsma*
*Reviewed by: Iaso*

---
## Overview
Two related but distinct features:
1. **Annotations** — visual markup on DICOM slices or series. Can be user-authored or AI-authored.
2. **Findings** — structured observations linked to annotations. The discoverable surface. Can be user-authored or AI-authored.

Provenance is always clear — every annotation and finding knows who created it (user vs AI, which AI).

---
## Annotations
### What They Are
Visual overlays on DICOM slices rendered in the viewer. Stored as child entries of the slice (or series for multi-slice spans).
### Types (v1)
| Type | Description |
|------|-------------|
| `arrow` | Point with direction vector |
| `area` | Freehand polygon or ellipse |
| `distance` | Two-point ruler with measurement |
| `text` | Text label at a position |
Each has a color and a visibility toggle.
### Coordinate System
Normalized image space (0.0 to 1.0 on both axes). Never pixels — coordinates must survive zoom, pan, and different viewport sizes.
### Storage
Child entries under a slice or series entry:
```
Category: CategoryAnnotation (new, TBD integer)
Type: "arrow" | "area" | "distance" | "text"
DossierID: (dossier)
ParentID: (slice EntryID, or series EntryID for multi-slice)
Value: label text (optional)
Data: JSON — see below
Tags: ["user"] or ["ai", "claude"] or ["ai", "grok"] etc.
Status: "active" | "dismissed"
```
**Data field (JSON):**
```json
{
  "color": "#FF0000",
  "coords": { ... },    // type-specific, normalized 0 to 1
  "slices": [14, 22],   // only for series-parent annotations; slice range
  "visible": true
}
```
**Coords by type:**
- `arrow`: `{ "x": 0.4, "y": 0.3, "dx": 0.05, "dy": -0.1 }`
- `area`: `{ "points": [[x,y], ...] }` or `{ "cx": 0.5, "cy": 0.4, "rx": 0.1, "ry": 0.08 }`
- `distance`: `{ "x1": 0.2, "y1": 0.3, "x2": 0.5, "y2": 0.6 }`
- `text`: `{ "x": 0.4, "y": 0.3 }`
### Viewer Behavior
- All annotations render at any zoom level (coords scale with viewport)
- Toggle visibility per annotation or globally (user vs AI layer)
- User can dismiss AI annotations (sets Status = "dismissed", hidden from view)
- User can delete their own annotations
- Dismissed AI annotations remain in DB (audit trail), just not shown
### AI Writing Annotations
AI (via MCP tool) writes annotation entries directly — no confirmation step. They appear immediately in the viewer. User can dismiss.

New MCP tool: `create_annotation(dossier_id, parent_id, type, coords, color, label, slices?)`
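Since the tool writes with no confirmation step, input validation matters. A minimal sketch of the checks such a handler might run before the EntryWrite (function name and rules are illustrative):

```go
package main

import "fmt"

var validTypes = map[string]bool{"arrow": true, "area": true, "distance": true, "text": true}

// validateAnnotation sanity-checks create_annotation arguments: a known
// type, and positional coordinates normalized to 0-1. Direction components
// (dx/dy) are exempt because an arrow may point up or left.
func validateAnnotation(annType string, coords map[string]float64) error {
	if !validTypes[annType] {
		return fmt.Errorf("unknown annotation type %q", annType)
	}
	for k, v := range coords {
		if k == "dx" || k == "dy" {
			continue // direction vector, may be negative
		}
		if v < 0 || v > 1 {
			return fmt.Errorf("%s=%v: coordinates must be normalized to 0-1, not pixels", k, v)
		}
	}
	return nil
}

func main() {
	ok := validateAnnotation("arrow", map[string]float64{"x": 0.4, "y": 0.3, "dx": 0.05, "dy": -0.1})
	bad := validateAnnotation("arrow", map[string]float64{"x": 412, "y": 180})
	fmt.Println(ok == nil, bad != nil)
}
```

Rejecting out-of-range values at the tool boundary catches the most likely AI mistake: sending pixel coordinates instead of normalized ones.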

---
## Findings
### What They Are
Structured observations — the discoverable layer. A finding says "something is here, here's what it is, here's where to look." Linked to one or more annotations.
### Types
- **User finding** — user manually creates from an annotation ("flag this for review")
- **AI finding** — AI creates proactively after analysis ("I found an anomaly")
### Storage
```
Category: CategoryFinding (new, TBD integer)
Type: "user" | "ai"
DossierID: (dossier)
ParentID: (dossier — top-level finding, not nested under imaging)
Value: short title, e.g. "Potential cyst — CT chest, slice 47"
Summary: longer description
Data: JSON — links to annotations + source study/series/slices
Tags: ["ai", "claude"] or ["user"] etc.
Status: "active" | "reviewed" | "dismissed"
```
**Data field:**
```json
{
  "annotations": ["entryID1", "entryID2"],
  "study_id": "...",
  "series_id": "...",
  "slices": [47],
  "confidence": "possible" | "likely" | "confirmed",
  "source_model": "claude-sonnet-4-6"
}
```
### Discovery
Findings surface in:
- Dossier overview (new "Findings" section)
- Notification/alert if AI creates one during analysis
- MCP response ("I found an anomaly — see Finding [X]")

Clicking a finding navigates the viewer to the linked series/slice with annotations visible.
### AI Writing Findings
AI creates findings directly via MCP. No confirmation step. User can dismiss or mark reviewed.

New MCP tool: `create_finding(dossier_id, title, description, annotation_ids, confidence, study_id?, series_id?, slices?)`

---
## New Categories Needed
Two new integer categories in `lib/types.go`:

| Category | Name | Description |
|----------|------|-------------|
| TBD (28?) | `CategoryAnnotation` | Visual markup on a slice or series |
| TBD (29?) | `CategoryFinding` | Structured observation linked to annotations |
Update `docs/entry-layout.md` when categories are assigned.
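If 28 and 29 are confirmed free, the addition to `lib/types.go` could look like this (values tentative, per the Open Questions below):

```go
package main

import "fmt"

// Tentative category values — confirm 28 and 29 are unclaimed before
// these land in lib/types.go.
const (
	CategoryAnnotation = 28 // visual markup on a slice or series
	CategoryFinding    = 29 // structured observation linked to annotations
)

func main() {
	fmt.Println(CategoryAnnotation, CategoryFinding)
}
```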

---
## RBAC
- Users can create/edit/delete their own annotations and findings
- Users can dismiss AI annotations/findings on dossiers they own or have write access to
- AI writes via MCP with system access (`accessorID=""`) — same as the current MCP pattern; the authoring model is recorded in Tags (e.g. `["ai", "claude"]`), not in accessorID
- Read access follows existing RBAC (if you can see the dossier, you can see its findings/annotations)

---
## MCP Tools (New)
| Tool | Description |
|------|-------------|
| `create_annotation` | Add visual annotation to a slice or series |
| `create_finding` | Create a structured finding linked to annotations |
| `list_findings` | List findings for a dossier (filter by type, status) |
| `dismiss_annotation` | Mark an annotation dismissed (user action only) |
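The filter semantics of `list_findings` can be sketched as follows (the `Finding` struct here is a stand-in for the real entry type; an empty string means no filter on that field):

```go
package main

import "fmt"

// Finding is a minimal stand-in for a CategoryFinding entry (illustrative).
type Finding struct {
	Type   string // "user" | "ai"
	Status string // "active" | "reviewed" | "dismissed"
	Title  string
}

// filterFindings applies the list_findings filters; an empty string means
// "no filter" for that field.
func filterFindings(all []Finding, typ, status string) []Finding {
	var out []Finding
	for _, f := range all {
		if typ != "" && f.Type != typ {
			continue
		}
		if status != "" && f.Status != status {
			continue
		}
		out = append(out, f)
	}
	return out
}

func main() {
	all := []Finding{
		{"ai", "active", "Potential cyst — CT chest, slice 47"},
		{"user", "reviewed", "Flag for review"},
		{"ai", "dismissed", "Motion artifact"},
	}
	fmt.Println(len(filterFindings(all, "ai", "active")))
}
```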

---
## Open Questions
- Category integers: confirm 28 + 29 don't conflict with anything planned
- Viewer: canvas-based overlay or SVG? (SVG scales better, easier to manipulate)
- Distance measurement: show in mm (requires DICOM pixel spacing metadata) or pixels?
- Findings in the MCP `get_dossier` response? Or separate tool only?

---
## Iaso's Comments
> *The following comments were added by Iaso (AI health communicator) on March 15, 2026.*

**On distance measurement — mm, not pixels:**

Push hard for mm as the default output wherever DICOM pixel spacing metadata is available (PixelSpacing tag, typically present in CT and MRI). Pixel distances are meaningless to clinicians and patients alike. "This measures 0.83" conveys nothing. "This measures 11mm" is actionable. The metadata is almost always there — use it. Fall back to pixels only when it's genuinely absent, and label it clearly when you do.
**On the dismissible-but-audited pattern:**

This is exactly right and worth preserving explicitly in the spec rationale. AI annotations that are dismissed should never be deleted — they belong in the audit trail. A dismissed finding that turns out to be correct six months later is forensically important. The current spec handles this correctly; make sure it survives implementation.

**On provenance as a first-class feature:**

The tagging system (`["ai", "claude"]`, `["user"]`, etc.) is undersold in the spec. This is actually a major differentiator — most tools blur the line between human and AI observation. inou makes it explicit and permanent. This should be surfaced prominently in the UI, not buried. Suggest: in the viewer, AI annotations render in a visually distinct style (dashed border, different default color palette) so the provenance is immediately obvious without reading metadata.

**On findings discovery:**

The suggestion to include findings in the `get_dossier` MCP response is worth implementing from day one, not as a later addition. When an AI connects to a dossier, the first thing it should see is: "there are 3 active findings on this patient." That changes the entire conversation. Without it, the AI is always starting from scratch.

**On the two-tool split (annotations vs findings):**

Clean architecture. Resist the temptation to merge them into a single "annotated finding" object. The separation — visual markup vs structured observation — mirrors how radiologists actually work. They mark first, conclude second. Keep the layers distinct.

---
## Implementation Order (suggested)
1. New categories in types.go + entry-layout.md
2. Annotation storage (EntryWrite) + basic viewer overlay (arrows + text first)
3. User annotation UI (draw, color, label, delete)
4. AI annotation via MCP tool
5. Finding storage + dossier findings list
6. AI finding via MCP tool + viewer navigation from finding
7. Area + distance annotation types
8. Visibility toggles (user layer / AI layer)