NVMe Upgrade — claw

// TB4 dock + 2 TB NVMe → durable storage pool for the clip pipeline, ApertureDB, and colima

Overview

Add a Thunderbolt 4 dock to claw (M4 Mac mini, 24/7 render worker on Tailscale) and attach a 2 TB NVMe SSD as a dedicated pool for everything the clip pipeline touches: the pipeline-viz/workspace tree, the ApertureDB data volume, the colima VM disk, persistent thumbnail cache, and Remotion bundle + staging dirs. Boot drive stays clean; the pool survives reboots; claw's role on the Tailnet doesn't change.

TB4 is 40 Gbps. Real-world NVMe throughput over TB4 is ~3 GB/s (≈24 Gbps) — roughly an order of magnitude above Tailscale's practical ceiling (~1–4 Gbps). The disk is never the bottleneck for inter-node transfers; it just means claw can feed the Tailnet at wire speed instead of being disk-bound.
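That comparison in one line of shell arithmetic (the figures are the estimates above, not measurements):

```shell
# Put disk and network in the same units. 3 GB/s = 24 Gbps of NVMe
# throughput vs. a 1-4 Gbps practical Tailscale ceiling.
disk_gbps=$((3 * 8))   # GB/s -> Gbps
net_gbps=4             # Tailscale best case
echo "disk ${disk_gbps} Gbps vs tailnet ${net_gbps} Gbps"   # disk 24 Gbps vs tailnet 4 Gbps
```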

Why

Five pain points of the current all-on-internal setup, and how fast external storage resolves each:

| pain | today | after NVMe |
|---|---|---|
| iCloud FUSE breaks Remotion SSR (ETIMEDOUT / error-70) | workaround: stage to /tmp/remotion-render-staging/, wiped on reboot | staging lives on /Volumes/claw-fast/remotion-cache/staging/ — durable |
| Thumbnail cache evaporates | /tmp/pipeline-viz-thumbs wiped on reboot; first paint of dashboard re-runs ffmpeg for every clip | cache lives on external; dashboard is instantly hydrated cold |
| ApertureDB growth | docker volume on 256-GB boot drive; clip embeddings (1024-d × N) grow without bound | bind-mounted to external; room for ~100 K clips with headroom |
| Colima disk cap | 60 GB, single large image in ~/.colima/; resizing requires rebuild | VM mounts external at /Volumes/claw-fast:w; data dirs live outside the disk image |
| Source VOD cache | scattered between iCloud + /tmp; Twitch 3-hour recordings are 4-6 GB each | /Volumes/claw-fast/source-vods/ — local, fast, rsync-able to mbp over Tailscale |

Tailscale impact

The disk is invisible to Tailscale. What changes is the shape of what's worth exposing over it.

Filesystem layout

/Volumes/claw-fast/                            ← APFS, named "claw-fast"
├── pipeline-viz/
│   ├── workspace/                             ← source / moments / _transcripts / clips
│   └── thumbs/                                ← persistent thumbnail cache (was /tmp)
├── docker/
│   └── aperturedb-data/                       ← bind-mounted into the container
├── remotion-cache/
│   ├── bundle/                                ← webpack bundle cache
│   └── staging/                               ← per-render staged sources
├── archive/                                   ← rendered clips, cold storage by session
└── source-vods/                               ← long-form recordings

~/pipeline-viz/workspace      → symlink → /Volumes/claw-fast/pipeline-viz/workspace
/tmp/pipeline-viz-thumbs      → symlink → /Volumes/claw-fast/pipeline-viz/thumbs
/tmp/remotion-bundle          → symlink → /Volumes/claw-fast/remotion-cache/bundle
/tmp/remotion-render-staging  → symlink → /Volumes/claw-fast/remotion-cache/staging

~/claw-fast-backup/                            ← internal rsync mirror (cable-jostle insurance)

Symlinks from legacy paths mean no code changes in ws.ts, watchers, scripts, or skills. Everything keeps working; the data just lives somewhere new.
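The mapping above can be recreated idempotently. A sketch (not the migration script itself), with temp dirs standing in for /Volumes/claw-fast, $HOME, and /tmp so it runs anywhere:

```shell
# Recreate the legacy-path symlinks. ln -sfn replaces an existing symlink
# instead of descending into it, so re-running is safe.
pool="$(mktemp -d)"                      # stand-in for /Volumes/claw-fast
legacy="$(mktemp -d)"                    # stand-in for ~ and /tmp
mkdir -p "$pool/pipeline-viz/workspace" "$pool/pipeline-viz/thumbs" \
         "$pool/remotion-cache/bundle"  "$pool/remotion-cache/staging"
while read -r link target; do
  ln -sfn "$pool/$target" "$legacy/$link"
done <<'EOF'
workspace                pipeline-viz/workspace
pipeline-viz-thumbs      pipeline-viz/thumbs
remotion-bundle          remotion-cache/bundle
remotion-render-staging  remotion-cache/staging
EOF
readlink "$legacy/workspace"             # resolves into the pool
```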

Hardware shopping list

| part | notes |
|---|---|
| TB4 dock or direct TB4 NVMe enclosure | Dock is convenience (more ports, PD power-through). Direct enclosure is cheaper and hits the same ~3 GB/s. Either works. |
| TB4 NVMe enclosure | Look for genuine Thunderbolt 4, not USB4 40 Gbps (marketing overlaps but performance does not). Brands: OWC, Acasis TBU401, Satechi, Plugable TBT3-NVME2C (TB3, but works on TB4 ports). |
| M.2 2280 NVMe SSD, 2 TB, PCIe 4.0 | WD SN850X, Samsung 990 Pro, Crucial T700. PCIe 4.0 so the enclosure isn't the bottleneck. |
| 1× TB4-to-TB4 cable | Already included with most enclosures. 40 Gbps-rated; keep it short (≤ 1 m) for best signal integrity. |

Pro tip. The M4 Mac mini has three Thunderbolt ports on the back (TB4 on the base M4, TB5 on the M4 Pro). Plug the enclosure directly into one of them, not through a hub or dock, for the cleanest 40 Gbps path.

Before the dock arrives

  1. Snapshot current state on claw — du -sh ~/pipeline-viz/workspace ~/.colima /tmp/pipeline-viz-thumbs so you know what's migrating.
  2. Free space on the internal drive — at least 10 GB. The ApertureDB backup in phase 5 needs temporary room.
  3. Make sure the migration script is on claw: already synced to ~/pipeline-viz/server/claw-fast-migrate.sh.
  4. Confirm tunnel + launchd agents are healthy — launchctl list | grep pipeline-viz should show com.pipeline-viz.tunnel and com.pipeline-viz.youtube-poster.
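Steps 1 and 2 can be scripted. A portable sketch (the paths here are placeholders so it runs anywhere; on claw, point target at the dirs from step 1):

```shell
# Preflight: size what's migrating and check free space for the phase-5 backup.
need_gb=10                                       # threshold from step 2
target="$HOME"                                   # placeholder; on claw: ~/pipeline-viz/workspace etc.
du -sk "$target" 2>/dev/null | awk '{printf "%.1f GB to migrate\n", $1/1048576}'
free_kb="$(df -Pk "$target" | awk 'NR==2 {print $4}')"
free_gb=$((free_kb / 1048576))
echo "${free_gb} GB free (need ${need_gb})"
[ "$free_gb" -ge "$need_gb" ] || echo "WARN: free up space before phase 5" >&2
```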

Phase 1 — Format external drive

Plug in the enclosure, then:

ssh claw@100.82.244.127
~/pipeline-viz/server/claw-fast-migrate.sh status    # always safe
~/pipeline-viz/server/claw-fast-migrate.sh phase1

The script runs diskutil list external physical, prompts for the disk identifier (e.g. disk4), and erases as APFS named claw-fast. If the drive is already formatted with that name, type skip when prompted.

diskutil eraseDisk APFS claw-fast GPT /dev/disk4

Everything on /dev/diskN gets erased. Double-check the identifier against diskutil list before confirming. Script asks twice.
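If you ever run the erase by hand instead of through the script, a cheap guard against fat-fingering the identifier (the regex check is a sketch of mine, not part of the migration script; the erase itself stays commented out):

```shell
# Accept only whole-disk identifiers like disk4: not disk0 (the boot disk),
# and not a slice like disk4s1.
disk="disk4"
if echo "$disk" | grep -Eq '^disk[1-9][0-9]*$'; then
  echo "would erase /dev/$disk"
  # diskutil eraseDisk APFS claw-fast GPT "/dev/$disk"
else
  echo "refusing: '$disk' is not a whole-disk identifier" >&2
  exit 1
fi
```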

Phase 2 — Directory tree

~/pipeline-viz/server/claw-fast-migrate.sh phase2

Creates every directory shown in #layout. Idempotent — safe to re-run.
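What idempotent means here, sketched with a temp dir standing in for /Volumes/claw-fast:

```shell
# mkdir -p creates missing parents and silently succeeds if the dir already
# exists, so running this twice produces the same tree.
root="$(mktemp -d)"                      # stand-in for /Volumes/claw-fast
for d in pipeline-viz/workspace pipeline-viz/thumbs docker/aperturedb-data \
         remotion-cache/bundle remotion-cache/staging archive source-vods; do
  mkdir -p "$root/$d"
done
find "$root" -type d | wc -l             # 11: root plus the tree from #layout
```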

Phase 3 — Migrate pipeline-viz workspace

~/pipeline-viz/server/claw-fast-migrate.sh phase3

What it does:

  1. Stops the WS server (pkill -f "tsx.*ws.ts").
  2. Rsyncs ~/pipeline-viz/workspace/ → /Volumes/claw-fast/pipeline-viz/workspace/.
  3. Renames old internal dir to workspace.pre-migration.
  4. Creates ~/pipeline-viz/workspace → /Volumes/claw-fast/... symlink.
  5. Restarts WS server with PIPELINE_ROOT and THUMB_CACHE pointed at external.
  6. Curls /health to confirm.

Phase 4 — Persistent thumb cache

~/pipeline-viz/server/claw-fast-migrate.sh phase4

Warms the external cache from whatever's in /tmp/pipeline-viz-thumbs, then symlinks the old path to external. Dashboard thumbs now survive reboots.
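The warm-then-swap at the heart of this phase, sketched with temp dirs standing in for /tmp/pipeline-viz-thumbs and the external cache (the real phase uses rsync; cp -a is the portable stand-in here):

```shell
# 1. Copy whatever survived in the old volatile cache into the durable one.
# 2. Replace the old path with a symlink, so nothing reading it has to change.
old="$(mktemp -d)"; new="$(mktemp -d)"
echo jpeg-bytes > "$old/clip1.jpg"       # pretend a thumbnail survived
cp -a "$old/." "$new/"                   # warm the durable cache
rm -rf "$old"
ln -sfn "$new" "$old"                    # legacy path now resolves to the durable cache
cat "$old/clip1.jpg"                     # jpeg-bytes
```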

Phase 5 — ApertureDB + colima relocation

The most involved phase. It backs up the current docker volume, stops colima, restarts it with the external drive mounted into the VM, and re-creates the container bind-mounted to a path on the external drive.

~/pipeline-viz/server/claw-fast-migrate.sh phase5

Under the hood:

# 1. Backup existing volume
docker run --rm \
  -v aperturedb-data:/data \
  -v ~/claw-fast-backup:/backup \
  alpine tar czf /backup/aperturedb-data.pre-migration.tgz -C /data .

# 2. Extract into external pool
tar xzf ~/claw-fast-backup/aperturedb-data.pre-migration.tgz \
  -C /Volumes/claw-fast/docker/aperturedb-data

# 3. Stop containers + colima
docker rm -f aperturedb catalog
colima stop

# 4. Restart colima with external mount exposed R/W to the VM
colima start --cpu 4 --memory 6 --disk 60 --mount /Volumes/claw-fast:w

# 5. Restart aperturedb bind-mounted to the external path
docker run -d --name aperturedb --restart unless-stopped \
  -p 55555:55555 -p 8788:8788 \
  -v /Volumes/claw-fast/docker/aperturedb-data:/aperturedb/data \
  aperturedata/aperturedb-community

# 6. Restart catalog (unchanged — shares aperturedb's network)
docker run -d --name catalog --restart unless-stopped \
  --network container:aperturedb \
  -v ~/pipeline-viz/server:/work -w /work \
  python:3.12-slim sh -c 'pip install --quiet aperturedb && python catalog.py'

# 7. Kick the SSH tunnel + health check
launchctl kickstart -k gui/$UID/com.pipeline-viz.tunnel
curl -fsS http://localhost:48788/health

Why bind-mount, not named volume. Using -v /Volumes/claw-fast/docker/aperturedb-data:/aperturedb/data puts the data plainly on the external drive — no docker-volume indirection. You can ls the raw contents, rsync them, back them up with Time Machine. Named volumes hide the path inside colima's VM image.

Phase 6 — Remotion bundle + staging

~/pipeline-viz/server/claw-fast-migrate.sh phase6

Symlinks /tmp/remotion-bundle and /tmp/remotion-render-staging to external. Warm renders get faster because the webpack bundle cache survives reboots.

Phase 7 — Nightly backup (cable-jostle insurance)

~/pipeline-viz/server/claw-fast-migrate.sh phase7

Installs a launchd agent at ~/Library/LaunchAgents/com.pipeline-viz.nightly-backup.plist that runs at 04:00 every day:

rsync -ah --delete \
  /Volumes/claw-fast/docker/aperturedb-data/ \
  ~/claw-fast-backup/aperturedb-data/

If the TB4 cable jostles loose during a render and the volume corrupts, last night's data is on the internal drive. Cheap insurance; one-way mirror so it won't propagate corruption back.
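For reference, a minimal sketch of what such an agent plist can look like. The label and 04:00 schedule are from above; the absolute rsync and home-dir paths are assumptions (check the installed file for the real values). Written to a temp file here so the sketch is side-effect free:

```shell
plist="$(mktemp)"
cat > "$plist" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.pipeline-viz.nightly-backup</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/rsync</string>
    <string>-ah</string>
    <string>--delete</string>
    <string>/Volumes/claw-fast/docker/aperturedb-data/</string>
    <string>/Users/claw/claw-fast-backup/aperturedb-data/</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict><key>Hour</key><integer>4</integer><key>Minute</key><integer>0</integer></dict>
</dict>
</plist>
EOF
grep -q 'com.pipeline-viz.nightly-backup' "$plist" && echo "plist sketch written: $plist"
```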

Verify

~/pipeline-viz/server/claw-fast-migrate.sh verify

Runs seven checks:

  1. df -h /Volumes/claw-fast — disk online, free space sane
  2. curl http://localhost:8787/health — WS server alive
  3. curl http://localhost:48788/health — catalog reachable through SSH tunnel
  4. Checks each symlink: workspace, thumbs, remotion bundle, staging
  5. docker inspect aperturedb — confirms bind-mount source is external path
  6. Fires a synthetic composite_complete event — writes a 2 s testsrc MP4 into the watched dir and confirms it lands in the queue
  7. Tails the WS server log for evidence the event broadcast

Rollback

If anything goes sideways and you want to undo:

# 1. Stop things that hold open files on external
pkill -f "tsx.*ws.ts"
docker rm -f aperturedb catalog
colima stop

# 2. Restore the pre-migration volume backup
docker volume create aperturedb-data
docker run --rm \
  -v aperturedb-data:/data \
  -v ~/claw-fast-backup:/backup \
  alpine tar xzf /backup/aperturedb-data.pre-migration.tgz -C /data

# 3. Remove symlinks, restore renamed dirs
rm ~/pipeline-viz/workspace /tmp/pipeline-viz-thumbs \
   /tmp/remotion-bundle /tmp/remotion-render-staging
mv ~/pipeline-viz/workspace.pre-migration ~/pipeline-viz/workspace

# 4. Relaunch colima WITHOUT external mount + restart containers on named volume
colima start --cpu 4 --memory 6 --disk 60
docker run -d --name aperturedb --restart unless-stopped \
  -p 55555:55555 -p 8788:8788 \
  -v aperturedb-data:/aperturedb/data \
  aperturedata/aperturedb-community
# (+ restart catalog as before)

# 5. Restart WS server with the original (internal) PIPELINE_ROOT
cd ~/pipeline-viz/server
PIPELINE_ROOT=~/pipeline-viz/workspace \
THUMB_CACHE=/tmp/pipeline-viz-thumbs \
PUBLIC_HOST=100.82.244.127 \
CATALOG_URL=http://localhost:48788 \
  nohup npx tsx ws.ts > ws.log 2>&1 & disown

Pitfalls