// TB4 dock + 2 TB NVMe → durable storage pool for the clip pipeline, ApertureDB, and colima
Add a Thunderbolt 4 dock to claw (M4 Mac mini, 24/7 render worker on Tailscale) and attach a 2 TB NVMe SSD as a dedicated pool for everything the clip pipeline touches: the pipeline-viz/workspace tree, the ApertureDB data volume, the colima VM disk, persistent thumbnail cache, and Remotion bundle + staging dirs. Boot drive stays clean; the pool survives reboots; claw's role on the Tailnet doesn't change.
TB4 is 40 Gbps. Real-world NVMe throughput over TB4 is ~3 GB/s (≈24 Gbps), an order of magnitude above Tailscale's practical ceiling (~1–4 Gbps). The disk is never the bottleneck for inter-node transfers; it just means claw can feed the Tailnet at wire speed instead of being disk-bound.
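A quick sanity check on those numbers in shell arithmetic (the ~3 GB/s figure is an empirical TB4 NVMe ballpark, not derived):

```shell
# TB4 line rate: 40 Gbit/s / 8 = 5 GB/s theoretical ceiling before overhead.
echo "TB4 ceiling: $((40 / 8)) GB/s"
# Tailscale best case ~4 Gbit/s = 500 MB/s; NVMe at ~3 GB/s is ~6x that,
# and ~24x the 1 Gbit/s low end — the pool always outruns the network.
echo "Tailscale best case: $((4000 / 8)) MB/s"
```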
Five pain points the current all-on-internal setup has, and what fast external storage resolves:
| pain | today | after NVMe |
|---|---|---|
| iCloud FUSE breaks Remotion SSR (ETIMEDOUT / error-70) | workaround: stage to /tmp/remotion-render-staging/, wiped on reboot | staging lives on /Volumes/claw-fast/remotion-cache/staging/ — durable |
| Thumbnail cache evaporates | /tmp/pipeline-viz-thumbs wiped on reboot; first paint of dashboard re-runs ffmpeg for every clip | cache lives on external; dashboard is instantly hydrated cold |
| ApertureDB growth | docker volume on 256-GB boot drive; clip embeddings (1024-d × N) grow without bound | bind-mounted to external; headroom for ~100 K clips |
| Colima disk cap | 60 GB, single large image in ~/.colima/; resizing requires rebuild | VM mounts external at /Volumes/claw-fast:w; data dirs live outside the disk image |
| Source VOD cache | scattered between iCloud + /tmp; Twitch 3-hour recordings are 4-6 GB each | /Volumes/claw-fast/source-vods/ — local, fast, rsync-able to mbp over Tailscale |
The disk is invisible to Tailscale. What changes is the shape of what's worth exposing over it:
- The clip endpoint (claw.tail-net.ts.net/clips/) is no longer disk-latency-bound.
- Other nodes can push sources to claw-fast over Tailscale; outputs come back to the same pool. Lightweight render farm, no NAS needed.

Layout of the pool:

```
/Volumes/claw-fast/                ← APFS, named "claw-fast"
├── pipeline-viz/
│   ├── workspace/                 ← source / moments / _transcripts / clips
│   └── thumbs/                    ← persistent thumbnail cache (was /tmp)
├── docker/
│   └── aperturedb-data/           ← bind-mounted into the container
├── remotion-cache/
│   ├── bundle/                    ← webpack bundle cache
│   └── staging/                   ← per-render staged sources
├── archive/                       ← rendered clips, cold storage by session
└── source-vods/                   ← long-form recordings
```

Legacy paths map onto the pool:

```
~/pipeline-viz/workspace      → symlink → /Volumes/claw-fast/pipeline-viz/workspace
/tmp/pipeline-viz-thumbs      → symlink → /Volumes/claw-fast/pipeline-viz/thumbs
/tmp/remotion-bundle          → symlink → /Volumes/claw-fast/remotion-cache/bundle
/tmp/remotion-render-staging  → symlink → /Volumes/claw-fast/remotion-cache/staging
~/claw-fast-backup/           ← internal rsync mirror (cable-jostle insurance)
```
Symlinks from legacy paths mean no code changes in ws.ts, watchers, scripts, or skills. Everything keeps working; the data just lives somewhere new.
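A minimal sketch of that symlink layer (paths as in the layout; `ln -sfn` replaces an existing link in place, so re-running is safe):

```shell
# Legacy paths become symlinks into the pool; callers never notice.
mkdir -p ~/pipeline-viz   # parent dir for the workspace link
ln -sfn /Volumes/claw-fast/pipeline-viz/workspace  ~/pipeline-viz/workspace
ln -sfn /Volumes/claw-fast/pipeline-viz/thumbs     /tmp/pipeline-viz-thumbs
ln -sfn /Volumes/claw-fast/remotion-cache/bundle   /tmp/remotion-bundle
ln -sfn /Volumes/claw-fast/remotion-cache/staging  /tmp/remotion-render-staging
```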
| part | notes |
|---|---|
| TB4 dock or direct TB4 NVMe enclosure | Dock is convenience (more ports, PD power-through). Direct enclosure is cheaper and hits the same ~3 GB/s. Either works. |
| TB4 NVMe enclosure | Look for genuine Thunderbolt 4, not USB4 40 Gbps (marketing overlaps but performance does not). Brands: OWC, Acasis TBU401, Satechi, Plugable TBT3-NVME2C (TB3 but works on TB4 ports). |
| M.2 2280 NVMe SSD, 2 TB, PCIe 4.0 | WD SN850X, Samsung 990 Pro, Crucial T700. PCIe 4.0 so the enclosure isn't the bottleneck. |
| 1× TB4-to-TB4 cable | Usually included with the enclosure. 40 Gbps-rated; keep it short (≤ 1 m) for best signal integrity. |
Pro tip. M4 Mac mini has three TB4 ports — plug the enclosure directly into a port on the back, not through a hub or dock, for the cleanest 40 Gbps path.
Pre-flight:

- Run du -sh ~/pipeline-viz/workspace ~/.colima /tmp/pipeline-viz-thumbs so you know what's migrating.
- The migration script lives at ~/pipeline-viz/server/claw-fast-migrate.sh.
- launchctl list | grep pipeline-viz should show com.pipeline-viz.tunnel and com.pipeline-viz.youtube-poster.

Plug in the enclosure, then:
ssh claw@100.82.244.127
~/pipeline-viz/server/claw-fast-migrate.sh status # always safe
~/pipeline-viz/server/claw-fast-migrate.sh phase1
The script runs diskutil list external physical, prompts for the disk identifier (e.g. disk4), and erases as APFS named claw-fast. If the drive is already formatted with that name, type skip when prompted.
diskutil eraseDisk APFS claw-fast GPT /dev/disk4
Everything on /dev/diskN gets erased. Double-check the identifier against diskutil list before confirming. Script asks twice.
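A hypothetical extra guard worth putting in front of the erase: diskutil info reports a "Device Location: External" line for TB-attached disks, so a wrapper could refuse to touch anything internal (sketch only — the script's own prompts remain the primary safety net):

```shell
# Refuse to erase unless macOS reports the disk as externally attached.
confirm_external() {
  diskutil info "$1" | grep -q 'Device Location: *External'
}
# Destructive — only after double-checking `diskutil list`:
#   confirm_external disk4 && diskutil eraseDisk APFS claw-fast GPT /dev/disk4
```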
~/pipeline-viz/server/claw-fast-migrate.sh phase2
Creates every directory shown in #layout. Idempotent — safe to re-run.
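Phase 2 presumably boils down to something like this (directory names from the layout; ROOT is pointed at a temp path here so the sketch runs anywhere — on claw it would be /Volumes/claw-fast):

```shell
# mkdir -p succeeds whether or not the directory exists — that's the idempotence.
ROOT=/tmp/claw-fast-demo   # on claw: ROOT=/Volumes/claw-fast
for d in pipeline-viz/workspace pipeline-viz/thumbs docker/aperturedb-data \
         remotion-cache/bundle remotion-cache/staging archive source-vods; do
  mkdir -p "$ROOT/$d"
done
```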
~/pipeline-viz/server/claw-fast-migrate.sh phase3
What it does:
- Stops the WS server (pkill -f "tsx.*ws.ts").
- Copies ~/pipeline-viz/workspace/ → /Volumes/claw-fast/pipeline-viz/workspace/.
- Renames the original to workspace.pre-migration.
- Replaces ~/pipeline-viz/workspace with the → /Volumes/claw-fast/... symlink.
- Restarts the WS server with PIPELINE_ROOT and THUMB_CACHE pointed at external.
- Hits /health to confirm.

~/pipeline-viz/server/claw-fast-migrate.sh phase4
Warms the external cache from whatever's in /tmp/pipeline-viz-thumbs, then symlinks the old path to external. Dashboard thumbs now survive reboots.
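Roughly, assuming the warm step is copy-then-link (cp -a preserves mtimes, so any cache-freshness checks keep matching — a sketch, not the script's actual code):

```shell
SRC=/tmp/pipeline-viz-thumbs
DST=/Volumes/claw-fast/pipeline-viz/thumbs
# Copy only if the old cache exists and isn't already a symlink to the pool.
if [ -d "$SRC" ] && [ ! -L "$SRC" ]; then
  cp -a "$SRC/." "$DST/"
fi
rm -rf "$SRC"
ln -s "$DST" "$SRC"   # from here on, writers to /tmp land on the pool
```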
The most involved phase. Backs up the current docker volume, stops colima, restarts with the external drive mounted into the VM, re-creates the container bind-mounted to a path on the external drive.
~/pipeline-viz/server/claw-fast-migrate.sh phase5
Under the hood:
# 1. Backup existing volume
docker run --rm \
-v aperturedb-data:/data \
-v ~/claw-fast-backup:/backup \
alpine tar czf /backup/aperturedb-data.pre-migration.tgz -C /data .
# 2. Extract into external pool
tar xzf ~/claw-fast-backup/aperturedb-data.pre-migration.tgz \
-C /Volumes/claw-fast/docker/aperturedb-data
# 3. Stop containers + colima
docker rm -f aperturedb catalog
colima stop
# 4. Restart colima with external mount exposed R/W to the VM
colima start --cpu 4 --memory 6 --disk 60 --mount /Volumes/claw-fast:w
# 5. Restart aperturedb bind-mounted to the external path
docker run -d --name aperturedb --restart unless-stopped \
-p 55555:55555 -p 8788:8788 \
-v /Volumes/claw-fast/docker/aperturedb-data:/aperturedb/data \
aperturedata/aperturedb-community
# 6. Restart catalog (unchanged — shares aperturedb's network)
docker run -d --name catalog --restart unless-stopped \
--network container:aperturedb \
-v ~/pipeline-viz/server:/work -w /work \
python:3.12-slim sh -c 'pip install --quiet aperturedb && python catalog.py'
# 7. Kick the SSH tunnel + health check
launchctl kickstart -k gui/$UID/com.pipeline-viz.tunnel
curl -fsS http://localhost:48788/health
Why bind-mount, not named volume. Using -v /Volumes/claw-fast/docker/aperturedb-data:/aperturedb/data puts the data plainly on the external drive — no docker-volume indirection. You can ls the raw contents, rsync them, back them up with Time Machine. Named volumes hide the path inside colima's VM image.
~/pipeline-viz/server/claw-fast-migrate.sh phase6
Symlinks /tmp/remotion-bundle and /tmp/remotion-render-staging to external. Warm renders get faster because the webpack bundle cache survives reboots.
~/pipeline-viz/server/claw-fast-migrate.sh phase7
Installs a launchd agent at ~/Library/LaunchAgents/com.pipeline-viz.nightly-backup.plist that runs at 04:00 every day:
rsync -ah --delete \
/Volumes/claw-fast/docker/aperturedb-data/ \
~/claw-fast-backup/aperturedb-data/
If the TB4 cable jostles loose during a render and the volume corrupts, last night's data is on the internal drive. Cheap insurance; one-way mirror so it won't propagate corruption back.
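For reference, the installed agent presumably looks something like this sketch (exact plist unverified against the script; the rsync path and claw home directory are assumptions):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0"><dict>
  <key>Label</key><string>com.pipeline-viz.nightly-backup</string>
  <key>ProgramArguments</key>
  <array>
    <string>/usr/bin/rsync</string><string>-ah</string><string>--delete</string>
    <string>/Volumes/claw-fast/docker/aperturedb-data/</string>
    <string>/Users/claw/claw-fast-backup/aperturedb-data/</string>
  </array>
  <key>StartCalendarInterval</key>
  <dict><key>Hour</key><integer>4</integer><key>Minute</key><integer>0</integer></dict>
</dict></plist>
```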
~/pipeline-viz/server/claw-fast-migrate.sh verify
Runs seven checks:
- df -h /Volumes/claw-fast — disk online, free space sane
- curl http://localhost:8787/health — WS server alive
- curl http://localhost:48788/health — catalog reachable through SSH tunnel
- docker inspect aperturedb — confirms bind-mount source is external path
- composite_complete event — writes a 2 s testsrc MP4 into the watched dir and confirms it lands in the queue

If anything goes sideways and you want to undo:
# 1. Stop things that hold open files on external
pkill -f "tsx.*ws.ts"
docker rm -f aperturedb catalog
colima stop
# 2. Restore the pre-migration volume backup
docker volume create aperturedb-data
docker run --rm \
-v aperturedb-data:/data \
-v ~/claw-fast-backup:/backup \
alpine tar xzf /backup/aperturedb-data.pre-migration.tgz -C /data
# 3. Remove symlinks, restore renamed dirs
rm ~/pipeline-viz/workspace /tmp/pipeline-viz-thumbs \
/tmp/remotion-bundle /tmp/remotion-render-staging
mv ~/pipeline-viz/workspace.pre-migration ~/pipeline-viz/workspace
# 4. Relaunch colima WITHOUT external mount + restart containers on named volume
colima start --cpu 4 --memory 6 --disk 60
docker run -d --name aperturedb --restart unless-stopped \
-p 55555:55555 -p 8788:8788 \
-v aperturedb-data:/aperturedb/data \
aperturedata/aperturedb-community
# (+ restart catalog as before)
# 5. Restart WS server with the original (internal) PIPELINE_ROOT
cd ~/pipeline-viz/server
PIPELINE_ROOT=~/pipeline-viz/workspace \
THUMB_CACHE=/tmp/pipeline-viz-thumbs \
PUBLIC_HOST=100.82.244.127 \
CATALOG_URL=http://localhost:48788 \
nohup npx tsx ws.ts > ws.log 2>&1 & disown
- Always start colima with --mount /Volumes/claw-fast:w or the VM can't see the drive. --mount only takes effect on colima start, not on a running VM.
- com.pipeline-viz.tunnel will re-establish its SSH forward automatically, but if the VM's SSH port number changes on restart, re-read ~/.colima/_lima/colima/ssh.config and update the tunnel plist if needed.
- If Time Machine picks up /Volumes/claw-fast, it will try to back up the raw aperturedb-data directory — hundreds of thousands of tiny files. Exclude the docker/ subtree from TM and rely on the nightly rsync instead.