| Field | Value |
|---|---|
| URL | https://evidence.sayhellocollege.com |
| Server | hc-central (62.171.177.227) |
| Stack | Evidence.dev + DuckDB WASM + SvelteKit + Node.js (bare-metal, no Docker) |
| Deployment | Blue-green with Traefik routing — two identical slots |
| Slot A | Port 3001, `evidence-a.service`, build at `/srv/evidence/a/build_live/` |
| Slot B | Port 3002, `evidence-b.service`, build at `/srv/evidence/b/build_live/` |
| Active slot | Tracked in `/srv/evidence/active` (`"a"` or `"b"`) |
| Data sources | Postgres (Heroku), HubSpot (via Supabase), Stripe (via Supabase), flat files |
| Data refresh | Cron every 15 min via `scripts/rebuild-prod.sh` (rebuilds active prod slot) |
| Code deploy | `scripts/deploy.sh` triggered by GitHub Actions on merge to `main` |
| Swap | `scripts/swap-prod.sh` — instant Traefik port swap with health check |
| Repo | sayhellocollege/hc-evidence |
| Auth | Token-based (`?token=` parameter from portal iframe) |
## Dev (standby slot)
| Field | Value |
|---|---|
| URL | https://dev-evidence.sayhellocollege.com |
| Server | hc-central (62.171.177.227) |
| Service | Whichever slot is standby (`evidence-a.service` or `evidence-b.service`) |
| Serves from | `/home/dev/hc-evidence/build_live/` (local dev builds) via symlink |
| Auth | Same token as prod (same `.env`) |
| Routing | Traefik → standby slot port |
| Manage | `sudo systemctl restart evidence-a` (or `evidence-b`) |
| Logs | `sudo journalctl -u evidence-a -f` (or `evidence-b`) |
```
                          ┌─ Slot A (port 3001) → /srv/evidence/a/current (symlink)
Browser → Traefik (443) ──┤   evidence.sayhellocollege.com     → active slot
                          │   dev-evidence.sayhellocollege.com → standby slot
                          └─ Slot B (port 3002) → /srv/evidence/b/current (symlink)

Active slot symlink:  /srv/evidence/{a,b}/current → /srv/evidence/{a,b}/build_live/
Standby slot symlink: /srv/evidence/{a,b}/current → /home/dev/hc-evidence/build_live/
```
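The symlink arrangement can be sketched as a small helper. This is a hypothetical sketch assuming the layout described in this doc; `point_slots` and the `DEV_BUILD` override are illustrative names, not part of the real scripts:

```shell
#!/bin/sh
# Hypothetical sketch of the slot-symlink logic. Reads the active marker
# ("a" or "b"), points the active slot's `current` symlink at its own
# build_live/, and the standby slot's at the local dev checkout so
# dev-evidence serves dev builds.
set -eu

point_slots() {
  base="$1"                                   # e.g. /srv/evidence
  active="$(cat "$base/active")"              # "a" or "b"
  if [ "$active" = a ]; then standby=b; else standby=a; fi
  # active slot serves its own rsync'd production build
  ln -sfn "$base/$active/build_live" "$base/$active/current"
  # standby slot doubles as dev, serving local builds
  ln -sfn "${DEV_BUILD:-/home/dev/hc-evidence/build_live}" "$base/$standby/current"
  echo "$standby"
}
```

Running `point_slots /srv/evidence` after a swap would realign both symlinks; the real scripts presumably do the equivalent inline.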
- Merge to `main` → GitHub Actions triggers `scripts/deploy.sh`
- `deploy.sh`: kills any running builds, disables cron, clears Evidence caches
- `deploy.sh`: `git pull` → `npm run sources` → `npm run build` → rsync to standby slot
- `deploy.sh`: calls `swap-prod.sh` → Traefik swaps, standby becomes prod
- `deploy.sh`: re-enables cron on exit (via trap, runs even on failure)
- Old prod becomes dev; its symlink points to `/home/dev/hc-evidence/build_live/`
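The trap-based cleanup is the load-bearing detail. A minimal sketch of the pattern (assumed mechanism, not the real `deploy.sh`; the "disable cron" step is modeled here purely as a lockfile, and the path is hypothetical):

```shell
#!/bin/sh
# Sketch only: hold a lockfile for the whole deploy and guarantee release
# on exit via a trap, so a failed build can never leave refreshes disabled.
set -eu
LOCK="${LOCK:-/tmp/evidence-deploy.lock}"    # hypothetical path

touch "$LOCK"
trap 'rm -f "$LOCK"' EXIT                    # fires on success AND on failure

# ... kill stray builds, clear .evidence caches, git pull,
# npm run sources && npm run build, rsync to standby, swap-prod.sh ...
echo "deploy steps run here"
```

Because the trap is registered on `EXIT` rather than a specific signal, the release runs whether the script finishes, errors out under `set -e`, or is interrupted.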
`scripts/rebuild-prod.sh` runs every 15 min via cron (as the `dev` user):

- Kills orphaned build processes, clears caches before each build
- Rebuilds the ACTIVE (prod) slot in place — no swap
- Uses `npm run build` (not `build:strict` — strict kills the build on non-fatal query errors)
- Skips the run if a lockfile exists (`deploy.sh` holds the lockfile during deploys)
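A sketch of that guard logic (hypothetical function and layout; the real script's paths and commands may differ):

```shell
#!/bin/sh
# Sketch: skip the data refresh while a deploy holds the lockfile;
# otherwise clear the corruptible Evidence caches and rebuild in place.
set -eu

refresh_active_slot() {
  base="$1" lock="$2"
  if [ -e "$lock" ]; then
    echo "deploy in progress, skipping"
    return 0
  fi
  slot="$(cat "$base/active")"
  # clear caches that corrupt easily, before every build
  rm -rf "$base/$slot/.evidence/template/.evidence-queries" \
         "$base/$slot/.evidence/meta/query-cache"
  echo "rebuilding $slot"
  # real work would be roughly:
  # (cd "$base/$slot" && npm run sources && npm run build)   # non-strict
}
```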
| File | Purpose |
|---|---|
| `scripts/deploy.sh` | Full deploy: kill builds → disable cron → clear cache → sources → build → swap → re-enable cron |
| `scripts/swap-prod.sh` | Standalone Traefik swap with health check (uses `sudo tee` for Traefik config) |
| `scripts/rebuild-prod.sh` | Cron data refresh for active prod slot |
| `docker/server.cjs` | Node.js static file server + token auth + CSP headers |
| `.github/workflows/deploy.yml` | GitHub Actions: runs `deploy.sh` on push to `main` (self-hosted runner) |
| `sources/postgres_heroku/views/` | Pre-computed SQL views that run in Postgres at build time |
| `.env` | All runtime env vars (db connections, tokens). Shared by both slots. |
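The swap's core steps can be sketched as below. This is an assumed implementation, not the real `swap-prod.sh`: the health-check command is injectable via `CURL` purely so the sketch is testable, and the real script also rewrites the Traefik dynamic file (roughly `... | sudo tee /data/coolify/proxy/dynamic/evidence.yaml`):

```shell
#!/bin/sh
# Sketch only: never swap to a slot that isn't answering; on success,
# record the newly active slot in the state file (/srv/evidence/active).
set -eu

swap_to() {
  slot="$1" port="$2" state_file="$3"
  if ! "${CURL:-curl}" -fsS "http://127.0.0.1:$port/" >/dev/null; then
    echo "health check failed on port $port" >&2
    return 1
  fi
  # the real script would update the Traefik dynamic config here via sudo tee
  printf '%s' "$slot" > "$state_file"
  echo "swapped to $slot"
}
```

Failing closed matters here: if the health check fails, the state file is left untouched and traffic keeps flowing to the old slot.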
- Never run two builds concurrently. They share the same working directory and Evidence cache. `deploy.sh` and `rebuild-prod.sh` both enforce this via lockfile + process killing.
- Evidence caches corrupt easily. Both scripts clear `.evidence/template/.evidence-queries` and `.evidence/meta/query-cache` before every build.
- The `.svelte-kit` cache can also go stale after page changes. If a build produces old content, delete `.evidence/template/.svelte-kit` and rebuild.
- `build:strict` vs `build`: strict mode kills the entire build on any query error (e.g., HubSpot VARCHAR issues). We use `build` (non-strict), which logs errors but continues.
- Sudoers: the `dev` user has passwordless `sudo tee /data/coolify/proxy/dynamic/evidence.yaml` for Traefik config swaps (configured in `/etc/sudoers.d/evidence-deploy`).
- Never use `${inputs.X.value}` in SQL for frequently-changed inputs. This triggers DuckDB WASM re-execution in the browser (30+ second initialization cost on first interaction).
- Pre-compute heavy JOINs in source views (`sources/postgres_heroku/views/`). These run in Postgres at build time and produce small parquet files. Page queries filter/aggregate the flat parquet — instant in the browser.
- For month/tab switching, load all data in one query bound to the date range, then use `{#each}` or `.filter()` in the template. This is pure DOM/JS — no DuckDB.
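The cache paths called out above can be collected into one helper. This is a hypothetical convenience function (the cache paths are from this doc; the function name is my invention):

```shell
#!/bin/sh
# Hypothetical helper: remove the Evidence caches that must be cleared
# before every build. Takes the project root so it can run anywhere.
set -eu

clear_evidence_caches() {
  root="$1"
  rm -rf "$root/.evidence/template/.evidence-queries" \
         "$root/.evidence/meta/query-cache"
  # when builds keep serving stale pages, also remove:
  #   rm -rf "$root/.evidence/template/.svelte-kit"
}
```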
| Field | Value |
|---|---|
| URL | https://dev-portal.sayhellocollege.com |
| Server | hc-central (62.171.177.227) |
| Service | `dev-portal.service` (systemd) |
| Port | 3000 (Next.js dev server) |
| Auth | Basic auth (alucas) + portal's own login |
| Evidence iframe | Points to dev-evidence.sayhellocollege.com (via `EVIDENCE_BASE_URL` in `.env`) |
| Routing | Traefik → host.docker.internal:3000 |
| Manage | `sudo systemctl restart dev-portal` |
| Logs | `sudo journalctl -u dev-portal -f` |
- Production portal runs on Heroku/Vercel, NOT Coolify
- Dev portal on hc-central embeds dev Evidence in its iframe
- The Coolify container on hc-vps (`ud5isswl9iovwqwntiwq9s8b`) is an old test — should be removed (HEL-705)
- Semantic search across emails, gdrive, comments, transcripts, Linear tickets, conversations
- Voyage AI embeddings (voyage-4-large)
- Qdrant vector storage
- Cohere reranking
- OAuth for Claude desktop/web integration
| Field | Value |
|---|---|
| URL | https://termix.sayhellocollege.com |
| Server | hc-central (62.171.177.227) |
| Container | `termix` (Docker Compose at `/opt/termix/`) |
| Auth | Termix's own login (RBAC for team members) |
| Routing | Traefik → termix:8080 (on coolify network) |
| Features | SSH terminal, file manager, Docker management, server stats |
| Users | alucas (admin), jrodriguez (view access to both servers) |
| Host | IP | SSH User |
|---|---|---|
| hc-central | 62.171.177.227 | dev |
| hc-vps | 5.78.149.183 | root |
| Field | Value |
|---|---|
| Container | `kbv61zkyp1pb5q56a7h5j5ie` |
| Image | redis:7.2 |
| Port | 6379 (internal only) |
| Auth | Password protected (see `credentials.sops.yaml` → `redis.hc_vps_coolify`) |
| Field | Value |
|---|---|
| Provider | Redis Labs (cloud) |
| Host | redis-12861.c281.us-east-1-2.ec2.cloud.redislabs.com:12861 |
| Used by | hc-portal `.env` (`REDIS_URL`) |
| Field | Value |
|---|---|
| Container | `r5f9ocp1hph1qn6wijcrxwe6` |
| Image | redis:7.2 |
Both servers run coolify-proxy (Traefik v3.6) handling SSL termination and routing.
- HTTP/2 + HTTP/3 enabled
- Let's Encrypt auto-renewal via HTTP challenge
- Config: Docker labels + `/traefik/dynamic/*.yaml` files
| File | Routes |
|---|---|
| `coolify-central.yaml` | coolify-central.sayhellocollege.com → Coolify UI |
| `evidence.yaml` | evidence.sayhellocollege.com → active slot (3001 or 3002) |
| | dev-evidence.sayhellocollege.com → standby slot |
| `dev-environments.yaml` | dev-portal.sayhellocollege.com → port 3000 (basic auth) |
| | termix.sayhellocollege.com → termix:8080 (no auth, own login) |
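For orientation, a file like `evidence.yaml` plausibly looks something like this. This is a sketch only: the router/service names, the TLS resolver name, and the exact shape are assumptions, not copied from the live file.

```yaml
http:
  routers:
    evidence:
      rule: "Host(`evidence.sayhellocollege.com`)"
      service: evidence
      tls:
        certResolver: letsencrypt    # assumed resolver name
    dev-evidence:
      rule: "Host(`dev-evidence.sayhellocollege.com`)"
      service: dev-evidence
      tls:
        certResolver: letsencrypt
  services:
    evidence:
      loadBalancer:
        servers:
          - url: "http://host.docker.internal:3001"   # active slot; swap-prod.sh flips this
    dev-evidence:
      loadBalancer:
        servers:
          - url: "http://host.docker.internal:3002"   # standby slot
```

Because Traefik watches the dynamic directory, rewriting this file is enough to reroute traffic, with no proxy restart needed.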
| Runner | Server | Status |
|---|---|---|
| hc-portal-vps-2 | hc-central | Active |
| hc-portal-vps-3 | hc-central | Active |
| hc-portal-vps-4 | hc-central | Active |
Self-hosted runners for the sayhellocollege GitHub org. Used for portal CI/CD and Evidence blue-green deploys.
| Field | Value |
|---|---|
| URL | https://docs.sayhellocollege.com |
| Server | hc-central (62.171.177.227) |
| Container | `wikijs` (Docker Compose at `/opt/wiki/`) |
| Image | requarks/wiki:2 (v2.5.312) |
| Database | SQLite at `/opt/wiki/data/wiki.sqlite` |
| Port | 3000 (internal, routed via Traefik) |
| Routing | Traefik Docker labels on coolify network |
| Auth | Local accounts; self-registration restricted to @sayhellocollege.com, @pernixlabs.com, @pernix-solutions.com |
| Admin | alucas@sayhellocollege.com (see `credentials.sops.yaml` → `wikijs`) |
| Features | Markdown editing, full-text search, tagging, page history |
```bash
# Restart
docker restart wikijs

# Logs
docker logs wikijs -f

# Backup database
cp /opt/wiki/data/wiki.sqlite /opt/wiki/data/wiki.sqlite.bak

# Docker Compose location: /opt/wiki/docker-compose.yml
```
Pages are imported via the GraphQL API (`/graphql`). Use the admin JWT to authenticate. Pages are stored in the SQLite database, not synced from the filesystem. To re-import after updating markdown files in ObVault, use the API or edit directly in the wiki's markdown editor.
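As a starting point, an import call might look like the sketch below. The mutation shape follows Wiki.js 2.x's `pages.create` GraphQL mutation; verify the field names against your version before relying on it. `build_payload` is an illustrative helper, and its JSON escaping is deliberately naive (only safe for content without quotes or newlines):

```shell
#!/bin/sh
# Hypothetical import sketch for Wiki.js 2.x. build_payload assembles the
# GraphQL request body; the commented curl shows how it would be sent.
set -eu

build_payload() {
  # $1 = wiki path, $2 = title, $3 = markdown content (no quote escaping!)
  printf '{"query":"mutation ($content:String!,$path:String!,$title:String!){pages{create(content:$content,editor:\\"markdown\\",description:\\"\\",isPublished:true,isPrivate:false,locale:\\"en\\",path:$path,tags:[],title:$title){responseResult{succeeded,message}}}}","variables":{"path":"%s","title":"%s","content":"%s"}}' \
    "$1" "$2" "$3"
}

# usage (not executed here; JWT is the admin API token):
#   build_payload imported/hello "Hello" "# Hello" \
#     | curl -s https://docs.sayhellocollege.com/graphql \
#         -H "Authorization: Bearer $JWT" \
#         -H "Content-Type: application/json" -d @-
```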
| Field | Value |
|---|---|
| Service | `obsidian-sync.service` (systemd) |
| Server | hc-central |
| Vault | `/home/dev/ObVault` |
| Mode | Continuous bidirectional sync |
| Status | `sudo systemctl status obsidian-sync` |