diff --git a/.claude/skills/host-test-memory-leak-hunting/SKILL.md b/.claude/skills/host-test-memory-leak-hunting/SKILL.md index 10f600c980d..439b5f962c9 100644 --- a/.claude/skills/host-test-memory-leak-hunting/SKILL.md +++ b/.claude/skills/host-test-memory-leak-hunting/SKILL.md @@ -66,7 +66,7 @@ Snapshots at `t=10` (warm) and `t=50` give a clean delta over 40 tests. Add `t=9 ### 3. Open a fresh test tab ```sh -ENCODED=$(node -e 'console.log(encodeURIComponent("http://localhost:4200/tests/index.html?hidepassed&filter=card-basics"))') +ENCODED=$(node -e 'console.log(encodeURIComponent("https://localhost:4200/tests/index.html?hidepassed&filter=card-basics"))') curl -sX PUT "http://localhost:9333/json/new?${ENCODED}" ``` diff --git a/.claude/skills/indexing-diagnostics/SKILL.md b/.claude/skills/indexing-diagnostics/SKILL.md index fef030439bc..44b903b9448 100644 --- a/.claude/skills/indexing-diagnostics/SKILL.md +++ b/.claude/skills/indexing-diagnostics/SKILL.md @@ -26,7 +26,7 @@ For UI triage you'll typically read the JSON error response (which surfaces `err ## How to actually run these queries -The SQL examples below are environment-agnostic — they work the same against local dev, staging, or prod. What changes is *how you reach the database*: +The SQL examples below are environment-agnostic — they work the same against local dev, staging, or prod. What changes is _how you reach the database_: - **Local dev**: `psql "$DATABASE_URL"` (or whatever your local boxel server uses) directly. - **Staging / prod**: the RDS instances are private to the cardstack VPC. Use the `aws-access` skill — it covers (a) provisioning a Claude-usable AWS session via `mise run claude-aws `, (b) the SSM port-forward tunnel through the realm-server ECS task to RDS, and (c) connecting via psql as the read-only `claude_readonly_user` (member of `readonly_role`). This skill assumes you've already got that connection working; it doesn't re-document the AWS plumbing. 
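However you connect, the SQL in the rest of this skill can be run non-interactively the same way. A minimal sketch — the `boxel_index` / `timing_diagnostics` names come from the queries below, and the connection string is whatever your local `DATABASE_URL` or SSM tunnel endpoint resolves to (this is illustrative, not part of the skill's required tooling):

```sh
# Sketch only — point DATABASE_URL at your local boxel server or the tunnel.
# -X skips psqlrc; -A -t emit bare unaligned rows, easy to grep or pipe onward.
psql "$DATABASE_URL" -X -A -t <<'SQL'
SELECT url,
       timing_diagnostics->>'renderElapsedMs' AS render_ms
FROM boxel_index
WHERE timing_diagnostics IS NOT NULL
ORDER BY (timing_diagnostics->>'renderElapsedMs')::int DESC NULLS LAST
LIMIT 5;
SQL
```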
@@ -78,7 +78,7 @@ WHERE url = '' (Or read `error_doc.diagnostics` from the JSON:API error response — same shape.) -Walk the fields per [Classify in one pass](#classify-in-one-pass). The *first* positive signal wins; stop there. +Walk the fields per [Classify in one pass](#classify-in-one-pass). The _first_ positive signal wins; stop there. ## Mode B — an incremental reindex was slow @@ -99,7 +99,7 @@ SELECT max((timing_diagnostics->>'renderElapsedMs')::int) AS slowest_ms FROM boxel_index -WHERE realm_url = 'http://localhost:4201/user/your-realm/' +WHERE realm_url = 'https://localhost:4201/user/your-realm/' AND timing_diagnostics->>'invalidationId' IS NOT NULL GROUP BY 1 ORDER BY last_indexed_at DESC @@ -126,7 +126,7 @@ SELECT to_timestamp((timing_diagnostics->>'indexedAt')::bigint / 1000) AS indexed_at FROM boxel_index -WHERE realm_url = 'http://localhost:4201/user/your-realm/' +WHERE realm_url = 'https://localhost:4201/user/your-realm/' AND timing_diagnostics->>'invalidationId' = '' ORDER BY render_ms DESC NULLS LAST; ``` @@ -136,7 +136,7 @@ ORDER BY render_ms DESC NULLS LAST; - One row dominates (e.g. a dashboard card) and the rest are cheap. The big row is the real target — investigate its `queryLoadsInFlight` / `recentModuleEvaluations` / `cardDocLoadsInFlight`. - All rows share a large `launchMs`. Capacity contention during the reindex, not the cards' fault. - The first row in the batch (min `indexedAt`) has a large `renderElapsedMs` but the rest are cheap — this is the cold-loader tax paid by whichever card was rendered first after `clearCache: true` fired. Expected on any executable invalidation; only worth chasing if the cold cost is disproportionate to the dep closure. -- The `deps` / `types` columns on the same rows tell you *why* each row was invalidated — useful for discovering unintentionally-heavy transitive deps (e.g. a dashboard re-renders because one of its metrics modules has a runtime reference to the changed module). 
+- The `deps` / `types` columns on the same rows tell you _why_ each row was invalidated — useful for discovering unintentionally-heavy transitive deps (e.g. a dashboard re-renders because one of its metrics modules has a runtime reference to the changed module). **Other useful queries:** @@ -150,7 +150,7 @@ SELECT timing_diagnostics->>'invalidationId' AS group, has_error FROM boxel_index -WHERE realm_url = 'http://localhost:4201/user/your-realm/' +WHERE realm_url = 'https://localhost:4201/user/your-realm/' ORDER BY render_ms DESC NULLS LAST LIMIT 20; @@ -171,12 +171,12 @@ LIMIT 20; ## Mode C — a worker job is stuck or got rejected -Mode A and Mode B both assume `boxel_index` has up-to-date `timing_diagnostics` for the rows you're investigating. That assumption breaks when an indexing job is *in progress* or got rejected mid-flight: nothing has been committed to `boxel_index` yet (the indexer writes to a staging table and only swaps on success — see [Reading partial progress from `boxel_index_working`](#5-reading-partial-progress-from-boxel_index_working) below), so the diagnostics column there is stale or null for the affected rows. +Mode A and Mode B both assume `boxel_index` has up-to-date `timing_diagnostics` for the rows you're investigating. That assumption breaks when an indexing job is _in progress_ or got rejected mid-flight: nothing has been committed to `boxel_index` yet (the indexer writes to a staging table and only swaps on success — see [Reading partial progress from `boxel_index_working`](#5-reading-partial-progress-from-boxel_index_working) below), so the diagnostics column there is stale or null for the affected rows. -For this mode the diagnostic stance flips from "what timed out" (Mode A) or "what was slow" (Mode B) to **"what hasn't happened yet"**. 
You're reconstructing the work the job *would have done* from three sources together: +For this mode the diagnostic stance flips from "what timed out" (Mode A) or "what was slow" (Mode B) to **"what hasn't happened yet"**. You're reconstructing the work the job _would have done_ from three sources together: -1. **`boxel_index_working`** — the staging table the indexer writes to as it makes progress. On success its rows for the touched URLs are copied into `boxel_index` (`Batch.applyBatchUpdates` in `packages/runtime-common/index-writer.ts`). On failure (worker crash, job timeout, manual cancel) the working rows are left behind, which is exactly the bisection signal you want: any row in `boxel_index_working` that is *not yet* in `boxel_index` (or has a higher `realm_version`) was already processed by the stuck job. -2. **EFS file mtimes** — reachable via the `aws-access` skill's "Browsing the EFS filesystem" path (the `boxel-claude-fs-readonly-` Fargate task). Combined with `boxel_index.last_modified` (the indexer's view of when each file was last processed) this lets you reconstruct what *would* have been invalidated by a from-scratch run, *before* any `boxel_index_working` rows existed. +1. **`boxel_index_working`** — the staging table the indexer writes to as it makes progress. On success its rows for the touched URLs are copied into `boxel_index` (`Batch.applyBatchUpdates` in `packages/runtime-common/index-writer.ts`). On failure (worker crash, job timeout, manual cancel) the working rows are left behind, which is exactly the bisection signal you want: any row in `boxel_index_working` that is _not yet_ in `boxel_index` (or has a higher `realm_version`) was already processed by the stuck job. +2. **EFS file mtimes** — reachable via the `aws-access` skill's "Browsing the EFS filesystem" path (the `boxel-claude-fs-readonly-` Fargate task). 
Combined with `boxel_index.last_modified` (the indexer's view of when each file was last processed) this lets you reconstruct what _would_ have been invalidated by a from-scratch run, _before_ any `boxel_index_working` rows existed. 3. **Worker logs** in CloudWatch (`ecs-boxel-worker-`) — confirms the job's start, the file it was on at the freeze point, and any partial completion lines. ### 1. Recognising the situation @@ -219,7 +219,7 @@ WHERE job_id = ORDER BY created_at DESC; ``` -(See `packages/runtime-common/realm-index-updater.ts::publishFullIndex` and `update`, `packages/runtime-common/jobs/reindex-realm.ts`, and `packages/postgres/pg-queue.ts` for how these tables are populated and what `unfulfilled` / `resolved` / `rejected` mean. The full-reindex path (`enqueueReindexRealmJob` with `clearLastModified: true`) intentionally nulls `boxel_index.last_modified` *before* enqueuing — relevant to step 3 below.) +(See `packages/runtime-common/realm-index-updater.ts::publishFullIndex` and `update`, `packages/runtime-common/jobs/reindex-realm.ts`, and `packages/postgres/pg-queue.ts` for how these tables are populated and what `unfulfilled` / `resolved` / `rejected` mean. The full-reindex path (`enqueueReindexRealmJob` with `clearLastModified: true`) intentionally nulls `boxel_index.last_modified` _before_ enqueuing — relevant to step 3 below.) ### 2. Distinguishing from-scratch vs incremental @@ -275,7 +275,7 @@ The from-scratch path lives in `IndexRunner.fromScratch` (`packages/runtime-comm invalidationList.push(...deletedUrls); ``` - Reproduce that comparison by hand. To find rows in the index that *would have* been seeded: + Reproduce that comparison by hand. To find rows in the index that _would have_ been seeded: ```sql -- "Stale" rows in boxel_index — anything where the indexer's view of @@ -327,7 +327,7 @@ The fan-out is **iterative**, not a single recursive CTE. `Batch.invalidate(urls 1. 
For each seed URL, collect concrete-URL matches across `boxel_index_working` (current batch) and `boxel_index` (production) — `urlsMatchingSeed` (lines 776-819). 2. For each matched URL, call `calculateInvalidations(alias)` (line 1066), which finds rows that reference the alias in their `deps` jsonb array, then recurses into those rows' aliases. Recursion is bounded by a `visited` set per `invalidate()` call — there are no fixed iteration counts; the walk continues until `visited` saturates. -3. The single SQL building block is `itemsThatReference(resolvedPath)` (line 978), which on Postgres uses jsonb containment. **Where to read from depends on the question**: at runtime the indexer queries `boxel_index_working` so mid-batch tombstones and rewrites are visible to subsequent fan-out iterations. For *post-mortem* reconstruction of a stuck job, prefer `boxel_index` (committed state) — that gives you the state the runner *started* with, before its own writes confused the picture. If the job partially advanced, probe both tables side-by-side to see what was already redrawn vs. what was still untouched. +3. The single SQL building block is `itemsThatReference(resolvedPath)` (line 978), which on Postgres uses jsonb containment. **Where to read from depends on the question**: at runtime the indexer queries `boxel_index_working` so mid-batch tombstones and rewrites are visible to subsequent fan-out iterations. For _post-mortem_ reconstruction of a stuck job, prefer `boxel_index` (committed state) — that gives you the state the runner _started_ with, before its own writes confused the picture. If the job partially advanced, probe both tables side-by-side to see what was already redrawn vs. what was still untouched. ```sql -- One iteration of consumer fan-out, against the committed state @@ -354,9 +354,9 @@ The fan-out is **iterative**, not a single recursive CTE.
`Batch.invalidate(urls When the seed has a `@cardstack/...` "registered prefix" form (catalog modules, etc.), the runtime also probes the unresolved form — `@>` against `["/..."]`. Reproduce by hand only if your seed URL is one of those (look for `unresolveCardReference` in `card-reference-resolver.ts`). -4. The `invalidationTraversalAlias` rule (line 1095) decides what gets fed into the *next* iteration: +4. The `invalidationTraversalAlias` rule (line 1095) decides what gets fed into the _next_ iteration: - For `type = 'instance'` rows: the row's own `url` (the `.json` URL). - - For executable file rows (`.gts` / `.ts` / `.js` / `.gjs`) with a `file_alias`: the `file_alias` (path with extension trimmed). Executable consumers see the *aliased* URL in `deps`, not the source file with extension. + - For executable file rows (`.gts` / `.ts` / `.js` / `.gjs`) with a `file_alias`: the `file_alias` (path with extension trimmed). Executable consumers see the _aliased_ URL in `deps`, not the source file with extension. - Otherwise (non-executable file rows): the row's `url`. 5. After the loop converges (no new URLs added to `visited`), `tombstoneEntries(invalidations)` (line 684) inserts an `is_deleted = true` row for every invalidated URL into `boxel_index_working` with `realm_version = `, stamped with the batch's current `invalidationId`. **This is the first DB-side write of the batch.** If the worker died before this, `boxel_index_working` will not yet contain partial-progress rows for the new realm version (step 6 will be empty). @@ -450,11 +450,11 @@ ORDER BY last_write DESC LIMIT 10; ``` -The bottom row of the per-`invalidationId` query (max `indexedAt`) is **the most recently completed file**; the file the worker stalled on is most likely the *next* one in the planned visit order (which is sorted in `index-runner.ts::sortInvalidations` — `.json` files visited after their non-`.json` counterparts; otherwise lexical by href).
Combine three signals to pin it down: +The bottom row of the per-`invalidationId` query (max `indexedAt`) is **the most recently completed file**; the file the worker stalled on is most likely the _next_ one in the planned visit order (which is sorted in `index-runner.ts::sortInvalidations` — `.json` files visited after their non-`.json` counterparts; otherwise lexical by href). Combine three signals to pin it down: 1. The bottom row's `url` is the last-completed file. -2. The worker log's last `begin fused visit of file ` line for the job (visit-file.ts line 108, `index-runner` logger, debug level) names the file the visit *started* on. If there's no matching `completed fused visit of file ` line, that's where the worker froze. -3. The bottom row's `currentlyEvaluatingModule` / `recentModuleEvaluations[0].url` / `inFlightModuleImports[]` say *which* module inside that visit was the stall point — same field semantics as Mode A. +2. The worker log's last `begin fused visit of file ` line for the job (visit-file.ts line 108, `index-runner` logger, debug level) names the file the visit _started_ on. If there's no matching `completed fused visit of file ` line, that's where the worker froze. +3. The bottom row's `currentlyEvaluatingModule` / `recentModuleEvaluations[0].url` / `inFlightModuleImports[]` say _which_ module inside that visit was the stall point — same field semantics as Mode A. To read which row would have been visited next from the working table (rows already invalidated but not yet written-with-content — these are the tombstones inserted by `Batch.invalidate`): @@ -477,12 +477,12 @@ If `boxel_index_working` has **zero rows** for this batch's `invalidationId`, th The worker logs to `ecs-boxel-worker-` (see the `aws-access` skill's CloudWatch table). 
The relevant logger names: -| Logger | Defined at | Lines you care about | -|---|---|---| -| `worker` | `packages/runtime-common/worker.ts:80`, `packages/realm-server/worker.ts:22` | `starting from-scratch indexing for job: ` and `starting incremental indexing for job: ` (debug). Includes the full job args — use this to recover the seed for incrementals. | -| `realm-index-updater` | `packages/runtime-common/realm-index-updater.ts:29` | `Realm is starting indexing` (info), `Realm has completed indexing in s: ` (info). Always on at `*=info`; coarse but covers job lifecycle. | -| `index-runner` | `packages/runtime-common/index-runner.ts:48` | `starting from scratch indexing` / `starting from incremental indexing for ` (debug), `discovering invalidations in dir ` (debug), `begin fused visit of file ` / `completed fused visit of file ` per file (debug, both in `visit-file.ts`), `completed from scratch indexing in ms` / `completed incremental indexing for in ms` (debug). **This is the per-file progress channel.** | -| `index-perf` | `packages/runtime-common/index-runner.ts:50`, `packages/runtime-common/index-writer.ts:173` | Per-stage perf timings (debug): `time to get file system mtimes `, `time to invalidate `, `completed getting index mtimes in `, `completed invalidations in `, `completed index visit in `, `completed index finalization in `, `inserted invalidated rows for in `, `time to determine items that reference `. Useful to confirm *which* phase a stuck job is in. 
| +| Logger | Defined at | Lines you care about | +| --------------------- | ------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `worker` | `packages/runtime-common/worker.ts:80`, `packages/realm-server/worker.ts:22` | `starting from-scratch indexing for job: ` and `starting incremental indexing for job: ` (debug). Includes the full job args — use this to recover the seed for incrementals. | +| `realm-index-updater` | `packages/runtime-common/realm-index-updater.ts:29` | `Realm is starting indexing` (info), `Realm has completed indexing in s: ` (info). Always on at `*=info`; coarse but covers job lifecycle. | +| `index-runner` | `packages/runtime-common/index-runner.ts:48` | `starting from scratch indexing` / `starting from incremental indexing for ` (debug), `discovering invalidations in dir ` (debug), `begin fused visit of file ` / `completed fused visit of file ` per file (debug, both in `visit-file.ts`), `completed from scratch indexing in ms` / `completed incremental indexing for in ms` (debug). **This is the per-file progress channel.** | +| `index-perf` | `packages/runtime-common/index-runner.ts:50`, `packages/runtime-common/index-writer.ts:173` | Per-stage perf timings (debug): `time to get file system mtimes `, `time to invalidate `, `completed getting index mtimes in `, `completed invalidations in `, `completed index visit in `, `completed index finalization in `, `inserted invalidated rows for in `, `time to determine items that reference `. Useful to confirm _which_ phase a stuck job is in. 
| `LOG_LEVELS` is read once at process start in `packages/realm-server/setup-logger.ts:4`: @@ -503,9 +503,10 @@ In the deployed environments this is an environment variable on the worker ECS t - **Staging / production**: `LOG_LEVELS` is held in AWS SSM Parameter Store at `//boxel/LOG_LEVELS` (e.g. `/staging/boxel/LOG_LEVELS`, `/production/boxel/LOG_LEVELS`). The worker ECS task definition references it via `valueFrom`, so the value is injected as the container's `LOG_LEVELS` env var at task start. To adjust levels: 1. Update the SSM parameter value (AWS Console → Systems Manager → Parameter Store, or `aws ssm put-parameter --name //boxel/LOG_LEVELS --value '' --overwrite` if you have write access — Claude does not). 2. Force a new deployment of `boxel-worker-` from the ECS console (Services → boxel-worker- → Update → "Force new deployment"). The new task picks up the updated SSM value at boot. - 3. The realm-server task reads `LOG_LEVELS` for *its own* logging; in deployed envs the worker is a separate task and only its `LOG_LEVELS` matters for indexing-job logs. If you also want indexing logs that the realm-server emits during invalidation discovery (e.g. for jobs the realm-server queues directly), redeploy `boxel-realm-server-` too. + 3. The realm-server task reads `LOG_LEVELS` for _its own_ logging; in deployed envs the worker is a separate task and only its `LOG_LEVELS` matters for indexing-job logs. If you also want indexing logs that the realm-server emits during invalidation discovery (e.g. for jobs the realm-server queues directly), redeploy `boxel-realm-server-` too. + + Levels apply to subsequently-launched worker processes; a job already in flight keeps the levels it was launched with. So for triage of a _future_ job, update SSM and redeploy first, then trigger the reindex. - Levels apply to subsequently-launched worker processes; a job already in flight keeps the levels it was launched with. 
So for triage of a *future* job, update SSM and redeploy first, then trigger the reindex. - **Locally**: prepend the env var, e.g. `LOG_LEVELS='*=info,index-runner=debug,index-perf=debug' pnpm start-all` — same as the `prerenderer-reproduce=debug` pattern in the [Reproducing a render interactively](#reproducing-a-render-interactively) section. Sample CloudWatch greps, using `cw` from the `aws-access` skill (substitute `claude-staging` / `claude-prod` and the matching log group): @@ -539,7 +540,7 @@ A short rubric for the most common shapes: - **High confidence the stall is at file X**: the bottom row of `boxel_index_working` (max `indexedAt` for the batch's `invalidationId`) is X **AND** the worker's last `begin fused visit of file X` line has no matching `completed fused visit of file X` line **AND** the bottom row's `recentModuleEvaluations[0].url` (or `currentlyEvaluatingModule` / `inFlightModuleImports[0]`) is a module under X. Treat the row's `timing_diagnostics` as a Mode A capture and walk the [Classify in one pass](#classify-in-one-pass) table. - **Medium confidence**: only two of the three signals agree. Most often the worker log is the dropout — debug-level logging wasn't on. Promote `index-runner` to debug and trigger a follow-up reindex to validate. -- **Low confidence — the runner stalled before any per-file work**: `boxel_index_working` has no rows for this batch's `invalidationId` (no row stamped with the batch UUID, no `is_deleted = TRUE` tombstones at the batch's `realm_version`). The worker is still in **invalidation discovery** — either the mtime walk (no `discovering invalidations in dir` line yet) or the consumer fan-out (the `discovering` line is there but no per-file visit-start lines). 
Look at the worker's `index-perf` `time to get file system mtimes` / `time to invalidate` lines — if those are missing too, you're stuck in the realm-server fetch (`reader.mtimes()` → `_mtimes` HTTP call) or in `Batch.invalidate`'s own jsonb-containment SQL (`itemsThatReference`). Then go look at what *should* have been in the seed but wasn't — cross-check the EFS file listing against the realm's `boxel_index.last_modified` per step 3. +- **Low confidence — the runner stalled before any per-file work**: `boxel_index_working` has no rows for this batch's `invalidationId` (no row stamped with the batch UUID, no `is_deleted = TRUE` tombstones at the batch's `realm_version`). The worker is still in **invalidation discovery** — either the mtime walk (no `discovering invalidations in dir` line yet) or the consumer fan-out (the `discovering` line is there but no per-file visit-start lines). Look at the worker's `index-perf` `time to get file system mtimes` / `time to invalidate` lines — if those are missing too, you're stuck in the realm-server fetch (`reader.mtimes()` → `_mtimes` HTTP call) or in `Batch.invalidate`'s own jsonb-containment SQL (`itemsThatReference`). Then go look at what _should_ have been in the seed but wasn't — cross-check the EFS file listing against the realm's `boxel_index.last_modified` per step 3. - **Confirm a "rejected" job actually failed cleanly**: `jobs.status = 'rejected'` should pair with the matching reservation's `completed_at IS NOT NULL`. If `completed_at IS NULL`, the worker bailed before its finalize transaction (see `pg-queue.ts` lines 619-696); the reservation's `locked_until` will eventually expire and another worker can claim it. The actual error is in **`jobs.result`** (jsonb). When the worker's `await job.run(...)` throws, `pg-queue.ts:627-628` does `result = serializableError(err); newStatus = 'rejected';` and the finalize UPDATE writes both into the row. 
Read it directly: @@ -558,8 +559,8 @@ A short rubric for the most common shapes: ### 9. What this mode can't tell you -- If the worker died *before* any DB write — crashed during `discoverInvalidations`, OOM-killed during the mtime walk, or threw inside `Batch.invalidate`'s own SQL — `boxel_index_working` will have no rows for this batch's `invalidationId`. The `Batch` object mints the `invalidationId` in its constructor, but it only lands on disk when the first `updateEntry` or `tombstoneEntries` call runs. Until then the only diagnostic signals are the worker log and the EFS state. Mode C cannot reconstruct *which* file the worker was processing in that case — you need either `index-runner=debug` log output or a Sentry trace. -- The `timing_diagnostics` for partial-progress rows is the **per-render** capture for that row's prerender call. It won't tell you why the *next* render froze. If the bottom-row's diagnostic is clean (low `renderElapsedMs`, no in-flight loads), the stall is between renders — usually `Batch.invalidate` recursion against a tightly-cycled module graph, or DB contention on the `boxel_index_working` upsert. The `index-perf` `time to determine items that reference …` lines are the only fingerprints of that loop. +- If the worker died _before_ any DB write — crashed during `discoverInvalidations`, OOM-killed during the mtime walk, or threw inside `Batch.invalidate`'s own SQL — `boxel_index_working` will have no rows for this batch's `invalidationId`. The `Batch` object mints the `invalidationId` in its constructor, but it only lands on disk when the first `updateEntry` or `tombstoneEntries` call runs. Until then the only diagnostic signals are the worker log and the EFS state. Mode C cannot reconstruct _which_ file the worker was processing in that case — you need either `index-runner=debug` log output or a Sentry trace. +- The `timing_diagnostics` for partial-progress rows is the **per-render** capture for that row's prerender call. 
It won't tell you why the _next_ render froze. If the bottom-row's diagnostic is clean (low `renderElapsedMs`, no in-flight loads), the stall is between renders — usually `Batch.invalidate` recursion against a tightly-cycled module graph, or DB contention on the `boxel_index_working` upsert. The `index-perf` `time to determine items that reference …` lines are the only fingerprints of that loop. - A `boxel_index` row's `timing_diagnostics` reflects the **last successful** indexing pass, not the in-flight one. Don't confuse a stale `boxel_index` `indexedAt` with the stuck job — always cross-reference against the matching `boxel_index_working` row (same `(url, realm_url)`) before drawing conclusions. ## Mode D — a module render was slow or hung @@ -744,7 +745,7 @@ All ms values are server-observed walltime. - `launchMs` + `renderElapsedMs` ≈ `totalElapsedMs`. A small mismatch (< 100 ms) is capture overhead; a large mismatch means the render-runner retried with `clearCache: true` (you're looking at the final attempt's timings). - `waits.semaphoreMs` + `waits.tabQueueMs` + `waits.tabStartupMs` ≤ `launchMs`. `launchMs` is measured around the full `PagePool.getPage` call; the three sub-waits cover the three awaits (semaphore acquire, affinity-entry selection, standby warmup) but not the synchronous bookkeeping between them (affinity reassignment, LRU touch, standby top-up kickoff). For a healthy fleet the residual is < 5 ms; a large residual is unusual and worth inspecting `PagePool` directly. -- `renderElapsedMs` is wall time *inside* `withTimeout()` — includes host fetches, store settle, and the actual render pass. It hits the configured `RENDER_TIMEOUT_MS` on a timeout. +- `renderElapsedMs` is wall time _inside_ `withTimeout()` — includes host fetches, store settle, and the actual render pass. It hits the configured `RENDER_TIMEOUT_MS` on a timeout. 
- `stageAgeMs` is host-observed — it's computed as `Date.now() - stageSetAt` at the moment the post-timeout capture ran, so there can be a small read-delay offset vs. `renderElapsedMs`. For triage, `stageAgeMs` represents "how long the render has been stuck in its current stage". - `recentModuleEvaluations[*].ms` are per-module evaluation times measured inside `Loader.evaluate()` via `performance.now()`; they're wall time for the synchronous body of the module (Glimmer compile + top-level init). Sum them to estimate the sync-compile budget eaten by module evaluation on this page. - `queryLoadsInFlight[*].ageMs` is the wall time since that specific search/query-field load started — i.e. how long it's been hanging. @@ -756,23 +757,23 @@ Keep the field names in lock-step with the type in `packages/runtime-common/inde ### Classify in one pass -Walk the fields top-down. The *first* positive signal wins; stop there. - -| Signal | Category | What to look at next | -|---|---|---| -| `waits.semaphoreMs` ≈ `totalElapsedMs` | **Launch stall (capacity)** | Fleet-wide: `prerender-queue-snapshot` lines on every prerender server around that timestamp. Is `totalPending` piled up? Add capacity, don't touch host. | -| `waits.admissionMs` ≈ `totalElapsedMs` (and semaphoreMs small) | **Per-affinity admission stall** | This realm hit its own file-admission cap — the server had capacity but wasn't letting this realm use it. The signal means ≥ cap concurrent file renders on one affinity. Default cap = `affinityTabMax − 1` (4 on the standard 5-tab deployment), so a single realm fanning out to ≥ 4 concurrent renders (typical catalog-sized reindex) already produces this. Grep the queue-snapshot log for `admission=pending=N/cap=N` on the same affinity to confirm waiters were piling up. If the cap looks too tight for the workload and cross-realm fairness isn't the concern, `PRERENDER_AFFINITY_FILE_CONCURRENCY` is the knob (see the tuning-knobs section). 
| -| `waits.tabQueueMs` ≈ `totalElapsedMs` (and semaphoreMs / admissionMs small) | **Same-affinity contention** | Same realm's batch is serialized on one tab. Check whether `PRERENDER_AFFINITY_TAB_MAX` is 1 for this fleet, or whether a rogue user request is sharing the tab (see CS-10873 for the cancel-on-abort follow-up). | -| `launchMs` small **and** `renderStage` is `null`/`model:start` | **Very early render stall** — transition hadn't yet rendered anything. Usually means the route threw before setting a real stage. Look at `capturedDom` (`` is common) and console errors. | -| `renderStage` ∈ `buildModel:fetching-source` / `buildModel:deriving-type` / `buildModel:hydrating` | **Backend stall during model build** | Usually a slow realm server or cross-realm fetch. Check realm-server logs for the same requestId; check the fetch target from `capturedDom` / `cardDocsInFlight`. | -| `inFlightModuleImports.length > 0` | **Loader stall** | Each URL is a `.gts` / `.ts` we'd already started a `fetchModule(...)` for. Confirm the realm serves those URLs and that there's no import cycle. Often resolves with `clearCache: true` on retry (already in place) — if that's failing check for 500s on the module URL. | -| `queryLoadsInFlight.length > 0` with `fieldName` set | **Query-field stall** | This is the CS-10820 field-driven hot path. Look at the `query`/`realms` fields — is the search hitting a remote realm server that's slow? Check `_federated-search` latency for that realm on the realm-server side. | -| `cardDocsInFlight.length > 0` or `fileMetaDocsInFlight.length > 0` (no query fields) | **Data stall** | Usually linksTo targets that the template pulled on. Prefer `cardDocLoadsInFlight[*].ageMs` / `fileMetaDocLoadsInFlight[*].ageMs` — they tell you which individual URL is the slow one vs. a fan-out. If it's a card from a different realm, that realm may be slow or misconfigured. 
Also check `recentCardDocLoads` for loads that completed just before the timer fired but still dominated the budget. | -| `renderStage` = `waiting-stability` **AND** `queryLoadsInFlight` has a `search-resource:*` entry **AND** `affinitySnapshot.sameAffinityActivity` contains `{ queue: 'module', state: 'queued' }` entries **on the same affinity as the stuck render** | **Self-referential prerender deadlock — admission invariant broken** | A search that can't resolve a `_cardType` filter without a card definition causes `CachingDefinitionLookup` to fire a same-affinity `prerenderModule` to extract it. The queue-split + admission cap in PagePool is supposed to reserve at least one tab per affinity for `module` / `command` work precisely to prevent this sub-prerender from queuing behind the render that needs it. **Seeing this fingerprint means the invariant didn't hold**: check `PRERENDER_AFFINITY_TAB_MAX >= 2` (PagePool logs a warning at startup if not), verify the admission semaphore is acquired on `'file'` calls (`PagePool.#acquireFileAdmission`), and confirm `disposeAffinity` isn't dropping the admission semaphore mid-flight. The `priority` field on each `sameAffinityActivity` entry sharpens triage: a stuck `priority=10` file render with a queued `priority=10` module sibling on the same affinity is the actual deadlock signature; a `priority=10` file render queued behind `priority>=10` module work that's running on a different tab is just legitimate priority routing — investigate the queued module entry, not the queue mechanism. | -| `tabReused: false` AND `tabStartupMs` ≈ `launchMs` | **Cold-start tax** | This render paid for spawning a fresh tab + warming a BrowserContext rather than reusing an existing same-affinity tab. Common causes: first request on the affinity after a deploy / restart; affinity was evicted by LRU pressure; `disposeAffinity` ran for an unrelated reason. 
Look at `prerender-queue-snapshot` from the same minute — if many other affinities are also fresh-tab-spawning, the LRU cap (`PRERENDER_SHARED_CONTEXT_CAP`) may be too tight relative to the active affinity count. May be absent on older rows that predate the field. | -| `renderStage` = `waiting-stability` with empty in-flight arrays | **Render stall** | Nothing is loading but settlement never finishes. Classic Glimmer tracking loop — template is invalidating itself. `capturedDom` usually shows the partially-rendered component. `blockedTimerSummary` will list swallowed timers that may hint at a scheduling loop. | -| `currentlyEvaluatingModule` non-null, or `stageAgeMs` large with empty in-flight arrays | **Synchronous browser stall (typically Glimmer compile during module eval)** | `recentModuleEvaluations` shows the worst offenders. A single URL with `ms > 5000` usually means "this module has a giant template that takes forever to compile". Many small entries (say 50+ at 100–500 ms each) summing into the stall budget mean card fan-out where each dependent card contributes a compile. Split the module, lazy-load the template, or reduce the component fan-out. | -| `blockedTimerSummary` populated | Supplementary. Tells you which timer-driven code is fighting the render. Not a root cause on its own. | +Walk the fields top-down. The _first_ positive signal wins; stop there. 
+ +| Signal | Category | What to look at next | +| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `waits.semaphoreMs` ≈ `totalElapsedMs` | **Launch stall (capacity)** | Fleet-wide: `prerender-queue-snapshot` lines on every prerender server around that timestamp. Is `totalPending` piled up? Add capacity, don't touch host. | +| `waits.admissionMs` ≈ `totalElapsedMs` (and semaphoreMs small) | **Per-affinity admission stall** | This realm hit its own file-admission cap — the server had capacity but wasn't letting this realm use it. The signal means ≥ cap concurrent file renders on one affinity. 
Default cap = `affinityTabMax − 1` (4 on the standard 5-tab deployment), so a single realm fanning out to more than 4 concurrent renders (typical catalog-sized reindex) already produces this. Grep the queue-snapshot log for `admission=pending=N/cap=N` on the same affinity to confirm waiters were piling up. If the cap looks too tight for the workload and cross-realm fairness isn't the concern, `PRERENDER_AFFINITY_FILE_CONCURRENCY` is the knob (see the tuning-knobs section). | +| `waits.tabQueueMs` ≈ `totalElapsedMs` (and semaphoreMs / admissionMs small) | **Same-affinity contention** | Same realm's batch is serialized on one tab. Check whether `PRERENDER_AFFINITY_TAB_MAX` is 1 for this fleet, or whether a rogue user request is sharing the tab (see CS-10873 for the cancel-on-abort follow-up). | +| `launchMs` small **and** `renderStage` is `null`/`model:start` | **Very early render stall** | Transition hadn't yet rendered anything. Usually means the route threw before setting a real stage. Look at `capturedDom` (`` is common) and console errors. | +| `renderStage` ∈ `buildModel:fetching-source` / `buildModel:deriving-type` / `buildModel:hydrating` | **Backend stall during model build** | Usually a slow realm server or cross-realm fetch. Check realm-server logs for the same requestId; check the fetch target from `capturedDom` / `cardDocsInFlight`. | +| `inFlightModuleImports.length > 0` | **Loader stall** | Each URL is a `.gts` / `.ts` we'd already started a `fetchModule(...)` for. Confirm the realm serves those URLs and that there's no import cycle. Often resolves with `clearCache: true` on retry (already in place) — if that's failing check for 500s on the module URL. | +| `queryLoadsInFlight.length > 0` with `fieldName` set | **Query-field stall** | This is the CS-10820 field-driven hot path. Look at the `query`/`realms` fields — is the search hitting a remote realm server that's slow? Check `_federated-search` latency for that realm on the realm-server side.
| +| `cardDocsInFlight.length > 0` or `fileMetaDocsInFlight.length > 0` (no query fields) | **Data stall** | Usually linksTo targets that the template pulled on. Prefer `cardDocLoadsInFlight[*].ageMs` / `fileMetaDocLoadsInFlight[*].ageMs` — they tell you which individual URL is the slow one vs. a fan-out. If it's a card from a different realm, that realm may be slow or misconfigured. Also check `recentCardDocLoads` for loads that completed just before the timer fired but still dominated the budget. | +| `renderStage` = `waiting-stability` **AND** `queryLoadsInFlight` has a `search-resource:*` entry **AND** `affinitySnapshot.sameAffinityActivity` contains `{ queue: 'module', state: 'queued' }` entries **on the same affinity as the stuck render** | **Self-referential prerender deadlock — admission invariant broken** | A search that can't resolve a `_cardType` filter without a card definition causes `CachingDefinitionLookup` to fire a same-affinity `prerenderModule` to extract it. The queue-split + admission cap in PagePool is supposed to reserve at least one tab per affinity for `module` / `command` work precisely to prevent this sub-prerender from queuing behind the render that needs it. **Seeing this fingerprint means the invariant didn't hold**: check `PRERENDER_AFFINITY_TAB_MAX >= 2` (PagePool logs a warning at startup if not), verify the admission semaphore is acquired on `'file'` calls (`PagePool.#acquireFileAdmission`), and confirm `disposeAffinity` isn't dropping the admission semaphore mid-flight. The `priority` field on each `sameAffinityActivity` entry sharpens triage: a stuck `priority=10` file render with a queued `priority=10` module sibling on the same affinity is the actual deadlock signature; a `priority=10` file render queued behind `priority>=10` module work that's running on a different tab is just legitimate priority routing — investigate the queued module entry, not the queue mechanism. 
| +| `tabReused: false` AND `tabStartupMs` ≈ `launchMs` | **Cold-start tax** | This render paid for spawning a fresh tab + warming a BrowserContext rather than reusing an existing same-affinity tab. Common causes: first request on the affinity after a deploy / restart; affinity was evicted by LRU pressure; `disposeAffinity` ran for an unrelated reason. Look at `prerender-queue-snapshot` from the same minute — if many other affinities are also fresh-tab-spawning, the LRU cap (`PRERENDER_SHARED_CONTEXT_CAP`) may be too tight relative to the active affinity count. May be absent on older rows that predate the field. | +| `renderStage` = `waiting-stability` with empty in-flight arrays | **Render stall** | Nothing is loading but settlement never finishes. Classic Glimmer tracking loop — template is invalidating itself. `capturedDom` usually shows the partially-rendered component. `blockedTimerSummary` will list swallowed timers that may hint at a scheduling loop. | +| `currentlyEvaluatingModule` non-null, or `stageAgeMs` large with empty in-flight arrays | **Synchronous browser stall (typically Glimmer compile during module eval)** | `recentModuleEvaluations` shows the worst offenders. A single URL with `ms > 5000` usually means "this module has a giant template that takes forever to compile". Many small entries (say 50+ at 100–500 ms each) summing into the stall budget mean card fan-out where each dependent card contributes a compile. Split the module, lazy-load the template, or reduce the component fan-out. | +| `blockedTimerSummary` populated | **Supplementary** | Tells you which timer-driven code is fighting the render. Not a root cause on its own. | ### Special cases @@ -800,7 +801,7 @@ Chrome's DevTools console surfaces the throw as `Uncaught (in promise) ...`, but - A `@field` getter that accesses `undefined.property` because an upstream link didn't materialize. - A template-level `{{#if (someHelper ...)}}` where `someHelper` was renamed or removed. 
-**False-positive profile.** The detector has four gates that all have to hold simultaneously: `isReady=true`, `model.status='ready'`, DOM attribute === `loading` specifically, and the state persists across a backoff-poll grace window (a microtask drain followed by macrotask hops at 50ms → 200 → 500 → 1000 → 2000, re-checking after each — total ~3.75s of cumulative slack so Backburner's flush has real wallclock time to land even under heavy parallel CI load). The fast path exits at the first hop; only renders that stay desynced through the full series are declared failures. In-flight loads are filtered upstream by `#waitForRenderLoadStability` — by the time the detector runs the loader is quiescent. The one residual scenario is a card whose template runs a multi-second *synchronous* getter that starves the microtask queue beyond the full grace budget; when the getter finishes, the microtask queue drains, the binding flips to `ready`, and on the next hop the detector exits cleanly. So in practice false-positives require Backburner, Glimmer, and the entire JS thread to all be blocked for >3.75s — a state the route can't be in while logically `ready`. +**False-positive profile.** The detector has four gates that all have to hold simultaneously: `isReady=true`, `model.status='ready'`, DOM attribute === `loading` specifically, and the state persists across a backoff-poll grace window (a microtask drain followed by macrotask hops at 50ms → 200 → 500 → 1000 → 2000, re-checking after each — total ~3.75s of cumulative slack so Backburner's flush has real wallclock time to land even under heavy parallel CI load). The fast path exits at the first hop; only renders that stay desynced through the full series are declared failures. In-flight loads are filtered upstream by `#waitForRenderLoadStability` — by the time the detector runs the loader is quiescent. 
The one residual scenario is a card whose template runs a multi-second _synchronous_ getter that starves the microtask queue beyond the full grace budget; when the getter finishes, the microtask queue drains, the binding flips to `ready`, and on the next hop the detector exits cleanly. So in practice false-positives require Backburner, Glimmer, and the entire JS thread to all be blocked for >3.75s — a state the route can't be in while logically `ready`. **Mitigation if you suspect a false-positive.** Two runtime knobs are exposed via `globalThis`: `__boxelDomDesyncMicrotaskYields` (default 5 microtask yields per hop) and `__boxelDomDesyncSettleHopsMs` (default `[50, 200, 500, 1000, 2000]` — the macrotask backoff series). Stretch either if a specific card family legitimately needs more flush time. The detector module (`packages/host/app/utils/render-desync-detector.ts`) has the full chart and explains why it deliberately avoids `requestAnimationFrame` (RAF + Ember autotrack has a long tail of subtle breakages — microtask + macrotask yields align with how Backburner sequences its own flushes). @@ -825,7 +826,7 @@ The periodic `prerender-queue-snapshot` line does NOT carry requestId (it's a fl prerender-queue-snapshot totalTabs=5 totalPending=7 affinities=3 | realm:acme(tabs=2, pending=5, max=5, busy=file:1/module:1/command:0, priorities=tab:10:1,0:3|adm:0:1) realm:lib(tabs=2, pending=2, max=1, busy=file:1/module:0/command:0, priorities=tab:0:2) user:u-123(tabs=1, pending=0, max=0) ``` -Each affinity with queued waiters gets a `priorities=` segment (skipped when no waiters are queued, even if a render is in flight, to keep the log compact). Format: `::` pairs, comma-separated within a source, sources separated by `|`. `tab:` is the per-tab queue's *queued* waiters; `adm:` is the per-affinity file-admission semaphore's *queued* waiters. 
Priorities listed highest-first, matching dequeue order — so `tab:10:1,0:3` means "1 priority-10 waiter at the head of the queue, 3 priority-0 waiters behind it." +Each affinity with queued waiters gets a `priorities=` segment (skipped when no waiters are queued, even if a render is in flight, to keep the log compact). Format: `::` pairs, comma-separated within a source, sources separated by `|`. `tab:` is the per-tab queue's _queued_ waiters; `adm:` is the per-affinity file-admission semaphore's _queued_ waiters. Priorities listed highest-first, matching dequeue order — so `tab:10:1,0:3` means "1 priority-10 waiter at the head of the queue, 3 priority-0 waiters behind it." The `pending=` count on the same line includes the in-flight render holding the tab (legacy `pendingCount = held + queued` semantics), but `priorities=` counts queued waiters only. So `pending=4` with `priorities=tab:10:1,0:2` is consistent: 1 in-flight render + 1 priority-10 waiter + 2 priority-0 waiters = 4. Don't expect the priority counts to sum to `pending`. @@ -852,7 +853,7 @@ In practice steps 1-5 catch ~90% of timeouts. 3. **Are `launch_ms` and `waits.semaphoreMs` large across all rows?** If yes, capacity contention during the reindex, not the cards' fault. 4. **Is only the first-indexed row (min `indexedAt`) slow and the rest fast?** That's the cold-loader tax paid by the first render after a `.gts` invalidation (`clearCache: true` fired once for the batch). Expected on any executable invalidation — only worth chasing if the cold cost is disproportionate to the module graph. 5. **Is the sum of `render_ms` wildly larger than the card count × a reasonable per-card budget?** Look for `queryLoadsInFlight` / `recentQueryLoads` entries that repeat across rows — that's a query-field that multiple dependents all wait on. -6. 
**Is the fan-out bigger than you expected?** The `types` and `deps` columns on the same rows tell you *why* each row was invalidated — useful for discovering unintentionally-heavy transitive deps (e.g. a dashboard re-renders because one of its metrics modules has a runtime reference to the changed module). +6. **Is the fan-out bigger than you expected?** The `types` and `deps` columns on the same rows tell you _why_ each row was invalidated — useful for discovering unintentionally-heavy transitive deps (e.g. a dashboard re-renders because one of its metrics modules has a runtime reference to the changed module). ## When the diagnostics disagree with each other @@ -898,12 +899,12 @@ Why: the seed mints arbitrary user-impersonating tokens with arbitrary permissio ```json { - "mintedAt": "", - "expiresAt": "", // 1d from mintedAt - "user": "@ctse:stack.cards", - "realmUrl": "https://realms-staging.stack.cards/ctse/concrete-mockingbird/", - "jwt": "eyJ...", - "session": "{\"\":\"eyJ...\"}" + "mintedAt": "", + "expiresAt": "", // 1d from mintedAt + "user": "@ctse:stack.cards", + "realmUrl": "https://realms-staging.stack.cards/ctse/concrete-mockingbird/", + "jwt": "eyJ...", + "session": "{\"\":\"eyJ...\"}" } ``` @@ -912,7 +913,7 @@ The host URL isn't in the artifact — Claude derives it from the realm URL when Before using it, Claude must check: - `expiresAt` is in the future -- `mintedAt` is recent enough that this is for the *current* investigation (not a leftover artifact) +- `mintedAt` is recent enough that this is for the _current_ investigation (not a leftover artifact) - `realmUrl` matches the realm of the card you're rendering — different realm = ask for re-mint If any check fails, ask the user to re-run. Do not reuse stale artifacts. @@ -929,14 +930,15 @@ Slot-by-slot: - **``** — derive from the artifact's `realmUrl` host. The boxel-host-app URL (NOT matrix — matrix isn't involved in this flow). 
Recognised patterns, mirroring the deployed-env Caddy config + local dev / env-mode Traefik labels in `mise-tasks/lib/env-vars.sh`: - | Realm host | Host-app URL | - |---|---| - | `realms-staging.stack.cards` | `https://boxel-host-staging.stack.cards` | - | `realms.stack.cards` | `https://boxel-host.stack.cards` | - | `realm-server..localhost` | `http://host..localhost` (BOXEL_ENVIRONMENT mode) | - | `localhost` or `*.localhost` (standard) | `http://localhost:4200` | + | Realm host | Host-app URL | + | --------------------------------------- | ------------------------------------------------------- | + | `realms-staging.stack.cards` | `https://boxel-host-staging.stack.cards` | + | `realms.stack.cards` | `https://boxel-host.stack.cards` | + | `realm-server..localhost` | `http://host..localhost` (BOXEL_ENVIRONMENT mode) | + | `localhost` or `*.localhost` (standard) | `https://localhost:4200` | If the realm host doesn't match any of these patterns, ask the user — don't guess. Constrain `realms-` matching to `*.stack.cards` so any future deployment using a `realms-` prefix on a different domain isn't silently mapped to a wrong (and possibly non-existent) host. + - **``** — the card's full file URL **including `.json`** (the indexer renders against the .json file, not the bare card-id). `https://realms-staging.stack.cards/ctse/concrete-mockingbird/Environment/demo.json` → `https%3A%2F%2Frealms-staging.stack.cards%2Fctse%2Fconcrete-mockingbird%2FEnvironment%2Fdemo.json`. Omitting `.json` lands you on the host's login page because the route doesn't match. - **``** — any string. The indexer uses a monotonic counter; for manual replays `1` is fine. - **``** — the render-route options object, JSON-encoded then URL-encoded. The shape lives in `packages/runtime-common/render-route-options.ts`. 
Common values: @@ -1005,7 +1007,7 @@ Then re-mint with `--permissions read,write,realm-owner` (or whatever the column For local dev: matrix `server_name` is `localhost` (`packages/matrix/docker/synapse/dev/homeserver.yaml:1`), so user IDs are `@:localhost`. Two local-dev modes are supported: -- **Standard mode** (no `BOXEL_ENVIRONMENT` set) — realm at `http://localhost:4201/...`, host-app at `http://localhost:4200`. +- **Standard mode** (no `BOXEL_ENVIRONMENT` set) — realm at `https://localhost:4201/...`, host-app at `https://localhost:4200`. - **Environment mode** (`BOXEL_ENVIRONMENT=` set) — realm at `http://realm-server..localhost/...`, host-app at `http://host..localhost` (Traefik routing per `mise-tasks/lib/env-vars.sh`). Both modes share `@:localhost` for the matrix-domain part of user IDs. The host-app URL Claude needs to build the `/render` URL is derived from the realm URL per the table in the URL recipe section above. If you've configured a non-default matrix `server_name`, pass `--user` to the script explicitly. @@ -1037,15 +1039,16 @@ This matches the indexer's `fetchUserPermissions` (`packages/runtime-common/db-q - You want to compare a render under different `RenderRouteOptions` (e.g. with vs without `clearCache`). When this is **not** the right path: -- You need the indexer's exact session at the moment of a *historical* render (cross-realm auth that's since changed, etc.) — use Path B. -- You're triaging a stall during the indexer's own pass and want diagnostics on the *real* indexer's tab — Path A reproduces in a fresh tab; the indexer's tab is its own thing. + +- You need the indexer's exact session at the moment of a _historical_ render (cross-realm auth that's since changed, etc.) — use Path B. +- You're triaging a stall during the indexer's own pass and want diagnostics on the _real_ indexer's tab — Path A reproduces in a fresh tab; the indexer's tab is its own thing. 
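For Path A, the realm-host → host-app mapping from the recognised-patterns table above can be sketched as a small shell helper. This is a sketch mirroring the documented table, not the deployed Caddy/Traefik config itself; the example realm URLs are illustrative, and unknown hosts fail loudly so you ask instead of guessing:

```sh
# Hedged sketch: derive the boxel-host-app URL from a realm URL, mirroring
# the recognised-patterns table. Unrecognised hosts return non-zero —
# per the doc, ask the user rather than guessing.
host_app_url() {
  # strip scheme, then strip everything from the first '/' or ':' (port/path)
  realm_host=$(printf '%s' "$1" | sed -E 's#^[a-z]+://##; s#[/:].*$##')
  case "$realm_host" in
    realms-staging.stack.cards) echo "https://boxel-host-staging.stack.cards" ;;
    realms.stack.cards)         echo "https://boxel-host.stack.cards" ;;
    realm-server.*.localhost)   # BOXEL_ENVIRONMENT mode: realm-server.<env>.localhost
                                env_name=${realm_host#realm-server.}
                                env_name=${env_name%.localhost}
                                echo "http://host.${env_name}.localhost" ;;
    localhost|*.localhost)      echo "https://localhost:4200" ;;
    *)                          echo "unrecognised realm host: $realm_host — ask, don't guess" >&2
                                return 1 ;;
  esac
}

host_app_url "https://realms-staging.stack.cards/ctse/concrete-mockingbird/"
# → https://boxel-host-staging.stack.cards
```

Note the pattern order: the `realm-server.*.localhost` arm must precede the catch-all `*.localhost` arm, or environment-mode realms would be misrouted to the standard-mode host.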
### Path B — the `prerenderer-reproduce` log channel `packages/realm-server/prerender/render-runner.ts` defines a dedicated logger `prerenderer-reproduce` that emits a line **per card render** with a ready-to-use URL and the exact `boxel-session` value the indexer used (a JSON-stringified map from realm URL to realm-scoped JWT): ``` -manually visit prerendered url at: /render////html/isolated/0 with boxel-session = {"http://localhost:4201/user/my-realm/":"eyJ…","https://cardstack.com/base/":"eyJ…", …} +manually visit prerendered url at: /render////html/isolated/0 with boxel-session = {"https://localhost:4201/user/my-realm/":"eyJ…","https://cardstack.com/base/":"eyJ…", …} ``` This channel is **off** by default. Turn it on by adding `prerenderer-reproduce=debug` to `LOG_LEVELS` when starting the realm server. Example: @@ -1086,15 +1089,15 @@ The per-realm reindex endpoints (`POST _reindex`, `POST _full-rein -d '{ "user": "@user:server", "realms": ["/"] }' ``` - The response carries a map of `{ : }`. **That** is the token you pass on `Authorization` when calling the reindex endpoint in the next section. It's *not* the token the prerender tab uses — that one comes from the `prerenderer-reproduce` log. + The response carries a map of `{ : }`. **That** is the token you pass on `Authorization` when calling the reindex endpoint in the next section. It's _not_ the token the prerender tab uses — that one comes from the `prerenderer-reproduce` log. 
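Extracting one realm's token from that response map can be sketched with jq. Hedged: `REALM` and `RESPONSE` below are stand-in example values — the real body is the output of the `/_realm-auth` curl above, and the key must match the realm URL exactly, trailing slash included:

```sh
# Hedged sketch: pull the realm-scoped JWT for a single realm out of the
# { realmUrl: jwt } response map. RESPONSE stands in for the actual curl
# output; "eyJexample" / "eyJother" are placeholder token values.
REALM='https://localhost:4201/user/my-realm/'
RESPONSE='{"https://localhost:4201/user/my-realm/":"eyJexample","https://cardstack.com/base/":"eyJother"}'

JWT=$(printf '%s' "$RESPONSE" | jq -r --arg realm "$REALM" '.[$realm]')
echo "$JWT"
# → eyJexample
```

That `$JWT` is the value for the `Authorization` header on the reindex call — not the `boxel-session` map, per the token table below.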
Three different JWTs float around in this area, so always be explicit about which one you mean: -| Token | Who mints it | Used for | -|---|---|---| -| Realm-server-level JWT | `/_realm-auth` top-level, signed by server secret seed | Server admin endpoints (publish, etc.); *not* accepted by card endpoints | -| Realm-scoped JWT (this section) | Same `/_realm-auth` call, one per realm in the response map | Authenticating as a user to a specific realm — including `POST _full-reindex` | -| Indexer `boxel-session` value (from `prerenderer-reproduce`) | Minted internally by the indexer — a JSON-stringified `{ : }` map, one entry per realm the indexer has auth for | Pasted verbatim into `localStorage['boxel-session']` on the prerender tab | +| Token | Who mints it | Used for | +| ------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ | +| Realm-server-level JWT | `/_realm-auth` top-level, signed by server secret seed | Server admin endpoints (publish, etc.); _not_ accepted by card endpoints | +| Realm-scoped JWT (this section) | Same `/_realm-auth` call, one per realm in the response map | Authenticating as a user to a specific realm — including `POST _full-reindex` | +| Indexer `boxel-session` value (from `prerenderer-reproduce`) | Minted internally by the indexer — a JSON-stringified `{ : }` map, one entry per realm the indexer has auth for | Pasted verbatim into `localStorage['boxel-session']` on the prerender tab | Mix them up and you get 401s with no obvious reason. @@ -1106,7 +1109,7 @@ The render URL format is what the indexer uses and what `prerenderer-reproduce` /render////html/isolated/0 ``` -- `` — `HOST_URL` / whichever host the realm server points its prerender at (usually `http://localhost:4200` locally). 
+- `` — `HOST_URL` / whichever host the realm server points its prerender at (usually `https://localhost:4200` locally). - `` — `encodeURIComponent(url)`; e.g. `http%3A%2F%2Flocalhost%3A4201%2Fuser%2Fmyrealm%2FProduct%2F1.json`. - `` — monotonically-incremented per prerender call; `1` is fine for manual replays. - `` — `encodeURIComponent(JSON.stringify(renderOptions))`; `%7B%7D` (`{}`) works. @@ -1202,7 +1205,7 @@ curl -X POST \ -H "Accept: application/json" \ -H "Content-Type: application/json" \ -d '{}' \ - "http://localhost:4201/user//_full-reindex" + "https://localhost:4201/user//_full-reindex" # Terminal 1 — grep for the indexer's reproduce line for the card you're chasing grep 'manually visit prerendered url .*' realm-server.log | tail -1 @@ -1218,17 +1221,17 @@ grep 'manually visit prerendered url .*' realm-server.log | tail -1 Two different tokens do two different jobs: the user-minted realm-scoped JWT got you the reindex, the indexer's full session map gets the browser tab past its auth checks for every realm the render touches. -If `GRAFANA_SECRET` is configured on your server, you can skip the user-JWT step and use `curl -X POST -H "Authorization: Bearer $GRAFANA_SECRET" http://localhost:4201/_grafana-full-reindex` instead (no MIME gotcha on the grafana endpoints). In dev the per-realm JWT path is almost always easier. +If `GRAFANA_SECRET` is configured on your server, you can skip the user-JWT step and use `curl -k -X POST -H "Authorization: Bearer $GRAFANA_SECRET" https://localhost:4201/_grafana-full-reindex` instead (no MIME gotcha on the grafana endpoints). In dev the per-realm JWT path is almost always easier. ## Prerender capacity tuning knobs Three env vars control the per-prerender-server shape. They're resolved once at `PagePool` construction; changes require a process restart. 
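The two `encodeURIComponent` slots in the recipe can be produced with the same node one-liner style used elsewhere in this doc. A sketch — the card URL is an example value:

```sh
# Hedged sketch: URL-encode the card-URL and render-options slots for a
# manual /render replay. Keep the .json suffix on the card URL or the
# host route won't match and you land on the login page.
enc() { node -e 'console.log(encodeURIComponent(process.argv[1]))' "$1"; }

enc 'https://localhost:4201/user/myrealm/Product/1.json'
# → https%3A%2F%2Flocalhost%3A4201%2Fuser%2Fmyrealm%2FProduct%2F1.json
enc '{}'
# → %7B%7D
```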
-| Env var | Default | What it controls | When to change it | -|---|---|---|---| -| `PRERENDER_PAGE_POOL_MIN` / `_MAX` | unset → fixed pool of `options.maxPages` (5) | Dynamic-pool envelope. The pool boots at MIN, expands up to MAX under saturation, contracts back to MIN after sustained idle. The live capacity is what the server reports to the manager on each heartbeat, which drives warm-vacancy routing. | Fleet capacity. Raise MAX when `waits.semaphoreMs` dominates `launchMs` across rows from all realms (server-wide saturation); lower MAX if you need to reduce memory footprint and you can confirm from snapshots that pending rarely approaches `totalTabs`. Setting MIN === MAX disables expansion/contraction. | -| `PRERENDER_AFFINITY_TAB_MAX` | `5` (clamped to the effective pool max: `PRERENDER_PAGE_POOL_MAX` when set, otherwise fixed `maxPages`) | Max tabs a single affinity (realm or user) can simultaneously hold from the pool. | Rarely. Must be ≥ 2 for the self-referential prerender deadlock to be prevented — PagePool logs a warning at startup when it isn't. Lower only if you want to force multi-realm fairness at the tab-routing level. | -| `PRERENDER_AFFINITY_FILE_CONCURRENCY` | unset → `max(1, PRERENDER_AFFINITY_TAB_MAX − 1)` (the deadlock-safety ceiling) | Cap on concurrent `file` renders within a single affinity. Module and command calls bypass admission; they're never capped by this knob. | Cross-realm fairness. When one realm's fan-out (e.g. a catalog reindex) is stealing render budget from every other realm, lower this below the ceiling to reserve tabs for other affinities. The effective cap is always `min(env, ceiling)` so this can't accidentally break the deadlock-safety invariant. 
| +| Env var | Default | What it controls | When to change it | +| ------------------------------------- | ------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `PRERENDER_PAGE_POOL_MIN` / `_MAX` | unset → fixed pool of `options.maxPages` (5) | Dynamic-pool envelope. The pool boots at MIN, expands up to MAX under saturation, contracts back to MIN after sustained idle. The live capacity is what the server reports to the manager on each heartbeat, which drives warm-vacancy routing. | Fleet capacity. Raise MAX when `waits.semaphoreMs` dominates `launchMs` across rows from all realms (server-wide saturation); lower MAX if you need to reduce memory footprint and you can confirm from snapshots that pending rarely approaches `totalTabs`. Setting MIN === MAX disables expansion/contraction. | +| `PRERENDER_AFFINITY_TAB_MAX` | `5` (clamped to the effective pool max: `PRERENDER_PAGE_POOL_MAX` when set, otherwise fixed `maxPages`) | Max tabs a single affinity (realm or user) can simultaneously hold from the pool. | Rarely. Must be ≥ 2 for the self-referential prerender deadlock to be prevented — PagePool logs a warning at startup when it isn't. Lower only if you want to force multi-realm fairness at the tab-routing level. 
| +| `PRERENDER_AFFINITY_FILE_CONCURRENCY` | unset → `max(1, PRERENDER_AFFINITY_TAB_MAX − 1)` (the deadlock-safety ceiling) | Cap on concurrent `file` renders within a single affinity. Module and command calls bypass admission; they're never capped by this knob. | Cross-realm fairness. When one realm's fan-out (e.g. a catalog reindex) is stealing render budget from every other realm, lower this below the ceiling to reserve tabs for other affinities. The effective cap is always `min(env, ceiling)` so this can't accidentally break the deadlock-safety invariant. | **Default invariant**: when `PRERENDER_AFFINITY_FILE_CONCURRENCY` is unset, the effective file-admission cap equals the deadlock-safety ceiling — same behavior as before the knob existed. Changing the knob is an explicit operator decision driven by `admissionMs` telemetry; don't adjust it without data. diff --git a/.github/actions/init/action.yml b/.github/actions/init/action.yml index b8d7b684698..aaff14aea1a 100644 --- a/.github/actions/init/action.yml +++ b/.github/actions/init/action.yml @@ -31,3 +31,20 @@ runs: if: ${{ steps.cache.outputs.cache-hit != 'true' }} shell: bash run: pnpm store prune + + # Provision the mandatory dev cert so the realm-server can speak + # HTTPS+HTTP/2 — the only protocol it supports. Local devs get this + # via `mise run infra:ensure-dev-cert`; CI runs it explicitly here + # so every downstream job starts with the cert in + # ~/.local/share/boxel/dev-certs/ and `NODE_EXTRA_CA_CERTS` pointed + # at mkcert's root. See packages/realm-server/server.ts and + # mise-tasks/infra/ensure-dev-cert. 
+ - name: Install mkcert + shell: bash + run: | + sudo apt-get update + sudo apt-get install -y mkcert libnss3-tools + + - name: Provision dev TLS cert + shell: bash + run: mise run infra:ensure-dev-cert diff --git a/.github/workflows/ci-host.yaml b/.github/workflows/ci-host.yaml index 33833229146..ba2e54c3e0f 100644 --- a/.github/workflows/ci-host.yaml +++ b/.github/workflows/ci-host.yaml @@ -108,18 +108,23 @@ jobs: - name: Disable TCP/UDP network offloading run: sudo ethtool -K eth0 tx off rx off - - name: Start test services (icons + host dist + realm servers) - run: mise run test-services:host | tee -a /tmp/server.log & - - name: Create realm users - run: pnpm register-realm-users - working-directory: packages/matrix - + # Install + restart dbus/upower BEFORE the test services come up. + # `sudo service dbus restart` triggers chrome's NetworkChangeNotifier + # in any already-running chromium (the prerender's standby probe and + # the realm-server's prerender workers), which aborts every in-flight + # h2 stream with ERR_NETWORK_CHANGED and leaves wait-for-host-standby + # stuck waiting for #standby-ready that never lands. - name: Install D-Bus helpers run: | sudo apt-get update sudo apt-get install -y dbus-x11 upower sudo service dbus restart sudo service upower restart + - name: Start test services (icons + host dist + realm servers) + run: mise run test-services:host | tee -a /tmp/server.log & + - name: Create realm users + run: pnpm register-realm-users + working-directory: packages/matrix - name: Live test suite run: dbus-run-session -- pnpm test:live @@ -170,6 +175,18 @@ jobs: # https://github.com/actions/runner-images/issues/1187#issuecomment-686735760 - name: Disable TCP/UDP network offloading run: sudo ethtool -K eth0 tx off rx off + # Install + restart dbus/upower BEFORE the test services come up. 
+ # `sudo service dbus restart` triggers chrome's NetworkChangeNotifier + # in any already-running chromium (the prerender's standby probe and + # the realm-server's prerender workers), which aborts every in-flight + # h2 stream with ERR_NETWORK_CHANGED and leaves wait-for-host-standby + # stuck waiting for #standby-ready that never lands. + - name: Install D-Bus helpers + run: | + sudo apt-get update + sudo apt-get install -y dbus-x11 upower + sudo service dbus restart + sudo service upower restart - name: Start test services (icons + host dist + realm servers) run: mise run test-services:host | tee -a /tmp/server.log & env: @@ -178,13 +195,6 @@ jobs: run: pnpm register-realm-users working-directory: packages/matrix - - name: Install D-Bus helpers - run: | - sudo apt-get update - sudo apt-get install -y dbus-x11 upower - sudo service dbus restart - sudo service upower restart - - name: host test suite (shard ${{ matrix.shardIndex }}) run: | if [ "$PERCY_ENABLED" = "true" ]; then diff --git a/.github/workflows/ci-software-factory.yaml b/.github/workflows/ci-software-factory.yaml index 373b697b30c..2bdad63e1b6 100644 --- a/.github/workflows/ci-software-factory.yaml +++ b/.github/workflows/ci-software-factory.yaml @@ -80,10 +80,17 @@ jobs: if: ${{ matrix.shard.index == 1 }} run: pnpm test:node working-directory: packages/software-factory - - name: Serve test assets (icons + host dist) + - name: Serve boxel-icons run: | - mise run ci:serve-test-assets & - timeout 180 bash -c 'until curl -sf http://localhost:4200 > /dev/null && curl -sf http://localhost:4206 > /dev/null; do sleep 2; done' + # SF Playwright tests use the realm-test-harness, which spins up + # vite / realm-server on its own dynamic ports — it is hermetic + # by design (see packages/software-factory/docs/testing-strategy.md). + # The only external service it expects is the icons server on + # ICONS_URL (defaults to http://localhost:4206/). 
Do not start + # host-dist on port 4200 here — it collides with the harness + # and masks regressions in the harness's host bring-up code. + mise run services:icons & + timeout 60 bash -c 'until curl -sf http://localhost:4206 > /dev/null; do sleep 1; done' - name: Run Playwright tests run: pnpm test:playwright:shard ${{ matrix.shard.index }}/${{ matrix.shard.total }} working-directory: packages/software-factory diff --git a/.github/workflows/ci.yaml b/.github/workflows/ci.yaml index 81d0e82d5fc..5b3f3b72c91 100644 --- a/.github/workflows/ci.yaml +++ b/.github/workflows/ci.yaml @@ -385,7 +385,7 @@ jobs: - name: Start test services (icons + host dist + base realm) run: | MATRIX_REGISTRATION_SHARED_SECRET='xxxx' mise run test-services:matrix | tee -a /tmp/server.log & - timeout 300 bash -c 'until curl -sf http://localhost:4200 > /dev/null && curl -sf http://localhost:4206 > /dev/null; do sleep 2; done' + timeout 300 bash -c 'until curl -ksf https://localhost:4200 > /dev/null && curl -sf http://localhost:4206 > /dev/null; do sleep 2; done' - name: Run Playwright tests run: pnpm test:group ${{ matrix.shardIndex }}/${{ matrix.shardTotal }} working-directory: packages/matrix @@ -906,7 +906,7 @@ jobs: - name: Serve test assets (icons + host dist) run: | mise run ci:serve-test-assets & - timeout 180 bash -c 'until curl -sf http://localhost:4200 > /dev/null && curl -sf http://localhost:4206 > /dev/null; do sleep 2; done' + timeout 180 bash -c 'until curl -ksf https://localhost:4200 > /dev/null && curl -sf http://localhost:4206 > /dev/null; do sleep 2; done' - name: Start PostgreSQL for tests run: pnpm start:pg | tee -a /tmp/test-services.log & working-directory: packages/realm-server @@ -980,7 +980,13 @@ jobs: - name: Start dev stack (icons + host-dist + base realm + prerenderer) run: | mise run test-services:matrix | tee -a /tmp/server.log & - timeout 600 bash -c 'until curl -sf http://localhost:4200 > /dev/null && curl -sf -H "Accept: application/vnd.api+json" 
http://localhost:4201/base/_readiness-check > /dev/null; do sleep 2; done' + # `-k` skips cert verification (the local realm-server speaks + # HTTPS with the mkcert leaf cert). `-w '%{http_code}'` + grep + # ensures we treat 3xx as still-not-ready instead of letting + # `-f` return 0 on the dispatcher's redirect response, which + # would race the tests ahead of the base realm finishing its + # initial index. + timeout 600 bash -c 'until curl -sk -o /dev/null -w "%{http_code}" https://localhost:4200/ | grep -qx 200 && curl -sk -o /dev/null -w "%{http_code}" -H "Accept: application/vnd.api+json" https://localhost:4201/base/_readiness-check | grep -qx 200; do sleep 2; done' - name: Run integration tests run: pnpm test:integration working-directory: packages/boxel-cli diff --git a/AGENTS.md b/AGENTS.md index 271f6e84939..e9240f13cab 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -94,10 +94,10 @@ #### Iterating on host tests with the Chrome MCP server -- Start the host app so qunit test runner is available at `http://localhost:4200/tests` (usual `pnpm start` + dependencies). +- Start the host app so qunit test runner is available at `https://localhost:4200/tests` (usual `pnpm start` + dependencies). - Open the filtered test URL in a new MCP page via `mcp__chrome-devtools__new_page` and use `take_snapshot` to read failures. -- Filtered URL structure: `http://localhost:4200/tests?filter=` -- URL structure for isolating to specific tests: `http://localhost:4200/tests?moduleId=&testId=&testId=...` (visible on the “Rerun” links for failing tests). +- Filtered URL structure: `https://localhost:4200/tests?filter=` +- URL structure for isolating to specific tests: `https://localhost:4200/tests?moduleId=&testId=&testId=...` (visible on the “Rerun” links for failing tests). - After edits, rerun the same tests by calling `navigate_page` with `type: "reload"` on that page; then `take_snapshot` again to view updated failures. 
- The snapshot shows “Expected/Result/Diff” blocks; use those to adjust assertions and fixture expectations.
- Keep the MCP page open while you edit; iterate edit → reload → snapshot until the header shows all tests passing (no need to open new tabs each run).
diff --git a/QUICKSTART.md b/QUICKSTART.md
index 50f9d470403..8ec2437d486 100644
--- a/QUICKSTART.md
+++ b/QUICKSTART.md
@@ -2,10 +2,17 @@ To build the entire repository and run the application, follow these steps:
-1. The 2 main system dependencies to install are:
-
+1. The system dependencies to install are:
   - [mise](https://mise.jdx.dev/getting-started.html)
   - [docker](https://docs.docker.com/get-docker/)
+   - [mkcert](https://github.com/FiloSottile/mkcert) — provisions the
+     local TLS cert the realm-server needs to speak HTTPS+HTTP/2 (local
+     dev has no HTTP fallback). Install with
+     `sudo apt install -y mkcert libnss3-tools` on Debian/Ubuntu or
+     `brew install mkcert nss` on macOS. After install, run
+     `mise run infra:ensure-dev-cert` once before the first
+     `mise run dev` / `pnpm start:all`; subsequent runs are a no-op. See
+     the repo-root [README](README.md#local-https-dev-access) for details.
 2. Clone the repo:
@@ -52,7 +59,7 @@ To build the entire repository and run the application, follow these steps: Note: Ensure that the realm-server is completely started by looking out for the test-realm indexing output.
   ```zsh
-   Realm http://localhost:4202/test/ has started ({
+   Realm https://localhost:4202/test/ has started ({
     "instancesIndexed": 8,
     "instanceErrors": 0,
     "moduleErrors": 0
@@ -76,19 +83,16 @@ To build the entire repository and run the application, follow these steps: Visit http://localhost:8080. Type in Username = "admin", Password: "password" Homeserver URL: http://localhost:8008
10. Host App
-
-   - Visit http://localhost:4201/
+   - Visit https://localhost:4200/
   - Enter the registration flow and create a Boxel Account
   - When prompted for an authentication token, type in "dev-token"
11.
Validate email for login - - Visit SMTP UI at http://localhost:5001/ - Validate email - - Go back to Host http://localhost:4201/ and login + - Go back to Host https://localhost:4200/ and login 12. Perform "Setup up Secure Payment Method" flow - - More detailed steps can be found in our [README](README.md) Payment Setup section 13. Run ai bot (Optional): diff --git a/README.md b/README.md index b51ed44cc6d..657fe853fc4 100644 --- a/README.md +++ b/README.md @@ -34,10 +34,10 @@ For a quickstart, see [here](./QUICKSTART.md) Two catalog realms run side by side. The **Cardstack Catalog** (served from [cardstack/boxel-catalog](https://github.com/cardstack/boxel-catalog)) is the source of truth for new development and the destination for community submissions. The **Legacy Catalog** (shipped from this monorepo) remains available during the deprecation window for content that hasn't been migrated upstream. -| Realm | Source | URL path | Purpose | -| ---------------------- | --------------------------------------------------------------------------------------- | ------------------ | -------------------------------------------------------------------------------------------------------------------------------- | -| **Cardstack Catalog** | `packages/catalog` (clones [boxel-catalog](https://github.com/cardstack/boxel-catalog)) | `/catalog/` | Official catalog. New listings and community submissions land here via `pr-listing-create` PRs to `cardstack/boxel-catalog`. | -| **Legacy Catalog** | `packages/catalog-realm` | `/legacy-catalog/` | Historical catalog shipped from this repo. Kept visible in the workspace chooser while existing content migrates upstream. 
| +| Realm | Source | URL path | Purpose | +| --------------------- | --------------------------------------------------------------------------------------- | ------------------ | ---------------------------------------------------------------------------------------------------------------------------- | +| **Cardstack Catalog** | `packages/catalog` (clones [boxel-catalog](https://github.com/cardstack/boxel-catalog)) | `/catalog/` | Official catalog. New listings and community submissions land here via `pr-listing-create` PRs to `cardstack/boxel-catalog`. | +| **Legacy Catalog** | `packages/catalog-realm` | `/legacy-catalog/` | Historical catalog shipped from this repo. Kept visible in the workspace chooser while existing content migrates upstream. | Both catalogs appear in the workspace chooser by default; the Cardstack Catalog is sorted first, Legacy Catalog last. Both are controlled by the same `SKIP_CATALOG` flag — setting `SKIP_CATALOG=true` skips setup and startup for both. @@ -97,11 +97,11 @@ You can develop the host application locally backed by the staging or production ``` scripts/start-host production … -Build successful (27238ms) – Serving on http://localhost:4200/ +Build successful (27238ms) – Serving on https://localhost:4200/ … ``` -Visit `http://localhost:4200` and log in with your staging or production credentials. +Visit `https://localhost:4200` and log in with your staging or production credentials. ### ember-cli Hosted App @@ -112,7 +112,7 @@ In order to run the ember-cli hosted app: 1. `pnpm build` in the boxel-ui/addon workspace to build the boxel-ui addon. 2. `pnpm start` in the host/ workspace to serve the ember app. -3. `mise run dev` from the repo root to serve the base and experiments realms -- this will also allow you to switch between the app and the tests without having to restart servers). 
This expects the Ember application to be running at `http://localhost:4200`, if you’re running it elsewhere you can specify it with `HOST_URL=http://localhost:5200 mise run dev`.
+3. `mise run dev` from the repo root to serve the base and experiments realms -- this will also allow you to switch between the app and the tests without having to restart servers. This expects the Ember application to be running at `https://localhost:4200`; if you’re running it elsewhere you can specify it with `HOST_URL=http://localhost:5200 mise run dev`.
Alternatively, you can run everything with a single command from the repo root:
@@ -122,7 +122,7 @@ mise run dev-all This starts the host app first, waits for it to be ready, then starts the realm server and all supporting services.
-The app is available at http://localhost:4200. You will be prompted to register an account. To make it easier, you can execute `pnpm register-test-user` in `packages/matrix/`. Now you can sign in with the test user using the credentials `username: user`, `password: password`.
+The app is available at https://localhost:4200. You will be prompted to register an account. To make it easier, you can execute `pnpm register-test-user` in `packages/matrix/`. Now you can sign in with the test user using the credentials `username: user`, `password: password`.
When you are done running the app you can stop the synapse server:
@@ -137,7 +137,7 @@ In order to run the realm server hosted app: 1. `mise run services:host-build` to re-build the host app (this step can be omitted if you do not want host app re-builds) 2. `mise run dev` to serve the base and experiments realms
-You can visit the URL of each realm server to view that realm's app. So for instance, the base realm's app is available at `http://localhost:4201/base` and the experiments realm's app is at `http://localhost:4201/experiments`.
+The recommended way to view a realm's app is the host vite dev server at `https://localhost:4200` — open it and navigate via the workspace chooser. Both the host (vite) on `:4200` and the realm-server on `:4201` terminate HTTPS+HTTP/2 using the same mkcert leaf (see "Local HTTPS dev access" below for the one-time cert setup), so the in-browser host's realm fetches multiplex per Chrome's HTTP/2 connection rules without any mixed-content concerns. Visiting `https://localhost:4201/` directly also works.
Live reloads are not available in this mode; however, if you start the server with the environment variable `DISABLE_MODULE_CACHING=true` you can just refresh the page to grab the latest code changes if you are running rebuilds (steps #1 and #2 above).
@@ -191,10 +191,10 @@ Here's what is spun up with `mise run dev`:
| Port | Description | `mise run dev` | `mise run services:realm-server-base` |
| ----- | --------------------------------------------------------------------------------------------- | -------------- | ------------------------------------- |
-| :4201 | `/base` base realm | ✅ | ✅ |
+| :4201 | `/base` base realm (HTTPS+HTTP/2; see "Local HTTPS dev access" below) | ✅ | ✅ |
| :4201 | `/skills` skills realm | ✅ | 🚫 |
| :4201 | `/experiments` experiments realm | ✅ | 🚫 |
-| :4202 | `/test` host test realm, `/node-test` node test realm | ✅ | 🚫 |
+| :4202 | `/test` host test realm, `/node-test` node test realm (HTTPS+HTTP/2) | ✅ | 🚫 |
| :4205 | `/test` realm for matrix client tests (playwright controlled) | 🚫 | 🚫 |
| :4206 | Boxel icons server | ✅ | 🚫 |
| :4210 | Development Worker Manager (spins up 1 worker by default) | ✅ | 🚫 |
@@ -207,9 +207,104 @@ Here's what is spun up with `mise run dev`:
| :5435 | Postgres DB | ✅ | 🚫 |
| :8008 | Matrix synapse server | ✅ | 🚫 |
+#### Local HTTPS dev access
+
+##### Why
+
+Heavy aggregator cards (Cohort, dashboards) fan out 80+ federated-search
+requests per render.
Chrome — including the headless Chromium that
+drives the prerender — caps any single origin at 6 concurrent HTTP/1.1
+connections, so the 80+ requests serialize. A single cohort render
+takes ~4–5 minutes under that ceiling. HTTP/2 multiplexes them over one
+connection and the same render finishes in seconds. Browsers only do
+HTTP/2 over TLS, so the local realm-server terminates a leaf cert.
+
+##### Setup
+
+```
+mise run infra:ensure-dev-cert
+```
+
+That task:
+
+1. Requires [`mkcert`](https://github.com/FiloSottile/mkcert)
+   (`sudo apt install -y mkcert libnss3-tools` on Debian/Ubuntu;
+   `brew install mkcert nss` on macOS). If it's missing, the task
+   prints these instructions and exits non-zero — local dev now
+   speaks HTTPS only and has no HTTP fallback.
+2. Attempts `mkcert -install` once (one-time sudo prompt) so mkcert's
+   root CA lands in your system trust store. Your normal browser then
+   silently accepts the cert. If you decline the sudo prompt the cert
+   is still generated, indexing keeps working, and you can run
+   `mkcert -install` later when convenient.
+3. Generates `~/.local/share/boxel/dev-certs/{localhost.pem,
+   localhost-key.pem}`. Idempotent — re-runs are a no-op until the
+   cert is within 7 days of expiry.
+
+After provisioning, `mise run dev` (and `mise run dev-all`) automatically
+brings the realm-server up on `https://localhost:4201` (and
+`https://localhost:4202` for the test-realms server). Node clients
+(worker, scripts, prerender Node-side) trust the cert via
+`NODE_EXTRA_CA_CERTS` (pointed at mkcert's root by `env-vars.sh`); the
+prerender Chromium uses `--ignore-certificate-errors` for
+belt-and-suspenders coverage.
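Step 3's idempotency rule (regenerate only when the leaf cert is missing or within 7 days of expiry) can be probed with openssl's `-checkend` flag. This is a hedged sketch, not the task's actual implementation; the `cert_needs_refresh` helper name is hypothetical:

```shell
# Hypothetical sketch of the "no-op until within 7 days of expiry" check
# described in step 3. `openssl x509 -checkend N` exits 0 when the cert
# is still valid N seconds from now.
cert_needs_refresh() {
  cert="$1"
  [ -f "$cert" ] || return 0   # missing: provision
  if openssl x509 -checkend $((7 * 24 * 3600)) -noout -in "$cert" >/dev/null 2>&1; then
    return 1                   # valid for at least 7 more days: no-op
  fi
  return 0                     # near expiry: regenerate
}
```

The actual task presumably follows the refresh branch with an `mkcert -cert-file`/`-key-file` invocation; the function above is only the expiry gate.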
+ +##### Migration after pulling this change + +The canonical realm URLs are now `https://localhost:4201/…` and +`https://localhost:4202/…`, so every row that was written under the old +`http://localhost:42XX/…` canonical needs its URLs rewritten in place +— PK columns, FK columns, JSONB documents (`pristine_doc`, `search_doc`, +`error_doc`, `deps`, `value`, `headers`, etc.), and rendered HTML / +markdown payloads. The repo ships an auto-run migration that handles +all of it: + +- `pnpm migrate` (which `mise run dev` runs via `--migrateDB`) picks up + `packages/postgres/migrations/1779100257124_canonical-url-http-to-https.js` + on the next boot. +- That migration walks `information_schema.columns`, finds every + text/varchar/jsonb column on every public table (skipping `modules`, + which the realm-server truncates on startup anyway), and runs an + in-place `REPLACE` for the two canonicals. WHERE-filtered so it only + touches rows that still contain the old URL — idempotent, and a + no-op in production (where the canonical URL is never `localhost`). + +After the migration runs, the realm-server boots normally on +`https://localhost:4201/`. The same-port HTTP→HTTPS dispatcher catches +any lingering `http://localhost:4201/…` requests (e.g. you typed it +into a browser, or a card still has a stale URL in a rendered HTML +attribute) and 301-redirects to the canonical https origin. + +Personal realm files under `realms/localhost_4201/**/*.json` may still +have `id`/`relationships` URLs spelled `http://localhost:4201/…`. The +indexer re-derives canonical URLs from the realm mount root, so those +files index cleanly under the new canonical and the redirect handles +runtime fetches; cleaning up the on-disk strings is optional (a +`sed -i 's|http://localhost:4201|https://localhost:4201|g'` across the +realm dir does it in one shot). + +##### Verify + +``` +curl -kI --http2 https://localhost:4201/_alive +``` + +Look for `HTTP/2 200`. 
The `mise run dev` log also confirms with +`Realm server listening on port 4201 (https/h2)`. + +##### CI / hermetic test harness + +CI runs the same `mise run infra:ensure-dev-cert` step out of its init +action (see `.github/actions/init/action.yml`) — mkcert is installed +via apt and the cert is provisioned before any test job starts. So CI +realm-servers boot HTTPS+HTTP/2 on `https://localhost:4201/…` exactly +like local dev. Test harnesses that launch their own Chromium pass +`--ignore-certificate-errors` so they don't need the system trust +store; Node clients pick up the cert via `NODE_EXTRA_CA_CERTS`. + #### Using `mise run services:realm-server` -You can also use `mise run services:realm-server` if you want the functionality of `mise run dev`, but without running the test realms. This will enable you to open http://localhost:4201 and allow to select between the cards in the /base and /experiments realm. You must also make sure to run `mise run services:worker` in order to start the workers which are normally started in `mise run dev`. +You can also use `mise run services:realm-server` if you want the functionality of `mise run dev`, but without running the test realms. Visit `https://localhost:4200` (the vite host) to navigate the workspace — the host bundle there fetches realm data over the realm-server's https origin on `:4201`. You must also make sure to run `mise run services:worker` in order to start the workers which are normally started in `mise run dev`. #### Indexing dashboard @@ -247,7 +342,7 @@ Each additional server spawns its own headless Chrome on an OS-assigned port and Prerender server: - REALM_SECRET_SEED (required): Secret used to create session tokens for realms. -- BOXEL_HOST_URL (optional): URL of the host app that serves the /render routes. Defaults to http://localhost:4200 in dev scripts. +- BOXEL_HOST_URL (optional): URL of the host app that serves the /render routes. Defaults to https://localhost:4200 in dev scripts. 
- PRERENDER_MANAGER_URL (optional): Base URL of the prerender manager to register with. Defaults to http://localhost:4222. - PRERENDER_COUNT (optional): Number of prerender server instances to start. Each gets its own headless Chrome. Default 1. - PRERENDER_PAGE_POOL_MIN (optional): Idle floor for the dynamic page pool. The pool boots at this size and contracts back to it after sustained idle. Default 4 in dev (set in `mise-tasks/lib/env-vars.sh`). @@ -505,7 +600,7 @@ To run the `packages/host/` workspace tests start the following servers: 1. `mise run dev` from the repo root to serve _both_ the base realm and the realm that serves the test cards 2. `pnpm start` in the `packages/host/` workspace to serve ember -The tests are available at `http://localhost:4200/tests` +The tests are available at `https://localhost:4200/tests` ### Realm Server Node tests @@ -610,12 +705,12 @@ BOXEL_ENVIRONMENT=parallel mise run services:ai-bot In environment mode, services are available at: -| Service | Hostname | -| ------------- | ----------------------------------------- | -| Host app | `http://host..localhost` | -| Realm server | `http://realm-server..localhost` | -| Matrix | `http://matrix..localhost` | -| Icons | `http://icons..localhost` | +| Service | Hostname | +| ------------ | -------------------------------------- | +| Host app | `http://host..localhost` | +| Realm server | `http://realm-server..localhost` | +| Matrix | `http://matrix..localhost` | +| Icons | `http://icons..localhost` | Where `` is the lowercased, sanitized form of `BOXEL_ENVIRONMENT` (e.g., `feature/my-branch` becomes `feature-my-branch`). 
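The environment-name mapping described above can be sketched in shell. The rule shown (lowercase, then map every character outside `a-z0-9` to `-`) is an illustrative assumption consistent with the `feature/my-branch` example, not necessarily the repo's exact implementation, and `sanitize` is a hypothetical helper name:

```shell
# Illustrative sanitizer matching the documented example:
# "feature/my-branch" becomes "feature-my-branch". Assumed rule:
# lowercase, then replace each character outside [a-z0-9] with '-'.
sanitize() {
  printf '%s\n' "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g'
}

sanitize 'feature/my-branch'   # prints feature-my-branch
```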
diff --git a/docs/commands-in-headless-chrome.md b/docs/commands-in-headless-chrome.md index 9ceeb7f7b84..d22cf347c68 100644 --- a/docs/commands-in-headless-chrome.md +++ b/docs/commands-in-headless-chrome.md @@ -29,9 +29,9 @@ const event = { type: 'app.boxel.bot-trigger', content: { type: 'show-card', - realm: 'http://localhost:4201/experiments/', + realm: 'https://localhost:4201/experiments/', input: { - cardId: 'http://localhost:4201/experiments/Author/jane-doe', + cardId: 'https://localhost:4201/experiments/Author/jane-doe', format: 'isolated', }, }, @@ -73,9 +73,9 @@ const event = { type: 'app.boxel.bot-trigger', content: { type: 'show-card', - realm: 'http://localhost:4201/experiments/', + realm: 'https://localhost:4201/experiments/', input: { - cardId: 'http://localhost:4201/experiments/Author/jane-doe', + cardId: 'https://localhost:4201/experiments/Author/jane-doe', format: 'isolated', }, }, @@ -91,12 +91,12 @@ const event = { ```json { - "realmURL": "http://localhost:4201/experiments/", + "realmURL": "https://localhost:4201/experiments/", "realmUsername": "@alice:localhost", "runAs": "@alice:localhost", "command": "@cardstack/boxel-host/commands/show-card/default", "commandInput": { - "cardId": "http://localhost:4201/experiments/Author/jane-doe", + "cardId": "https://localhost:4201/experiments/Author/jane-doe", "format": "isolated" } } @@ -125,7 +125,7 @@ type CommandRunnerRouteParams = { const requestId = '6f5508cf-0f10-44a8-a288-0f11f74c4f20'; const command = '@cardstack/boxel-host/commands/show-card/default'; const input = { - cardId: 'http://localhost:4201/experiments/Author/jane-doe', + cardId: 'https://localhost:4201/experiments/Author/jane-doe', format: 'isolated', }; const nonce = '2'; @@ -140,11 +140,11 @@ localStorage.setItem( }), ); -const url = `http://localhost:4200/command-runner/${encodeURIComponent(requestId)}/${encodeURIComponent(nonce)}`; +const url = 
`https://localhost:4200/command-runner/${encodeURIComponent(requestId)}/${encodeURIComponent(nonce)}`; ``` ```txt -http://localhost:4200/command-runner/6f5508cf-0f10-44a8-a288-0f11f74c4f20/2 +https://localhost:4200/command-runner/6f5508cf-0f10-44a8-a288-0f11f74c4f20/2 ``` ### Host-side consumption behavior @@ -158,7 +158,7 @@ http://localhost:4200/command-runner/6f5508cf-0f10-44a8-a288-0f11f74c4f20/2 Use this when you want to test `command-runner` directly in the browser. 1. Open host in a browser: - - `http://localhost:4200` + - `https://localhost:4200` 2. Open browser devtools console and run: ```js @@ -170,7 +170,7 @@ localStorage.setItem( JSON.stringify({ command: '@cardstack/boxel-host/commands/show-card/default', input: { - cardId: 'http://localhost:4201/experiments/Author/jane-doe', + cardId: 'https://localhost:4201/experiments/Author/jane-doe', format: 'isolated', }, nonce, @@ -180,7 +180,7 @@ localStorage.setItem( ``` 3. Visit this URL (or refresh if already there): - - `http://localhost:4200/command-runner/6f5508cf-0f10-44a8-a288-0f11f74c4f20/2` + - `https://localhost:4200/command-runner/6f5508cf-0f10-44a8-a288-0f11f74c4f20/2` Notes: - `request_id` in the URL must match the localStorage key suffix. diff --git a/mise-tasks/ci/cache-index b/mise-tasks/ci/cache-index index dc4f6f74e48..df1fac73892 100755 --- a/mise-tasks/ci/cache-index +++ b/mise-tasks/ci/cache-index @@ -7,14 +7,20 @@ set -euo pipefail export PATH="./node_modules/.bin:$PATH" READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson" -REALM_HOST="${REALM_BASE_URL#http://}" +# Strip whichever scheme is in front; realm-server speaks HTTPS+HTTP/2 in +# local dev (see infra:ensure-dev-cert) so REALM_BASE_URL can be https://. +REALM_HOST="${REALM_BASE_URL#*://}" +case "$REALM_BASE_URL" in + https://*) REALM_READY_SCHEME="https-get" ;; + *) REALM_READY_SCHEME="http-get" ;; +esac # Build readiness URLs for all enabled realms. 
# boxel-homepage is excluded: its source repo (cardstack/boxel-home) is private # and inaccessible to CI without extra credentials. READINESS_URLS="" for realm in base catalog legacy-catalog skills submissions experiments openrouter software-factory; do - READINESS_URLS="${READINESS_URLS:+${READINESS_URLS}|}http-get://${REALM_HOST}/${realm}/${READY_PATH}" + READINESS_URLS="${READINESS_URLS:+${READINESS_URLS}|}${REALM_READY_SCHEME}://${REALM_HOST}/${realm}/${READY_PATH}" done READINESS_URLS="${READINESS_URLS}|${MATRIX_URL_VAL}|http://localhost:5001|${ICONS_URL}|${HOST_URL}" @@ -48,7 +54,12 @@ pnpm --dir=../matrix register-realm-users ensure_services_running echo "Waiting for realm readiness..." -wait-on -t 1800000 $(echo "$READINESS_URLS" | tr '|' ' ') & +# Probe-only TLS relaxation: the realm-server / host now serve the +# self-signed mkcert leaf on https://localhost:42XX and wait-on's +# in-process axios doesn't pick up NODE_EXTRA_CA_CERTS reliably on CI +# runners. The actual services under test still validate normally. +NODE_TLS_REJECT_UNAUTHORIZED=0 \ + wait-on -t 1800000 $(echo "$READINESS_URLS" | tr '|' ' ') & WAIT_ON_PID=$! while kill -0 "$WAIT_ON_PID" 2>/dev/null; do diff --git a/mise-tasks/ci/serve-test-assets b/mise-tasks/ci/serve-test-assets index 28c5cfb7f9e..a71482cb18b 100755 --- a/mise-tasks/ci/serve-test-assets +++ b/mise-tasks/ci/serve-test-assets @@ -12,14 +12,30 @@ export PATH="./node_modules/.bin:$PATH" # If both services are already running (e.g. local dev), skip startup entirely. # start-server-and-test fails if run-p exits immediately, which happens when # start:icons and start:host-dist both detect existing servers and exit 0. -if curl --fail --silent "$ICONS_URL" >/dev/null 2>&1 && \ - curl --fail --silent "$HOST_URL" >/dev/null 2>&1; then +# -k tolerates the self-signed mkcert leaf the dev stack now serves on HOST_URL. 
+if curl -k --fail --silent "$ICONS_URL" >/dev/null 2>&1 && \ + curl -k --fail --silent "$HOST_URL" >/dev/null 2>&1; then echo "Icons and host already running, nothing to start" exec sleep 100000000 fi -WAIT_ON_TIMEOUT=180000 NODE_NO_WARNINGS=1 \ +# Build wait-on URLs: prefix `https-get://` (forces GET, scheme variant +# of plain `https://` which would default to HEAD) when HOST_URL is +# HTTPS. Vite preview behind the same-port dispatcher doesn't reliably +# answer HEAD over h2 in CI, so a HEAD-based probe times out even +# though GET succeeds (verified locally with curl). +# +# START_SERVER_AND_TEST_INSECURE=1 also disables wait-on cert validation: +# start-server-and-test passes `strictSSL: true` to wait-on by default +# (overriding the global NODE_TLS_REJECT_UNAUTHORIZED), and this is the +# documented escape hatch. Scopes to the probe only — the actual servers +# under test still present a real cert. +case "$HOST_URL" in + https://*) HOST_READY="https-get://${HOST_URL#https://}" ;; + *) HOST_READY="$HOST_URL" ;; +esac +WAIT_ON_TIMEOUT=180000 NODE_NO_WARNINGS=1 START_SERVER_AND_TEST_INSECURE=1 \ start-server-and-test \ 'run-p -ln start:icons start:host-dist' \ - "${ICONS_URL}|${HOST_URL}" \ + "${ICONS_URL}|${HOST_READY}" \ 'wait' diff --git a/mise-tasks/dev b/mise-tasks/dev index 4f18ad28feb..65b18293936 100755 --- a/mise-tasks/dev +++ b/mise-tasks/dev @@ -4,6 +4,27 @@ . "$(cd "$(dirname "$0")" && pwd)/lib/dev-common.sh" +# Run the dev-cert prerequisite UPFRONT — before the parallel stack +# starts. ensure-dev-cert fails fast with a clear "run +# `mise run infra:trust-dev-cert` once" message when the root CA is +# missing, but inside the multiplexed `start-server-and-test` output +# that message is buried under hundreds of buffered lines from the +# other services and the downstream failure cascade. Run it here so +# the message is the first and only thing on screen if it fails. +echo "[dev] Verifying dev TLS cert prerequisite…" +if ! 
BOXEL_DEV_INVOKED_AS='mise run dev' mise run infra:ensure-dev-cert; then + exit 1 +fi + +# `mise run dev` sources env-vars.sh once on entry. If +# infra:ensure-dev-cert just provisioned the cert (first-run case), this +# shell's env still lacks REALM_SERVER_TLS_CERT_FILE / _KEY_FILE — the +# downstream services would then start plain-HTTP while HOST_URL is +# already `https://...`, breaking every readiness probe. Re-source +# env-vars.sh so the TLS vars get picked up now that the cert exists. +# Idempotent in the steady-state case. +. "$(cd "$(dirname "$0")" && pwd)/lib/env-vars.sh" + # Enable job control so backgrounded subprocesses run in their own process # group. Without this, Ctrl-C is delivered to the whole foreground group; # npm-run-all2 (`run-p`) tends to exit before propagating SIGINT to its diff --git a/mise-tasks/dev-all b/mise-tasks/dev-all index 0562559e66b..473096eaf48 100755 --- a/mise-tasks/dev-all +++ b/mise-tasks/dev-all @@ -4,6 +4,31 @@ . "$(cd "$(dirname "$0")" && pwd)/lib/dev-common.sh" +# Run the dev-cert prerequisite UPFRONT — before we spawn the host app +# or the parallel server stack. ensure-dev-cert fails fast with a clear +# "run `mise run infra:trust-dev-cert` once" message when the root CA +# is missing, but inside the multiplexed `start-server-and-test` / +# `run-p -ln` output that message is one of hundreds of buffered lines +# the user has to scroll past, and the cascade of downstream failures +# (vite restart, prerender / worker-manager teardown) buries it. Run +# it here so it's the first thing a dev sees when the cert isn't +# installed; if it passes, the inner `mise run` invocations that +# depend on it are fast no-ops (idempotent leaf-cert check). +echo "[dev-all] Verifying dev TLS cert prerequisite…" +if ! BOXEL_DEV_INVOKED_AS='mise run dev-all' mise run infra:ensure-dev-cert; then + exit 1 +fi + +# `mise run dev-all` sources env-vars.sh once on entry. 
If +# infra:ensure-dev-cert just provisioned the cert (first-run case), this +# shell's env still lacks REALM_SERVER_TLS_CERT_FILE / _KEY_FILE, and +# the next `pnpm --filter @cardstack/host start` inherits that gap and +# brings vite up plain-HTTP — but HOST_URL is already +# `https://localhost:4200`, so the readiness curl probe times out. +# Re-source env-vars.sh so the TLS vars get picked up now that the cert +# exists. Idempotent in the steady-state case. +. "$(cd "$(dirname "$0")" && pwd)/lib/env-vars.sh" + # Enable job control so backgrounded subprocesses run in their own process # group. Without this, Ctrl-C is delivered to the whole foreground group; # npm-run-all2 (`run-p`) tends to exit before propagating SIGINT to its @@ -48,7 +73,7 @@ while true; do echo "ERROR: Host app process died." >&2 exit 1 fi - if [ "$(curl -s -o /dev/null -w '%{http_code}' "$HOST_URL" 2>/dev/null)" = "200" ]; then + if [ "$(curl -ks -o /dev/null -w '%{http_code}' "$HOST_URL" 2>/dev/null)" = "200" ]; then break fi sleep 2 diff --git a/mise-tasks/infra/ensure-dev-cert b/mise-tasks/infra/ensure-dev-cert new file mode 100755 index 00000000000..7656892f2bd --- /dev/null +++ b/mise-tasks/infra/ensure-dev-cert @@ -0,0 +1,184 @@ +#!/usr/bin/env bash +#MISE description="Provision a local dev TLS cert so realm-server can speak HTTP/2" +# +# Generates ~/.local/share/boxel/dev-certs/{localhost.pem, localhost-key.pem} +# using mkcert. Idempotent — re-runs are a no-op if the cert already exists +# and isn't near expiry. +# +# Why this exists +# --------------- +# Heavy aggregator-card prerenders (Cohort, dashboards) fan out 80+ +# federated-search requests in a single Chromium tab. Chrome's HTTP/1.1 +# per-origin connection limit (6) serializes them and turns one cohort +# render into multiple minutes. HTTP/2 multiplexes them over a single +# TCP connection and the same render finishes in seconds. +# +# Browsers only do HTTP/2 over TLS, so the realm-server has to terminate +# a cert. 
We use mkcert because: +# 1. It can install a local CA into the system trust store (one-time, +# requires sudo) so the dev's normal browser and command-line tools +# silently accept the cert. +# 2. The leaf cert it generates is short-lived and replaceable; it +# lives under the dev's home dir and is never committed. +# +# This task verifies the mkcert root CA is already trusted (system +# store + NSS DB) and fails fast with instructions otherwise. It does +# NOT run `mkcert -install` itself, because that prompts for sudo and +# `mise run dev-all`'s parallel runner multiplexes child stdout into +# one stream where the prompt is unsendable — the whole stack would +# deadlock and SIGTERM-cascade. The companion task +# `mise run infra:trust-dev-cert` handles the one-time install. After +# that, this task is a fast no-op on every invocation. + +set -euo pipefail + +CERT_DIR="${BOXEL_DEV_CERT_DIR:-$HOME/.local/share/boxel/dev-certs}" +CERT_FILE="$CERT_DIR/localhost.pem" +KEY_FILE="$CERT_DIR/localhost-key.pem" + +# mkcert is required even on the idempotent-skip path: env-vars.sh needs +# `mkcert -CAROOT` to populate `NODE_EXTRA_CA_CERTS` so Node clients +# trust the cert. An existing cert without a working mkcert binary leaves +# the trust path half-wired. +if ! command -v mkcert >/dev/null 2>&1; then + cat >&2 <<'EOF' +[ensure-dev-cert] mkcert is required but not installed. + +The realm-server speaks HTTPS+HTTP/2 in every environment — there is +no HTTP fallback. mkcert provisions the cert; install it before +running `mise run dev` (or running CI): + + Linux (Debian/Ubuntu): sudo apt install -y mkcert libnss3-tools + Linux (Fedora/RHEL): sudo dnf install -y mkcert nss-tools + macOS (Homebrew): brew install mkcert nss + +Then re-run `mise run infra:ensure-dev-cert` (or just `mise run dev`). + +See the repo-root README ("Local HTTPS dev access") for the why. 
+EOF + exit 1 +fi + +# Verify the mkcert root CA is already trusted in both the system store +# and the user's NSS DB. If either is missing, fail fast and point at +# `infra:trust-dev-cert` to install it. Sequenced before the leaf-cert +# idempotent-skip so re-runs still surface a trust regression (e.g. +# someone deleted the cert from /etc/ssl/certs) rather than booting a +# realm-server with a half-wired trust chain. +CAROOT="$(mkcert -CAROOT 2>/dev/null || true)" +ROOT_CA_FILE="" +if [ -n "$CAROOT" ] && [ -f "$CAROOT/rootCA.pem" ]; then + ROOT_CA_FILE="$CAROOT/rootCA.pem" +fi + +system_trust_has_root_ca() { + [ -n "$ROOT_CA_FILE" ] || return 1 + case "$(uname -s)" in + Linux) + # `mkcert -install` adds the CA as a file named like + # mkcert_development_CA_.{crt,pem} under the update-ca- + # certificates source dir, and ca-certificates-symlinker hashes + # it into /etc/ssl/certs/. Check both locations by filename — + # grepping the file contents doesn't work because PEM is base64- + # encoded and the subject string isn't present in plaintext. + ls /etc/ssl/certs/mkcert_development_CA_*.pem >/dev/null 2>&1 && return 0 + ls /etc/ssl/certs/mkcert_development_CA_*.crt >/dev/null 2>&1 && return 0 + ls /usr/local/share/ca-certificates/mkcert_development_CA_*.crt >/dev/null 2>&1 && return 0 + return 1 + ;; + Darwin) + security find-certificate -c 'mkcert' /Library/Keychains/System.keychain >/dev/null 2>&1 + ;; + *) + return 1 + ;; + esac +} + +nss_db_has_root_ca() { + command -v certutil >/dev/null 2>&1 || return 1 + [ -d "$HOME/.pki/nssdb" ] || return 1 + certutil -d "sql:$HOME/.pki/nssdb" -L 2>/dev/null | grep -q 'mkcert' +} + +if ! system_trust_has_root_ca || ! nss_db_has_root_ca; then + # CI runners have passwordless sudo, so we can run the install + # ourselves without ever prompting — that path is load-bearing for + # `./.github/actions/init` to set the cert up on a fresh runner. 
+ # `sudo -n true` succeeds silently when sudo won't prompt for a + # password (passwordless config or a cached credential), and exits + # non-zero otherwise. We only auto-install when it succeeds — on a + # dev's machine where sudo would prompt, we fail fast with a clear + # message pointing at `infra:trust-dev-cert`, because letting the + # prompt fire from inside `mise run dev` / `dev-all`'s parallel + # stack would deadlock the whole stack on unsendable stdin. + if sudo -n true 2>/dev/null; then + echo "[ensure-dev-cert] mkcert root CA missing; running trust-dev-cert (passwordless sudo available)" + mise run infra:trust-dev-cert + else + # `BOXEL_DEV_INVOKED_AS` is set by the dev / dev-all wrappers so we + # can name the outer command exactly in the rerun hint. Falls back + # to ensure-dev-cert itself, which is the most likely command when + # someone invoked this task directly. + invoked_as="${BOXEL_DEV_INVOKED_AS:-mise run infra:ensure-dev-cert}" + if [ "$(uname -s)" = "Linux" ] && ! command -v certutil >/dev/null 2>&1; then + # System store may already be trusted but Chromium needs the NSS + # DB path, which requires certutil. Call this out specifically — + # the generic "not installed" message would have a dev re-run + # `trust-dev-cert` and get the same warning mkcert prints. + cat >&2 <&2 </dev/null 2>&1; then + if openssl x509 -in "$CERT_FILE" -noout -text 2>/dev/null \ + | grep -q 'DNS:\*\.localhost'; then + exit 0 + fi + echo "[ensure-dev-cert] Existing cert at $CERT_FILE is missing *.localhost SAN; regenerating." + else + echo "[ensure-dev-cert] Existing cert at $CERT_FILE is near expiry; regenerating." 
+ fi +fi + +mkdir -p "$CERT_DIR" + +echo "[ensure-dev-cert] Generating cert at $CERT_FILE" +mkcert \ + -cert-file "$CERT_FILE" \ + -key-file "$KEY_FILE" \ + localhost 127.0.0.1 ::1 "*.localhost" published.realm diff --git a/mise-tasks/infra/migrate-matrix-account-data-http-to-https b/mise-tasks/infra/migrate-matrix-account-data-http-to-https new file mode 100755 index 00000000000..ab4c7c777a7 --- /dev/null +++ b/mise-tasks/infra/migrate-matrix-account-data-http-to-https @@ -0,0 +1,19 @@ +#!/usr/bin/env bash +#MISE description="Rewrite app.boxel.realms account_data in every local matrix user from http://localhost:42XX to https://" +#MISE dir="packages/matrix" +# +# One-off companion to the `1779100257124_canonical-url-http-to-https` +# postgres migration. The postgres migration rewrites realm-server's +# own state; this script handles the per-user state stored in synapse +# (the workspace list every Boxel user reads via +# `getAccountDataFromServer('app.boxel.realms')` on boot). Without it, +# already-logged-in dev accounts keep hitting `http://localhost:4201/...`, +# the dispatcher 301-redirects to https://, and the browser blocks +# every CORS preflight ("Redirect is not allowed for a preflight +# request"). +# +# Safe to re-run. Each user that has only https URLs is skipped. + +set -euo pipefail + +pnpm migrate-account-data-http-to-https diff --git a/mise-tasks/infra/migrate-matrix-account-data-https-to-http b/mise-tasks/infra/migrate-matrix-account-data-https-to-http new file mode 100755 index 00000000000..4c7c0e6fdde --- /dev/null +++ b/mise-tasks/infra/migrate-matrix-account-data-https-to-http @@ -0,0 +1,15 @@ +#!/usr/bin/env bash +#MISE description="Reverse-migrate app.boxel.realms account_data from https://localhost:42XX back to http:// (companion to the canonical-url postgres migrate-down)" +#MISE dir="packages/matrix" +# +# Symmetric reverse of `migrate-matrix-account-data-http-to-https`. 
Used +# when you `pnpm migrate down` the canonical-url-http-to-https postgres +# migration — you also need to flip the per-user synapse account_data +# back to http:// so the browser keeps matching the (now-http) realm +# canonicals in the DB. +# +# Safe to re-run. Each user that already has http URLs is skipped. + +set -euo pipefail + +pnpm migrate-account-data-https-to-http diff --git a/mise-tasks/infra/trust-dev-cert b/mise-tasks/infra/trust-dev-cert new file mode 100755 index 00000000000..c9bb2872cfc --- /dev/null +++ b/mise-tasks/infra/trust-dev-cert @@ -0,0 +1,72 @@ +#!/usr/bin/env bash +#MISE description="Install mkcert's root CA into the system trust store + NSS DB (one-time, prompts for sudo)" +# +# One-time setup partner for `infra:ensure-dev-cert`. ensure-dev-cert +# refuses to invoke `mkcert -install` itself because the sudo prompt +# stalls `mise run dev-all`'s parallel runner — it just verifies the +# root CA is already trusted and fails fast otherwise. This task is +# what the dev runs to satisfy that prerequisite. +# +# What it does: +# 1. Creates ~/.pki/nssdb if missing, so `mkcert -install` lands its +# root CA in Chromium/Firefox's trust store on Linux (mkcert +# silently skips NSS install when the DB doesn't already exist). +# 2. Runs `mkcert -install` — adds the root CA to /etc/ssl/certs +# (Linux) or the system keychain (macOS) and to the NSS DB. +# +# Both steps are idempotent — re-running is safe. + +set -euo pipefail + +if ! command -v mkcert >/dev/null 2>&1; then + cat >&2 <<'EOF' +[trust-dev-cert] mkcert is not installed. Install it first: + + Linux (Debian/Ubuntu): sudo apt install -y mkcert libnss3-tools + Linux (Fedora/RHEL): sudo dnf install -y mkcert nss-tools + macOS (Homebrew): brew install mkcert nss + +Then re-run `mise run infra:trust-dev-cert`. +EOF + exit 1 +fi + +# certutil (from libnss3-tools / nss-tools) is required on Linux for +# mkcert to land its root CA in Chromium/Firefox's NSS trust store. 
+# Without it, mkcert -install only updates /etc/ssl/certs — Chromium +# on Linux reads NSS, not the system store, so it still rejects the +# leaf cert and every h2 fetch from a test browser hangs. macOS's +# Keychain is the system store for both Safari and Chrome, so the +# requirement only applies to Linux. +if [ "$(uname -s)" = "Linux" ] && ! command -v certutil >/dev/null 2>&1; then + cat >&2 <<'EOF' +[trust-dev-cert] `certutil` is not installed, but it's required on Linux +for mkcert to add its root CA to Chromium/Firefox's NSS trust store. +Without it, Chromium will still reject the dev cert and every h2 fetch +from a test browser will hang. + +Install the NSS tools package, then re-run this task: + + Debian/Ubuntu: sudo apt install -y libnss3-tools + Fedora/RHEL: sudo dnf install -y nss-tools + +Then re-run `mise run infra:trust-dev-cert`. +EOF + exit 1 +fi + +# Create the NSS DB up front so `mkcert -install` adds its root CA to +# Chromium/Firefox. mkcert skips NSS install when the DB doesn't yet +# exist, which on a fresh Linux user account is the default state. +if command -v certutil >/dev/null 2>&1; then + NSSDB_DIR="$HOME/.pki/nssdb" + if [ ! -d "$NSSDB_DIR" ]; then + echo "[trust-dev-cert] Creating empty NSS DB at $NSSDB_DIR" + mkdir -p "$NSSDB_DIR" + certutil -d "sql:$NSSDB_DIR" -N --empty-password >/dev/null 2>&1 || true + fi +fi + +echo "[trust-dev-cert] Running mkcert -install (will prompt for sudo)" +mkcert -install +echo "[trust-dev-cert] Done. Re-run \`mise run dev-all\` to start the dev stack." diff --git a/mise-tasks/lib/dev-common.sh b/mise-tasks/lib/dev-common.sh index 72f75884d68..d030b2b1ac9 100644 --- a/mise-tasks/lib/dev-common.sh +++ b/mise-tasks/lib/dev-common.sh @@ -243,27 +243,40 @@ sweep_orphaned_services() { READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson" +# Pick wait-on's protocol prefix based on the realm-server's scheme. 
Local +# dev runs HTTPS+HTTP/2 by default; tests/CI fall back to plain HTTP when +# `infra:ensure-dev-cert` hasn't run. `${REALM_BASE_URL#*://}` strips +# whichever scheme is in use to feed wait-on's authority-only form. +case "$REALM_BASE_URL" in + https://*) REALM_READY_SCHEME="https-get" ;; + *) REALM_READY_SCHEME="http-get" ;; +esac +case "$REALM_TEST_URL" in + https://*) REALM_TEST_READY_SCHEME="https-get" ;; + *) REALM_TEST_READY_SCHEME="http-get" ;; +esac + # Phase 1 readiness URLs -BASE_REALM_READY="http-get://${REALM_BASE_URL#http://}/base/${READY_PATH}" -SKILLS_READY="http-get://${REALM_BASE_URL#http://}/skills/${READY_PATH}" +BASE_REALM_READY="${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/base/${READY_PATH}" +SKILLS_READY="${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/skills/${READY_PATH}" PHASE1_URLS="${BASE_REALM_READY}|${SKILLS_READY}" if [ -z "${SKIP_CATALOG:-}" ]; then - PHASE1_URLS="${PHASE1_URLS}|http-get://${REALM_BASE_URL#http://}/catalog/${READY_PATH}" - PHASE1_URLS="${PHASE1_URLS}|http-get://${REALM_BASE_URL#http://}/legacy-catalog/${READY_PATH}" + PHASE1_URLS="${PHASE1_URLS}|${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/catalog/${READY_PATH}" + PHASE1_URLS="${PHASE1_URLS}|${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/legacy-catalog/${READY_PATH}" fi if [ -z "${SKIP_BOXEL_HOMEPAGE:-}" ]; then - PHASE1_URLS="${PHASE1_URLS}|http-get://${REALM_BASE_URL#http://}/boxel-homepage/${READY_PATH}" + PHASE1_URLS="${PHASE1_URLS}|${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/boxel-homepage/${READY_PATH}" fi if [ -z "${SKIP_EXPERIMENTS:-}" ]; then - PHASE1_URLS="${PHASE1_URLS}|http-get://${REALM_BASE_URL#http://}/experiments/${READY_PATH}" + PHASE1_URLS="${PHASE1_URLS}|${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/experiments/${READY_PATH}" fi -PHASE1_URLS="${PHASE1_URLS}|http-get://${REALM_BASE_URL#http://}/software-factory/${READY_PATH}" +PHASE1_URLS="${PHASE1_URLS}|${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/software-factory/${READY_PATH}" 
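A minimal sketch of the scheme-pick plus authority-strip pattern used here: choose wait-on's GET-forcing prefix from the URL's scheme, then strip whichever scheme is present with `${url#*://}`. Variable names and the readiness path are illustrative, not the script's real ones.

```sh
# Pick the wait-on prefix from the URL scheme, then strip the scheme
# (shortest match of `*://`) to get the authority-only form wait-on wants.
url="https://localhost:4201"
case "$url" in
  https://*) scheme="https-get" ;;
  *)         scheme="http-get" ;;
esac
ready="${scheme}://${url#*://}/base/_readiness-check"
# ready is now "https-get://localhost:4201/base/_readiness-check"
```

The same two-step works unchanged when the URL is plain `http://`, which is what lets one script serve both the HTTPS dev path and the HTTP fallback.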
PHASE1_URLS="${PHASE1_URLS}|${MATRIX_URL_VAL}|http://localhost:5001|${ICONS_URL}" # Phase 2 readiness URL -NODE_TEST_REALM_READY="http-get://${REALM_TEST_URL#http://}/node-test/${READY_PATH}" +NODE_TEST_REALM_READY="${REALM_TEST_READY_SCHEME}://${REALM_TEST_URL#*://}/node-test/${READY_PATH}" # In environment mode, bootstrap infra before starting services if [ -n "$BOXEL_ENVIRONMENT" ]; then diff --git a/mise-tasks/lib/env-vars.sh b/mise-tasks/lib/env-vars.sh index db62961754c..74775d46bd1 100755 --- a/mise-tasks/lib/env-vars.sh +++ b/mise-tasks/lib/env-vars.sh @@ -83,15 +83,17 @@ else # Transitioning from env mode to standard mode in the same shell: # reset derived variables to standard defaults to avoid stale env-mode values. - # Service URLs - export REALM_BASE_URL="http://localhost:4201" - export REALM_TEST_URL="http://localhost:4202" + # Service URLs. Realm-server speaks HTTPS+HTTP/2 in local dev — the + # dev cert is mandatory (see `infra:ensure-dev-cert` and the + # repo-root README "Local HTTPS dev access" section). + export REALM_BASE_URL="https://localhost:4201" + export REALM_TEST_URL="https://localhost:4202" export MATRIX_URL_VAL="http://localhost:8008" export WORKER_MGR_URL="http://localhost:4210" export WORKER_TEST_MGR_URL="http://localhost:4211" export PRERENDER_MGR_URL="http://localhost:4222" export ICONS_URL="http://localhost:4206" - export HOST_URL="http://localhost:4200" + export HOST_URL="https://localhost:4200" # Database export PGDATABASE="boxel" @@ -113,15 +115,18 @@ else # Fresh standard mode or non-env-mode shell: # use :- so production/staging env vars are not clobbered. - # Service URLs — use :- so production/staging env vars are not clobbered - export REALM_BASE_URL="${REALM_BASE_URL:-http://localhost:4201}" - export REALM_TEST_URL="${REALM_TEST_URL:-http://localhost:4202}" + # Service URLs — use :- so production/staging env vars are not + # clobbered. 
Realm-server speaks HTTPS+HTTP/2 in local dev; the dev + # cert is mandatory (see `infra:ensure-dev-cert` and the repo-root + # README "Local HTTPS dev access"). + export REALM_BASE_URL="${REALM_BASE_URL:-https://localhost:4201}" + export REALM_TEST_URL="${REALM_TEST_URL:-https://localhost:4202}" export MATRIX_URL_VAL="${MATRIX_URL_VAL:-http://localhost:8008}" export WORKER_MGR_URL="${WORKER_MGR_URL:-http://localhost:4210}" export WORKER_TEST_MGR_URL="${WORKER_TEST_MGR_URL:-http://localhost:4211}" export PRERENDER_MGR_URL="${PRERENDER_MGR_URL:-http://localhost:4222}" export ICONS_URL="${ICONS_URL:-http://localhost:4206}" - export HOST_URL="${HOST_URL:-http://localhost:4200}" + export HOST_URL="${HOST_URL:-https://localhost:4200}" # Database export PGDATABASE="${PGDATABASE:-boxel}" @@ -142,4 +147,73 @@ else fi unset _PREV_ENV_MODE + + # Local HTTPS dev access: when the cert provisioned by + # `mise run infra:ensure-dev-cert` is present, expose its paths to + # the realm-server so it terminates HTTPS+HTTP/2 on the canonical + # port, and point Node clients at mkcert's local CA via + # NODE_EXTRA_CA_CERTS so they trust the cert without requiring + # `mkcert -install` to have written it into the system trust store. + # If the cert is missing, the realm-server falls back to plain HTTP + # (tests/CI path). See the repo-root README "Local HTTPS dev access". + _BOXEL_DEV_CERT_DIR="${BOXEL_DEV_CERT_DIR:-$HOME/.local/share/boxel/dev-certs}" + _BOXEL_DEV_CERT_FILE="$_BOXEL_DEV_CERT_DIR/localhost.pem" + _BOXEL_DEV_KEY_FILE="$_BOXEL_DEV_CERT_DIR/localhost-key.pem" + if [ -f "$_BOXEL_DEV_CERT_FILE" ] && [ -f "$_BOXEL_DEV_KEY_FILE" ]; then + export REALM_SERVER_TLS_CERT_FILE="$_BOXEL_DEV_CERT_FILE" + export REALM_SERVER_TLS_KEY_FILE="$_BOXEL_DEV_KEY_FILE" + # Vite's dev server terminates TLS using the same cert (see + # packages/host/vite.config.mjs). 
Flip HOST_URL to https so every + # consumer (browser, realm-server distURL rewriter, prerender + # standby probe) hits the same scheme — mixing http + https + # between vite and realm-server triggers CORS preflight failures + # ("Redirect is not allowed for a preflight request"). + export HOST_URL="https://localhost:4200" + if command -v mkcert >/dev/null 2>&1; then + _BOXEL_MKCERT_CAROOT="$(mkcert -CAROOT 2>/dev/null || true)" + if [ -n "$_BOXEL_MKCERT_CAROOT" ] && [ -f "$_BOXEL_MKCERT_CAROOT/rootCA.pem" ]; then + # Node's NODE_EXTRA_CA_CERTS accepts a single PEM file path (not + # a colon-separated list). If the dev has already pointed it at + # something, leave their value in place — they presumably have + # mkcert's CA in there already, or know what they're doing. + # Otherwise point at mkcert's rootCA so realm-server fetches + # validate against the local cert without `mkcert -install`. + if [ -z "${NODE_EXTRA_CA_CERTS:-}" ]; then + export NODE_EXTRA_CA_CERTS="$_BOXEL_MKCERT_CAROOT/rootCA.pem" + fi + fi + unset _BOXEL_MKCERT_CAROOT + fi + fi + unset _BOXEL_DEV_CERT_DIR _BOXEL_DEV_CERT_FILE _BOXEL_DEV_KEY_FILE + + # Puppeteer 24.35 (and the lockfile's tree) bundles Chrome 143, which + # has a known h2 stream-window bug that hangs the dev prerender forever + # on the first cold-start fetch of vite's large pre-optimized + # `indexeddb-crypto-store` chunk (matrix-js-sdk). Newer Chrome (148+) + # doesn't hit the bug. Both prerender's BrowserManager and the + # standby-warmup probe (`scripts/wait-for-host-standby.ts`) already + # honor `PUPPETEER_EXECUTABLE_PATH`, so just point them at the system + # chrome when one's installed. Devs without google-chrome installed + # keep the bundled puppeteer chromium — they'll see the hang stall + # longer until vite's optimizer cache warms up. 
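A sketch of why the Chrome-path detection below uses explicit `elif` checks rather than a for-loop word list: under POSIX sh, an unquoted expansion containing spaces is field-split, so a macOS-style application path would be mangled into several words. The path here is only an example value.

```sh
# Unquoted on purpose: demonstrates that a space-containing path is
# split into multiple words by the for-loop word list under POSIX sh.
path="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"
words=0
for p in $path; do
  words=$((words + 1))
done
# words is 3, not 1: the single path became three separate words.
```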
+ if [ -z "${PUPPETEER_EXECUTABLE_PATH:-}" ]; then + # Explicit checks (not a for-loop) so the macOS path's embedded space + # doesn't get word-split by /bin/sh — env-vars.sh runs under whatever + # shell mise invokes, and POSIX sh handles backslash-escapes in for- + # loop word lists inconsistently across implementations. + if [ -x /usr/bin/google-chrome ]; then + export PUPPETEER_EXECUTABLE_PATH=/usr/bin/google-chrome + elif [ -x /usr/bin/google-chrome-stable ]; then + export PUPPETEER_EXECUTABLE_PATH=/usr/bin/google-chrome-stable + elif [ -x /usr/bin/chromium ]; then + export PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium + elif [ -x /usr/bin/chromium-browser ]; then + export PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser + elif [ -x "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" ]; then + export PUPPETEER_EXECUTABLE_PATH="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" + elif [ -x "/Applications/Chromium.app/Contents/MacOS/Chromium" ]; then + export PUPPETEER_EXECUTABLE_PATH="/Applications/Chromium.app/Contents/MacOS/Chromium" + fi + fi fi diff --git a/mise-tasks/lib/test-dev-common.sh b/mise-tasks/lib/test-dev-common.sh index 765eefbd562..be15dd1b1f8 100755 --- a/mise-tasks/lib/test-dev-common.sh +++ b/mise-tasks/lib/test-dev-common.sh @@ -23,8 +23,8 @@ SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)" cd "$SCRIPT_DIR/../../packages/realm-server" # Stub the env-driven globals dev-common.sh references during sourcing so # the script doesn't error out before we get to kill_tree. 
-: "${REALM_BASE_URL:=http://localhost:4201}" -: "${REALM_TEST_URL:=http://localhost:4202}" +: "${REALM_BASE_URL:=https://localhost:4201}" +: "${REALM_TEST_URL:=https://localhost:4202}" : "${MATRIX_URL_VAL:=http://localhost:8008}" : "${ICONS_URL:=http://localhost:4206}" export REALM_BASE_URL REALM_TEST_URL MATRIX_URL_VAL ICONS_URL diff --git a/mise-tasks/services/prerender b/mise-tasks/services/prerender index 6be0a1c0c2b..1c601be02a8 100755 --- a/mise-tasks/services/prerender +++ b/mise-tasks/services/prerender @@ -17,7 +17,7 @@ LOG_TEE="../observability/scripts/dev-log-tee.sh" if [ -n "$ENV_MODE" ]; then DEFAULT_HOST_URL="${HOST_URL}" else - DEFAULT_HOST_URL="${HOST_URL:-http://localhost:4200}" + DEFAULT_HOST_URL="${HOST_URL:-https://localhost:4200}" fi # Dev mode only: gate startup on the host's `/_standby` route being @@ -42,9 +42,20 @@ if [ -z "$ENV_MODE" ]; then ts-node --transpileOnly ./scripts/wait-for-host-standby.ts || exit $? fi +# In local HTTPS dev (vite on https://localhost:4200 with HTTP/2), +# the prerender's puppeteer chrome triggers vite's first cold-start of +# ~1000 packages over h2. Even when wait-for-host-standby has already +# warmed vite once, the new chrome instance re-downloads everything +# and the initial `_standby` load can exceed the 30s default. Give it +# headroom by default; production / hosted runners can override. 
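A quick sketch of the `: "${VAR:=default}"` idiom these defaults rely on: `:` is a no-op builtin, so the parameter expansion's side effect (assign when the variable is unset or empty) is all that happens, and an operator's explicit override survives. The variable name is illustrative.

```sh
# First expansion: TIMEOUT_MS is unset, so the default is assigned.
unset TIMEOUT_MS
: "${TIMEOUT_MS:=120000}"
first="$TIMEOUT_MS"

# Second expansion: TIMEOUT_MS is already set, so it is left untouched.
TIMEOUT_MS=30000
: "${TIMEOUT_MS:=120000}"
second="$TIMEOUT_MS"
# first is "120000", second is "30000"
```

This is why re-sourcing env-vars.sh is safe in the steady state: defaulting is idempotent and never clobbers a value the environment already carries.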
+case "${BOXEL_HOST_URL:-$DEFAULT_HOST_URL}" in + https://*) : "${PRERENDER_STANDBY_TIMEOUT_MS:=120000}" ;; +esac + NODE_ENV=development \ NODE_NO_WARNINGS=1 \ BOXEL_HOST_URL="${BOXEL_HOST_URL:-$DEFAULT_HOST_URL}" \ + PRERENDER_STANDBY_TIMEOUT_MS="${PRERENDER_STANDBY_TIMEOUT_MS:-}" \ ts-node \ --transpileOnly prerender/prerender-server \ --port="${PRERENDER_PORT}" \ diff --git a/mise-tasks/services/realm-server b/mise-tasks/services/realm-server index b74c3905ad7..864f3aca844 100755 --- a/mise-tasks/services/realm-server +++ b/mise-tasks/services/realm-server @@ -1,6 +1,6 @@ #!/usr/bin/env bash #MISE description="Start realm development server" -#MISE depends=["infra:ensure-traefik", "infra:ensure-pg", "infra:ensure-db", "infra:wait-for-prerender"] +#MISE depends=["infra:ensure-dev-cert", "infra:ensure-traefik", "infra:ensure-pg", "infra:ensure-db", "infra:wait-for-prerender"] #MISE dir="packages/realm-server" # Propagate realm-server's exit status through the trailing `| dev-log-tee.sh` diff --git a/mise-tasks/services/realm-server-base b/mise-tasks/services/realm-server-base index 65a7fabbc69..c29f10d106a 100755 --- a/mise-tasks/services/realm-server-base +++ b/mise-tasks/services/realm-server-base @@ -1,6 +1,6 @@ #!/bin/sh #MISE description="Start base realm server only" -#MISE depends=["infra:ensure-pg", "infra:wait-for-prerender"] +#MISE depends=["infra:ensure-dev-cert", "infra:ensure-pg", "infra:wait-for-prerender"] #MISE dir="packages/realm-server" if [ -z "$MATRIX_REGISTRATION_SHARED_SECRET" ]; then @@ -29,4 +29,4 @@ NODE_ENV=development \ --path='../base' \ --username='base_realm' \ --fromUrl='https://cardstack.com/base/' \ - --toUrl='http://localhost:4201/base/' + --toUrl='https://localhost:4201/base/' diff --git a/mise-tasks/services/test-realms b/mise-tasks/services/test-realms index c5cdda42541..052ad862acd 100755 --- a/mise-tasks/services/test-realms +++ b/mise-tasks/services/test-realms @@ -1,6 +1,6 @@ #!/bin/sh #MISE description="Start test realm 
servers" -#MISE depends=["infra:ensure-traefik", "infra:ensure-pg", "infra:wait-for-prerender"] +#MISE depends=["infra:ensure-dev-cert", "infra:ensure-traefik", "infra:ensure-pg", "infra:wait-for-prerender"] #MISE dir="packages/realm-server" SCRIPTS_DIR="./scripts" diff --git a/mise-tasks/services/worker-base b/mise-tasks/services/worker-base index 0f6aa184d66..f7faa8c08ea 100755 --- a/mise-tasks/services/worker-base +++ b/mise-tasks/services/worker-base @@ -18,4 +18,4 @@ NODE_ENV=development \ --prerendererUrl="${PRERENDER_MGR_URL}" \ \ --fromUrl='https://cardstack.com/base/' \ - --toUrl='http://localhost:4201/base/' + --toUrl='https://localhost:4201/base/' diff --git a/mise-tasks/test-services/host b/mise-tasks/test-services/host index 53789498d11..03f5f6cc23f 100755 --- a/mise-tasks/test-services/host +++ b/mise-tasks/test-services/host @@ -1,5 +1,6 @@ #!/bin/sh #MISE description="Start services for host test suite (trimmed catalog)" +#MISE depends=["infra:ensure-dev-cert"] #MISE dir="packages/realm-server" set -eu @@ -33,12 +34,30 @@ done export CATALOG_REALM_PATH="$CATALOG_TEMP_PATH" READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson" -BASE_REALM_READY="http-get://${REALM_BASE_URL#http://}/base/${READY_PATH}" -NODE_TEST_REALM_READY="http-get://${REALM_TEST_URL#http://}/node-test/${READY_PATH}" +# Pick wait-on's protocol prefix based on the realm-server's scheme. +# Realm-server speaks HTTPS+HTTP/2 in local dev (see infra:ensure-dev-cert); +# stripping `*://` instead of a hardcoded `http://` lets the same script +# work whether REALM_BASE_URL is https or http. 
+case "$REALM_BASE_URL" in + https://*) REALM_READY_SCHEME="https-get" ;; + *) REALM_READY_SCHEME="http-get" ;; +esac +case "$REALM_TEST_URL" in + https://*) REALM_TEST_READY_SCHEME="https-get" ;; + *) REALM_TEST_READY_SCHEME="http-get" ;; +esac +BASE_REALM_READY="${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/base/${READY_PATH}" +NODE_TEST_REALM_READY="${REALM_TEST_READY_SCHEME}://${REALM_TEST_URL#*://}/node-test/${READY_PATH}" HOST_TEST_LOG_LEVELS="${HOST_TEST_LOG_LEVELS:-*=info,realm:requests=warn,realm-index-updater=debug,index-runner=debug,index-perf=debug,index-writer=debug,worker=debug,worker-manager=debug}" SKIP_CATALOG="${SKIP_CATALOG:-}" +# START_SERVER_AND_TEST_INSECURE=1 scopes to the wait-on readiness probe. +# The realm-server / vite now serve the self-signed mkcert leaf on +# https://localhost:4201|4202|4200; start-server-and-test forces +# strictSSL:true on wait-on (overriding NODE_TLS_REJECT_UNAUTHORIZED), so +# the documented INSECURE flag is the right escape hatch. Does not affect +# the actual services under test. 
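For reference, a sketch of the pipe-joined resource string these scripts hand to start-server-and-test, which waits on every entry in parallel. The URLs below are placeholders, not the real readiness endpoints.

```sh
# Build the wait-on resource list incrementally; `|` separates entries.
resources="https-get://localhost:4201/base/ready"
resources="${resources}|http://localhost:8008"
resources="${resources}|http://localhost:4206"
# resources joins all three URLs with "|" in insertion order
```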
WAIT_ON_TIMEOUT=900000 \ SKIP_EXPERIMENTS=true \ SKIP_CATALOG="$SKIP_CATALOG" \ @@ -47,6 +66,7 @@ WAIT_ON_TIMEOUT=900000 \ CATALOG_REALM_PATH="$CATALOG_TEMP_PATH" \ LOG_LEVELS="$HOST_TEST_LOG_LEVELS" \ NODE_NO_WARNINGS=1 \ + START_SERVER_AND_TEST_INSECURE=1 \ start-server-and-test \ 'run-p -ln start:icons start:host-dist start:pg start:prerender-dev start:prerender-manager-dev start:matrix start:smtp start:worker-development start:development' \ "${BASE_REALM_READY}|${MATRIX_URL_VAL}|http://localhost:5001|${ICONS_URL}|${HOST_URL}" \ diff --git a/mise-tasks/test-services/matrix b/mise-tasks/test-services/matrix index 9e042c29ad3..b0fc72267cc 100755 --- a/mise-tasks/test-services/matrix +++ b/mise-tasks/test-services/matrix @@ -1,5 +1,6 @@ #!/bin/sh #MISE description="Start services for matrix test suite (base realm only)" +#MISE depends=["infra:ensure-dev-cert"] #MISE dir="packages/realm-server" export PATH="./node_modules/.bin:$PATH" @@ -7,9 +8,22 @@ export PATH="./node_modules/.bin:$PATH" pnpm --dir=../skills-realm skills:setup READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson" -BASE_REALM_READY="http-get://${REALM_BASE_URL#http://}/base/${READY_PATH}" +# Pick wait-on's protocol prefix based on the realm-server's scheme. +# Realm-server speaks HTTPS+HTTP/2 in local dev (see infra:ensure-dev-cert); +# stripping `*://` instead of a hardcoded `http://` lets the same script +# work whether REALM_BASE_URL is https or http. +case "$REALM_BASE_URL" in + https://*) REALM_READY_SCHEME="https-get" ;; + *) REALM_READY_SCHEME="http-get" ;; +esac +BASE_REALM_READY="${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/base/${READY_PATH}" +# START_SERVER_AND_TEST_INSECURE=1 scopes to the wait-on readiness probe so +# it can connect to the self-signed mkcert leaf on https://localhost:4200|4201. +# (NODE_TLS_REJECT_UNAUTHORIZED is insufficient — start-server-and-test +# forces strictSSL:true on wait-on, overriding the global env var.) 
WAIT_ON_TIMEOUT=600000 NODE_NO_WARNINGS=1 SKIP_SUBMISSION=true \ + START_SERVER_AND_TEST_INSECURE=1 \ start-server-and-test \ 'run-p -ln start:pg start:icons start:host-dist start:prerender-dev start:prerender-manager-dev start:worker-base start:base' \ "${BASE_REALM_READY}|${ICONS_URL}|${HOST_URL}" \ diff --git a/mise-tasks/test-services/realm-server b/mise-tasks/test-services/realm-server index 4b5a06f8bd1..0fa94c8c94a 100755 --- a/mise-tasks/test-services/realm-server +++ b/mise-tasks/test-services/realm-server @@ -1,22 +1,42 @@ #!/bin/sh #MISE description="Start services for realm-server test suite (icons + host-dist + dev-minimal services)" +#MISE depends=["infra:ensure-dev-cert"] #MISE dir="packages/realm-server" export PATH="./node_modules/.bin:$PATH" READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson" -BASE_REALM_READY="http-get://${REALM_BASE_URL#http://}/base/${READY_PATH}" -NODE_TEST_REALM_READY="http-get://${REALM_TEST_URL#http://}/node-test/${READY_PATH}" +# Pick wait-on's protocol prefix based on the realm-server's scheme. +# Realm-server speaks HTTPS+HTTP/2 in local dev (see infra:ensure-dev-cert); +# stripping `*://` instead of a hardcoded `http://` lets the same script +# work whether REALM_BASE_URL is https or http. +case "$REALM_BASE_URL" in + https://*) REALM_READY_SCHEME="https-get" ;; + *) REALM_READY_SCHEME="http-get" ;; +esac +case "$REALM_TEST_URL" in + https://*) REALM_TEST_READY_SCHEME="https-get" ;; + *) REALM_TEST_READY_SCHEME="http-get" ;; +esac +BASE_REALM_READY="${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/base/${READY_PATH}" +NODE_TEST_REALM_READY="${REALM_TEST_READY_SCHEME}://${REALM_TEST_URL#*://}/node-test/${READY_PATH}" +# START_SERVER_AND_TEST_INSECURE=1 scopes to the wait-on readiness probe. 
+# The realm-server now serves the self-signed mkcert leaf on +# https://localhost:4201|4202; start-server-and-test forces strictSSL:true +# on wait-on (overriding NODE_TLS_REJECT_UNAUTHORIZED), so the documented +# INSECURE flag is the right escape hatch. Does not affect services +# under test. WAIT_ON_TIMEOUT=900000 \ SKIP_EXPERIMENTS=true \ SKIP_CATALOG=true \ SKIP_BOXEL_HOMEPAGE=true \ SKIP_SUBMISSION=true \ NODE_NO_WARNINGS=1 \ + START_SERVER_AND_TEST_INSECURE=1 \ start-server-and-test \ 'run-p -ln start:icons start:host-dist start:pg start:prerender-dev start:prerender-manager-dev start:matrix start:smtp start:worker-development start:development' \ - "${BASE_REALM_READY}|http-get://${REALM_BASE_URL#http://}/software-factory/${READY_PATH}|${MATRIX_URL_VAL}|http://localhost:5001|${ICONS_URL}|${HOST_URL}" \ + "${BASE_REALM_READY}|${REALM_READY_SCHEME}://${REALM_BASE_URL#*://}/software-factory/${READY_PATH}|${MATRIX_URL_VAL}|http://localhost:5001|${ICONS_URL}|${HOST_URL}" \ 'run-p -ln start:worker-test start:test-realms' \ "${NODE_TEST_REALM_READY}" \ 'wait' diff --git a/package.json b/package.json index e317f41f98a..a3999266a3d 100644 --- a/package.json +++ b/package.json @@ -16,7 +16,7 @@ "deploy:boxel-ui:preview-staging": "pnpm run build-common-deps && cd packages/boxel-ui/test-app && pnpm exec ember deploy s3-preview-staging --verbose", "lint": "pnpm run --filter './packages/**' --if-present -r lint", "lint:fix": "pnpm run --filter './packages/**' --if-present -r lint:fix", - "openrouter:sync": "OPENROUTER_REALM_URL=${OPENROUTER_REALM_URL:-http://localhost:4201/openrouter/} pnpm --filter @cardstack/realm-server sync-openrouter-models", + "openrouter:sync": "OPENROUTER_REALM_URL=${OPENROUTER_REALM_URL:-https://localhost:4201/openrouter/} pnpm --filter @cardstack/realm-server sync-openrouter-models", "prepare-worktree-types": "pnpm --filter @cardstack/boxel-icons build && pnpm --filter @cardstack/boxel-ui build:types" }, "devDependencies": { diff --git 
a/packages/ai-bot/README.md b/packages/ai-bot/README.md index b4a4790e12e..c4dda479b26 100644 --- a/packages/ai-bot/README.md +++ b/packages/ai-bot/README.md @@ -63,7 +63,7 @@ It will be able to see any cards shared in the chat and can respond using GPT4 i You can deliberately trigger a specific patch by sending a message that starts `debug:patch:` and has the JSON patch you want returned. For example: ``` -debug:patch:{"attributes": {"cardId":"http://localhost:4200/experiments/Author/1", "patch": { "attributes": {"firstName": "David"}}}} +debug:patch:{"attributes": {"cardId":"https://localhost:4200/experiments/Author/1", "patch": { "attributes": {"firstName": "David"}}}} ``` This will return a patch with the ID of the last card you uploaded. This does not hit GPT4 and is useful for testing the integration of the two components without waiting for streaming responses. diff --git a/packages/bot-runner/README.md b/packages/bot-runner/README.md index d8e064d33ac..6b48efe8483 100644 --- a/packages/bot-runner/README.md +++ b/packages/bot-runner/README.md @@ -65,7 +65,7 @@ List registrations: Register via script ```sh -REALM_SERVER_URL="http://localhost:4201" \ +REALM_SERVER_URL="https://localhost:4201" \ REALM_SERVER_JWT="..." \ USERNAME="@submissionbot:localhost" \ ./packages/realm-server/scripts/register-bot.sh @@ -73,7 +73,7 @@ USERNAME="@submissionbot:localhost" \ Defaults and requirements: -- `REALM_SERVER_URL` (default: `http://localhost:4201`) +- `REALM_SERVER_URL` (default: `https://localhost:4201`) - `REALM_SERVER_JWT` (required) - `USERNAME` (default: `@user:localhost`, Matrix user id) diff --git a/packages/boxel-homepage-realm/README.md b/packages/boxel-homepage-realm/README.md index 9143c1224fb..a1a110861f7 100644 --- a/packages/boxel-homepage-realm/README.md +++ b/packages/boxel-homepage-realm/README.md @@ -59,7 +59,7 @@ This workflow is ideal for rapid iteration and testing of the homepage: 1. 
**Log in as homepage_writer** in the Boxel interface using the credentials above -2. **Visit the Boxel Homepage realm.** It's a hidden workspace accessible via `http://localhost:4200/boxel-homepage` +2. **Visit the Boxel Homepage realm.** It's a hidden workspace accessible via `https://localhost:4200/boxel-homepage` 3. **Edit files directly** in the Boxel interface or locally. - Changes are automatically saved to `packages/boxel-homepage-realm/contents/` diff --git a/packages/catalog-realm/online-product/OnlineProduct/canvas-backpack-travel.json b/packages/catalog-realm/online-product/OnlineProduct/canvas-backpack-travel.json index 365000f8441..61edefc7bf5 100644 --- a/packages/catalog-realm/online-product/OnlineProduct/canvas-backpack-travel.json +++ b/packages/catalog-realm/online-product/OnlineProduct/canvas-backpack-travel.json @@ -33,7 +33,7 @@ "module": "../online-product", "name": "OnlineProduct" }, - "realmURL": "http://localhost:4201/catalog/" + "realmURL": "https://localhost:4201/catalog/" } } } diff --git a/packages/catalog-realm/online-product/OnlineProduct/ceramic-coffee-mug-set.json b/packages/catalog-realm/online-product/OnlineProduct/ceramic-coffee-mug-set.json index 44680de73f7..97a7646fd45 100644 --- a/packages/catalog-realm/online-product/OnlineProduct/ceramic-coffee-mug-set.json +++ b/packages/catalog-realm/online-product/OnlineProduct/ceramic-coffee-mug-set.json @@ -33,7 +33,7 @@ "module": "../online-product", "name": "OnlineProduct" }, - "realmURL": "http://localhost:4201/catalog/" + "realmURL": "https://localhost:4201/catalog/" } } } diff --git a/packages/catalog-realm/online-product/OnlineProduct/eco-water-bottle.json b/packages/catalog-realm/online-product/OnlineProduct/eco-water-bottle.json index a82e34bd803..64d9c1af079 100644 --- a/packages/catalog-realm/online-product/OnlineProduct/eco-water-bottle.json +++ b/packages/catalog-realm/online-product/OnlineProduct/eco-water-bottle.json @@ -33,7 +33,7 @@ "module": "../online-product", 
"name": "OnlineProduct" }, - "realmURL": "http://localhost:4201/catalog/" + "realmURL": "https://localhost:4201/catalog/" } } } diff --git a/packages/catalog-realm/online-product/OnlineProduct/gaming-keyboard.json b/packages/catalog-realm/online-product/OnlineProduct/gaming-keyboard.json index fb6d4e984f7..f61354619df 100644 --- a/packages/catalog-realm/online-product/OnlineProduct/gaming-keyboard.json +++ b/packages/catalog-realm/online-product/OnlineProduct/gaming-keyboard.json @@ -33,7 +33,7 @@ "module": "../online-product", "name": "OnlineProduct" }, - "realmURL": "http://localhost:4201/catalog/" + "realmURL": "https://localhost:4201/catalog/" } } } diff --git a/packages/catalog-realm/online-product/OnlineProduct/organic-cotton-tshirt.json b/packages/catalog-realm/online-product/OnlineProduct/organic-cotton-tshirt.json index 8a4e93d164e..3f9d83abd18 100644 --- a/packages/catalog-realm/online-product/OnlineProduct/organic-cotton-tshirt.json +++ b/packages/catalog-realm/online-product/OnlineProduct/organic-cotton-tshirt.json @@ -33,7 +33,7 @@ "module": "../online-product", "name": "OnlineProduct" }, - "realmURL": "http://localhost:4201/catalog/" + "realmURL": "https://localhost:4201/catalog/" } } } diff --git a/packages/catalog-realm/online-product/OnlineProduct/smart-fitness-tracker.json b/packages/catalog-realm/online-product/OnlineProduct/smart-fitness-tracker.json index 4e483b15dba..b85bf026218 100644 --- a/packages/catalog-realm/online-product/OnlineProduct/smart-fitness-tracker.json +++ b/packages/catalog-realm/online-product/OnlineProduct/smart-fitness-tracker.json @@ -33,7 +33,7 @@ "module": "../online-product", "name": "OnlineProduct" }, - "realmURL": "http://localhost:4201/catalog/" + "realmURL": "https://localhost:4201/catalog/" } } } diff --git a/packages/catalog-realm/theme-generation/ThemeCreator/f29cce63-ba80-4afd-8a32-8f4e43d62a65.json b/packages/catalog-realm/theme-generation/ThemeCreator/f29cce63-ba80-4afd-8a32-8f4e43d62a65.json index 
c5ba927bdd2..33b6dc0d5d2 100644 --- a/packages/catalog-realm/theme-generation/ThemeCreator/f29cce63-ba80-4afd-8a32-8f4e43d62a65.json +++ b/packages/catalog-realm/theme-generation/ThemeCreator/f29cce63-ba80-4afd-8a32-8f4e43d62a65.json @@ -8,7 +8,7 @@ }, "type": "card", "attributes": { - "realm": "http://localhost:4201/experiments/", + "realm": "https://localhost:4201/experiments/", "prompt": "Create a theme in deep crimson, with bold contrast, minimal typography, and clean UI surfaces suitable for dashboards.", "codeRef": { "name": "default", diff --git a/packages/experiments-realm/AtomExamples/d7aa387b-6514-47c0-ace2-7de011453e51.json b/packages/experiments-realm/AtomExamples/d7aa387b-6514-47c0-ace2-7de011453e51.json index 4eef71cda70..709dfba079f 100644 --- a/packages/experiments-realm/AtomExamples/d7aa387b-6514-47c0-ace2-7de011453e51.json +++ b/packages/experiments-realm/AtomExamples/d7aa387b-6514-47c0-ace2-7de011453e51.json @@ -93,22 +93,22 @@ }, "trip.country": { "links": { - "self": "http://localhost:4201/experiments/Country/argentina" + "self": "https://localhost:4201/experiments/Country/argentina" } }, "trip.countries.0": { "links": { - "self": "http://localhost:4201/experiments/Country/brazil" + "self": "https://localhost:4201/experiments/Country/brazil" } }, "trip.countries.1": { "links": { - "self": "http://localhost:4201/experiments/Country/4" + "self": "https://localhost:4201/experiments/Country/4" } }, "trip.countries.2": { "links": { - "self": "http://localhost:4201/experiments/Country/1" + "self": "https://localhost:4201/experiments/Country/1" } }, "trips.0.country": { diff --git a/packages/experiments-realm/AuthenticatedImageTester/1.json b/packages/experiments-realm/AuthenticatedImageTester/1.json index 97332fbab9c..70654b8b28a 100644 --- a/packages/experiments-realm/AuthenticatedImageTester/1.json +++ b/packages/experiments-realm/AuthenticatedImageTester/1.json @@ -14,7 +14,7 @@ "summary": null, "cardThumbnailURL": null }, - "imageUrl": 
"http://localhost:4201/experiments/green-mango.png" + "imageUrl": "https://localhost:4201/experiments/green-mango.png" }, "relationships": { "cardInfo.theme": { diff --git a/packages/experiments-realm/BotRequestDemo/bot-request-demo.json b/packages/experiments-realm/BotRequestDemo/bot-request-demo.json index 34e927920a4..4072f2777de 100644 --- a/packages/experiments-realm/BotRequestDemo/bot-request-demo.json +++ b/packages/experiments-realm/BotRequestDemo/bot-request-demo.json @@ -17,7 +17,7 @@ "cardId": "", "format": "isolated", "roomId": "", - "realm": "http://localhost:4201/experiments/", + "realm": "https://localhost:4201/experiments/", "listingId": "" }, "relationships": { diff --git a/packages/experiments-realm/DemoFields/c8f3842a-4e5d-4660-bab7-6d0266db6a64.json b/packages/experiments-realm/DemoFields/c8f3842a-4e5d-4660-bab7-6d0266db6a64.json index 5a2613c24e3..e414a927c72 100644 --- a/packages/experiments-realm/DemoFields/c8f3842a-4e5d-4660-bab7-6d0266db6a64.json +++ b/packages/experiments-realm/DemoFields/c8f3842a-4e5d-4660-bab7-6d0266db6a64.json @@ -11,7 +11,7 @@ "url": "https://www.pixar.com/finding-nemo", "color": "#ff6d1f", "email": "nemo@email.com", - "realm": "http://localhost:4201/experiments/", + "realm": "https://localhost:4201/experiments/", "address": { "city": "Sydney", "state": "NSW", diff --git a/packages/experiments-realm/IssueTracker/b97d905f-7cd7-4e21-ba07-b7163d188729.json b/packages/experiments-realm/IssueTracker/b97d905f-7cd7-4e21-ba07-b7163d188729.json index 1cdadbaeba4..fafd8573726 100644 --- a/packages/experiments-realm/IssueTracker/b97d905f-7cd7-4e21-ba07-b7163d188729.json +++ b/packages/experiments-realm/IssueTracker/b97d905f-7cd7-4e21-ba07-b7163d188729.json @@ -18,32 +18,32 @@ "boardTitle": null, "placements": [ { - "itemId": "http://localhost:4201/experiments/Issue/hbr-1", + "itemId": "https://localhost:4201/experiments/Issue/hbr-1", "columnKey": "backlog", "sortOrder": 0 }, { - "itemId": 
"http://localhost:4201/experiments/Issue/hbr-2", + "itemId": "https://localhost:4201/experiments/Issue/hbr-2", "columnKey": "in_progress", "sortOrder": 1 }, { - "itemId": "http://localhost:4201/experiments/Issue/hbr-3", + "itemId": "https://localhost:4201/experiments/Issue/hbr-3", "columnKey": "blocked", "sortOrder": 1 }, { - "itemId": "http://localhost:4201/experiments/Issue/hbr-4", + "itemId": "https://localhost:4201/experiments/Issue/hbr-4", "columnKey": "blocked", "sortOrder": 2 }, { - "itemId": "http://localhost:4201/experiments/Issue/hbr-5", + "itemId": "https://localhost:4201/experiments/Issue/hbr-5", "columnKey": "review", "sortOrder": 4 }, { - "itemId": "http://localhost:4201/experiments/Issue/hbs-6", + "itemId": "https://localhost:4201/experiments/Issue/hbs-6", "columnKey": "done", "sortOrder": 5 } diff --git a/packages/experiments-realm/KanbanBoard/c19a0bfe-d6ee-4e87-8680-927813a59e08.json b/packages/experiments-realm/KanbanBoard/c19a0bfe-d6ee-4e87-8680-927813a59e08.json index 7a077546499..7406e77db0c 100644 --- a/packages/experiments-realm/KanbanBoard/c19a0bfe-d6ee-4e87-8680-927813a59e08.json +++ b/packages/experiments-realm/KanbanBoard/c19a0bfe-d6ee-4e87-8680-927813a59e08.json @@ -44,17 +44,17 @@ "boardTitle": "Generic Kanban Board", "placements": [ { - "itemId": "http://localhost:4201/experiments/Author/0b9c06fd-3833-4947-a0b8-ac24b8e71ee7", + "itemId": "https://localhost:4201/experiments/Author/0b9c06fd-3833-4947-a0b8-ac24b8e71ee7", "columnKey": "author", "sortOrder": 1 }, { - "itemId": "http://localhost:4201/experiments/Author/3a655a91-98b5-4f33-a071-b62d39218b33", + "itemId": "https://localhost:4201/experiments/Author/3a655a91-98b5-4f33-a071-b62d39218b33", "columnKey": "author", "sortOrder": 2 }, { - "itemId": "http://localhost:4201/experiments/Representative/880c1d41-2563-43da-999d-ef577fa3eac9", + "itemId": "https://localhost:4201/experiments/Representative/880c1d41-2563-43da-999d-ef577fa3eac9", "columnKey": "representative", "sortOrder": 1 } 
diff --git a/packages/experiments-realm/Spec/7705e324-4949-43f8-a6fe-4f3c94c365f5.json b/packages/experiments-realm/Spec/7705e324-4949-43f8-a6fe-4f3c94c365f5.json index 9f72595089a..6ea4737a015 100644 --- a/packages/experiments-realm/Spec/7705e324-4949-43f8-a6fe-4f3c94c365f5.json +++ b/packages/experiments-realm/Spec/7705e324-4949-43f8-a6fe-4f3c94c365f5.json @@ -56,7 +56,7 @@ "attributes": { "ref": { "name": "TravelGoalWithProgress", - "module": "http://localhost:4201/experiments/trip-info" + "module": "https://localhost:4201/experiments/trip-info" }, "readMe": null, "cardInfo": { diff --git a/packages/host/README.md b/packages/host/README.md index 9dc1c66496d..324d08593a6 100644 --- a/packages/host/README.md +++ b/packages/host/README.md @@ -19,8 +19,8 @@ You will need the following things properly installed on your computer. ## Running / Development - `ember serve` -- Visit your app at [http://localhost:4200](http://localhost:4200). -- Visit your tests at [http://localhost:4200/tests](http://localhost:4200/tests). +- Visit your app at [https://localhost:4200](https://localhost:4200). +- Visit your tests at [https://localhost:4200/tests](https://localhost:4200/tests). ### Updating the default SystemCard to add new LLMs diff --git a/packages/host/app/components/operator-mode/publish-realm-modal.gts b/packages/host/app/components/operator-mode/publish-realm-modal.gts index 4631cfa1418..6c8bf1e55e3 100644 --- a/packages/host/app/components/operator-mode/publish-realm-modal.gts +++ b/packages/host/app/components/operator-mode/publish-realm-modal.gts @@ -340,10 +340,12 @@ export default class PublishRealmModal extends Component { } private getProtocol(): string { - const environment = config.environment; - return environment === 'development' || environment === 'test' - ? 
'http' - : 'https'; + // The local dev stack speaks HTTPS+HTTP/2 across the board now (the + // realm-server reads the mkcert leaf via REALM_SERVER_TLS_CERT_FILE + // and the host bundle's vite preview / dev server terminates TLS on + // the same cert), so published-realm URLs are https in every + // environment. + return 'https'; } private getMatrixUsername(): string { diff --git a/packages/host/app/services/host-mode-service.ts b/packages/host/app/services/host-mode-service.ts index 643090ac730..e125d9fa26c 100644 --- a/packages/host/app/services/host-mode-service.ts +++ b/packages/host/app/services/host-mode-service.ts @@ -98,9 +98,14 @@ export default class HostModeService extends Service { } get originIsNotMatrixTests() { + // Realm-server speaks https locally now (see infra:ensure-dev-cert); + // test-realms and the matrix-test realm share the same cert and + // bind their respective ports. return ( this.hostModeOrigin !== 'http://localhost:4202' && - this.hostModeOrigin !== 'http://localhost:4205' + this.hostModeOrigin !== 'https://localhost:4202' && + this.hostModeOrigin !== 'http://localhost:4205' && + this.hostModeOrigin !== 'https://localhost:4205' ); } diff --git a/packages/host/config/environment.js b/packages/host/config/environment.js index 3733a97d19c..84404d6b349 100644 --- a/packages/host/config/environment.js +++ b/packages/host/config/environment.js @@ -50,15 +50,23 @@ function getEnvSlug() { function environmentDefaults() { if (!process.env.BOXEL_ENVIRONMENT) { + // Local realm-server speaks HTTPS+HTTP/2 in every environment + // (dev + Host Tests + Live Tests). The dev cert is mandatory (see + // `infra:ensure-dev-cert`); there is no HTTP fallback. Test mode + // still uses these defaults — the host's in-memory test-realm + // registry intercepts `http://test-realm/...` fetches before they + // hit the wire, while fetches to the realm-server's real wire URL + // need to go to https to match the actual listener. 
See the + // repo-root README "Local HTTPS dev access". return { - realmServerURL: 'http://localhost:4201/', + realmServerURL: 'https://localhost:4201/', realmHost: 'localhost:4201', iconsURL: 'http://localhost:4206', - baseRealmURL: 'http://localhost:4201/base/', - catalogRealmURL: 'http://localhost:4201/catalog/', - legacyCatalogRealmURL: 'http://localhost:4201/legacy-catalog/', - skillsRealmURL: 'http://localhost:4201/skills/', - openRouterRealmURL: 'http://localhost:4201/openrouter/', + baseRealmURL: 'https://localhost:4201/base/', + catalogRealmURL: 'https://localhost:4201/catalog/', + legacyCatalogRealmURL: 'https://localhost:4201/legacy-catalog/', + skillsRealmURL: 'https://localhost:4201/skills/', + openRouterRealmURL: 'https://localhost:4201/openrouter/', }; } let slug = getEnvSlug(); diff --git a/packages/host/config/schema/1779100257123_schema.sql b/packages/host/config/schema/1779100257124_schema.sql similarity index 100% rename from packages/host/config/schema/1779100257123_schema.sql rename to packages/host/config/schema/1779100257124_schema.sql diff --git a/packages/host/docs/live-tests.md b/packages/host/docs/live-tests.md index 6fe8e796fc2..c47896e29e4 100644 --- a/packages/host/docs/live-tests.md +++ b/packages/host/docs/live-tests.md @@ -6,8 +6,8 @@ Live tests run directly against a realm server. Test modules are `*.test.gts` fi Requires servers to already be running. 
-- Experiments realm: `http://localhost:4200/tests/index.html?liveTest=true&realmURL=http://localhost:4201/experiments/&hidepassed` -- Catalog realm: `http://localhost:4200/tests/index.html?liveTest=true&realmURL=http://localhost:4201/catalog/&hidepassed` +- Experiments realm: `https://localhost:4200/tests/index.html?liveTest=true&realmURL=https://localhost:4201/experiments/&hidepassed` +- Catalog realm: `https://localhost:4200/tests/index.html?liveTest=true&realmURL=https://localhost:4201/catalog/&hidepassed` ## Run as a Script @@ -22,5 +22,5 @@ cd packages/host pnpm test:live # Or target a specific realm via the REALM_URL env var (trailing slash optional) -REALM_URL=http://localhost:4201/experiments/ pnpm test:live +REALM_URL=https://localhost:4201/experiments/ pnpm test:live ``` diff --git a/packages/host/scripts/HEAP_PROBE.md b/packages/host/scripts/HEAP_PROBE.md index 0a147411fe5..de55a5037d2 100644 --- a/packages/host/scripts/HEAP_PROBE.md +++ b/packages/host/scripts/HEAP_PROBE.md @@ -80,7 +80,7 @@ SNAPSHOT_AT="10,50,90" \ nohup node packages/host/scripts/heap-snapshot-runner.js > /tmp/snap-runner.log 2>&1 & # 2. Open a fresh test tab -ENCODED=$(node -e 'console.log(encodeURIComponent("http://localhost:4200/tests/index.html?hidepassed&filter=card-basics"))') +ENCODED=$(node -e 'console.log(encodeURIComponent("https://localhost:4200/tests/index.html?hidepassed&filter=card-basics"))') curl -sX PUT "http://localhost:9333/json/new?${ENCODED}" # 3. After runner exits — diff and trace diff --git a/packages/host/scripts/live-test-wait-for-servers.sh b/packages/host/scripts/live-test-wait-for-servers.sh index f2b5bf5664c..04ef3d9cef1 100755 --- a/packages/host/scripts/live-test-wait-for-servers.sh +++ b/packages/host/scripts/live-test-wait-for-servers.sh @@ -1,12 +1,21 @@ #! 
/bin/sh READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson" -BASE_REALM_READY="http-get://localhost:4201/base/${READY_PATH}" +BASE_REALM_READY="https-get://localhost:4201/base/${READY_PATH}" SYNAPSE_URL="http://localhost:8008" SMTP_4_DEV_URL="http://localhost:5001" +# Pick wait-on's protocol prefix from whichever scheme the caller used. +to_wait_scheme() { + case "$1" in + https://*) printf 'https-get' ;; + *) printf 'http-get' ;; + esac +} + if [ -n "$REALM_URL" ]; then REALM_HOST="$REALM_URL" + REALM_SCHEME="$(to_wait_scheme "$REALM_URL")" case "$REALM_HOST" in http://*) REALM_HOST="${REALM_HOST#http://}" ;; https://*) REALM_HOST="${REALM_HOST#https://}" ;; @@ -15,14 +24,20 @@ if [ -n "$REALM_URL" ]; then */) ;; *) REALM_HOST="${REALM_HOST}/" ;; esac - REALM_READY="http-get://${REALM_HOST}${READY_PATH}" + REALM_READY="${REALM_SCHEME}://${REALM_HOST}${READY_PATH}" READY_URLS="$BASE_REALM_READY|$REALM_READY|$SYNAPSE_URL|$SMTP_4_DEV_URL" else - CATALOG_REALM_READY="http-get://localhost:4201/catalog/${READY_PATH}" + CATALOG_REALM_READY="https-get://localhost:4201/catalog/${READY_PATH}" READY_URLS="$BASE_REALM_READY|$CATALOG_REALM_READY|$SYNAPSE_URL|$SMTP_4_DEV_URL" fi -WAIT_ON_TIMEOUT=600000 NODE_NO_WARNINGS=1 REALM_URL="${REALM_URL:-}" start-server-and-test \ +# See test-wait-for-servers.sh for the rationale on +# START_SERVER_AND_TEST_INSECURE=1 — wait-on against +# https-get://localhost:42XX needs the strictSSL escape hatch under +# start-server-and-test, otherwise the readiness probe flakes against +# the self-signed mkcert leaf. 
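The full `REALM_URL` → wait-on probe mapping in this script combines three steps: pick the wait-on scheme, strip the URL scheme, and normalize the trailing slash. A minimal end-to-end sketch (the `REALM_URL` value is illustrative, deliberately missing its trailing slash to exercise the normalization branch):

```shell
#!/bin/sh
# Map a realm URL onto a wait-on readiness-probe URL, mirroring
# live-test-wait-for-servers.sh.
to_wait_scheme() {
  case "$1" in
    https://*) printf 'https-get' ;;
    *)         printf 'http-get' ;;
  esac
}

REALM_URL="https://localhost:4201/experiments"   # no trailing slash on purpose
READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson"

REALM_HOST="$REALM_URL"
case "$REALM_HOST" in
  http://*)  REALM_HOST="${REALM_HOST#http://}" ;;
  https://*) REALM_HOST="${REALM_HOST#https://}" ;;
esac
case "$REALM_HOST" in
  */) ;;                               # already slash-terminated
  *)  REALM_HOST="${REALM_HOST}/" ;;   # append the missing slash
esac

REALM_READY="$(to_wait_scheme "$REALM_URL")://${REALM_HOST}${READY_PATH}"
echo "$REALM_READY"
# https-get://localhost:4201/experiments/_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson
```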
+WAIT_ON_TIMEOUT=600000 NODE_NO_WARNINGS=1 START_SERVER_AND_TEST_INSECURE=1 \ + REALM_URL="${REALM_URL:-}" start-server-and-test \ 'pnpm run wait' \ "$READY_URLS" \ 'ember test --config-file testem-live.js --path ./dist' diff --git a/packages/host/scripts/test-wait-for-servers.sh b/packages/host/scripts/test-wait-for-servers.sh index 56853569046..4f67d49300d 100755 --- a/packages/host/scripts/test-wait-for-servers.sh +++ b/packages/host/scripts/test-wait-for-servers.sh @@ -15,17 +15,17 @@ to_wait_url() { esac } -DEFAULT_BASE_REALM_URL='http://localhost:4201/base/' -DEFAULT_CATALOG_REALM_URL='http://localhost:4201/catalog/' -DEFAULT_SKILLS_REALM_URL='http://localhost:4201/skills/' +DEFAULT_BASE_REALM_URL='https://localhost:4201/base/' +DEFAULT_CATALOG_REALM_URL='https://localhost:4201/catalog/' +DEFAULT_SKILLS_REALM_URL='https://localhost:4201/skills/' BASE_REALM_URL=$(ensure_trailing_slash "${RESOLVED_BASE_REALM_URL:-$DEFAULT_BASE_REALM_URL}") CATALOG_REALM_URL=$(ensure_trailing_slash "${RESOLVED_CATALOG_REALM_URL:-$DEFAULT_CATALOG_REALM_URL}") SKILLS_REALM_URL=$(ensure_trailing_slash "${RESOLVED_SKILLS_REALM_URL:-$DEFAULT_SKILLS_REALM_URL}") SKIP_CATALOG="${SKIP_CATALOG:-}" -NODE_TEST_REALM="http-get://localhost:4202/node-test/" -TEST_REALM="http-get://localhost:4202/test/" +NODE_TEST_REALM="https-get://localhost:4202/node-test/" +TEST_REALM="https-get://localhost:4202/test/" READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson" @@ -43,7 +43,16 @@ if [ "$SKIP_CATALOG" != "true" ]; then READY_URLS="$READY_URLS|$CATALOG_REALM_READY" fi -WAIT_ON_TIMEOUT=600000 NODE_NO_WARNINGS=1 start-server-and-test \ +# START_SERVER_AND_TEST_INSECURE=1 disables wait-on's `strictSSL` for the +# https-get://localhost:42XX readiness probes. 
start-server-and-test +# passes `strictSSL: !isInsecure()` into wait-on options (overriding the +# global NODE_TLS_REJECT_UNAUTHORIZED and even NODE_EXTRA_CA_CERTS when +# the wait-on subprocess inherits them unevenly under CI load) — the +# documented INSECURE flag is the right escape hatch. Scopes to the +# readiness probe only; the test runner itself still validates TLS +# normally. +WAIT_ON_TIMEOUT=600000 NODE_NO_WARNINGS=1 START_SERVER_AND_TEST_INSECURE=1 \ + start-server-and-test \ 'pnpm run wait' \ "$READY_URLS" \ 'ember-test-pre-built' diff --git a/packages/host/scripts/vite-with-traefik.js b/packages/host/scripts/vite-with-traefik.js index 1fa38f0f8ec..3f6968a6cb4 100644 --- a/packages/host/scripts/vite-with-traefik.js +++ b/packages/host/scripts/vite-with-traefik.js @@ -5,13 +5,23 @@ * container can reach it via host.docker.internal), then registers with * Traefik so that `host..localhost` routes here. When BOXEL_ENVIRONMENT * is not set, runs vite on the default port with default host. + * + * Local-HTTPS-dev path: when the mkcert leaf is present and vite is set + * up to terminate TLS (see `vite.config.mjs`'s `devHttpsConfig`), vite + * binds HTTPS on an internal port and we layer a tiny same-port + * dispatcher in front. The dispatcher peeks the first byte of each + * incoming connection: a TLS ClientHello (0x16) gets piped through to + * vite untouched, anything else gets a 308 redirect to the https:// + * URL. Mirrors the realm-server dispatcher pattern. Dev UX: typing + * `http://localhost:4200/foo` now lands on `https://localhost:4200/foo` + * instead of failing with `ERR_CONNECTION_REFUSED`. 
*/ const { spawn } = require('child_process'); const path = require('path'); const net = require('net'); -function runVite({ subcommand, port, allHosts, extraEnv, nodeMemory }) { +function runVite({ subcommand, port, allHosts, host, extraEnv, nodeMemory }) { const args = ['vite']; if (subcommand) args.push(subcommand); args.push('--port', String(port), '--strictPort'); @@ -20,6 +30,8 @@ function runVite({ subcommand, port, allHosts, extraEnv, nodeMemory }) { // vite via host.docker.internal:. Vite's default is 127.0.0.1 // only, which is unreachable from inside the container. args.push('--host'); + } else if (host) { + args.push('--host', host); } const env = { ...process.env, ...(extraEnv || {}) }; if (nodeMemory) { @@ -35,11 +47,172 @@ function runVite({ subcommand, port, allHosts, extraEnv, nodeMemory }) { return child; } +// Returns true when env-vars.sh has exported the mkcert cert path — +// the same signal vite.config.mjs uses to enable `server.https`. +function isLocalHttpsDevModeEnabled() { + return ( + !!process.env.REALM_SERVER_TLS_CERT_FILE && + !!process.env.REALM_SERVER_TLS_KEY_FILE + ); +} + +// Ask the kernel for an unused loopback port. Used to find an internal +// port for vite when the dispatcher owns the public port. 
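The dispatcher's first-byte classification described above (TLS record type `0x16` opens a ClientHello; anything else is treated as plain HTTP) can be reproduced outside Node for quick sanity checks. A sketch, assuming POSIX `od`; the helper name is hypothetical:

```shell
#!/bin/sh
# Classify a stream's first byte the way the dispatcher does:
# 0x16 (decimal 22) is the TLS handshake record type, so it marks
# a ClientHello; any other first byte is assumed to be plain HTTP.
classify_first_byte() {
  b=$(head -c1 | od -An -tu1 | tr -d ' \n')
  if [ "$b" = "22" ]; then
    echo tls
  else
    echo http
  fi
}

printf '\026\003\001' | classify_first_byte        # tls  (\026 octal = 0x16)
printf 'GET / HTTP/1.1\r\n' | classify_first_byte  # http ('G' = 0x47)
```

The same one-byte peek is all the dispatcher needs, which is why it can forward TLS traffic opaquely without terminating it.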
+function pickInternalPort() { + return new Promise((resolve, reject) => { + let tester = net.createServer(); + tester.once('error', reject); + tester.listen({ port: 0, host: '127.0.0.1' }, () => { + let port = tester.address().port; + tester.close(() => resolve(port)); + }); + }); +} + +function startSamePortRedirectDispatcher({ publicPort, viteInternalPort }) { + let server = net.createServer({ pauseOnConnect: true }, (socket) => { + socket.on('error', () => socket.destroy()); + socket.once('readable', () => { + let firstByte; + try { + firstByte = socket.read(1); + } catch { + socket.destroy(); + return; + } + if (!firstByte) { + socket.destroy(); + return; + } + + if (firstByte[0] === 0x16) { + // TLS ClientHello — forward raw bytes to vite, which terminates + // TLS itself with the cert it loaded in vite.config.mjs. Write the + // peeked byte explicitly on 'connect' rather than relying on + // socket.unshift()+pipe(): the unshift pattern races the upstream + // socket's connect handshake (the rest of the ClientHello can arrive + // and be written before the unshifted byte gets flushed, leaving + // vite with a corrupt handshake and the client with + // ERR_CONNECTION_CLOSED). + let upstream = net.connect(viteInternalPort, '127.0.0.1'); + upstream.on('error', () => socket.destroy()); + upstream.once('connect', () => { + upstream.write(firstByte); + socket.pipe(upstream); + upstream.pipe(socket); + socket.resume(); + }); + return; + } + + // Plain HTTP — read enough to extract the request-target, then + // 308 to the https:// version on the same authority. The + // request-target lives between the first and second SP on the + // start-line, e.g. `GET /foo HTTP/1.1\r\n`. The peeked byte + // never gets pushed back into the buffer; we just prepend it + // to the buffered chunks here. 
+ // + // 308 (vs 301): preserves the request method and body, so a + // browser POST or fetch() with a body that hits the http:// + // origin gets a clean replay against https:// instead of + // silently downgrading to GET. + let chunks = [firstByte]; + let length = firstByte.length; + let onData = (chunk) => { + chunks.push(chunk); + length += chunk.length; + let buf = Buffer.concat(chunks, length); + let lineEnd = buf.indexOf('\r\n'); + if (lineEnd === -1 && length < 8192) { + return; // wait for more + } + socket.removeListener('data', onData); + let startLine = + lineEnd === -1 + ? buf.toString('utf8') + : buf.slice(0, lineEnd).toString('utf8'); + let parts = startLine.split(' '); + let requestTarget = parts[1] || '/'; + if (!requestTarget.startsWith('/')) requestTarget = '/' + requestTarget; + let body = `The Boxel dev server speaks HTTPS — redirecting to https://localhost:${publicPort}${requestTarget}\n`; + let response = + `HTTP/1.1 308 Permanent Redirect\r\n` + + `Location: https://localhost:${publicPort}${requestTarget}\r\n` + + `Content-Type: text/plain; charset=utf-8\r\n` + + `Content-Length: ${Buffer.byteLength(body)}\r\n` + + `Connection: close\r\n` + + `\r\n` + + body; + socket.end(response); + // Suppress noise from clients that pipeline more bytes after our 308. 
+ socket.on('error', () => {}); + }; + socket.on('data', onData); + socket.resume(); + }); + }); + server.on('error', (err) => { + console.error(`[vite-dispatcher] error binding port ${publicPort}:`, err); + process.exit(1); + }); + server.listen(publicPort, '127.0.0.1', () => { + console.log( + `[vite-dispatcher] Listening on http(s)://localhost:${publicPort} → vite at 127.0.0.1:${viteInternalPort}`, + ); + console.log( + `[vite-dispatcher] Plain http://localhost:${publicPort}/* requests will 308 to https://`, + ); + }); + return server; +} + +async function runViteBehindRedirectDispatcher({ + subcommand, + publicPort, + nodeMemory, +}) { + // Vite binds the internal port; the dispatcher owns the public one. + // Force vite onto 127.0.0.1 to match the dispatcher's upstream + // net.connect target. Without `--host`, vite default-binds to + // `localhost`, which on macOS / Node 17+ resolves to ::1 first — the + // dispatcher then can't reach it on 127.0.0.1 and the TLS handshake + // dies as ERR_CONNECTION_CLOSED in the browser. + let viteInternalPort = await pickInternalPort(); + startSamePortRedirectDispatcher({ publicPort, viteInternalPort }); + runVite({ + subcommand, + port: viteInternalPort, + allHosts: false, + host: '127.0.0.1', + nodeMemory, + }); +} + function startWithTraefik({ subcommand, defaultPort, label, nodeMemory }) { const BOXEL_ENVIRONMENT = process.env.BOXEL_ENVIRONMENT; if (!BOXEL_ENVIRONMENT) { - runVite({ subcommand, port: defaultPort, allHosts: false, nodeMemory }); + // Same-port http→https redirect dispatcher is only useful for `vite` + // (dev) where humans type `http://localhost:4200` in a browser bar. + // For `vite preview` (production build, used by CI and serve:dist), + // skip the dispatcher and let vite bind the public port directly with + // HTTPS. 
The dispatcher's byte-peek + cross-process TCP pipe pattern + // races chrome's TLS handshake under load and produces + // ERR_CONNECTION_CLOSED in CI prerender probes, while curl over the + // same port succeeds — symptom of an ALPN/h2 framing issue inside + // the pipe that we don't need to solve for the preview path. + if (isLocalHttpsDevModeEnabled() && subcommand !== 'preview') { + runViteBehindRedirectDispatcher({ + subcommand, + publicPort: defaultPort, + nodeMemory, + }).catch((err) => { + console.error('[vite-dispatcher] failed to start:', err); + process.exit(1); + }); + } else { + runVite({ subcommand, port: defaultPort, allHosts: false, nodeMemory }); + } return; } diff --git a/packages/host/testem-live.js b/packages/host/testem-live.js index 4306fbd48f0..bdab7eec041 100644 --- a/packages/host/testem-live.js +++ b/packages/host/testem-live.js @@ -6,7 +6,7 @@ const XunitReporter = require('testem/lib/reporters/xunit_reporter'); const fs = require('fs'); const path = require('path'); -const DEFAULT_REALM_URLS = ['http://localhost:4201/catalog/']; +const DEFAULT_REALM_URLS = ['https://localhost:4201/catalog/']; const realmURLs = process.env.REALM_URL ? [process.env.REALM_URL] @@ -35,6 +35,17 @@ const config = { '--mute-audio', '--remote-debugging-port=0', '--window-size=1440,900', + // Local realm-server speaks HTTPS+HTTP/2 with a mkcert leaf cert + // (see infra:ensure-dev-cert). `mkcert -install` is best-effort + // in CI and may not land the root CA in the headless Chrome + // trust store, so relax cert checks for the realm fetches that + // the live-test runner makes. Safe — the URL is fixed by + // REALM_URL (default https://localhost:4201/catalog/) and the + // connection is loopback. Chrome 144+ requires the + // `--allow-insecure-localhost` companion or it silently demotes + // `--ignore-certificate-errors` and TLS validation still fails. 
+ '--ignore-certificate-errors', + '--allow-insecure-localhost', ].filter(Boolean), }, }, diff --git a/packages/host/testem.js b/packages/host/testem.js index 8eb3ea0ee56..c7fa480fcd1 100644 --- a/packages/host/testem.js +++ b/packages/host/testem.js @@ -30,6 +30,18 @@ if (typeof module !== 'undefined') { '--mute-audio', '--remote-debugging-port=0', '--window-size=1440,900', + // The realm-server speaks HTTPS+HTTP/2 with a mkcert leaf cert + // (see infra:ensure-dev-cert). `mkcert -install` is + // best-effort in CI and doesn't reliably land mkcert's root + // CA in headless Chrome's trust store, so relax cert checks + // for the realm fetches the tests make. Safe — the URL is + // fixed by the host config and the connection is loopback. + // Chrome 144+ silently demotes `--ignore-certificate-errors` + // to a dev-only flag unless paired with + // `--allow-insecure-localhost`; without the pair, every + // realm fetch fails with `TypeError: Failed to fetch`. + '--ignore-certificate-errors', + '--allow-insecure-localhost', ].filter(Boolean), }, }, diff --git a/packages/host/tests/acceptance/code-submode-test.ts b/packages/host/tests/acceptance/code-submode-test.ts index 5a5383163dc..c4e2532b416 100644 --- a/packages/host/tests/acceptance/code-submode-test.ts +++ b/packages/host/tests/acceptance/code-submode-test.ts @@ -1272,12 +1272,12 @@ module('Acceptance | code submode tests', function (_hooks) { }); module('with connection to test realm', function (hooks) { hooks.beforeEach(function () { - setActiveRealms([testRealmURL, 'http://localhost:4202/test/']); + setActiveRealms([testRealmURL, 'https://localhost:4202/test/']); }); test('code submode handles binary files', async function (assert) { await visitOperatorMode({ submode: 'code', - codePath: `http://localhost:4202/test/mango.png`, + codePath: `https://localhost:4202/test/mango.png`, }); await waitFor('[data-test-binary-info]'); diff --git a/packages/host/tests/acceptance/code-submode/file-tree-test.ts 
b/packages/host/tests/acceptance/code-submode/file-tree-test.ts index 197cad1007b..7978a8f4baa 100644 --- a/packages/host/tests/acceptance/code-submode/file-tree-test.ts +++ b/packages/host/tests/acceptance/code-submode/file-tree-test.ts @@ -477,7 +477,7 @@ module('Acceptance | code submode | file-tree tests', function (hooks) { // go to a file with different realm await fillIn( '[data-test-card-url-bar-input]', - `http://localhost:4202/test/mango.png`, + `https://localhost:4202/test/mango.png`, ); await triggerKeyEvent( '[data-test-card-url-bar-input]', @@ -521,7 +521,7 @@ module('Acceptance | code submode | file-tree tests', function (hooks) { await fillIn( '[data-test-card-url-bar-input]', - `http://localhost:4202/test/mango.png`, + `https://localhost:4202/test/mango.png`, ); await triggerKeyEvent( '[data-test-card-url-bar-input]', @@ -823,7 +823,7 @@ module('Acceptance | code submode | file-tree tests', function (hooks) { await fillIn( '[data-test-card-url-bar-input]', - `http://localhost:4202/test/mango.png`, + `https://localhost:4202/test/mango.png`, ); await triggerKeyEvent( '[data-test-card-url-bar-input]', diff --git a/packages/host/tests/acceptance/code-submode/recent-files-test.ts b/packages/host/tests/acceptance/code-submode/recent-files-test.ts index d15c7055c82..8914e1faffa 100644 --- a/packages/host/tests/acceptance/code-submode/recent-files-test.ts +++ b/packages/host/tests/acceptance/code-submode/recent-files-test.ts @@ -317,7 +317,7 @@ module('Acceptance | code submode | recent files tests', function (hooks) { test('recent file links are shown', async function (assert) { setRecentFiles([ [testRealmURL, 'index.json'], - ['http://localhost:4202/test/', 'français.json'], + ['https://localhost:4202/test/', 'français.json'], // @ts-ignore error on purpose 'a-non-url-to-ignore', ]); @@ -403,7 +403,7 @@ module('Acceptance | code submode | recent files tests', function (hooks) { [testRealmURL, 'index.json', null], [testRealmURL, 'français.json', null], 
[testRealmURL, 'Person/1.json', null], - ['http://localhost:4202/test/', 'français.json', null], + ['https://localhost:4202/test/', 'français.json', null], ]); }); diff --git a/packages/host/tests/acceptance/commands-test.gts b/packages/host/tests/acceptance/commands-test.gts index 7ddffda0208..9f4ba4029ef 100644 --- a/packages/host/tests/acceptance/commands-test.gts +++ b/packages/host/tests/acceptance/commands-test.gts @@ -397,7 +397,7 @@ module('Acceptance | Commands tests', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/ai-command-example', + module: 'https://localhost:4202/test/ai-command-example', name: 'AiCommandExample', }, }, diff --git a/packages/host/tests/acceptance/host-submode-test.gts b/packages/host/tests/acceptance/host-submode-test.gts index a036a07fb48..ce0eacd77ae 100644 --- a/packages/host/tests/acceptance/host-submode-test.gts +++ b/packages/host/tests/acceptance/host-submode-test.gts @@ -634,7 +634,7 @@ module('Acceptance | host submode', function (hooks) { assert.dom('.publishing-realm-popover').exists(); assert .dom('.publishing-realm-popover') - .containsText(`Publishing to: http://testuser.localhost:4201/test/`); + .containsText(`Publishing to: https://testuser.localhost:4201/test/`); assert.dom('.publishing-realm-popover').exists(); assert.dom('.loading-icon').exists(); @@ -667,7 +667,7 @@ module('Acceptance | host submode', function (hooks) { .dom( '[data-test-publish-realm-modal] [data-test-open-boxel-space-button]', ) - .hasAttribute('href', 'http://testuser.localhost:4201/test/') + .hasAttribute('href', 'https://testuser.localhost:4201/test/') .hasAttribute('target', '_blank'); }); @@ -685,8 +685,8 @@ module('Acceptance | host submode', function (hooks) { let restoreRealmInfo = withUpdatedTestRealmInfo({ lastPublishedAt: { - 'http://testuser.localhost:4201/test/': String(now), - 'http://custom-site-name.localhost:4201/': String(now), + 'https://testuser.localhost:4201/test/': String(now), + 
'https://custom-site-name.localhost:4201/': String(now), }, }); @@ -731,7 +731,7 @@ module('Acceptance | host submode', function (hooks) { test('can unpublish realm', async function (assert) { let restoreRealmInfo = withUpdatedTestRealmInfo({ lastPublishedAt: { - ['http://testuser.localhost:4201/test/']: ( + ['https://testuser.localhost:4201/test/']: ( new Date().getTime() - 3 * 24 * 60 * 60 * 1000 ).toString(), @@ -851,7 +851,7 @@ module('Acceptance | host submode', function (hooks) { assert .dom('[data-test-custom-subdomain-details]') .includesText( - 'http://my-boxel-site.localhost:4201/ Not published yet', + 'https://my-boxel-site.localhost:4201/ Not published yet', ); assert.dom('[data-test-unclaim-custom-subdomain-button]').exists(); assert.dom('[data-test-custom-subdomain-checkbox]').isChecked(); @@ -926,7 +926,7 @@ module('Acceptance | host submode', function (hooks) { let now = Date.now(); let restoreRealmInfo = withUpdatedTestRealmInfo({ lastPublishedAt: { - 'http://testuser.localhost:4201/test/': String(now), + 'https://testuser.localhost:4201/test/': String(now), 'https://another-domain.com/realm/': String(now - 1000), }, }); @@ -942,7 +942,7 @@ module('Acceptance | host submode', function (hooks) { .dom('[data-test-open-site-button]') .hasAttribute( 'href', - 'http://testuser.localhost:4201/test/Person/1', + 'https://testuser.localhost:4201/test/Person/1', ) .hasAttribute('target', '_blank'); @@ -970,7 +970,7 @@ module('Acceptance | host submode', function (hooks) { assert .dom( - '[data-test-published-realm-item="http://testuser.localhost:4201/test/Person/1"]', + '[data-test-published-realm-item="https://testuser.localhost:4201/test/Person/1"]', ) .exists(); assert @@ -982,11 +982,11 @@ module('Acceptance | host submode', function (hooks) { // Check that popover buttons have correct href attributes assert .dom( - '[data-test-published-realm-item="http://testuser.localhost:4201/test/Person/1"] [data-test-open-site-button]', + 
'[data-test-published-realm-item="https://testuser.localhost:4201/test/Person/1"] [data-test-open-site-button]', ) .hasAttribute( 'href', - 'http://testuser.localhost:4201/test/Person/1', + 'https://testuser.localhost:4201/test/Person/1', ) .hasAttribute('target', '_blank'); @@ -1039,7 +1039,7 @@ module('Acceptance | host submode', function (hooks) { assert .dom(`${customDomainOption} .domain-url`) .hasText( - 'http://custom-site-name.localhost:4201/', + 'https://custom-site-name.localhost:4201/', 'shows claimed custom site URL', ); assert @@ -1070,7 +1070,7 @@ module('Acceptance | host submode', function (hooks) { assert .dom(`${customDomainOption} .domain-url`) .hasText( - 'http://custom-site-name.localhost:4201/', + 'https://custom-site-name.localhost:4201/', 'displays placeholder custom site URL after unclaim', ); assert @@ -1162,7 +1162,7 @@ module('Acceptance | host submode', function (hooks) { await click('[data-test-publish-realm-button]'); assert.dom('[data-test-publish-realm-modal]').exists(); - let defaultUrl = 'http://testuser.localhost:4201/test/'; + let defaultUrl = 'https://testuser.localhost:4201/test/'; assert .dom(`[data-test-domain-publish-error="${defaultUrl}"]`) .doesNotExist(); @@ -1198,8 +1198,8 @@ module('Acceptance | host submode', function (hooks) { sourceRealmURL: testRealmURL, }); - let defaultUrl = 'http://testuser.localhost:4201/test/'; - let customUrl = 'http://my-custom-site.localhost:4201/'; + let defaultUrl = 'https://testuser.localhost:4201/test/'; + let customUrl = 'https://my-custom-site.localhost:4201/'; // Mock publish to succeed for default, fail for custom realmServer.publishRealm = async ( @@ -1335,7 +1335,7 @@ module('Acceptance | host submode', function (hooks) { assert .dom('[data-test-open-custom-subdomain-button]') - .hasAttribute('href', 'http://my-custom-site.localhost:4201/') + .hasAttribute('href', 'https://my-custom-site.localhost:4201/') .hasAttribute('target', '_blank'); } finally { 
realmServer.fetchBoxelClaimedDomain = originalFetchClaimed; @@ -1390,7 +1390,7 @@ module('Acceptance | host submode', function (hooks) { let restoreRealmInfo = withUpdatedTestRealmInfo({ lastPublishedAt: { - ['http://my-custom-site.localhost:4201/']: ( + ['https://my-custom-site.localhost:4201/']: ( new Date().getTime() - 2 * 24 * 60 * 60 * 1000 ).toString(), diff --git a/packages/host/tests/acceptance/interact-submode-test.gts b/packages/host/tests/acceptance/interact-submode-test.gts index 858b3a5e5a0..9642f9b0e26 100644 --- a/packages/host/tests/acceptance/interact-submode-test.gts +++ b/packages/host/tests/acceptance/interact-submode-test.gts @@ -912,7 +912,7 @@ module('Acceptance | interact submode tests', function (hooks) { }); test('visiting 2 stacks from differing realms', async function (assert) { - setActiveRealms([testRealmURL, 'http://localhost:4202/test/']); + setActiveRealms([testRealmURL, 'https://localhost:4202/test/']); await visitOperatorMode({ stacks: [ [ @@ -923,7 +923,7 @@ module('Acceptance | interact submode tests', function (hooks) { ], [ { - id: 'http://localhost:4202/test/hassan', + id: 'https://localhost:4202/test/hassan', format: 'isolated', }, ], diff --git a/packages/host/tests/cards/fadhlan.json b/packages/host/tests/cards/fadhlan.json index 12e8387c018..55cb3dc0e2f 100644 --- a/packages/host/tests/cards/fadhlan.json +++ b/packages/host/tests/cards/fadhlan.json @@ -1,7 +1,7 @@ { "data": { "type": "card", - "id": "http://localhost:4202/test/fadhlan", + "id": "https://localhost:4202/test/fadhlan", "attributes": { "firstName": "Fadhlan", "lastName": "Ridhwanallah" diff --git a/packages/host/tests/cards/mango.json b/packages/host/tests/cards/mango.json index 0ed79ca2f5a..b3871951c6b 100644 --- a/packages/host/tests/cards/mango.json +++ b/packages/host/tests/cards/mango.json @@ -1,7 +1,7 @@ { "data": { "type": "card", - "id": "http://localhost:4202/test/hassan", + "id": "https://localhost:4202/test/hassan", "attributes": { "firstName": 
"Mango", "lastName": "Abdel-Rahman" diff --git a/packages/host/tests/cards/type-examples.json b/packages/host/tests/cards/type-examples.json index 9abe9d490ec..bf30e49bab7 100644 --- a/packages/host/tests/cards/type-examples.json +++ b/packages/host/tests/cards/type-examples.json @@ -1,7 +1,7 @@ { "data": { "type": "card", - "id": "http://localhost:4202/test/type-examples", + "id": "https://localhost:4202/test/type-examples", "attributes": { "intField": 1, "stringField": "string", diff --git a/packages/host/tests/cards/van-gogh.json b/packages/host/tests/cards/van-gogh.json index c90cd1d6ee8..52349c333a3 100644 --- a/packages/host/tests/cards/van-gogh.json +++ b/packages/host/tests/cards/van-gogh.json @@ -1,7 +1,7 @@ { "data": { "type": "card", - "id": "http://localhost:4202/test/van-gogh", + "id": "https://localhost:4202/test/van-gogh", "attributes": { "firstName": "Van", "lastName": "Gogh" diff --git a/packages/host/tests/helpers/index.gts b/packages/host/tests/helpers/index.gts index 5e7b65a071e..eadd14e01d6 100644 --- a/packages/host/tests/helpers/index.gts +++ b/packages/host/tests/helpers/index.gts @@ -102,7 +102,7 @@ export { setupOperatorModeStateCleanup } from './operator-mode-state'; export * from '@cardstack/runtime-common/helpers'; export * from './indexer'; -export const testModuleRealm = ri('http://localhost:4202/test/'); +export const testModuleRealm = ri('https://localhost:4202/test/'); /** * Build a `RealmResourceIdentifier` for a module in `testModuleRealm`. 
diff --git a/packages/host/tests/integration/components/card-delete-test.gts b/packages/host/tests/integration/components/card-delete-test.gts index fea6ba30608..bc9c687dafb 100644 --- a/packages/host/tests/integration/components/card-delete-test.gts +++ b/packages/host/tests/integration/components/card-delete-test.gts @@ -567,7 +567,7 @@ module('Integration | card-delete', function (hooks) { test('can delete a card that is a selected item', async function (assert) { setCardInOperatorModeState( [`${testRealmURL}index`], - [`http://localhost:4202/test/`], + [`https://localhost:4202/test/`], ); await renderComponent( class TestDriver extends GlimmerComponent { diff --git a/packages/host/tests/integration/components/operator-mode-card-catalog-test.gts b/packages/host/tests/integration/components/operator-mode-card-catalog-test.gts index a0117a59972..6025772caf8 100644 --- a/packages/host/tests/integration/components/operator-mode-card-catalog-test.gts +++ b/packages/host/tests/integration/components/operator-mode-card-catalog-test.gts @@ -1062,7 +1062,7 @@ module('Integration | operator-mode | card catalog', function (hooks) { await click(`[data-test-open-search-field]`); await fillIn( '[data-test-search-field]', - 'http://localhost:4202/test/nonexistent', + 'https://localhost:4202/test/nonexistent', ); await waitFor(`[data-test-search-label]`); assert.dom('[data-test-search-sheet-empty]').exists(); diff --git a/packages/host/tests/integration/components/serialization-test.gts b/packages/host/tests/integration/components/serialization-test.gts index fb062305af6..0d934c1c0e2 100644 --- a/packages/host/tests/integration/components/serialization-test.gts +++ b/packages/host/tests/integration/components/serialization-test.gts @@ -874,7 +874,7 @@ module('Integration | serialization', function (hooks) { }, }); - let ref = { module: `http://localhost:4202/test/person`, name: 'Person' }; + let ref = { module: `https://localhost:4202/test/person`, name: 'Person' }; let 
resource = { attributes: { ref, @@ -920,7 +920,7 @@ module('Integration | serialization', function (hooks) { }, }); - let ref = { module: `http://localhost:4202/test/person`, name: 'Person' }; + let ref = { module: `https://localhost:4202/test/person`, name: 'Person' }; let driver = new DriverCard({ ref }); let serializedRef = serializeCard(driver, { includeUnrenderedFields: true }) .data.attributes?.ref; @@ -5745,7 +5745,7 @@ module('Integration | serialization', function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/captain'), + module: rri('https://localhost:4202/test/captain'), name: 'Captain', }, }, @@ -5770,7 +5770,7 @@ module('Integration | serialization', function (hooks) { }, meta: { adoptsFrom: { - module: rri(`http://localhost:4202/test/captain`), + module: rri(`https://localhost:4202/test/captain`), name: 'Boat', }, }, diff --git a/packages/host/tests/integration/enum-field-test.gts b/packages/host/tests/integration/enum-field-test.gts index 96efb50a450..88671c1a949 100644 --- a/packages/host/tests/integration/enum-field-test.gts +++ b/packages/host/tests/integration/enum-field-test.gts @@ -243,7 +243,7 @@ module('Integration | enumField', function (hooks) { let t1b = (await createFromSerialized( doc1.data, doc1, - new URL('http://localhost:4202/test/'), + new URL('https://localhost:4202/test/'), )) as Task; assert.strictEqual(t1b.priority, 'Medium', 'single enum value round-trips'); @@ -253,7 +253,7 @@ module('Integration | enumField', function (hooks) { let t2b = (await createFromSerialized( doc2.data, doc2, - new URL('http://localhost:4202/test/'), + new URL('https://localhost:4202/test/'), )) as Task; assert.ok( Array.isArray(t2b.priorities), @@ -724,7 +724,7 @@ module('Integration | enumField', function (hooks) { let t2 = (await createFromSerialized( doc.data, doc, - new URL('http://localhost:4202/test/'), + new URL('https://localhost:4202/test/'), )) as Task; assert.strictEqual( t2.priority, @@ -840,7 +840,7 @@ 
module('Integration | enumField', function (hooks) { let t2 = (await createFromSerialized( doc.data, doc, - new URL('http://localhost:4202/test/'), + new URL('https://localhost:4202/test/'), )) as Task; assert.deepEqual( t2.priorities, diff --git a/packages/host/tests/integration/realm-indexing-test.gts b/packages/host/tests/integration/realm-indexing-test.gts index d3035e54b30..7a6d660240e 100644 --- a/packages/host/tests/integration/realm-indexing-test.gts +++ b/packages/host/tests/integration/realm-indexing-test.gts @@ -349,7 +349,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -372,7 +372,7 @@ module(`Integration | realm indexing`, function (hooks) { [ `${testRealmURL}Person/owner`, `${testRealmURL}Person/owner.json`, - 'http://localhost:4202/test/pet', + 'https://localhost:4202/test/pet', ].sort(), 'error deps are correct', ); @@ -390,7 +390,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/person'), + module: rri('https://localhost:4202/test/person'), name: 'Person', }, }, @@ -425,7 +425,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/pet'), + module: rri('https://localhost:4202/test/pet'), name: 'Pet', }, lastModified: adapter.lastModifiedMap.get( @@ -469,7 +469,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -489,7 +489,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/pet'), + module: rri('https://localhost:4202/test/pet'), name: 'Pet', }, }, @@ -525,7 +525,7 @@ module(`Integration | realm indexing`, function (hooks) { }, 
meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/pet'), + module: rri('https://localhost:4202/test/pet'), name: 'Pet', }, realmURL: ri('http://test-realm/test/'), @@ -583,7 +583,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/pet'), + module: rri('https://localhost:4202/test/pet'), name: 'Pet', }, realmURL: ri('http://test-realm/test/'), @@ -623,7 +623,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -643,7 +643,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/pet'), + module: rri('https://localhost:4202/test/pet'), name: 'Pet', }, }, @@ -695,7 +695,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -714,7 +714,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/pet'), + module: rri('https://localhost:4202/test/pet'), name: 'Pet', }, }, @@ -741,7 +741,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -762,7 +762,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -798,7 +798,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/pet'), + module: rri('https://localhost:4202/test/pet'), name: 'Pet', }, lastModified: adapter.lastModifiedMap.get( @@ 
-841,7 +841,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -862,7 +862,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -896,7 +896,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/pet'), + module: rri('https://localhost:4202/test/pet'), name: 'Pet', }, lastModified: adapter.lastModifiedMap.get( @@ -2298,7 +2298,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/vendor`, + module: `https://localhost:4202/test/vendor`, name: 'Vendor', }, }, @@ -2312,7 +2312,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/chain`, + module: `https://localhost:4202/test/chain`, name: 'Chain', }, }, @@ -2326,7 +2326,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/chain`, + module: `https://localhost:4202/test/chain`, name: 'Chain', }, }, @@ -2393,7 +2393,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri(`http://localhost:4202/test/vendor`), + module: rri(`https://localhost:4202/test/vendor`), name: 'Vendor', }, lastModified: adapter.lastModifiedMap.get( @@ -2439,7 +2439,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri(`http://localhost:4202/test/chain`), + module: rri(`https://localhost:4202/test/chain`), name: 'Chain', }, lastModified: adapter.lastModifiedMap.get( @@ -2481,7 +2481,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - 
module: rri(`http://localhost:4202/test/chain`), + module: rri(`https://localhost:4202/test/chain`), name: 'Chain', }, lastModified: adapter.lastModifiedMap.get( @@ -2523,7 +2523,7 @@ module(`Integration | realm indexing`, function (hooks) { id: `${testRealmURL}Boom/boom`, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/card-with-error', + module: 'https://localhost:4202/test/card-with-error', name: 'Boom', }, }, @@ -2537,7 +2537,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -2606,7 +2606,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -2670,7 +2670,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/post', + module: 'https://localhost:4202/test/post', name: 'Post', }, }, @@ -2690,7 +2690,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/post', + module: 'https://localhost:4202/test/post', name: 'Post', }, }, @@ -2821,7 +2821,7 @@ module(`Integration | realm indexing`, function (hooks) { cardDescription: 'Spec for Booking', specType: 'card', ref: { - module: 'http://localhost:4202/test/booking', + module: 'https://localhost:4202/test/booking', name: 'Booking', }, }, @@ -2844,10 +2844,10 @@ module(`Integration | realm indexing`, function (hooks) { id: `${testRealmURL}Spec/booking`, cardDescription: 'Spec for Booking', specType: 'card', - moduleHref: 'http://localhost:4202/test/booking', + moduleHref: 'https://localhost:4202/test/booking', containedExamples: null, linkedExamples: null, - ref: 'http://localhost:4202/test/booking/Booking', + ref: 'https://localhost:4202/test/booking/Booking', 
cardTitle: 'Booking', isCard: true, isComponent: false, @@ -3377,7 +3377,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/friend', + module: 'https://localhost:4202/test/friend', name: 'Friend', }, }, @@ -3399,7 +3399,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/friend', + module: 'https://localhost:4202/test/friend', name: 'Friend', }, }, @@ -3422,7 +3422,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/friend', + module: 'https://localhost:4202/test/friend', name: 'Friend', }, }, @@ -3458,7 +3458,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/friend'), + module: rri('https://localhost:4202/test/friend'), name: 'Friend', }, lastModified: adapter.lastModifiedMap.get( @@ -3542,7 +3542,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/friend', + module: 'https://localhost:4202/test/friend', name: 'Friend', }, }, @@ -3568,7 +3568,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/friend', + module: 'https://localhost:4202/test/friend', name: 'Friend', }, }, @@ -3610,7 +3610,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/friend'), + module: rri('https://localhost:4202/test/friend'), name: 'Friend', }, lastModified: adapter.lastModifiedMap.get( @@ -3662,7 +3662,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/friend'), + module: rri('https://localhost:4202/test/friend'), name: 'Friend', }, lastModified: adapter.lastModifiedMap.get( @@ -3760,7 
+3760,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/friend'), + module: rri('https://localhost:4202/test/friend'), name: 'Friend', }, lastModified: adapter.lastModifiedMap.get( @@ -3812,7 +3812,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/friend'), + module: rri('https://localhost:4202/test/friend'), name: 'Friend', }, lastModified: adapter.lastModifiedMap.get( @@ -3898,7 +3898,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/friend', + module: 'https://localhost:4202/test/friend', name: 'Friend', }, }, @@ -3940,7 +3940,7 @@ module(`Integration | realm indexing`, function (hooks) { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/test/friend'), + module: rri('https://localhost:4202/test/friend'), name: 'Friend', }, lastModified: adapter.lastModifiedMap.get( @@ -4652,7 +4652,6 @@ module(`Integration | realm indexing`, function (hooks) { // Exclude synthetic imports that encapsulate scoped CSS .filter((ref) => !ref.includes('glimmer-scoped.css')), [ - 'http://localhost:4202/test/person', 'http://localhost:4206/@cardstack/boxel-icons/v1/icons/align-box-left-middle', 'http://localhost:4206/@cardstack/boxel-icons/v1/icons/align-left', 'http://localhost:4206/@cardstack/boxel-icons/v1/icons/arrow-left', @@ -4718,6 +4717,7 @@ module(`Integration | realm indexing`, function (hooks) { 'https://cardstack.com/base/string', 'https://cardstack.com/base/text-input-validator', 'https://cardstack.com/base/watched-array', + 'https://localhost:4202/test/person', 'https://packages/@cardstack/boxel-host/commands/copy-and-edit', 'https://packages/@cardstack/boxel-host/commands/copy-card', 'https://packages/@cardstack/boxel-host/commands/copy-card-as-markdown', @@ -4803,7 +4803,6 @@ module(`Integration | realm indexing`, function 
(hooks) { // Exclude synthetic imports that encapsulate scoped CSS .filter((ref) => !ref.includes('glimmer-scoped.css')), [ - 'http://localhost:4202/test/person', 'http://localhost:4206/@cardstack/boxel-icons/v1/icons/align-box-left-middle', 'http://localhost:4206/@cardstack/boxel-icons/v1/icons/align-left', 'http://localhost:4206/@cardstack/boxel-icons/v1/icons/apps', @@ -4881,6 +4880,7 @@ module(`Integration | realm indexing`, function (hooks) { 'https://cardstack.com/base/string', 'https://cardstack.com/base/text-input-validator', 'https://cardstack.com/base/watched-array', + 'https://localhost:4202/test/person', 'https://packages/@cardstack/boxel-host/commands/copy-and-edit', 'https://packages/@cardstack/boxel-host/commands/copy-card', 'https://packages/@cardstack/boxel-host/commands/copy-card-as-markdown', diff --git a/packages/host/tests/integration/realm-test.gts b/packages/host/tests/integration/realm-test.gts index 5b64674948c..b9f60c3d03d 100644 --- a/packages/host/tests/integration/realm-test.gts +++ b/packages/host/tests/integration/realm-test.gts @@ -153,7 +153,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -178,7 +178,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -224,7 +224,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, lastModified: adapter.lastModifiedMap.get( @@ -258,7 +258,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, lastModified: adapter.lastModifiedMap.get( @@ -291,13 
+291,13 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { - self: `http://localhost:4202/test/hassan`, + self: `https://localhost:4202/test/hassan`, }, }, }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -328,12 +328,12 @@ module('Integration | realm', function (hooks) { ); assert.strictEqual( json.data.relationships.owner.links.self, - 'http://localhost:4202/test/hassan', + 'https://localhost:4202/test/hassan', 'owner self link points to other realm', ); assert.strictEqual( json.data.relationships.owner.data.id, - 'http://localhost:4202/test/hassan', + 'https://localhost:4202/test/hassan', 'owner data id points to other realm', ); assert.strictEqual( @@ -356,7 +356,7 @@ module('Integration | realm', function (hooks) { let hassan = included[0]; assert.strictEqual( hassan.id, - 'http://localhost:4202/test/hassan', + 'https://localhost:4202/test/hassan', 'included hassan id', ); assert.strictEqual( @@ -627,7 +627,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -657,7 +657,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -699,7 +699,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, lastModified: adapter.lastModifiedMap.get( @@ -733,7 +733,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, lastModified: adapter.lastModifiedMap.get( @@ -770,7 +770,7 @@ module('Integration | realm', function 
(hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -860,7 +860,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -885,7 +885,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -942,7 +942,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -980,7 +980,7 @@ module('Integration | realm', function (hooks) { let { data: cards } = await queryEngine.searchCards({ filter: { on: { - module: rri(`http://localhost:4202/test/person`), + module: rri(`https://localhost:4202/test/person`), name: 'Person', }, eq: { firstName: 'Van Gogh' }, @@ -1007,7 +1007,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/booking', + module: 'https://localhost:4202/test/booking', name: 'Booking', }, }, @@ -1038,7 +1038,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/booking', + module: 'https://localhost:4202/test/booking', name: 'Booking', }, }, @@ -1085,7 +1085,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/booking', + module: 'https://localhost:4202/test/booking', name: 'Booking', }, lastModified: adapter.lastModifiedMap.get( @@ -1147,7 +1147,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 
'Pet', }, }, @@ -1160,7 +1160,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -1172,7 +1172,7 @@ module('Integration | realm', function (hooks) { attributes: { firstName: 'Hassan', lastName: 'Abdel-Rahman' }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -1190,7 +1190,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1219,7 +1219,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1353,7 +1353,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1374,7 +1374,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -1387,7 +1387,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -1399,7 +1399,7 @@ module('Integration | realm', function (hooks) { attributes: { firstName: 'Hassan', lastName: 'Abdel-Rahman' }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ 
-1417,7 +1417,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1444,7 +1444,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1495,7 +1495,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, lastModified: adapter.lastModifiedMap.get(`${testRealmURL}jackie.json`), @@ -1517,7 +1517,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -1530,7 +1530,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -1545,7 +1545,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1572,7 +1572,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1611,7 +1611,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, lastModified: adapter.lastModifiedMap.get(`${testRealmURL}jackie.json`), @@ -1748,7 
+1748,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -1761,7 +1761,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -1779,7 +1779,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1800,7 +1800,7 @@ module('Integration | realm', function (hooks) { relationships: { pets: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1834,7 +1834,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, lastModified: adapter.lastModifiedMap.get(`${testRealmURL}jackie.json`), @@ -1856,7 +1856,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -1868,7 +1868,7 @@ module('Integration | realm', function (hooks) { attributes: { firstName: 'Hassan', lastName: 'Abdel-Rahman' }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -1880,7 +1880,7 @@ module('Integration | realm', function (hooks) { attributes: { firstName: 'Burcu' }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + 
module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -1898,7 +1898,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1925,7 +1925,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -1966,7 +1966,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, lastModified: adapter.lastModifiedMap.get(`${testRealmURL}jackie.json`), @@ -1988,7 +1988,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -2001,7 +2001,7 @@ module('Integration | realm', function (hooks) { relationships: { owner: { links: { self: null } } }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet`, + module: `https://localhost:4202/test/pet`, name: 'Pet', }, }, @@ -2013,7 +2013,7 @@ module('Integration | realm', function (hooks) { attributes: { firstName: 'Hassan', lastName: 'Abdel-Rahman' }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -2025,7 +2025,7 @@ module('Integration | realm', function (hooks) { attributes: { firstName: 'Burcu' }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -2043,7 +2043,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: 
`https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -2072,7 +2072,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, }, @@ -2112,7 +2112,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: `http://localhost:4202/test/pet-person`, + module: `https://localhost:4202/test/pet-person`, name: 'PetPerson', }, lastModified: adapter.lastModifiedMap.get(`${testRealmURL}jackie.json`), @@ -2136,7 +2136,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -2151,7 +2151,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/person', + module: 'https://localhost:4202/test/person', name: 'Person', }, }, @@ -2174,7 +2174,7 @@ module('Integration | realm', function (hooks) { }, meta: { adoptsFrom: { - module: 'http://localhost:4202/test/pet', + module: 'https://localhost:4202/test/pet', name: 'Pet', }, }, @@ -3195,7 +3195,7 @@ module('Integration | realm', function (hooks) { }); test('included card uses correct module path when realm is mounted', async function (assert) { - let mountedRealmURL = 'http://localhost:4201/mounted-test/'; + let mountedRealmURL = 'https://localhost:4201/mounted-test/'; let spreadsheet1Id = 'spreadsheet-1'; let spreadsheet2Id = 'spreadsheet-2'; diff --git a/packages/host/vite.config.mjs b/packages/host/vite.config.mjs index bb2e85ec853..2b422948717 100644 --- a/packages/host/vite.config.mjs +++ b/packages/host/vite.config.mjs @@ -5,6 +5,7 @@ import { ember, } from '@embroider/vite'; import { babel } from '@rollup/plugin-babel'; +import { readFileSync } from 'node:fs'; import { readFile, writeFile } from 'node:fs/promises'; import { createRequire } 
from 'node:module'; import path from 'node:path'; @@ -12,6 +13,32 @@ import { fileURLToPath } from 'node:url'; import { scopedCSS } from 'glimmer-scoped-css/rollup'; import { boxelUIChecksumPlugin } from './lib/build/boxel-ui-checksum-plugin.mjs'; +// Local HTTPS dev access: the realm-server speaks HTTPS+HTTP/2 in local +// dev (see `infra:ensure-dev-cert`), and the browser hits both Vite and +// the realm-server in the same page. Mixing schemes triggers CORS +// preflight failures ("Redirect is not allowed for a preflight +// request" when the http→https redirect runs) and mixed-content +// blocking. When the same TLS cert/key the realm-server reads via +// REALM_SERVER_TLS_CERT_FILE / _KEY_FILE is available, terminate TLS +// in Vite too so http://localhost:4200 becomes https://localhost:4200 +// and both origins share the scheme. `env-vars.sh` exports those env +// vars whenever the cert exists; absent the cert, the dev stack stays +// on HTTP end-to-end and this falls through to Vite's default. +function devHttpsConfig() { + let certPath = process.env.REALM_SERVER_TLS_CERT_FILE; + let keyPath = process.env.REALM_SERVER_TLS_KEY_FILE; + if (!certPath || !keyPath) return undefined; + try { + return { + cert: readFileSync(certPath), + key: readFileSync(keyPath), + }; + } catch { + return undefined; + } +} +const _devHttps = devHttpsConfig(); + const __dirname = path.dirname(fileURLToPath(import.meta.url)); const require = createRequire(import.meta.url); @@ -274,8 +301,10 @@ export default defineConfig(({ mode }) => ({ 'Cache-Control': 'no-store', }, ...(envHostname ? { allowedHosts: [envHostname] } : {}), + ...(_devHttps ? { https: _devHttps } : {}), }, server: { + ...(_devHttps ? { https: _devHttps } : {}), // Pre-warm the dep optimizer at server boot so the prerender's first // `/_standby` navigation doesn't race a cold Vite optimize. 
The host // transitive graph is ~1000 packages, and a cold optimize routinely diff --git a/packages/matrix/docker/synapse/dev/homeserver.yaml b/packages/matrix/docker/synapse/dev/homeserver.yaml index 6555716a619..ce4daa0ea2d 100644 --- a/packages/matrix/docker/synapse/dev/homeserver.yaml +++ b/packages/matrix/docker/synapse/dev/homeserver.yaml @@ -101,7 +101,7 @@ email: app_name: "Boxel" enable_notifs: true notif_for_new_users: false - client_base_url: "http://localhost:4200" + client_base_url: "https://localhost:4200" validation_token_lifetime: "1h" subjects: email_validation: "%(app)s Email Validation" diff --git a/packages/matrix/helpers/index.ts b/packages/matrix/helpers/index.ts index ec87953e035..b2ebb3b62fb 100644 --- a/packages/matrix/helpers/index.ts +++ b/packages/matrix/helpers/index.ts @@ -16,7 +16,7 @@ import { appURL, BasicSQLExecutor } from './isolated-realm-server'; import { APP_BOXEL_MESSAGE_MSGTYPE } from './matrix-constants'; import { randomUUID } from 'crypto'; -export const testHost = 'http://localhost:4205/test'; +export const testHost = 'https://localhost:4205/test'; export const mailHost = 'http://localhost:5001'; export const initialRoomName = 'New AI Assistant Chat'; export const REGISTRATION_TOKEN = 'abc123'; @@ -111,12 +111,12 @@ export async function setRealmRedirects(page: Page) { await registerRealmRedirect( page, 'http://localhost:4201/skills/', - 'http://localhost:4205/skills/', + 'https://localhost:4205/skills/', ); await registerRealmRedirect( page, 'http://localhost:4201/base/', - 'http://localhost:4205/base/', + 'https://localhost:4205/base/', ); } diff --git a/packages/matrix/helpers/isolated-realm-server.ts b/packages/matrix/helpers/isolated-realm-server.ts index de361ffa43d..3a8d6712b3f 100644 --- a/packages/matrix/helpers/isolated-realm-server.ts +++ b/packages/matrix/helpers/isolated-realm-server.ts @@ -9,6 +9,18 @@ import type { SynapseInstance } from '../docker/synapse'; setGracefulCleanup(); +// The isolated 
realm-server / worker stack matches production: +// HTTPS+HTTP/2 on `https://localhost:4205`. URL maps, realm registry +// entries, and the Playwright `baseURL` all hardcode `https://`, and +// the spawned child processes inherit `REALM_SERVER_TLS_CERT_FILE` / +// `_KEY_FILE` from `mise-tasks/lib/env-vars.sh` so the same mkcert +// leaf the parent dev stack uses on :4201/:4202 also terminates TLS +// on :4205. Keeping the wire protocol identical to prod means the +// matrix suite acts as a regression guard on the h2 framing changes +// elsewhere in this PR (`setContextResponse` h1-only-header filter, +// `fetchRequestFromContext` pseudo-header strip, the HEAD-stream +// `writable` patch, and the hand-rolled `proxyAsset` forwarder). + const testRealmCards = resolve( join(__dirname, '..', '..', 'host', 'tests', 'cards'), ); @@ -18,7 +30,7 @@ const skillsRealmDir = resolve( ); const baseRealmDir = resolve(join(__dirname, '..', '..', 'base')); const matrixDir = resolve(join(__dirname, '..')); -export const appURL = 'http://localhost:4205/test'; +export const appURL = 'https://localhost:4205/test'; const DEFAULT_PRERENDER_PORT = 4231; const DEFAULT_WORKER_MANAGER_READY_TIMEOUT_MS = 120_000; @@ -53,11 +65,7 @@ function parseTimeoutMs( return Number.isFinite(parsed) && parsed > 0 ? parsed : fallbackMs; } -function pushOutputTail( - output: string[], - prefix: string, - data: Buffer, -): void { +function pushOutputTail(output: string[], prefix: string, data: Buffer): void { for (let line of data.toString().split(/\r?\n/)) { if (!line.trim()) { continue; @@ -109,7 +117,9 @@ function buildStartupFailure( diagnostics: Record<string, unknown>, ): Error { let message = - reason instanceof Error ? reason.message : `Startup failed: ${String(reason)}`; + reason instanceof Error + ?
reason.message + : `Startup failed: ${String(reason)}`; let error = new Error( `${message}\nStartup diagnostics:\n${JSON.stringify( { @@ -234,7 +244,24 @@ export async function startPrerenderServer( ...process.env, NODE_ENV: process.env.NODE_ENV ?? 'development', NODE_NO_WARNINGS: '1', - BOXEL_HOST_URL: process.env.HOST_URL ?? 'http://localhost:4200', + // The mkcert leaf for the isolated stack covers `*.localhost`, but + // Node's `tls.checkServerIdentity` hard-codes a rule disallowing + // wildcard matching against TLDs (it treats `localhost` as a TLD + // per a strict reading of RFC 6125), so worker fetches to + // `https://publish-realm-XXX.localhost:4205/...` fail with + // ERR_TLS_CERT_ALTNAME_INVALID. Disabling cert validation in the + // harness's spawned Node children works around this; note it turns + // off chain and SAN verification alike, not just SAN matching, but + // the wire is loopback-only test traffic and the mkcert root is + // still exported via NODE_EXTRA_CA_CERTS for strict verifiers. + NODE_TLS_REJECT_UNAUTHORIZED: '0', + // vite preview always serves HTTPS on :4200 in this harness + // (vite.config.mjs reads the mkcert leaf, which mise activates via + // infra:ensure-dev-cert before boot). Hardcode the canonical URL + // here rather than reading process.env.HOST_URL — a shell that + // mise-activated before the cert existed leaks a stale http://... + // value and sends the prerender to a port that doesn't speak HTTP. + BOXEL_HOST_URL: 'https://localhost:4200', LOG_LEVELS: process.env.TEST_HARNESS_PRERENDER_LOG_LEVELS ??
process.env.LOG_LEVELS, }; @@ -320,7 +347,10 @@ export async function startServer({ process.env.TEST_HARNESS_REALM_SERVER_START_TIMEOUT_MS, DEFAULT_REALM_SERVER_START_TIMEOUT_MS, ); - let workerManagerMetadataFile = join(dir.name, 'worker-manager-metadata.json'); + let workerManagerMetadataFile = join( + dir.name, + 'worker-manager-metadata.json', + ); let realmServerMetadataFile = join(dir.name, 'realm-server-metadata.json'); let workerManagerOutput: string[] = []; let realmServerOutput: string[] = []; @@ -346,16 +376,16 @@ export async function startServer({ `--prerendererUrl='${prerenderURL}'`, `--migrateDB`, - `--fromUrl='http://localhost:4205/test/'`, - `--toUrl='http://localhost:4205/test/'`, + `--fromUrl='https://localhost:4205/test/'`, + `--toUrl='https://localhost:4205/test/'`, ]; workerArgs = workerArgs.concat([ `--fromUrl='@cardstack/skills/'`, - `--toUrl='http://localhost:4205/skills/'`, + `--toUrl='https://localhost:4205/skills/'`, ]); workerArgs = workerArgs.concat([ `--fromUrl='https://cardstack.com/base/'`, - `--toUrl='http://localhost:4205/base/'`, + `--toUrl='https://localhost:4205/base/'`, ]); let workerManager = spawn('ts-node', workerArgs, { @@ -363,6 +393,11 @@ export async function startServer({ stdio: ['pipe', 'pipe', 'pipe', 'ipc'], env: { ...process.env, + // See the prerender spawn above for why this is needed (Node's + // `tls.checkServerIdentity` doesn't honor `*.localhost` wildcard + // SANs, so publish-realm subdomain fetches from the spawned + // worker fail with ERR_TLS_CERT_ALTNAME_INVALID). 
+ NODE_TLS_REJECT_UNAUTHORIZED: '0', TEST_HARNESS_WORKER_START_TIMEOUT_MS: String(workerStartTimeoutMs), TEST_HARNESS_WORKER_MANAGER_METADATA_FILE: workerManagerMetadataFile, }, @@ -447,20 +482,20 @@ export async function startServer({ `--path='${testRealmDir}'`, `--username='test_realm'`, - `--fromUrl='http://localhost:4205/test/'`, - `--toUrl='http://localhost:4205/test/'`, + `--fromUrl='https://localhost:4205/test/'`, + `--toUrl='https://localhost:4205/test/'`, ]; serverArgs = serverArgs.concat([ `--username='skills_realm'`, `--path='${skillsRealmDir}'`, `--fromUrl='@cardstack/skills/'`, - `--toUrl='http://localhost:4205/skills/'`, + `--toUrl='https://localhost:4205/skills/'`, ]); serverArgs = serverArgs.concat([ `--username='base_realm'`, `--path='${baseRealmDir}'`, `--fromUrl='https://cardstack.com/base/'`, - `--toUrl='http://localhost:4205/base/'`, + `--toUrl='https://localhost:4205/base/'`, ]); console.log(`realm server database: ${testDBName}`); @@ -470,6 +505,18 @@ export async function startServer({ stdio: ['pipe', 'pipe', 'pipe', 'ipc'], env: { ...process.env, + // See the prerender spawn for why this is needed (Node's + // `tls.checkServerIdentity` doesn't honor `*.localhost` wildcard + // SANs, so publish-realm subdomain fetches from the spawned + // realm-server fail with ERR_TLS_CERT_ALTNAME_INVALID). + NODE_TLS_REJECT_UNAUTHORIZED: '0', + // Override HOST_URL explicitly: main.ts reads it as `distURL` (the + // URL the realm-server fetches index.html from at boot). A stale + // HOST_URL=http leaking in from a shell that mise-activated before + // the cert existed would land the boot fetch on a port that doesn't + // speak HTTP, and the realm-server would exit -2 before any test + // can run. The harness boots vite preview as HTTPS on :4200. + HOST_URL: 'https://localhost:4200', // Matrix tests don't exercise GitHub PR creation, so disable that route // to avoid pulling Octokit into the realm server startup path. 
DISABLE_GITHUB_PR_ROUTE: 'true', @@ -581,8 +628,8 @@ export async function startServer({ // metadata backfill trims on first boot. await server.executeSQL( `INSERT INTO realm_metadata (url, show_as_catalog) VALUES - ('http://localhost:4205/test/', true), - ('http://localhost:4205/skills/', true) + ('https://localhost:4205/test/', true), + ('https://localhost:4205/skills/', true) ON CONFLICT (url) DO UPDATE SET show_as_catalog = true`, ); diff --git a/packages/matrix/package.json b/packages/matrix/package.json index c4e4fd7fb5e..2e51a4151ad 100644 --- a/packages/matrix/package.json +++ b/packages/matrix/package.json @@ -41,6 +41,8 @@ "register-skills-writer": "MATRIX_USERNAME=skills_writer MATRIX_PASSWORD=password ts-node --transpileOnly ./scripts/register-test-user.ts", "register-homepage-writer": "MATRIX_USERNAME=homepage_writer MATRIX_PASSWORD=password ts-node --transpileOnly ./scripts/register-test-user.ts", "register-realm-users": "ts-node --transpileOnly ./scripts/register-matrix-users.ts realms-only", + "migrate-account-data-http-to-https": "ts-node --transpileOnly ./scripts/migrate-account-data-http-to-https.ts", + "migrate-account-data-https-to-http": "ts-node --transpileOnly ./scripts/migrate-account-data-http-to-https.ts --reverse", "register-test-admin": "MATRIX_IS_ADMIN=TRUE MATRIX_USERNAME=admin MATRIX_PASSWORD=password ts-node --transpileOnly ./scripts/register-test-user.ts", "register-test-admin-and-token": "pnpm register-test-admin && ts-node --transpileOnly ./scripts/register-test-token.ts", "register-all": "ts-node --transpileOnly ./scripts/register-matrix-users.ts all", diff --git a/packages/matrix/playwright.config.ts b/packages/matrix/playwright.config.ts index adb45301874..6f5386ca09f 100644 --- a/packages/matrix/playwright.config.ts +++ b/packages/matrix/playwright.config.ts @@ -14,7 +14,8 @@ export default defineConfig({ reporter: process.env.CI ? 'blob' : 'html', /* Shared settings for all the projects below. 
See https://playwright.dev/docs/api/class-testoptions. */ use: { - baseURL: 'http://localhost:4205/test', + baseURL: 'https://localhost:4205/test', + ignoreHTTPSErrors: true, /* Collect trace when retrying the failed test. See https://playwright.dev/docs/trace-viewer */ trace: 'retry-with-trace', @@ -36,8 +37,19 @@ export default defineConfig({ args: [ // Simulate resolving a custom workspace domain to a realm server '--host-resolver-rules=MAP published.realm 127.0.0.1:4205', + // The mkcert leaf's SAN is `localhost` only — the published + // realm subdomain (`https://published.realm:4205/`) and the + // tenant-style subdomains under `*.localhost:4205` that + // publish-realm.spec.ts exercises fail strict cert + // validation. Pair --ignore-certificate-errors with + // --allow-insecure-localhost so Chrome 144+ actually honors + // the relaxation (Chrome silently stopped honoring + // --ignore-certificate-errors on its own unless + // --allow-insecure-localhost is also passed). + '--ignore-certificate-errors', + '--allow-insecure-localhost', // Allow iframe to request storage access depsite being considered insecure - '--unsafely-treat-insecure-origin-as-secure=http://published.realm', + '--unsafely-treat-insecure-origin-as-secure=https://published.realm', ], // devtools: true, }, diff --git a/packages/matrix/scripts/migrate-account-data-http-to-https.ts b/packages/matrix/scripts/migrate-account-data-http-to-https.ts new file mode 100644 index 00000000000..f4be81f26dc --- /dev/null +++ b/packages/matrix/scripts/migrate-account-data-http-to-https.ts @@ -0,0 +1,255 @@ +// One-off migration: rewrite `app.boxel.realms` account_data entries +// that reference `http://localhost:42XX/...` to the new canonical +// `https://` scheme — or the reverse if `--reverse` is passed.
+// Companion to the `1779100257124_canonical-url-http-to-https` +// postgres migration — that one rewrites the realm-server DB; this +// one rewrites the per-user state synapse holds for every Boxel user +// (the list of workspaces the host bundle reads via +// `getAccountDataFromServer` on app boot). Without this migration, a +// logged-in user's app keeps fetching the http:// realm URLs, the +// realm-server's dispatcher 301-redirects every request to https://, +// and the browser blocks the CORS preflight ("Redirect is not allowed +// for a preflight request"). +// +// The script logs in as the local synapse admin user, lists every +// user, admin-impersonates each one to get an access token (the +// standard account_data endpoint requires the user's own token — +// synapse admin can read but not write other users' account_data), +// reads `app.boxel.realms`, rewrites any matching URLs in-place, and +// PUTs the updated list back. +// +// Safe to re-run: rows already in the target scheme are left +// untouched, and the PUT only fires when at least one URL changed. + +import { getSynapseURL } from '../helpers/environment-config'; + +const ADMIN_USERNAME = 'admin'; +const ADMIN_PASSWORD = 'password'; +const ACCOUNT_DATA_TYPE = 'app.boxel.realms'; + +// Default direction is http → https (forward). `--reverse` flips it to +// https → http, e.g. for `pnpm migrate down` on the postgres migration. +const REVERSE = process.argv.includes('--reverse'); +const FROM_SCHEME = REVERSE ? 'https' : 'http'; +const TO_SCHEME = REVERSE ? 'http' : 'https'; + +// Only flip the three known localhost realm-server canonicals (mirrors +// the postgres migration `1779100257124_canonical-url-http-to-https.js`, +// which covers :4201, :4202, and :4205). Production / staging realm URLs +// are real hostnames and would never appear in a local synapse, so a +// broader regex would just create the opportunity to corrupt unrelated +// data. 
+const URL_PREFIXES_TO_FLIP = [ + `${FROM_SCHEME}://localhost:4201/`, + `${FROM_SCHEME}://localhost:4202/`, + `${FROM_SCHEME}://localhost:4205/`, +]; + +interface LoginResponse { + access_token: string; + user_id: string; +} + +async function loginAsAdmin(synapseURL: string): Promise<string> { + let response = await fetch(`${synapseURL}/_matrix/client/r0/login`, { + method: 'POST', + body: JSON.stringify({ + type: 'm.login.password', + user: ADMIN_USERNAME, + password: ADMIN_PASSWORD, + }), + }); + if (!response.ok) { + throw new Error( + `Failed to log in as admin: HTTP ${response.status} ${await response.text()}`, + ); + } + let body = (await response.json()) as LoginResponse; + return body.access_token; +} + +async function listAllUsers( + synapseURL: string, + adminToken: string, +): Promise<string[]> { + let userIds: string[] = []; + let from: string | undefined; + // Paginate via `next_token`. + for (;;) { + let url = new URL(`${synapseURL}/_synapse/admin/v2/users`); + url.searchParams.set('limit', '100'); + url.searchParams.set('guests', 'false'); + url.searchParams.set('deactivated', 'false'); + if (from) url.searchParams.set('from', from); + let response = await fetch(url, { + headers: { Authorization: `Bearer ${adminToken}` }, + }); + if (!response.ok) { + throw new Error( + `Failed to list users: HTTP ${response.status} ${await response.text()}`, + ); + } + let body = (await response.json()) as { + users: Array<{ name: string }>; + next_token?: string; + }; + for (let u of body.users) userIds.push(u.name); + if (!body.next_token) break; + from = body.next_token; + } + return userIds; +} + +async function impersonate( + synapseURL: string, + adminToken: string, + userId: string, +): Promise<string> { + // Synapse admin endpoint that returns an access token for any user.
+  let response = await fetch(
+    `${synapseURL}/_synapse/admin/v1/users/${encodeURIComponent(
+      userId,
+    )}/login`,
+    {
+      method: 'POST',
+      headers: { Authorization: `Bearer ${adminToken}` },
+      body: JSON.stringify({}),
+    },
+  );
+  if (!response.ok) {
+    throw new Error(
+      `Failed to impersonate ${userId}: HTTP ${response.status} ${await response.text()}`,
+    );
+  }
+  let body = (await response.json()) as { access_token: string };
+  return body.access_token;
+}
+
+async function getRealmsAccountData(
+  synapseURL: string,
+  userId: string,
+  userToken: string,
+): Promise<{ realms?: string[] } | null> {
+  let response = await fetch(
+    `${synapseURL}/_matrix/client/v3/user/${encodeURIComponent(
+      userId,
+    )}/account_data/${ACCOUNT_DATA_TYPE}`,
+    { headers: { Authorization: `Bearer ${userToken}` } },
+  );
+  if (response.status === 404) {
+    // User has no `app.boxel.realms` yet — nothing to migrate.
+    return null;
+  }
+  if (!response.ok) {
+    throw new Error(
+      `Failed to GET ${ACCOUNT_DATA_TYPE} for ${userId}: HTTP ${response.status} ${await response.text()}`,
+    );
+  }
+  return (await response.json()) as { realms?: string[] };
+}
+
+async function putRealmsAccountData(
+  synapseURL: string,
+  userId: string,
+  userToken: string,
+  content: unknown,
+): Promise<void> {
+  let response = await fetch(
+    `${synapseURL}/_matrix/client/v3/user/${encodeURIComponent(
+      userId,
+    )}/account_data/${ACCOUNT_DATA_TYPE}`,
+    {
+      method: 'PUT',
+      headers: {
+        Authorization: `Bearer ${userToken}`,
+        'Content-Type': 'application/json',
+      },
+      body: JSON.stringify(content),
+    },
+  );
+  if (!response.ok) {
+    throw new Error(
+      `Failed to PUT ${ACCOUNT_DATA_TYPE} for ${userId}: HTTP ${response.status} ${await response.text()}`,
+    );
+  }
+}
+
+function rewriteURLs(urls: string[]): { urls: string[]; changedCount: number } {
+  let changedCount = 0;
+  let rewritten = urls.map((url) => {
+    for (let prefix of URL_PREFIXES_TO_FLIP) {
+      if (url.startsWith(prefix)) {
+        changedCount++;
+        return 
`${TO_SCHEME}://${url.slice(`${FROM_SCHEME}://`.length)}`;
+      }
+    }
+    return url;
+  });
+  return { urls: rewritten, changedCount };
+}
+
+async function main(): Promise<void> {
+  let synapseURL = getSynapseURL();
+  console.log(
+    `[migrate-account-data] Connecting to ${synapseURL} (${FROM_SCHEME} → ${TO_SCHEME})`,
+  );
+
+  let adminToken = await loginAsAdmin(synapseURL);
+  let userIds = await listAllUsers(synapseURL, adminToken);
+  console.log(`[migrate-account-data] Found ${userIds.length} users`);
+
+  let migratedUsers = 0;
+  let totalURLsChanged = 0;
+  let skippedNoData = 0;
+  let skippedAlreadyOnTargetScheme = 0;
+
+  for (let userId of userIds) {
+    // The admin can't impersonate itself ("Cannot use admin API to login
+    // as self"). It also has no realm list of its own, so skip it.
+    if (userId === `@${ADMIN_USERNAME}:localhost`) {
+      continue;
+    }
+    let userToken: string;
+    try {
+      userToken = await impersonate(synapseURL, adminToken, userId);
+    } catch (e) {
+      console.warn(
+        `[migrate-account-data] Skipping ${userId}: ${(e as Error).message}`,
+      );
+      continue;
+    }
+
+    let data = await getRealmsAccountData(synapseURL, userId, userToken);
+    if (!data || !Array.isArray(data.realms) || data.realms.length === 0) {
+      skippedNoData++;
+      continue;
+    }
+
+    let { urls: rewritten, changedCount } = rewriteURLs(data.realms);
+    if (changedCount === 0) {
+      skippedAlreadyOnTargetScheme++;
+      continue;
+    }
+
+    await putRealmsAccountData(synapseURL, userId, userToken, {
+      ...data,
+      realms: rewritten,
+    });
+    migratedUsers++;
+    totalURLsChanged += changedCount;
+    console.log(
+      `[migrate-account-data] ${userId}: rewrote ${changedCount} URL${changedCount === 1 ? 
'' : 's'}`,
+    );
+  }
+
+  console.log(`[migrate-account-data] Done.`);
+  console.log(`  Users migrated: ${migratedUsers}`);
+  console.log(`  URLs rewritten: ${totalURLsChanged}`);
+  console.log(`  Skipped (no data): ${skippedNoData}`);
+  console.log(`  Skipped (already ${TO_SCHEME}): ${skippedAlreadyOnTargetScheme}`);
+}
+
+main().catch((err) => {
+  console.error('[migrate-account-data] FAILED:', err);
+  process.exit(1);
+});
diff --git a/packages/matrix/scripts/test.sh b/packages/matrix/scripts/test.sh
index 8eeda67de3a..de971ccc393 100755
--- a/packages/matrix/scripts/test.sh
+++ b/packages/matrix/scripts/test.sh
@@ -2,13 +2,16 @@
 shard_flag=${1:+--shard}
 echo "running tests: ${1}"
-BASE_REALM="http-get://localhost:4201/base/"
+BASE_REALM="https-get://localhost:4201/base/"
 READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson"
 BASE_REALM_READY="$BASE_REALM$READY_PATH"
-WAIT_ON_TIMEOUT=600000 start-server-and-test \
+# START_SERVER_AND_TEST_INSECURE=1: wait-on against https-get://localhost:4201
+# needs the strictSSL escape hatch because start-server-and-test pins
+# strictSSL:true on the in-process axios used for the readiness probe.
+WAIT_ON_TIMEOUT=600000 START_SERVER_AND_TEST_INSECURE=1 start-server-and-test \ 'pnpm run wait' \ "$BASE_REALM_READY" \ "pnpm playwright test ${shard_flag} ${1}" diff --git a/packages/matrix/tests/head-tags.spec.ts b/packages/matrix/tests/head-tags.spec.ts index 391e9537aab..51dff3da431 100644 --- a/packages/matrix/tests/head-tags.spec.ts +++ b/packages/matrix/tests/head-tags.spec.ts @@ -82,7 +82,7 @@ test.describe('Head tags', () => { }) => { await publishDefaultRealm(page); - let publishedRealmURLString = `http://${user.username}.localhost:4205/new-workspace/index`; + let publishedRealmURLString = `https://${user.username}.localhost:4205/new-workspace/index`; await page.goto(publishedRealmURLString); @@ -270,7 +270,7 @@ test.describe('Head tags', () => { await page.locator('[data-test-publish-button]').click(); await page.waitForSelector('[data-test-unpublish-button]'); - let publishedRealmURL = `http://${user.username}.localhost:4205/${realmName}/`; + let publishedRealmURL = `https://${user.username}.localhost:4205/${realmName}/`; let defaultCardURL = `${publishedRealmURL}default-head-card.json`; await page.goto(defaultCardURL); diff --git a/packages/matrix/tests/host-mode.spec.ts b/packages/matrix/tests/host-mode.spec.ts index eee131ead35..7deec360ee4 100644 --- a/packages/matrix/tests/host-mode.spec.ts +++ b/packages/matrix/tests/host-mode.spec.ts @@ -185,7 +185,7 @@ test.describe('Host mode', () => { await page.reload(); await page.locator('[data-test-host-mode-isolated]').waitFor(); - publishedRealmURL = `http://published.localhost:4205/${username}/${realmName}/`; + publishedRealmURL = `https://published.localhost:4205/${username}/${realmName}/`; await page.evaluate( async ({ realmURL, publishedRealmURL }) => { @@ -197,7 +197,7 @@ test.describe('Host mode', () => { throw new Error(`No session token found for ${realmURL}`); } - let response = await fetch('http://localhost:4205/_publish-realm', { + let response = await 
fetch('https://localhost:4205/_publish-realm', { method: 'POST', headers: { Accept: 'application/json', @@ -222,7 +222,7 @@ test.describe('Host mode', () => { publishedCardURL = `${publishedRealmURL}index.json`; publishedWhitePaperCardURL = `${publishedRealmURL}white-paper.json`; publishedMyCardURL = `${publishedRealmURL}my-card.json`; - connectRouteURL = `http://localhost:4205/connect/${encodeURIComponent( + connectRouteURL = `https://localhost:4205/connect/${encodeURIComponent( publishedRealmURL, )}`; @@ -318,7 +318,7 @@ test.describe('Host mode', () => { page, }) => { let response = await page.goto( - 'http://localhost:4205/connect/http%3A%2F%2Fexample.com', + 'https://localhost:4205/connect/http%3A%2F%2Fexample.com', ); expect(response?.status()).toBe(404); diff --git a/packages/matrix/tests/publish-realm.spec.ts b/packages/matrix/tests/publish-realm.spec.ts index 725e51c3033..6ad907848be 100644 --- a/packages/matrix/tests/publish-realm.spec.ts +++ b/packages/matrix/tests/publish-realm.spec.ts @@ -60,11 +60,11 @@ test.describe('Publish realm', () => { await newTab.waitForLoadState(); await expect(newTab).toHaveURL( - `http://${user.username}.localhost:4205/new-workspace/`, + `https://${user.username}.localhost:4205/new-workspace/`, ); await expect( newTab.locator( - `[data-test-card="http://${user.username}.localhost:4205/new-workspace/index"]`, + `[data-test-card="https://${user.username}.localhost:4205/new-workspace/index"]`, ), ).toBeVisible(); await newTab.close(); @@ -119,11 +119,11 @@ test.describe('Publish realm', () => { await newTab.waitForLoadState(); await expect(newTab).toHaveURL( - 'http://acceptable-subdomain.localhost:4205/', + 'https://acceptable-subdomain.localhost:4205/', ); await expect( newTab.locator( - '[data-test-card="http://acceptable-subdomain.localhost:4205/index"]', + '[data-test-card="https://acceptable-subdomain.localhost:4205/index"]', ), ).toBeVisible(); await newTab.close(); @@ -452,7 +452,7 @@ test.describe('Publish realm', () 
=> { await newTab.waitForLoadState(); await expect(newTab).toHaveURL( - `http://${user.username}.localhost:4205/new-workspace/`, + `https://${user.username}.localhost:4205/new-workspace/`, ); await newTab.close(); await page.bringToFront(); @@ -482,7 +482,7 @@ test.describe('Publish realm', () => { await newTab.waitForLoadState(); await expect(newTab).toHaveURL( - `http://${user.username}.localhost:4205/new-workspace/`, + `https://${user.username}.localhost:4205/new-workspace/`, ); await newTab.close(); await page.bringToFront(); diff --git a/packages/matrix/tests/registration-with-token.spec.ts b/packages/matrix/tests/registration-with-token.spec.ts index 8937cb7133e..0d44909736c 100644 --- a/packages/matrix/tests/registration-with-token.spec.ts +++ b/packages/matrix/tests/registration-with-token.spec.ts @@ -269,7 +269,7 @@ test.describe('User Registration w/ Token', () => { APP_BOXEL_REALMS_EVENT_TYPE, ); expect(realms).toEqual({ - realms: [`http://localhost:4205/${firstUser.username}/personal/`], + realms: [`https://localhost:4205/${firstUser.username}/personal/`], }); }); diff --git a/packages/observability/scripts/apply.sh b/packages/observability/scripts/apply.sh index a2c720d96cd..8b5bf5ac7af 100755 --- a/packages/observability/scripts/apply.sh +++ b/packages/observability/scripts/apply.sh @@ -111,7 +111,7 @@ cp -R ./grafanactl/resources/. "$rendered/" case "$env_name" in local) - realm_server_url="${REALM_SERVER_URL:-http://localhost:4201/}" + realm_server_url="${REALM_SERVER_URL:-https://localhost:4201/}" # Matches the dev default in packages/software-factory/src/harness/shared.ts # and the matrix test harness, so local Grafana buttons authenticate # against a freshly started realm-server with no extra env config. 
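The `local` arm of apply.sh above falls back to a hardcoded default via shell parameter expansion. A minimal standalone sketch of that `${VAR:-default}` pattern (the override URL here is hypothetical; the fallback mirrors apply.sh's real default):

```shell
# ${VAR:-default}: use $REALM_SERVER_URL when it is set and non-empty,
# otherwise fall back to the hardcoded local default.
unset REALM_SERVER_URL
echo "${REALM_SERVER_URL:-https://localhost:4201/}"   # -> https://localhost:4201/

REALM_SERVER_URL="https://realms.example.test/"       # hypothetical override
echo "${REALM_SERVER_URL:-https://localhost:4201/}"   # -> https://realms.example.test/
```

Note the `:-` form (set *and* non-empty) rather than `-` (merely set), so an accidentally exported empty `REALM_SERVER_URL` still gets the working default.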
diff --git a/packages/observability/scripts/diff.sh b/packages/observability/scripts/diff.sh index 6c2e05230c6..4b3dcd1fc13 100755 --- a/packages/observability/scripts/diff.sh +++ b/packages/observability/scripts/diff.sh @@ -153,7 +153,7 @@ source ./scripts/grafanactl-env.sh "$env_name" # `__REALM_SERVER_URL__` placeholder substitution below produces the # same value diff.sh expects to find in the live (pulled) state. CI # sources REALM_SERVER_URL from SSM in observability-diff.yml; locally -# we default to apply.sh's hardcoded http://localhost:4201/. For +# we default to apply.sh's hardcoded https://localhost:4201/. For # staging/production ad-hoc runs, the operator must export the same # value apply.sh uses (CI fetches it from //boxel-grafana/realm_server_url # — see observability-apply-${env_name}.yml). @@ -170,7 +170,7 @@ source ./scripts/grafanactl-env.sh "$env_name" # fixed placeholder before diffing — see the `grafana_secret` arm of # JQ_NORMALIZE below. case "$env_name" in - local) realm_server_url="${REALM_SERVER_URL:-http://localhost:4201/}" ;; + local) realm_server_url="${REALM_SERVER_URL:-https://localhost:4201/}" ;; *) [[ -n "${REALM_SERVER_URL:-}" ]] \ || { echo "error: REALM_SERVER_URL not set; CI fetches it from /${env_name}/boxel-grafana/realm_server_url in observability-diff.yml — for a local hosted run, export it manually first (same SSM path apply-${env_name}.yml uses)" >&2; exit 1; } diff --git a/packages/postgres/migrations/1779100257124_canonical-url-http-to-https.js b/packages/postgres/migrations/1779100257124_canonical-url-http-to-https.js new file mode 100644 index 00000000000..1ae416c5582 --- /dev/null +++ b/packages/postgres/migrations/1779100257124_canonical-url-http-to-https.js @@ -0,0 +1,128 @@ +'use strict'; + +// Local realm-server flipped from http://localhost:42XX to +// https://localhost:42XX as the canonical scheme when the realm-server +// terminates HTTPS+HTTP/2. 
Every row that was indexed/persisted under
+// the old canonical needs its URLs rewritten in place so URL-keyed
+// lookups (PK matches, JSONB references, transpiled module imports,
+// etc.) continue resolving without depending on the wire-level
+// HTTP→HTTPS redirect.
+//
+// Strategy: auto-discover every text-like and JSONB column on every
+// public table (except `modules`, which the realm-server truncates on
+// startup, so any rewrite there would be immediately wiped), then
+// REPLACE substrings of the three known localhost canonicals in place.
+// The WHERE filter restricts to rows actually containing the old URL,
+// so the migration is idempotent and a no-op in production (where the
+// realm canonicals are real hostnames, never `http://localhost:42XX`).
+//
+// The text/JSONB scope covers anywhere a URL might appear, including:
+// - PK / FK columns (`url`, `realm_url`, `source_url`)
+// - JSONB documents (`pristine_doc`, `search_doc`, `error_doc`,
+//   `deps`, `value`, `headers`, etc.)
+// - Rendered HTML/markdown payloads (`isolated_html`, `atom_html`,
+//   `fitted_html`, `embedded_html`, `icon_html`, `head_html`,
+//   `markdown`, `body`)
+// - Anything else a future column adds — the loop picks it up
+//   automatically as long as its type is text/varchar/jsonb.
+//
+// Excluded:
+// - The `modules` table — truncated on every realm-server boot.
+// - `pgmigrations` / `migrations` (the migration tracker tables).
+// - Identity columns and timestamps fall outside text/varchar/jsonb,
+//   so the type filter excludes them implicitly.
+
+exports.shorthands = undefined;
+
+// Cheap pre-check: realm_user_permissions is a small table seeded with
+// the canonical localhost realm URLs by earlier migrations
+// (1726671342065_backfill-realm-owners + siblings). If no row there
+// matches the old localhost canonicals, no other table will either, so
+// we exit before touching the larger tables. 
Avoids full-column scans +// on production/staging databases where the canonical realm URLs are +// real hostnames and `localhost` never appears. (We can't use +// realm_registry — it's populated by the realm-server's runtime +// bootstrap, not by migrations, so on a fresh install it's empty when +// this migration runs.) +// Build the in-place REPLACE block. `oldScheme` and `newScheme` flip +// between 'http' and 'https' depending on direction. The pre-check +// gates on `realm_user_permissions` containing rows that match the +// `oldScheme` localhost canonicals; production realms use real +// hostnames (never `localhost`) so the pre-check is always false there +// and the body is a no-op in either direction. +function rewriteBlock({ oldScheme, newScheme }) { + return ` +DO $$ +DECLARE + rec RECORD; + patterns text[][] := ARRAY[ + ARRAY['${oldScheme}://localhost:4201', '${newScheme}://localhost:4201'], + ARRAY['${oldScheme}://localhost:4202', '${newScheme}://localhost:4202'], + ARRAY['${oldScheme}://localhost:4205', '${newScheme}://localhost:4205'] + ]; + i int; +BEGIN + IF NOT EXISTS ( + SELECT 1 FROM realm_user_permissions + WHERE realm_url LIKE '${oldScheme}://localhost:4201/%' + OR realm_url LIKE '${oldScheme}://localhost:4202/%' + OR realm_url LIKE '${oldScheme}://localhost:4205/%' + LIMIT 1 + ) THEN + RETURN; + END IF; + + FOR rec IN + SELECT table_name, column_name, data_type, udt_name + FROM information_schema.columns + WHERE table_schema = 'public' + AND table_name NOT IN ('modules', 'pgmigrations', 'migrations') + AND is_generated = 'NEVER' + AND ( + data_type IN ('text', 'character varying', 'character') + OR udt_name = 'jsonb' + ) + LOOP + FOR i IN 1 .. 
array_length(patterns, 1) LOOP + IF rec.udt_name = 'jsonb' THEN + EXECUTE format( + 'UPDATE %I SET %I = REPLACE(%I::text, %L, %L)::jsonb WHERE %I::text LIKE %L', + rec.table_name, + rec.column_name, + rec.column_name, + patterns[i][1], + patterns[i][2], + rec.column_name, + '%' || patterns[i][1] || '%' + ); + ELSE + EXECUTE format( + 'UPDATE %I SET %I = REPLACE(%I, %L, %L) WHERE %I LIKE %L', + rec.table_name, + rec.column_name, + rec.column_name, + patterns[i][1], + patterns[i][2], + rec.column_name, + '%' || patterns[i][1] || '%' + ); + END IF; + END LOOP; + END LOOP; +END $$; +`; +} + +exports.up = (pgm) => { + pgm.sql(rewriteBlock({ oldScheme: 'http', newScheme: 'https' })); +}; + +// Symmetric reverse: rewrite https://localhost:42XX → http://localhost:42XX +// for the same set of text/JSONB columns. Same `realm_user_permissions` +// pre-check (looking for https rows this time) means production is still +// a no-op — production realms never have `localhost` in their canonicals. +// Required for the Postgres Migration CI job, which validates that every +// migration is reversible (up → down → up). +exports.down = (pgm) => { + pgm.sql(rewriteBlock({ oldScheme: 'https', newScheme: 'http' })); +}; diff --git a/packages/postgres/pg-adapter.ts b/packages/postgres/pg-adapter.ts index 88a0f1c7113..a24b620e094 100644 --- a/packages/postgres/pg-adapter.ts +++ b/packages/postgres/pg-adapter.ts @@ -485,10 +485,15 @@ export class PgAdapter implements DBAdapter { await client.connect(); let realmServerUrl = `http://realm-server.${slug}.localhost`; let realmTestUrl = `http://realm-test.${slug}.localhost`; + // Match both http and https canonicals — realm-server speaks HTTPS in + // local dev now, so a DB seeded after the CS-11114 flip stores + // `https://localhost:42XX/...` permission rows; older rows can still + // be on `http://`. The regex collapses both into the env-mode + // Traefik hostname. 
let result = await client.query( `UPDATE realm_user_permissions - SET realm_url = regexp_replace(realm_url, '^http://localhost:4201/', $1) - WHERE realm_url LIKE 'http://localhost:4201/%'`, + SET realm_url = regexp_replace(realm_url, '^https?://localhost:4201/', $1) + WHERE realm_url ~ '^https?://localhost:4201/'`, [`${realmServerUrl}/`], ); if (result.rowCount && result.rowCount > 0) { @@ -498,8 +503,8 @@ export class PgAdapter implements DBAdapter { } let result2 = await client.query( `UPDATE realm_user_permissions - SET realm_url = regexp_replace(realm_url, '^http://localhost:4202/', $1) - WHERE realm_url LIKE 'http://localhost:4202/%'`, + SET realm_url = regexp_replace(realm_url, '^https?://localhost:4202/', $1) + WHERE realm_url ~ '^https?://localhost:4202/'`, [`${realmTestUrl}/`], ); if (result2.rowCount && result2.rowCount > 0) { diff --git a/packages/postgres/scripts/ensure-db-exists.sh b/packages/postgres/scripts/ensure-db-exists.sh index 44a95789c69..d6b7f34c702 100755 --- a/packages/postgres/scripts/ensure-db-exists.sh +++ b/packages/postgres/scripts/ensure-db-exists.sh @@ -6,7 +6,7 @@ wait_for_postgres() { MAX_ATTEMPTS="${PG_WAIT_ATTEMPTS:-60}" while [ "$ATTEMPTS" -lt "$MAX_ATTEMPTS" ]; do - if docker exec boxel-pg pg_isready -U postgres >/dev/null 2>&1; then + if docker exec boxel-pg pg_isready -h localhost -p 5432 -U postgres >/dev/null 2>&1; then return 0 fi @@ -22,9 +22,18 @@ if ! wait_for_postgres; then exit 1 fi -if docker exec boxel-pg psql -U postgres -w -lqt | cut -d \| -f 1 | grep -qw "$PGDATABASE"; then +# Force TCP to localhost inside the container: postgres:16.3's image does +# not always create the /var/run/postgresql unix socket, so a bare +# `psql -U postgres` from inside the container fails with +# `connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed`. +# Postgres listens on `*:5432` inside the container (POSTGRES_HOST_AUTH_METHOD +# trust), so `-h localhost -p 5432` works regardless of socket availability. 
+# Also exit non-zero when CREATE DATABASE fails so the migrate step doesn't +# silently move on to a missing database. +set -e +if docker exec boxel-pg psql -h localhost -p 5432 -U postgres -w -lqt | cut -d \| -f 1 | grep -qw "$PGDATABASE"; then echo "Database $PGDATABASE exists" else - docker exec boxel-pg psql -U postgres -w -c "CREATE DATABASE $PGDATABASE" + docker exec boxel-pg psql -h localhost -p 5432 -U postgres -w -c "CREATE DATABASE $PGDATABASE" echo "created database $PGDATABASE" fi diff --git a/packages/realm-server/lib/dev-service-registry.ts b/packages/realm-server/lib/dev-service-registry.ts index 1e7013673a5..9b7d1df9052 100644 --- a/packages/realm-server/lib/dev-service-registry.ts +++ b/packages/realm-server/lib/dev-service-registry.ts @@ -2,8 +2,7 @@ import { execSync } from 'child_process'; import { writeFileSync, renameSync, unlinkSync, readdirSync } from 'fs'; import { join, resolve } from 'path'; import { logger } from '@cardstack/runtime-common'; -import type { Server } from 'http'; -import type { AddressInfo } from 'net'; +import type { AddressInfo, Server } from 'net'; import yaml from 'yaml'; const log = logger('dev-service-registry'); diff --git a/packages/realm-server/lib/is-https-loopback.ts b/packages/realm-server/lib/is-https-loopback.ts new file mode 100644 index 00000000000..44fdedac936 --- /dev/null +++ b/packages/realm-server/lib/is-https-loopback.ts @@ -0,0 +1,23 @@ +// True for `https://localhost[:port]`, `https://127.0.0.1[:port]`, or +// `https://[::1][:port]` — the only origins where the local mkcert +// leaf is expected. Used to gate `--ignore-certificate-errors` / +// `--allow-insecure-localhost` on puppeteer chrome launches so the +// relaxation fires only in local dev / CI; production hits real +// hostnames with real CA-signed certs and must keep strict +// validation. 
+export function isHttpsLoopback(url: string | undefined): boolean { + if (!url) return false; + let parsed: URL; + try { + parsed = new URL(url); + } catch { + return false; + } + if (parsed.protocol !== 'https:') return false; + return ( + parsed.hostname === 'localhost' || + parsed.hostname === '127.0.0.1' || + parsed.hostname === '[::1]' || + parsed.hostname === '::1' + ); +} diff --git a/packages/realm-server/main.ts b/packages/realm-server/main.ts index b7855d84a9e..04375ee3c0d 100644 --- a/packages/realm-server/main.ts +++ b/packages/realm-server/main.ts @@ -126,10 +126,10 @@ let { serviceName = 'realm-server', serverURL = isEnvironmentMode() ? serviceURL(serviceName) - : `http://localhost:${port}`, + : `https://localhost:${port}`, distURL = isEnvironmentMode() ? serviceURL('host') - : (process.env.HOST_URL ?? 'http://localhost:4200'), + : (process.env.HOST_URL ?? 'https://localhost:4200'), path: paths, fromUrl: fromUrls, toUrl: toUrls, @@ -523,7 +523,14 @@ const getIndexHTML = async () => { if (isEnvironmentMode()) { deregisterEnvironment(); } - httpServer.closeAllConnections(); + // Both the plain `http.Server` and the TLS-mode `net.Server` + // dispatcher (see `server.ts`) expose `closeAllConnections()`. The + // dispatcher's mirror force-closes in-flight TLS / HTTP/2 / + // keep-alive sessions instead of waiting for peers to release them + // — without it `close()` can hang for a tab-keep-alive lifetime. 
+  if (typeof (httpServer as any).closeAllConnections === 'function') {
+    (httpServer as any).closeAllConnections();
+  }
   httpServer.close(() => {
     (async () => {
       await Promise.all([
diff --git a/packages/realm-server/middleware/index.ts b/packages/realm-server/middleware/index.ts
index 2fd6f0687db..49a01db2304 100644
--- a/packages/realm-server/middleware/index.ts
+++ b/packages/realm-server/middleware/index.ts
@@ -1,4 +1,5 @@
-import proxy from 'koa-proxies';
+import http from 'http';
+import https from 'https';
 import type { ResponseWithNodeStream } from '@cardstack/runtime-common';
 import {
   logger as getLogger,
@@ -20,6 +21,22 @@ import {
 const REQUEST_BODY_STATE = 'requestBody';
 
+// HTTP/2 forbids connection-specific (hop-by-hop) headers (RFC 9113
+// §8.2.2). Sending any of them on an h2 response makes Node's http2
+// compat layer either strip them silently or — worse — drop the stream
+// mid-flight. We also strip them from the upstream-asset response in
+// `proxyAsset` for the same reason: even when the host-dist upstream
+// is plain HTTP/1.1, we re-emit its response through the realm-server's
+// (potentially h2) response and the forbidden list applies there too.
+const H2_FORBIDDEN_RESPONSE_HEADERS = new Set([
+  'connection',
+  'keep-alive',
+  'transfer-encoding',
+  'upgrade',
+  'proxy-connection',
+  'http2-settings',
+]);
+
 interface ProxyOptions {
   requestHeaders?: Record<string, string>;
   responseHeaders?: Record<string, string>;
 }
@@ -31,25 +48,74 @@ export function proxyAsset(
   opts?: ProxyOptions,
 ): Koa.Middleware {
   let filename = from.split('/').pop()!;
-  return proxy(from, {
-    target: assetsURL.href.replace(/\/$/, ''),
-    changeOrigin: true,
-    rewrite: () => {
-      return `/${filename}`;
-    },
-    events: {
-      proxyReq: (proxyReq) => {
-        for (let [key, value] of Object.entries(opts?.requestHeaders ?? {})) {
-          proxyReq.setHeader(key, value);
-        }
-      },
-      proxyRes: (_proxyRes, _req, res) => {
-        for (let [key, value] of Object.entries(opts?.responseHeaders ?? 
{})) {
-          res.setHeader(key, value);
-        }
+  let upstreamPath = `${assetsURL.pathname.replace(/\/$/, '')}/${filename}`;
+  let client = assetsURL.protocol === 'https:' ? https : http;
+  // Direct upstream proxy. Replaces the previous koa-proxies + http-proxy
+  // stack which forwarded `req.headers` verbatim into Node's
+  // `http.ClientRequest`; under HTTP/2 that included pseudo-headers
+  // (`:method`, `:path`, …) and tripped `ERR_INVALID_HTTP_TOKEN`. By
+  // building the upstream request ourselves we choose exactly which
+  // headers to forward, so the h2 / h1 callers share one code path.
+  //
+  // GET-only — `upstreamReq.end()` fires without a body. Add request-
+  // body piping if you need to reuse this for POST/PUT/PATCH; the only
+  // current caller is the host-dist asset hand-off (`/auth-service-worker.js`).
+  return async (ctxt, next) => {
+    if (ctxt.path !== from) {
+      return next();
+    }
+
+    let forwardedHeaders: Record<string, string> = {};
+    for (let [name, value] of Object.entries(ctxt.req.headers)) {
+      if (name.startsWith(':')) continue;
+      // Node's http.ClientRequest rejects connection-specific hop-by-hop
+      // headers when targeting an HTTP/1.1 upstream.
+      if (name === 'host') continue;
+      if (typeof value === 'string') {
+        forwardedHeaders[name] = value;
+      } else if (Array.isArray(value)) {
+        forwardedHeaders[name] = value.join(', ');
+      }
+    }
+    for (let [key, value] of Object.entries(opts?.requestHeaders ?? {})) {
+      forwardedHeaders[key] = value;
+    }
+
+    let upstreamRes = await new Promise<http.IncomingMessage>(
+      (resolve, reject) => {
+        let upstreamReq = client.request(
+          {
+            method: ctxt.method,
+            hostname: assetsURL.hostname,
+            // `assetsURL.port` is the empty string for default-port URLs;
+            // fall through to the protocol default.
+            port: assetsURL.port || (client === https ? 443 : 80),
+            path: upstreamPath,
+            headers: forwardedHeaders,
+          },
+          resolve,
+        );
+        upstreamReq.on('error', reject);
+        upstreamReq.end();
+      },
-    },
-  });
+    );
+
+    ctxt.status = upstreamRes.statusCode ?? 
502; + for (let [name, value] of Object.entries(upstreamRes.headers)) { + if (value == null) continue; + // Strip hop-by-hop headers (Node manages them per-connection) plus + // anything else the h2 response layer will reject. `host` is + // irrelevant on the response side. + if (H2_FORBIDDEN_RESPONSE_HEADERS.has(name.toLowerCase())) { + continue; + } + ctxt.set(name, Array.isArray(value) ? value.map(String) : String(value)); + } + for (let [key, value] of Object.entries(opts?.responseHeaders ?? {})) { + ctxt.set(key, value); + } + ctxt.body = upstreamRes; + }; } // Add middleware to handle method override for QUERY @@ -127,11 +193,30 @@ function isLoopbackAddress(address: string | undefined): boolean { } export function fullRequestURL(ctxt: Koa.Context): URL { + // Three protocol signals, checked in order: + // 1. `x-forwarded-proto: https` — set by a TLS-terminating proxy in front + // of us (ALB, Traefik, etc.). Trust it ahead of the socket check because + // the proxy may have negotiated TLS even when our socket is plain HTTP. + // 2. The TLS socket flag — set when we terminate TLS ourselves (the local + // dev HTTPS/h2 listener). `tls.TLSSocket#encrypted` is true here; plain + // http.IncomingMessage sockets do not have the property. + // 3. Default to http. + let socket = ctxt.req.socket as { encrypted?: boolean } | undefined; let protocol = - ctxt.req.headers['x-forwarded-proto'] === 'https' ? 'https' : 'http'; - let computedURL = new URL( - `${protocol}://${ctxt.req.headers.host}${ctxt.req.url}`, - ); + ctxt.req.headers['x-forwarded-proto'] === 'https' || socket?.encrypted + ? 'https' + : 'http'; + // HTTP/2 carries the authority in the `:authority` pseudo-header rather + // than the legacy `Host` header. Node's http2 compat layer normally + // populates `headers.host` from `:authority`, but only when the value + // is set; falling back to `:authority` makes URL construction robust to + // h2 clients (and proxies) that may omit `host`. 
+  let h2Headers = ctxt.req.headers as Record<string, string | undefined>;
+  let host =
+    typeof h2Headers.host === 'string' && h2Headers.host
+      ? h2Headers.host
+      : (h2Headers[':authority'] ?? '');
+  let computedURL = new URL(`${protocol}://${host}${ctxt.req.url}`);
   let forwardedURL = ctxt.req.headers['x-boxel-forwarded-url'];
   if (
     process.env.BOXEL_TRUST_FORWARDED_URL === 'true' &&
@@ -173,9 +258,24 @@ export async function fetchRequestFromContext(
   }
 
   let url = fullRequestURL(ctxt).href;
+  // HTTP/2's compat layer presents pseudo-headers (`:method`, `:scheme`,
+  // `:path`, `:authority`) alongside the regular headers. WHATWG `Headers`
+  // rejects names starting with `:` as invalid, so the raw `ctxt.req.headers`
+  // object cannot be passed to `new Request()` on h2 requests. Strip the
+  // pseudo-headers — the URL and method are already extracted above, and
+  // `:authority` is folded back into `host` by `fullRequestURL`.
+  let headers: Record<string, string> = {};
+  for (let [name, value] of Object.entries(ctxt.req.headers)) {
+    if (name.startsWith(':')) continue;
+    if (typeof value === 'string') {
+      headers[name] = value;
+    } else if (Array.isArray(value)) {
+      headers[name] = value.join(', ');
+    }
+  }
   return new Request(url, {
     method: ctxt.method,
-    headers: ctxt.req.headers as { [name: string]: string },
+    headers,
     ...(reqBody !== undefined ? 
{ body: reqBody as BodyInit } : {}), }); } @@ -315,6 +415,7 @@ export async function setContextResponse( ctxt.status = status; ctxt.message = statusText; for (let [header, value] of headers.entries()) { + if (H2_FORBIDDEN_RESPONSE_HEADERS.has(header.toLowerCase())) continue; ctxt.set(header, value); } if (!headers.get('content-type')) { diff --git a/packages/realm-server/package.json b/packages/realm-server/package.json index 10534df4137..8f5277682ae 100644 --- a/packages/realm-server/package.json +++ b/packages/realm-server/package.json @@ -103,7 +103,7 @@ "start:host-dist": "./scripts/start-host-dist.sh", "start:pg": "./scripts/start-pg.sh", "stop:pg": "./scripts/stop-pg.sh", - "test:wait-for-servers": "WAIT_ON_TIMEOUT=900000 NODE_NO_WARNINGS=1 start-server-and-test 'pnpm run wait' 'http-get://localhost:4201/base/_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson' 'pnpm run wait' 'http-get://localhost:4202/node-test/_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson|http://localhost:8008|http://localhost:5001' 'test'", + "test:wait-for-servers": "WAIT_ON_TIMEOUT=900000 NODE_NO_WARNINGS=1 START_SERVER_AND_TEST_INSECURE=1 start-server-and-test 'pnpm run wait' 'https-get://localhost:4201/base/_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson' 'pnpm run wait' 'https-get://localhost:4202/node-test/_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson|http://localhost:8008|http://localhost:5001' 'test'", "setup:base-in-deployment": "mkdir -p /persistent/base && rsync --dry-run --itemize-changes --checksum --recursive --delete ../base/. /persistent/base/ && rsync --checksum --recursive --delete ../base/. /persistent/base/", "setup:experiments-in-deployment": "mkdir -p /persistent/experiments && rsync --dry-run --itemize-changes --checksum --recursive ../experiments-realm/. /persistent/experiments/ && rsync --checksum --recursive ../experiments-realm/. 
/persistent/experiments/", "setup:catalog-in-deployment": "mkdir -p /persistent/catalog && pnpm --dir=../catalog catalog:update && rsync --dry-run --itemize-changes --checksum --recursive --delete ../catalog/contents/. /persistent/catalog/ && rsync --checksum --recursive --delete ../catalog/contents/. /persistent/catalog/", diff --git a/packages/realm-server/prerender/browser-manager.ts b/packages/realm-server/prerender/browser-manager.ts index 5fc18397f4f..fd48aaa1488 100644 --- a/packages/realm-server/prerender/browser-manager.ts +++ b/packages/realm-server/prerender/browser-manager.ts @@ -6,6 +6,8 @@ import puppeteer, { type Browser } from 'puppeteer'; import { execFile } from 'child_process'; import { promisify } from 'util'; +import { isHttpsLoopback } from '../lib/is-https-loopback'; + const log = logger('prerenderer'); const PUPPETEER_PROFILE_PREFIX = 'puppeteer_dev_chrome_profile-'; const USER_DATA_MAX_AGE_MS = 60 * 60 * 1000; @@ -29,6 +31,30 @@ export class BrowserManager { launchArgs.push('--no-sandbox', '--disable-setuid-sandbox'); } + // When the realm-server speaks HTTPS (local dev with a mkcert leaf + // cert), Chromium needs to be told to accept it. mkcert's root CA + // may or may not be in the system trust store depending on whether + // the dev ran `mkcert -install`. Puppeteer's bundled Chromium uses + // its own NSS DB that mkcert doesn't always touch, so we relax cert + // checks unconditionally for the prerender path. Safe: the origin + // is fixed by REALM_BASE_URL and the connection is loopback-only. + // + // Chrome 144+ silently demotes `--ignore-certificate-errors` to a + // dev-only flag unless paired with `--allow-insecure-localhost`. + // Without the pair, every TLS connection to localhost gets + // terminated with ERR_CONNECTION_CLOSED (visible upstream as a + // hung wait-for-host-standby probe). + // + // Gated on https + a loopback hostname so the relaxation only + // fires in local dev / CI. 
Production hits real hostnames with + // real CA-signed certs and must keep strict validation. + if (isHttpsLoopback(process.env.REALM_BASE_URL)) { + launchArgs.push( + '--ignore-certificate-errors', + '--allow-insecure-localhost', + ); + } + let extraArgs = process.env.PUPPETEER_CHROME_ARGS?.split(/\s+/).filter(Boolean); if (extraArgs && extraArgs.length > 0) { diff --git a/packages/realm-server/prerender/prerenderer.ts b/packages/realm-server/prerender/prerenderer.ts index b09de2ca277..d1a121fbcce 100644 --- a/packages/realm-server/prerender/prerenderer.ts +++ b/packages/realm-server/prerender/prerenderer.ts @@ -34,7 +34,7 @@ import { const log = logger('prerenderer'); const defaultHostURL = isEnvironmentMode() ? serviceURL('host') - : 'http://localhost:4200'; + : 'https://localhost:4200'; const boxelHostURL = process.env.BOXEL_HOST_URL ?? defaultHostURL; const DEFAULT_AFFINITY_IDLE_EVICT_MS = 12 * 60 * 60 * 1000; @@ -85,12 +85,22 @@ export class Prerenderer { let maxPages = options.maxPages ?? 5; this.#semaphore = new AsyncSemaphore(maxPages); this.#browserManager = new BrowserManager(); + // Local HTTPS dev (vite on https://localhost:4200 with HTTP/2) needs + // a more generous standby navigation timeout than the 30s default. + // The host bundle's first cold-start over h2 multiplexes ~1000+ + // module requests through vite's optimizer; on a cold runner the + // initial `_standby` load can comfortably exceed 30s even though + // the server is healthy. Configurable via env so production / + // hosted runners can keep the tighter default. + let standbyTimeoutMs = + parseInt(process.env.PRERENDER_STANDBY_TIMEOUT_MS ?? '', 10) || undefined; this.#pagePool = new PagePool({ maxPages, serverURL: options.serverURL, browserManager: this.#browserManager, boxelHostURL, renderSemaphore: this.#semaphore, + ...(standbyTimeoutMs ? 
{ standbyTimeoutMs } : {}), onAffinityDisposed: (affinityKey) => { // Affinity tear-down implies the warm loader is gone, so any // owner entry for that affinity is now meaningless. Clear it diff --git a/packages/realm-server/scripts/full-reindex.sh b/packages/realm-server/scripts/full-reindex.sh index a188bf45a71..39966f6874d 100755 --- a/packages/realm-server/scripts/full-reindex.sh +++ b/packages/realm-server/scripts/full-reindex.sh @@ -1,13 +1,17 @@ #! /bin/sh +# Local realm-server speaks HTTPS+HTTP/2 (see infra:ensure-dev-cert). +# `-k` skips cert verification — fine for a localhost script; in dev the +# cert is the mkcert leaf, and Node clients pick up trust via +# NODE_EXTRA_CA_CERTS but curl uses its own store. AUTH_HEADER="Authorization: Bearer shhh! it's a secret" echo "Starting full reindex of all realms on 4201..." -response=$(curl -s -X POST -H "$AUTH_HEADER" "http://localhost:4201/_grafana-full-reindex") +response=$(curl -sk -X POST -H "$AUTH_HEADER" "https://localhost:4201/_grafana-full-reindex") echo "Indexing started for realms:" -echo "$response" | grep -o '"http://[^"]*"' | sed 's/"//g' +echo "$response" | grep -oE '"https?://[^"]*"' | sed 's/"//g' echo "Starting full reindex of all realms on 4202..." -response=$(curl -s -X POST -H "$AUTH_HEADER" "http://localhost:4202/_grafana-full-reindex") +response=$(curl -sk -X POST -H "$AUTH_HEADER" "https://localhost:4202/_grafana-full-reindex") echo "Indexing started for realms:" -echo "$response" | grep -o '"http://[^"]*"' | sed 's/"//g' +echo "$response" | grep -oE '"https?://[^"]*"' | sed 's/"//g' diff --git a/packages/realm-server/scripts/full-reset.sh b/packages/realm-server/scripts/full-reset.sh index fc119e0d5e8..56349da22df 100755 --- a/packages/realm-server/scripts/full-reset.sh +++ b/packages/realm-server/scripts/full-reset.sh @@ -30,5 +30,5 @@ cd "${CURRENT_DIR}" echo " WARNING: Any matrix server authorization tokens cached in the browser's localstorage are now invalid. 
Make sure to clear browser localstorage. Also make sure to execute the following in the browser after logging in as 'user' to add the experiments realm: -window['@cardstack/host'].lookup('service:matrix-service')._client.setAccountData('app.boxel.realms', {realms: ['http://localhost:4201/experiments/']}) +window['@cardstack/host'].lookup('service:matrix-service')._client.setAccountData('app.boxel.realms', {realms: ['https://localhost:4201/experiments/']}) " diff --git a/packages/realm-server/scripts/register-bot.sh b/packages/realm-server/scripts/register-bot.sh index 0b0033fce0a..4ddfc1ee11d 100755 --- a/packages/realm-server/scripts/register-bot.sh +++ b/packages/realm-server/scripts/register-bot.sh @@ -1,14 +1,17 @@ #!/bin/sh set -e -REALM_SERVER_URL="${REALM_SERVER_URL:-http://localhost:4201}" +REALM_SERVER_URL="${REALM_SERVER_URL:-https://localhost:4201}" if [ -z "${REALM_SERVER_JWT}" ]; then echo "REALM_SERVER_JWT is required" >&2 exit 1 fi USERNAME="${USERNAME:-@user:localhost}" -curl -sS -X POST "${REALM_SERVER_URL}/_bot-registration" \ +# `-k` skips cert verification — the local realm-server's HTTPS cert is +# mkcert-signed (see infra:ensure-dev-cert) and curl doesn't pick up the +# trust the way Node does via NODE_EXTRA_CA_CERTS. +curl -sSk -X POST "${REALM_SERVER_URL}/_bot-registration" \ -H "Authorization: Bearer ${REALM_SERVER_JWT}" \ -H "Accept: application/vnd.api+json" \ -H "Content-Type: application/vnd.api+json" \ diff --git a/packages/realm-server/scripts/start-host-dist.sh b/packages/realm-server/scripts/start-host-dist.sh index 670f62a9345..5b7f05e61f6 100755 --- a/packages/realm-server/scripts/start-host-dist.sh +++ b/packages/realm-server/scripts/start-host-dist.sh @@ -2,9 +2,13 @@ # Start host dist server, skipping if already running. # Mirrors the pattern in start-icons.sh. 
-HOST_URL="${HOST_URL:-http://localhost:4200}" +HOST_URL="${HOST_URL:-https://localhost:4200}" -if curl --fail --silent --show-error "$HOST_URL" >/dev/null 2>&1; then +# Vite serves HTTPS in local dev when the mkcert leaf is present +# (vite.config.mjs reads REALM_SERVER_TLS_CERT_FILE / _KEY_FILE). curl +# trusts that cert via NODE_EXTRA_CA_CERTS / system trust; -k keeps the +# readiness probe simple if either trust path isn't wired up yet. +if curl -k --fail --silent --show-error "$HOST_URL" >/dev/null 2>&1; then echo "host already running on $HOST_URL, skipping startup" exit 0 fi diff --git a/packages/realm-server/scripts/start-without-matrix.sh b/packages/realm-server/scripts/start-without-matrix.sh index 10e750aa2b3..1a2b4a10af7 100755 --- a/packages/realm-server/scripts/start-without-matrix.sh +++ b/packages/realm-server/scripts/start-without-matrix.sh @@ -1,9 +1,9 @@ #! /bin/sh -BASE_REALM="http-get://localhost:4201/base/" -EXPERIMENTS_REALM="http-get://localhost:4201/experiments/" -NODE_TEST_REALM="http-get://localhost:4202/node-test/" -TEST_REALM="http-get://localhost:4202/test/" +BASE_REALM="https-get://localhost:4201/base/" +EXPERIMENTS_REALM="https-get://localhost:4201/experiments/" +NODE_TEST_REALM="https-get://localhost:4202/node-test/" +TEST_REALM="https-get://localhost:4202/test/" READY_PATH="_readiness-check?acceptHeader=application%2Fvnd.api%2Bjson" @@ -16,7 +16,11 @@ SYNAPSE_URL="http://localhost:8008" SMTP_4_DEV_URL="http://localhost:5001" ICONS_URL="http://localhost:4206" -WAIT_ON_TIMEOUT=900000 SKIP_BOXEL_HOMEPAGE=true NODE_NO_WARNINGS=1 start-server-and-test \ +# START_SERVER_AND_TEST_INSECURE=1: see test-wait-for-servers.sh — wait-on +# needs strictSSL relaxation for the mkcert leaf on +# https-get://localhost:4201|4202. 
+WAIT_ON_TIMEOUT=900000 SKIP_BOXEL_HOMEPAGE=true NODE_NO_WARNINGS=1 \ + START_SERVER_AND_TEST_INSECURE=1 start-server-and-test \ 'run-p start:pg start:prerender-dev start:prerender-manager-dev start:worker-development start:development' \ "$BASE_REALM_READY|$EXPERIMENTS_REALM_READY|$SYNAPSE_URL|$SMTP_4_DEV_URL|$ICONS_URL" \ 'run-p start:worker-test start:test-realms' \ diff --git a/packages/realm-server/scripts/wait-for-host-standby.ts b/packages/realm-server/scripts/wait-for-host-standby.ts index 5e673ca8289..1f41c6b6b3c 100644 --- a/packages/realm-server/scripts/wait-for-host-standby.ts +++ b/packages/realm-server/scripts/wait-for-host-standby.ts @@ -38,13 +38,19 @@ const PER_ATTEMPT_TIMEOUT_MS = 30_000; const MAX_BACKOFF_MS = 5_000; const TOTAL_TIMEOUT_MS = 600_000; +import { isHttpsLoopback } from '../lib/is-https-loopback'; + const log = (msg: string) => console.log(`[wait-for-host-standby] ${msg}`); const sleep = (ms: number) => new Promise((r) => setTimeout(r, ms)); const elapsedSec = (start: number) => Math.round((Date.now() - start) / 1000); async function main() { + // Vite serves HTTPS on localhost:4200 in local dev (the realm-server + // requires the mkcert leaf and vite reads the same cert). Default + // accordingly so a stale shell that hasn't re-exported HOST_URL + // still probes the right scheme. let hostUrl = - process.argv[2] || process.env.HOST_URL || 'http://localhost:4200'; + process.argv[2] || process.env.HOST_URL || 'https://localhost:4200'; let standbyUrl = `${hostUrl}/_standby`; let launchArgs: string[] = []; @@ -54,6 +60,23 @@ async function main() { ) { launchArgs.push('--no-sandbox', '--disable-setuid-sandbox'); } + // Match the prerender server's BrowserManager: relax cert checks for + // the local mkcert leaf. 
Chrome 144+ silently demotes + // `--ignore-certificate-errors` to a dev-only flag — pair it with + // `--allow-insecure-localhost` so the dev cert is actually accepted + // (otherwise the TLS handshake closes with ERR_CONNECTION_CLOSED and + // every retry times out with no obvious explanation in the log). + // + // Gated on https + a loopback hostname so the relaxation only fires + // in local dev / CI (where the cert is the mkcert leaf). Production + // hits a real hostname with a real CA-signed cert, where we want + // strict validation. + if (isHttpsLoopback(hostUrl)) { + launchArgs.push( + '--ignore-certificate-errors', + '--allow-insecure-localhost', + ); + } log(`probing ${standbyUrl} (max ${TOTAL_TIMEOUT_MS / 1000}s)...`); let start = Date.now(); @@ -78,10 +101,44 @@ async function main() { 1, Math.min(PER_ATTEMPT_TIMEOUT_MS, TOTAL_TIMEOUT_MS - (Date.now() - start)), ); + // Verbose mode forwards every chrome console message + every failed + // network request from the standby probe page to our own stdout, so + // when the probe hangs we can see what URL the page is choking on + // (TLS-handshake failures, h2 stream resets, cross-origin denials, + // etc.). Off by default — healthy runs don't need the noise. Flip + // `WAIT_FOR_HOST_STANDBY_VERBOSE=1` when investigating a probe hang. + let verbose = process.env.WAIT_FOR_HOST_STANDBY_VERBOSE === '1'; try { while (Date.now() - start < TOTAL_TIMEOUT_MS) { attempt++; let page = await browser.newPage(); + if (verbose) { + page.on('console', (msg) => + log(`[chrome console.${msg.type()}] ${msg.text()}`), + ); + page.on('pageerror', (err: unknown) => + log( + `[chrome pageerror] ${ + err instanceof Error ? err.message : String(err) + }`, + ), + ); + page.on('requestfailed', (req) => + log( + `[chrome requestfailed] ${req.method()} ${req.url()} — ${ + req.failure()?.errorText ?? 
'unknown' + }`, + ), + ); + page.on('response', (resp) => { + if (resp.status() >= 400) { + log(`[chrome response ${resp.status()}] ${resp.url()}`); + } + }); + page.on('framedetached', (frame) => + log(`[chrome framedetached] url=${frame.url()}`), + ); + } try { // Mirror page-pool.ts's #loadStandbyPage: each phase gets its own // PER_ATTEMPT_TIMEOUT_MS budget. The goto budget only covers @@ -89,14 +146,17 @@ async function main() { // `#standby-ready` is a separate clock because on a cold vite // cache the script tag's module fetch can spin while the // optimizer is still bundling its dep graph. + if (verbose) log(`attempt ${attempt}: page.goto(${standbyUrl})`); let response = await page.goto(standbyUrl, { waitUntil: 'domcontentloaded', timeout: phaseBudgetMs(), }); let status = response?.status(); + if (verbose) log(`attempt ${attempt}: goto resolved status=${status}`); if (status != null && status >= 400) { throw new Error(`HTTP ${status}`); } + if (verbose) log(`attempt ${attempt}: waiting for #standby-ready`); await page.waitForFunction( () => !!document.querySelector('#standby-ready'), { timeout: phaseBudgetMs() }, diff --git a/packages/realm-server/server.ts b/packages/realm-server/server.ts index 7d89dec8b2f..1e466c11e78 100644 --- a/packages/realm-server/server.ts +++ b/packages/realm-server/server.ts @@ -1,6 +1,10 @@ import Koa from 'koa'; import cors from '@koa/cors'; import { Memoize } from 'typescript-memoize'; +import http from 'http'; +import http2 from 'http2'; +import net from 'net'; +import { readFileSync } from 'fs'; import type { DefinitionLookup, Realm, @@ -34,6 +38,7 @@ import { ecsMetadata, setContextResponse, fetchRequestFromContext, + fullRequestURL, methodOverrideSupport, proxyAsset, } from './middleware'; @@ -66,6 +71,230 @@ import { import { sanitizeHeadHTMLToString } from '@cardstack/runtime-common'; import { JSDOM } from 'jsdom'; +// When both env vars point to readable cert files, the realm server +// speaks HTTPS with HTTP/2 (and 
HTTP/1.1 fallback over the same TLS port +// via ALPN). Otherwise it stays on plain HTTP/1.1 — that's the path +// in-process tests and any environment without a provisioned cert take. +// In local dev `mise run infra:ensure-dev-cert` provisions the cert and +// `mise-tasks/lib/env-vars.sh` exports the env vars so the dev stack +// always boots HTTPS. The h2 lift fixes Chrome's HTTP/1.1 6-per-origin +// connection ceiling for heavy aggregator-card prerender fan-outs. +const TLS_CERT_FILE_ENV = 'REALM_SERVER_TLS_CERT_FILE'; +const TLS_KEY_FILE_ENV = 'REALM_SERVER_TLS_KEY_FILE'; + +export type RealmHttpServer = + | http.Server + | http2.Http2SecureServer + | net.Server; + +// Node's HTTP/2 compat layer reports Http2Stream.writable === false on +// server-side streams whose request method is HEAD (the protocol forbids a +// body, so the stream is marked non-writable up front). Koa's +// `ctx.writable` getter delegates to `res.socket.writable`, so for HEAD +// over h2 it sees `false` and `respond()` bails silently — the response +// headers never get sent and the client hangs until its timeout. +// Patching the prototype getter to recognise HEAD-over-h2 streams as +// writable (when they are otherwise healthy) restores normal HEAD +// semantics over h2 without disturbing GET/POST or HTTP/1.1. Exported so +// tests that build their own Koa app pick up the same fix. +let koaResponsePatchedForH2 = false; +export function patchKoaResponseForH2Head() { + if (koaResponsePatchedForH2) return; + // Construct a throwaway Koa instance just to find the prototype — Koa's + // response prototype isn't exported directly. 
+  let proto = Object.getPrototypeOf(new Koa().response) as object;
+  let descriptor = Object.getOwnPropertyDescriptor(proto, 'writable');
+  let origWritable = descriptor?.get;
+  if (!origWritable) return;
+  Object.defineProperty(proto, 'writable', {
+    configurable: true,
+    get(this: Koa.Response) {
+      let res = this.res as unknown as {
+        writableEnded?: boolean;
+        req?: { method?: string };
+        stream?: { destroyed?: boolean; closed?: boolean };
+      };
+      if (res?.writableEnded) return false;
+      let stream = res?.stream;
+      if (
+        res?.req?.method === 'HEAD' &&
+        stream &&
+        !stream.destroyed &&
+        !stream.closed
+      ) {
+        return true;
+      }
+      return origWritable!.call(this);
+    },
+  });
+  koaResponsePatchedForH2 = true;
+}
+
+// In TLS mode the realm-server binds a single net.Server that peeks each
+// connection's first byte and routes TLS handshakes (0x16) to the HTTP/2
+// secure server and plain-text HTTP to a tiny 308-redirect server. This
+// gives http://localhost:4201 → https://localhost:4201 the same-port
+// redirect UX without running two listeners on different ports.
+// Exported for tests in `tests/listener-dispatcher-test.ts` — the
+// production caller is `RealmServer.listen()` below.
+export function createListener(
+  log: ReturnType<typeof logger>,
+  app: { callback: Koa['callback'] },
+): { server: RealmHttpServer; proto: 'http' | 'https/h2' } {
+  let certFile = process.env[TLS_CERT_FILE_ENV];
+  let keyFile = process.env[TLS_KEY_FILE_ENV];
+  if (!certFile || !keyFile) {
+    return { server: http.createServer(app.callback()), proto: 'http' };
+  }
+  // We only need the patch on the h2 path — but it's idempotent and
+  // cheap, so we apply it unconditionally once cert/key are present.
+  patchKoaResponseForH2Head();
+  let cert: Buffer;
+  let key: Buffer;
+  try {
+    cert = readFileSync(certFile);
+    key = readFileSync(keyFile);
+  } catch (e) {
+    log.warn(
+      `Unable to read TLS cert/key (%s, %s): %s — falling back to HTTP/1.1`,
+      certFile,
+      keyFile,
+      (e as Error).message,
+    );
+    return { server: http.createServer(app.callback()), proto: 'http' };
+  }
+  let tlsServer: http2.Http2SecureServer;
+  try {
+    tlsServer = http2.createSecureServer(
+      { cert, key, allowHTTP1: true },
+      app.callback(),
+    );
+  } catch (e) {
+    log.warn(
+      `Unable to construct HTTPS/h2 server (malformed cert?): %s — falling back to HTTP/1.1`,
+      (e as Error).message,
+    );
+    return { server: http.createServer(app.callback()), proto: 'http' };
+  }
+  let redirectServer = http.createServer(redirectToHttps);
+  // Track every accepted socket so shutdown can force-close them. Without
+  // this, `dispatcher.close()` waits for active HTTP/2 sessions and
+  // keep-alive HTTP/1 connections to end on their own — a single open
+  // browser tab can keep the realm-server from ever shutting down. Mirror
+  // the API surface (`closeAllConnections`) so main.ts's existing typeof
+  // guard picks this up without a special-case branch.
+  let activeSockets = new Set<net.Socket>();
+  let dispatcher = net.createServer({ pauseOnConnect: true }, (socket) => {
+    activeSockets.add(socket);
+    socket.once('close', () => activeSockets.delete(socket));
+    // Attach a per-socket error listener BEFORE doing any I/O. A peer that
+    // RSTs the connection mid-handshake (or in the half-open window before
+    // we route it) emits `'error'` on this raw socket; without a listener
+    // Node escalates that to an uncaught exception and the realm-server
+    // would crash. Logging + best-effort destroy is sufficient — the
+    // dispatcher is the realm-server's single inbound listener and must
+    // survive hostile or unlucky clients.
+ socket.on('error', (e) => { + log.warn(`dispatcher socket error: %s`, e.message); + socket.destroy(); + }); + socket.once('readable', () => { + let firstByte: Buffer | null; + try { + firstByte = socket.read(1); + } catch { + socket.destroy(); + return; + } + if (firstByte == null) { + // Connection opened then closed without data — release the socket + // promptly instead of letting it idle in CLOSE_WAIT until the OS + // reaps it. Cheap defense against half-open-connection accumulators + // (port scanners, eager load balancers, etc.). + socket.destroy(); + return; + } + socket.unshift(firstByte); + // 0x16 is the TLS ClientHello record type. Anything else is treated + // as plain HTTP (ASCII verb byte) and gets the redirect path. + if (firstByte[0] === 0x16) { + tlsServer.emit('connection', socket); + } else { + redirectServer.emit('connection', socket); + } + socket.resume(); + }); + }); + // Server-level errors (e.g. `EADDRINUSE` at `listen()` time). Per-socket + // errors are handled inside the connection callback above. + dispatcher.on('error', (e) => { + log.warn(`dispatcher server error: %s`, e.message); + }); + // Mirror http.Server's `closeAllConnections()` so shutdown can force- + // close in-flight TLS / HTTP/2 / keep-alive sockets without waiting for + // peers to close them. main.ts feature-detects this method. + ( + dispatcher as net.Server & { closeAllConnections: () => void } + ).closeAllConnections = () => { + for (let s of activeSockets) { + try { + s.destroy(); + } catch { + // best-effort + } + } + activeSockets.clear(); + }; + return { server: dispatcher, proto: 'https/h2' }; +} + +// Same-port 308 redirect for plain-text HTTP requests that land on the +// HTTPS port. The dispatcher binds a single port so the inbound and +// target ports agree; we just rewrite the scheme. Parses via URL so +// bracketed IPv6 authorities (`[::1]:4201`) round-trip cleanly instead +// of being mangled by string-level regex. 
+// +// 308 (vs 301): preserves the request method and body across the +// redirect. Local scripts that POST to `http://localhost:4201/...` +// (matrix registration/setup writes `/_server-session`, `/_user`, +// webhook endpoints) need that — a 301 makes fetch downgrade the +// follow-up to GET and drops the body, breaking those calls. 308 is +// also semantically correct: this redirect is a permanent property of +// the wire protocol, not a temporary handler decision. +function redirectToHttps( + req: http.IncomingMessage, + res: http.ServerResponse, +): void { + let hostHeader = typeof req.headers.host === 'string' ? req.headers.host : ''; + let path = req.url ?? '/'; + let authority: string; + try { + let parsed = new URL(`http://${hostHeader || hostFromSocket(req)}`); + // `url.host` preserves brackets around IPv6 literals and the port if + // present, which is exactly the form we want in the redirect target. + authority = parsed.host; + } catch { + authority = hostFromSocket(req); + } + let location = `https://${authority}${path}`; + res.writeHead(308, { + Location: location, + 'Content-Type': 'text/plain; charset=utf-8', + }); + res.end(`Redirecting to ${location}\n`); +} + +// Best-effort fallback when the inbound request has no Host header +// (HTTP/1.0 client). Uses the dispatcher's bound `localAddress:localPort` +// so the redirect goes to the actual listener instead of guessing port +// 443. Brackets IPv6 literals to match URL `host` formatting. +function hostFromSocket(req: http.IncomingMessage): string { + let addr = req.socket.localAddress ?? 'localhost'; + let port = req.socket.localPort; + let bracketed = addr.includes(':') ? `[${addr}]` : addr; + return port ? 
`${bracketed}:${port}` : bracketed; +} + export class RealmServer { private log = logger('realm-server'); private headLog = logger('realm-server:head'); @@ -272,12 +501,17 @@ export class RealmServer { return app; } - listen(port: number) { - let instance = this.app.listen(port); + listen(port: number): RealmHttpServer { + let { server: instance, proto } = createListener(this.log, this.app); + instance.listen(port); instance.on('listening', () => { let actualPort = (instance.address() as import('net').AddressInfo | null)?.port ?? port; - this.log.info(`Realm server listening on port %s\n`, actualPort); + this.log.info( + `Realm server listening on port %s (%s)\n`, + actualPort, + proto, + ); }); return instance; } @@ -339,9 +573,7 @@ export class RealmServer { let includesVndMimeType = lowerAcceptHeader.includes('application/vnd.'); let includesHtmlMimeType = lowerAcceptHeader.includes('text/html'); - let requestURL = new URL( - `${ctxt.protocol}://${ctxt.host}${ctxt.originalUrl}`, - ); + let requestURL = fullRequestURL(ctxt); // Track published realm info from routing checks to avoid redundant // DB queries in the ETag logic below. @@ -937,9 +1169,7 @@ export class RealmServer { // dispatch below. Mount failures throw — the catch turns them into // 503 so the next request retries from scratch (ensureMounted's // failure path clears mounted/pendingMounts). 
- let requestURL = new URL( - `${ctxt.protocol}://${ctxt.host}${ctxt.originalUrl}`, - ); + let requestURL = fullRequestURL(ctxt); try { await this.findOrMountRealm(requestURL); } catch (err: any) { diff --git a/packages/realm-server/tests/atomic-endpoints-test.ts b/packages/realm-server/tests/atomic-endpoints-test.ts index 1d190e91409..b0474104eee 100644 --- a/packages/realm-server/tests/atomic-endpoints-test.ts +++ b/packages/realm-server/tests/atomic-endpoints-test.ts @@ -1,7 +1,7 @@ import { module, test } from 'qunit'; import type { Test, SuperTest } from 'supertest'; import { basename } from 'path'; -import type { Server } from 'http'; +import type { RealmHttpServer as Server } from '../server'; import type { DirResult } from 'tmp'; import type { PgAdapter } from '@cardstack/postgres'; import type { Realm, RealmAdapter } from '@cardstack/runtime-common'; diff --git a/packages/realm-server/tests/boxel-domain-availability-test.ts b/packages/realm-server/tests/boxel-domain-availability-test.ts index 7765b39d280..ad59dc821ad 100644 --- a/packages/realm-server/tests/boxel-domain-availability-test.ts +++ b/packages/realm-server/tests/boxel-domain-availability-test.ts @@ -17,7 +17,7 @@ import { createJWT as createRealmServerJWT } from '../utils/jwt'; import { realmSecretSeed } from './helpers'; import type { SuperTest, Test } from 'supertest'; import supertest from 'supertest'; -import type { Server } from 'http'; +import type { RealmHttpServer as Server } from '../server'; import { dirSync, type DirResult } from 'tmp'; import { copySync, ensureDirSync } from 'fs-extra'; diff --git a/packages/realm-server/tests/card-dependencies-endpoint-test.ts b/packages/realm-server/tests/card-dependencies-endpoint-test.ts index f3e7b4b0095..664547ce9e7 100644 --- a/packages/realm-server/tests/card-dependencies-endpoint-test.ts +++ b/packages/realm-server/tests/card-dependencies-endpoint-test.ts @@ -1,7 +1,7 @@ import { module, test } from 'qunit'; import type { Test, SuperTest } 
from 'supertest'; import { basename } from 'path'; -import type { Server } from 'http'; +import type { RealmHttpServer as Server } from '../server'; import type { DirResult } from 'tmp'; import type { Realm } from '@cardstack/runtime-common'; diff --git a/packages/realm-server/tests/card-endpoints-test.ts b/packages/realm-server/tests/card-endpoints-test.ts index e1d4752ce91..663876a96bb 100644 --- a/packages/realm-server/tests/card-endpoints-test.ts +++ b/packages/realm-server/tests/card-endpoints-test.ts @@ -2,7 +2,7 @@ import { module, test } from 'qunit'; import type { Test, SuperTest } from 'supertest'; import supertest from 'supertest'; import { join, basename } from 'path'; -import type { Server } from 'http'; +import type { RealmHttpServer as Server } from '../server'; import type { DirResult } from 'tmp'; import { existsSync, readJSONSync, statSync, writeFileSync } from 'fs-extra'; import type { @@ -1349,7 +1349,7 @@ module(basename(__filename), function () { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/node-test/friend'), + module: rri('https://localhost:4202/node-test/friend'), name: 'Friend', }, }, @@ -1377,7 +1377,7 @@ module(basename(__filename), function () { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/node-test/friend'), + module: rri('https://localhost:4202/node-test/friend'), name: 'Friend', }, }, @@ -1390,7 +1390,7 @@ module(basename(__filename), function () { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/node-test/friend'), + module: rri('https://localhost:4202/node-test/friend'), name: 'Friend', }, }, @@ -1403,7 +1403,7 @@ module(basename(__filename), function () { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/node-test/friend'), + module: rri('https://localhost:4202/node-test/friend'), name: 'Friend', }, }, @@ -1452,7 +1452,7 @@ module(basename(__filename), function () { }, meta: { adoptsFrom: { - module: rri('http://localhost:4202/node-test/friend'), + module: 
rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1493,7 +1493,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1522,7 +1522,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1551,7 +1551,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1609,7 +1609,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             name: 'Friend',
-            module: rri('http://localhost:4202/node-test/friend'),
+            module: rri('https://localhost:4202/node-test/friend'),
           },
           realmInfo: testRealmInfo,
           realmURL: testRealmHref,
@@ -1676,7 +1676,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1710,7 +1710,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1744,7 +1744,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1816,7 +1816,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             name: 'Friend',
-            module: rri('http://localhost:4202/node-test/friend'),
+            module: rri('https://localhost:4202/node-test/friend'),
           },
           realmInfo: testRealmInfo,
           realmURL: testRealmHref,
@@ -1865,7 +1865,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1899,7 +1899,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -1954,7 +1954,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             name: 'Friend',
-            module: rri('http://localhost:4202/node-test/friend'),
+            module: rri('https://localhost:4202/node-test/friend'),
           },
           realmInfo: testRealmInfo,
           realmURL: testRealmHref,
@@ -2011,7 +2011,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             name: 'Friend',
-            module: rri('http://localhost:4202/node-test/friend'),
+            module: rri('https://localhost:4202/node-test/friend'),
           },
           realmInfo: testRealmInfo,
           realmURL: testRealmHref,
@@ -2043,7 +2043,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -2057,7 +2057,7 @@ module(basename(__filename), function () {
         },
         meta: {
           adoptsFrom: {
-            module: rri('http://localhost:4202/node-test/friend'),
+            module: rri('https://localhost:4202/node-test/friend'),
            name: 'Friend',
           },
           realmURL: ri(`http://some-other-realm/`),
@@ -2105,7 +2105,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -2141,7 +2141,7 @@ module(basename(__filename), function () {
         },
         meta: {
           adoptsFrom: {
-            module: rri('http://localhost:4202/node-test/friend'),
+            module: rri('https://localhost:4202/node-test/friend'),
             name: 'Friend',
           },
           realmURL: ri(testRealmHref.replace(/\/$/, '')),
@@ -2177,7 +2177,7 @@ module(basename(__filename), function () {
           },
           meta: {
             adoptsFrom: {
-              module: rri('http://localhost:4202/node-test/friend'),
+              module: rri('https://localhost:4202/node-test/friend'),
               name: 'Friend',
             },
           },
@@ -2230,7 +2230,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             name: 'Friend',
-            module: rri('http://localhost:4202/node-test/friend'),
+            module: rri('https://localhost:4202/node-test/friend'),
           },
           realmInfo: testRealmInfo,
           realmURL: testRealmHref,
@@ -3529,7 +3529,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             module: rri(
-              'http://localhost:4202/node-test/friend-with-used-link',
+              'https://localhost:4202/node-test/friend-with-used-link',
             ),
             name: 'FriendWithUsedLink',
           },
@@ -3545,7 +3545,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             module: rri(
-              'http://localhost:4202/node-test/friend-with-used-link',
+              'https://localhost:4202/node-test/friend-with-used-link',
             ),
             name: 'FriendWithUsedLink',
           },
@@ -3621,7 +3621,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             module: rri(
-              'http://localhost:4202/node-test/friend-with-used-link',
+              'https://localhost:4202/node-test/friend-with-used-link',
             ),
             name: 'FriendWithUsedLink',
           },
@@ -3652,7 +3652,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             module: rri(
-              'http://localhost:4202/node-test/friend-with-used-link',
+              'https://localhost:4202/node-test/friend-with-used-link',
             ),
             name: 'FriendWithUsedLink',
           },
@@ -3712,7 +3712,7 @@ module(basename(__filename), function () {
           adoptsFrom: {
             name: 'FriendWithUsedLink',
             module: rri(
-              'http://localhost:4202/node-test/friend-with-used-link',
+              'https://localhost:4202/node-test/friend-with-used-link',
             ),
           },
           realmInfo: testRealmInfo,
@@ -3763,7 +3763,7 @@ module(basename(__filename), function () {
         meta: {
           adoptsFrom: {
             module: rri(
-              'http://localhost:4202/node-test/friend-with-used-link',
+              'https://localhost:4202/node-test/friend-with-used-link',
             ),
             name: 'FriendWithUsedLink',
           },
@@ -3819,7 +3819,7 @@ module(basename(__filename), function () {
           adoptsFrom: {
             name: 'FriendWithUsedLink',
             module: rri(
-              'http://localhost:4202/node-test/friend-with-used-link',
+              'https://localhost:4202/node-test/friend-with-used-link',
             ),
           },
           realmInfo: testRealmInfo,
@@ -3865,7 +3865,7 @@ module(basename(__filename), function () {
         },
         meta: {
           adoptsFrom: {
-            module: rri('http://localhost:4202/node-test/friend'),
+            module: rri('https://localhost:4202/node-test/friend'),
             name: 'Friend',
           },
           realmURL: ri(`http://some-other-realm/`),
diff --git a/packages/realm-server/tests/card-source-endpoints-test.ts b/packages/realm-server/tests/card-source-endpoints-test.ts
index 928b023dac1..9b57615e6ba 100644
--- a/packages/realm-server/tests/card-source-endpoints-test.ts
+++ b/packages/realm-server/tests/card-source-endpoints-test.ts
@@ -1,7 +1,7 @@
 import { module, test } from 'qunit';
 import type { Test, SuperTest } from 'supertest';
 import { join, basename } from 'path';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import type { DirResult } from 'tmp';
 import { existsSync, readFileSync } from 'fs-extra';
 import {
diff --git a/packages/realm-server/tests/claim-boxel-domain-test.ts b/packages/realm-server/tests/claim-boxel-domain-test.ts
index 7be5da84d69..e1d2682ae61 100644
--- a/packages/realm-server/tests/claim-boxel-domain-test.ts
+++ b/packages/realm-server/tests/claim-boxel-domain-test.ts
@@ -17,7 +17,7 @@
 import type { RealmServerTokenClaim } from '../utils/jwt';
 import { createJWT as createRealmServerJWT } from '../utils/jwt';
 import type { SuperTest, Test } from 'supertest';
 import supertest from 'supertest';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import { dirSync, type DirResult } from 'tmp';
 import { copySync, ensureDirSync } from 'fs-extra';
diff --git a/packages/realm-server/tests/delete-boxel-claimed-domain-test.ts b/packages/realm-server/tests/delete-boxel-claimed-domain-test.ts
index 8af4bb806cc..41e16b6ea27 100644
--- a/packages/realm-server/tests/delete-boxel-claimed-domain-test.ts
+++ b/packages/realm-server/tests/delete-boxel-claimed-domain-test.ts
@@ -22,7 +22,7 @@
 import { createJWT as createRealmServerJWT } from '../utils/jwt';
 import { realmSecretSeed } from './helpers';
 import type { SuperTest, Test } from 'supertest';
 import supertest from 'supertest';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import { dirSync, type DirResult } from 'tmp';
 import { copySync, ensureDirSync } from 'fs-extra';
diff --git a/packages/realm-server/tests/file-watcher-events-test.ts b/packages/realm-server/tests/file-watcher-events-test.ts
index 2dacafff8b6..c6ea7c00ac9 100644
--- a/packages/realm-server/tests/file-watcher-events-test.ts
+++ b/packages/realm-server/tests/file-watcher-events-test.ts
@@ -1,7 +1,7 @@
 import { module, test } from 'qunit';
 import type { Test, SuperTest } from 'supertest';
 import { join, basename } from 'path';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import type { DirResult } from 'tmp';
 import { removeSync, writeJSONSync, writeFileSync } from 'fs-extra';
 import type { Realm } from '@cardstack/runtime-common';
diff --git a/packages/realm-server/tests/fixtures/realistic/hassan-x.json b/packages/realm-server/tests/fixtures/realistic/hassan-x.json
index 63a9cb4b186..2ffb8cd0370 100644
--- a/packages/realm-server/tests/fixtures/realistic/hassan-x.json
+++ b/packages/realm-server/tests/fixtures/realistic/hassan-x.json
@@ -13,7 +13,7 @@
   },
   "meta": {
     "adoptsFrom": {
-      "module": "http://localhost:4202/node-test/friend-with-used-link",
+      "module": "https://localhost:4202/node-test/friend-with-used-link",
      "name": "FriendWithUsedLink"
    }
  }
diff --git a/packages/realm-server/tests/fixtures/realistic/jade-x.json b/packages/realm-server/tests/fixtures/realistic/jade-x.json
index 1212b7bb03b..c6110865b73 100644
--- a/packages/realm-server/tests/fixtures/realistic/jade-x.json
+++ b/packages/realm-server/tests/fixtures/realistic/jade-x.json
@@ -6,7 +6,7 @@
   },
   "meta": {
     "adoptsFrom": {
-      "module": "http://localhost:4202/node-test/friend-with-used-link",
+      "module": "https://localhost:4202/node-test/friend-with-used-link",
      "name": "FriendWithUsedLink"
    }
  }
diff --git a/packages/realm-server/tests/get-boxel-claimed-domain-test.ts b/packages/realm-server/tests/get-boxel-claimed-domain-test.ts
index d99ac6d2570..62172a11f78 100644
--- a/packages/realm-server/tests/get-boxel-claimed-domain-test.ts
+++ b/packages/realm-server/tests/get-boxel-claimed-domain-test.ts
@@ -17,7 +17,7 @@
 import type { RealmServerTokenClaim } from '../utils/jwt';
 import { createJWT as createRealmServerJWT } from '../utils/jwt';
 import type { SuperTest, Test } from 'supertest';
 import supertest from 'supertest';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import { dirSync, type DirResult } from 'tmp';
 import { copySync, ensureDirSync } from 'fs-extra';
diff --git a/packages/realm-server/tests/helpers/index.ts b/packages/realm-server/tests/helpers/index.ts
index ec862d1a564..6d9fc388086 100644
--- a/packages/realm-server/tests/helpers/index.ts
+++ b/packages/realm-server/tests/helpers/index.ts
@@ -58,7 +58,7 @@ import {
   PgQueuePublisher,
   PgQueueRunner,
 } from '@cardstack/postgres';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import { Socket as NetSocket } from 'net';
 import { MatrixClient } from '@cardstack/runtime-common/matrix-client';
 import {
@@ -543,9 +543,11 @@ export async function closeServer(server: Server) {
   // Force-close idle keep-alive sockets so server.close() resolves promptly.
   // Without this, a lingering connection from the host page (puppeteer fetching
   // from the realm server) can hold the port bound long after the test moves
-  // on, causing EADDRINUSE when the next test tries to re-bind.
-  server.closeIdleConnections?.();
-  server.closeAllConnections?.();
+  // on, causing EADDRINUSE when the next test tries to re-bind. http.Server
+  // exposes these methods; Http2SecureServer does not — cast to widen at this
+  // call site and let the optional chain swallow the missing case.
+  (server as { closeIdleConnections?: () => void }).closeIdleConnections?.();
+  (server as { closeAllConnections?: () => void }).closeAllConnections?.();
   await new Promise<void>((r) => server.close(() => r()));
 
   if (host && typeof port === 'number' && port > 0) {
@@ -835,9 +837,9 @@ export async function stopTestPrerenderServer() {
   prerenderServerStart = undefined;
 }
 
-interface StoppablePrerenderServer extends Server {
+type StoppablePrerenderServer = Server & {
   __stopPrerenderer?: () => Promise<void>;
-}
+};
 
 function hasStopPrerenderer(
   server: Server,
@@ -1100,6 +1102,23 @@ export async function createRealm({
   return { realm, adapter };
 }
 
+// Defense-in-depth for test bootstraps that don't share `tests/index.ts`:
+// strip the dev TLS env vars before any fixture realm-server is spun up.
+// `env-vars.sh` exports these whenever the local mkcert cert exists, which
+// is now the CI default (the init action provisions it). Without this
+// delete, an in-process fixture would bind the HTTPS+HTTP/2 dispatcher
+// on its random `127.0.0.1:444X` port and supertest / direct-fetch
+// callers in tests that connect plain HTTP would get 308-redirected to
+// `https://…`, breaking every assertion that expects `200`/`4xx`.
+// The qunit-runner-driven realm-server tests already do this in their
+// own `tests/index.ts`; this call covers callers like the boxel-cli and
+// workspace-sync vitest suites that consume the helpers without that
+// bootstrap.
+function stripTlsEnvVars() {
+  delete process.env.REALM_SERVER_TLS_CERT_FILE;
+  delete process.env.REALM_SERVER_TLS_KEY_FILE;
+}
+
 export async function runTestRealmServer({
   testRealmDir,
   realmsRootPath,
@@ -1141,6 +1160,7 @@
   };
   prerenderer?: Prerenderer;
 }) {
+  stripTlsEnvVars();
   let prerenderer = providedPrerenderer ?? (await getTestPrerenderer());
   let definitionLookup = new CachingDefinitionLookup(
     dbAdapter,
@@ -1276,6 +1296,7 @@
   };
   prerenderer?: Prerenderer;
 }) {
+  stripTlsEnvVars();
   ensureDirSync(realmsRootPath);
   let prerenderer = providedPrerenderer ?? (await getTestPrerenderer());
diff --git a/packages/realm-server/tests/index.ts b/packages/realm-server/tests/index.ts
index 0aed30e1bc3..a1b6d5f4e7b 100644
--- a/packages/realm-server/tests/index.ts
+++ b/packages/realm-server/tests/index.ts
@@ -1,5 +1,16 @@
 (globalThis as any).__environment = 'test';
 
+// Strip the dev TLS env vars before any fixture realm-server is spun up.
+// `env-vars.sh` exports these whenever the local mkcert cert exists, which
+// is now the CI default. Without this delete, in-process fixture servers
+// would bind the HTTPS+HTTP/2 dispatcher on their random `127.0.0.1:444X`
+// ports and the dispatcher's plain-HTTP branch would 308-redirect every
+// supertest request to `https://…`, breaking every assertion that expects
+// `200`/`4xx`. In-process tests don't need TLS — they speak HTTP/1.1 to
+// supertest directly.
+delete process.env.REALM_SERVER_TLS_CERT_FILE;
+delete process.env.REALM_SERVER_TLS_KEY_FILE;
+
 // Ensure test timers don't hold the Node event loop open. Wrap setTimeout and
 // setInterval to unref timers so the process can exit once work is done. This
 // does have the effect of masking any issues where code should be clearing
@@ -169,6 +180,7 @@ const ALL_TEST_FILES: string[] = [
   './full-reindex-test',
   './indexing-test',
   './lazy-mount-test',
+  './listener-dispatcher-test',
   './module-cache-race-test',
   './module-syntax-test',
   './permissions/permission-checker-test',
diff --git a/packages/realm-server/tests/listener-dispatcher-test.ts b/packages/realm-server/tests/listener-dispatcher-test.ts
new file mode 100644
index 00000000000..6f0ca2cab89
--- /dev/null
+++ b/packages/realm-server/tests/listener-dispatcher-test.ts
@@ -0,0 +1,398 @@
+import { module, test } from 'qunit';
+import { basename } from 'path';
+import { execFileSync } from 'child_process';
+import { mkdtempSync, rmSync, writeFileSync } from 'fs';
+import { tmpdir } from 'os';
+import { join } from 'path';
+import http from 'http';
+import https from 'https';
+import http2 from 'http2';
+import type { AddressInfo } from 'net';
+import Koa from 'koa';
+import { logger } from '@cardstack/runtime-common';
+
+import { createListener } from '../server';
+
+// Coverage for the same-port HTTPS+HTTP/2 dispatcher (CS-11114). Exercises
+// every branch a peer can land in: TLS h2, TLS HTTP/1.1 via ALPN fallback,
+// plain HTTP redirect, and malformed-cert downgrade. Spawns minimal Koa
+// apps with raw http/https clients rather than supertest so we control the
+// negotiation explicitly.
+//
+// Tests bootstrap clears REALM_SERVER_TLS_CERT_FILE/_KEY_FILE globally;
+// this suite restores them per-test via a module-scoped setup that
+// generates a fresh self-signed cert into a tmp dir.
+
+let tmpCertDir: string;
+let certFile: string;
+let keyFile: string;
+
+function makeCert(dir: string): { cert: string; key: string } {
+  let cert = join(dir, 'cert.pem');
+  let key = join(dir, 'key.pem');
+  // openssl is universally available on the GH-hosted Ubuntu CI image
+  // and on every dev box (mkcert depends on it, dev-cert task uses it).
+  // The cert covers localhost + 127.0.0.1 + ::1 so the test client can
+  // reach it via any local loopback address.
+  execFileSync('openssl', [
+    'req',
+    '-x509',
+    '-newkey',
+    'rsa:2048',
+    '-keyout',
+    key,
+    '-out',
+    cert,
+    '-days',
+    '1',
+    '-nodes',
+    '-subj',
+    '/CN=localhost',
+    '-addext',
+    'subjectAltName=DNS:localhost,IP:127.0.0.1,IP:::1',
+  ]);
+  return { cert, key };
+}
+
+function makeApp(): Koa {
+  let app = new Koa();
+  app.use(async (ctx) => {
+    ctx.status = 200;
+    ctx.set('content-type', 'text/plain');
+    ctx.body = `ok via ${ctx.req.httpVersion}`;
+  });
+  return app;
+}
+
+async function startListener(opts: {
+  cert?: string | null;
+  key?: string | null;
+}): Promise<{
+  port: number;
+  isHttp2: boolean;
+  close: () => Promise<void>;
+}> {
+  let priorCert = process.env.REALM_SERVER_TLS_CERT_FILE;
+  let priorKey = process.env.REALM_SERVER_TLS_KEY_FILE;
+  if (opts.cert == null) {
+    delete process.env.REALM_SERVER_TLS_CERT_FILE;
+  } else {
+    process.env.REALM_SERVER_TLS_CERT_FILE = opts.cert;
+  }
+  if (opts.key == null) {
+    delete process.env.REALM_SERVER_TLS_KEY_FILE;
+  } else {
+    process.env.REALM_SERVER_TLS_KEY_FILE = opts.key;
+  }
+  let { server, proto } = createListener(logger('test:dispatcher'), makeApp());
+  let isHttp2 = proto === 'https/h2';
+  await new Promise<void>((resolve) => server.listen(0, '127.0.0.1', resolve));
+  let port = (server.address() as AddressInfo).port;
+  let close = async () => {
+    // Restore env vars before we tear the server down so a later test
+    // can't observe leftover state from this one.
+    if (priorCert !== undefined) {
+      process.env.REALM_SERVER_TLS_CERT_FILE = priorCert;
+    } else {
+      delete process.env.REALM_SERVER_TLS_CERT_FILE;
+    }
+    if (priorKey !== undefined) {
+      process.env.REALM_SERVER_TLS_KEY_FILE = priorKey;
+    } else {
+      delete process.env.REALM_SERVER_TLS_KEY_FILE;
+    }
+    let force = (server as { closeAllConnections?: () => void })
+      .closeAllConnections;
+    if (typeof force === 'function') {
+      force.call(server);
+    }
+    await new Promise<void>((resolve, reject) =>
+      server.close((err) => (err ? reject(err) : resolve())),
+    );
+  };
+  return { port, isHttp2, close };
+}
+
+function h1Request(opts: {
+  host: string;
+  port: number;
+  path: string;
+  scheme: 'http' | 'https';
+  headers?: Record<string, string>;
+  followRedirect?: boolean;
+}): Promise<{
+  status: number;
+  headers: http.IncomingHttpHeaders;
+  body: string;
+}> {
+  return new Promise((resolve, reject) => {
+    let client = opts.scheme === 'https' ? https : http;
+    let req = (client as typeof https).request(
+      {
+        host: opts.host,
+        port: opts.port,
+        path: opts.path,
+        method: 'GET',
+        rejectUnauthorized: false,
+        headers: opts.headers,
+      },
+      (res) => {
+        let chunks: Buffer[] = [];
+        res.on('data', (c) => chunks.push(c as Buffer));
+        res.on('end', () => {
+          resolve({
+            status: res.statusCode ?? 0,
+            headers: res.headers,
+            body: Buffer.concat(chunks).toString('utf8'),
+          });
+        });
+      },
+    );
+    req.on('error', reject);
+    req.end();
+  });
+}
+
+function h2Request(opts: {
+  port: number;
+  path: string;
+  method?: 'GET' | 'HEAD';
+  timeoutMs?: number;
+}): Promise<{
+  status: number;
+  body: string;
+  protocol: string;
+  responseHeaders: Record<string, string>;
+}> {
+  return new Promise((resolve, reject) => {
+    let client = http2.connect(`https://127.0.0.1:${opts.port}`, {
+      rejectUnauthorized: false,
+    });
+    client.on('error', reject);
+    let req = client.request({
+      ':method': opts.method ?? 'GET',
+      ':path': opts.path,
+    });
+    if (opts.timeoutMs) {
+      req.setTimeout(opts.timeoutMs, () => {
+        req.close();
+        client.close();
+        reject(new Error(`h2 request timed out after ${opts.timeoutMs}ms`));
+      });
+    }
+    let status = 0;
+    let responseHeaders: Record<string, string> = {};
+    let chunks: Buffer[] = [];
+    req.on('response', (headers) => {
+      status = Number(headers[':status'] ?? 0);
+      for (let [k, v] of Object.entries(headers)) {
+        if (k.startsWith(':')) continue;
+        responseHeaders[k] = Array.isArray(v) ? v.join(', ') : String(v);
+      }
+    });
+    req.on('data', (c) => chunks.push(c as Buffer));
+    req.on('end', () => {
+      let body = Buffer.concat(chunks).toString('utf8');
+      client.close();
+      resolve({ status, body, protocol: 'h2', responseHeaders });
+    });
+    req.on('error', reject);
+    req.end();
+  });
+}
+
+module(basename(__filename), function (hooks) {
+  hooks.before(function () {
+    tmpCertDir = mkdtempSync(join(tmpdir(), 'realm-listener-test-'));
+    let pair = makeCert(tmpCertDir);
+    certFile = pair.cert;
+    keyFile = pair.key;
+  });
+
+  hooks.after(function () {
+    rmSync(tmpCertDir, { recursive: true, force: true });
+  });
+
+  test('TLS h2 path returns 200', async function (assert) {
+    let { port, isHttp2, close } = await startListener({
+      cert: certFile,
+      key: keyFile,
+    });
+    try {
+      assert.true(isHttp2, 'listener advertises h2 mode');
+      let res = await h2Request({ port, path: '/_alive' });
+      assert.strictEqual(res.status, 200, 'h2 GET returns 200');
+      assert.true(
+        res.body.includes('ok via 2.0'),
+        `body indicates HTTP/2 — got "${res.body}"`,
+      );
+    } finally {
+      await close();
+    }
+  });
+
+  test('TLS h2 HEAD returns 200 without hanging', async function (assert) {
+    // Regression: Node's http2 compat layer marks Http2Stream.writable=false
+    // for HEAD-method server streams. Koa.respond() then short-circuits on
+    // `!ctx.writable` without sending any headers and the client hangs
+    // until its timeout. `patchKoaResponseForH2Head()` (applied inside
+    // `createListener` when an h2 listener is constructed) restores normal
+    // HEAD semantics. Without the patch, this test would time out below.
+    let { port, isHttp2, close } = await startListener({
+      cert: certFile,
+      key: keyFile,
+    });
+    try {
+      assert.true(isHttp2, 'listener advertises h2 mode');
+      let res = await h2Request({
+        port,
+        path: '/_alive',
+        method: 'HEAD',
+        timeoutMs: 2000,
+      });
+      assert.strictEqual(res.status, 200, 'h2 HEAD returns 200');
+      assert.strictEqual(res.body, '', 'h2 HEAD body is empty');
+      assert.strictEqual(
+        res.responseHeaders['content-length'],
+        String(Buffer.byteLength('ok via 2.0')),
+        'h2 HEAD reports the GET body length via content-length',
+      );
+    } finally {
+      await close();
+    }
+  });
+
+  test('TLS HTTP/1.1 ALPN fallback returns 200', async function (assert) {
+    let { port, close } = await startListener({
+      cert: certFile,
+      key: keyFile,
+    });
+    try {
+      let res = await h1Request({
+        host: '127.0.0.1',
+        port,
+        path: '/_alive',
+        scheme: 'https',
+      });
+      assert.strictEqual(res.status, 200, 'https/1.1 GET returns 200');
+      assert.true(
+        res.body.includes('ok via 1.1'),
+        `body indicates HTTP/1.1 — got "${res.body}"`,
+      );
+    } finally {
+      await close();
+    }
+  });
+
+  test('plain HTTP request gets 308 redirect to https', async function (assert) {
+    let { port, close } = await startListener({
+      cert: certFile,
+      key: keyFile,
+    });
+    try {
+      let res = await h1Request({
+        host: '127.0.0.1',
+        port,
+        path: '/_alive',
+        scheme: 'http',
+      });
+      assert.strictEqual(res.status, 308, 'plain http GET returns 308');
+      let location =
+        typeof res.headers.location === 'string' ? res.headers.location : '';
+      assert.true(
+        location.startsWith('https://'),
+        `Location is https:// — got "${location}"`,
+      );
+      assert.true(
+        location.endsWith('/_alive'),
+        `Location preserves /_alive — got "${location}"`,
+      );
+    } finally {
+      await close();
+    }
+  });
+
+  test('plain HTTP without Host header still produces a valid https Location', async function (assert) {
+    let { port, close } = await startListener({
+      cert: certFile,
+      key: keyFile,
+    });
+    try {
+      // node's http.request always sets Host; drop down to a raw socket
+      // for the no-Host case to exercise the socket.localAddress
+      // fallback in redirectToHttps.
+      let net = await import('net');
+      let response = await new Promise<string>((resolve, reject) => {
+        let socket = net.connect(port, '127.0.0.1');
+        let chunks: Buffer[] = [];
+        socket.on('error', reject);
+        socket.on('data', (c) => chunks.push(c));
+        socket.on('end', () => resolve(Buffer.concat(chunks).toString('utf8')));
+        socket.write('GET /_alive HTTP/1.0\r\n\r\n');
+      });
+      let locMatch = response.match(/^Location:\s*(.+)\r$/im);
+      let location = locMatch?.[1] ?? '';
+      let statusLine = response.split('\n')[0]?.trim() ?? '';
+      assert.true(
+        /^HTTP\/1\.[01] 308\b/.test(statusLine),
+        `got 308 — first line was "${statusLine}"`,
+      );
+      assert.true(
+        location.startsWith('https://127.0.0.1:'),
+        `Location uses https + bound host — got "${location}"`,
+      );
+      assert.true(
+        location.endsWith('/_alive'),
+        `Location preserves /_alive — got "${location}"`,
+      );
+    } finally {
+      await close();
+    }
+  });
+
+  test('malformed cert downgrades to plain HTTP listener', async function (assert) {
+    let badCert = join(tmpCertDir, 'bad-cert.pem');
+    let badKey = join(tmpCertDir, 'bad-key.pem');
+    writeFileSync(badCert, 'not a real cert');
+    writeFileSync(badKey, 'not a real key');
+    let { port, isHttp2, close } = await startListener({
+      cert: badCert,
+      key: badKey,
+    });
+    try {
+      assert.false(isHttp2, 'listener falls back to plain HTTP');
+      let res = await h1Request({
+        host: '127.0.0.1',
+        port,
+        path: '/_alive',
+        scheme: 'http',
+      });
+      assert.strictEqual(res.status, 200, 'plain http GET returns 200');
+      assert.true(
+        res.body.includes('ok via 1.1'),
+        `body indicates HTTP/1.1 — got "${res.body}"`,
+      );
+    } finally {
+      await close();
+    }
+  });
+
+  test('no cert env vars produces plain HTTP listener', async function (assert) {
+    let { port, isHttp2, close } = await startListener({
+      cert: null,
+      key: null,
+    });
+    try {
+      assert.false(isHttp2, 'listener stays on plain HTTP');
+      let res = await h1Request({
+        host: '127.0.0.1',
+        port,
+        path: '/_alive',
+        scheme: 'http',
+      });
+      assert.strictEqual(res.status, 200);
+    } finally {
+      await close();
+    }
+  });
+});
diff --git a/packages/realm-server/tests/module-cache-race-test.ts b/packages/realm-server/tests/module-cache-race-test.ts
index cf9f6afbcfd..0560e312a64 100644
--- a/packages/realm-server/tests/module-cache-race-test.ts
+++ b/packages/realm-server/tests/module-cache-race-test.ts
@@ -3,7 +3,7 @@ import { basename, join } from 'path';
 import { ensureDirSync, writeFileSync, writeJSONSync } from 'fs-extra';
 import { dirSync } from 'tmp';
 import type { SuperTest, Test } from 'supertest';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import type { Realm } from '@cardstack/runtime-common';
 import {
   CachingDefinitionLookup,
diff --git a/packages/realm-server/tests/module-syntax-test.ts b/packages/realm-server/tests/module-syntax-test.ts
index 910f1700802..6f1ae22296c 100644
--- a/packages/realm-server/tests/module-syntax-test.ts
+++ b/packages/realm-server/tests/module-syntax-test.ts
@@ -357,7 +357,7 @@ module(basename(__filename), function () {
       fieldType: 'linksTo',
       fieldDefinitionType: 'card',
       incomingRelativeTo: undefined,
-      outgoingRelativeTo: new URL('http://localhost:4202/node-test/pet'), // outgoing card
+      outgoingRelativeTo: new URL('https://localhost:4202/node-test/pet'), // outgoing card
       outgoingRealmURL: undefined,
     });
 
@@ -395,7 +395,7 @@ module(basename(__filename), function () {
       fieldType: 'contains',
       fieldDefinitionType: 'field',
       incomingRelativeTo: undefined,
-      outgoingRelativeTo: new URL('http://localhost:4202/node-test/pet'), // outgoing card
+      outgoingRelativeTo: new URL('https://localhost:4202/node-test/pet'), // outgoing card
       outgoingRealmURL: undefined,
     });
 
@@ -435,9 +435,9 @@ module(basename(__filename), function () {
       },
       fieldType: 'linksTo',
       fieldDefinitionType: 'card',
-      incomingRelativeTo: rri(`http://localhost:4202/node-test/spec/1`), // hypothethical spec that lives at this id
-      outgoingRelativeTo: new URL('http://localhost:4202/node-test/pet'), // outgoing card
-      outgoingRealmURL: new URL('http://localhost:4202/node-test/'), // the realm that the spec lives in
+      incomingRelativeTo: rri(`https://localhost:4202/node-test/spec/1`), // hypothethical spec that lives at this id
+      outgoingRelativeTo: new URL('https://localhost:4202/node-test/pet'), // outgoing card
+      outgoingRealmURL: new URL('https://localhost:4202/node-test/'), // the realm that the spec lives in
     });
 
     assert.codeEqual(
@@ -477,15 +477,15 @@ module(basename(__filename), function () {
       },
       fieldType: 'linksTo',
       fieldDefinitionType: 'card',
-      incomingRelativeTo: rri(`http://localhost:4202/test/spec/1`), // hypothethical spec that lives at this id
-      outgoingRelativeTo: new URL('http://localhost:4202/node-test/pet'), // outgoing card
-      outgoingRealmURL: new URL('http://localhost:4202/node-test/'), // the realm that thel spec lives in
+      incomingRelativeTo: rri(`https://localhost:4202/test/spec/1`), // hypothethical spec that lives at this id
+      outgoingRelativeTo: new URL('https://localhost:4202/node-test/pet'), // outgoing card
+      outgoingRealmURL: new URL('https://localhost:4202/node-test/'), // the realm that thel spec lives in
     });
 
     assert.codeEqual(
       mod.code(),
       `
-      import { Person as PersonCard } from "http://localhost:4202/test/person";
+      import { Person as PersonCard } from "https://localhost:4202/test/person";
       import { contains, field, CardDef, linksTo } from "https://cardstack.com/base/card-api";
       import StringField from "https://cardstack.com/base/string";
       export class Pet extends CardDef {
diff --git a/packages/realm-server/tests/openrouter-passthrough-test.ts b/packages/realm-server/tests/openrouter-passthrough-test.ts
index 54097381407..4d3fbf30dd4 100644
--- a/packages/realm-server/tests/openrouter-passthrough-test.ts
+++ b/packages/realm-server/tests/openrouter-passthrough-test.ts
@@ -3,7 +3,7 @@ import sinon from 'sinon';
 import type { Test, SuperTest } from 'supertest';
 import supertest from 'supertest';
 import { basename, join } from 'path';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import { dirSync, type DirResult } from 'tmp';
 import { copySync, ensureDirSync } from 'fs-extra';
 import {
diff --git a/packages/realm-server/tests/prerender-manager-test.ts b/packages/realm-server/tests/prerender-manager-test.ts
index e63b585f935..bbe5048d467 100644
--- a/packages/realm-server/tests/prerender-manager-test.ts
+++ b/packages/realm-server/tests/prerender-manager-test.ts
@@ -4,7 +4,7 @@ import supertest from 'supertest';
 import { basename } from 'path';
 import Koa from 'koa';
 import Router from '@koa/router';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import http, { createServer } from 'http';
 import { buildPrerenderManagerApp } from '../prerender/manager-app';
 import {
diff --git a/packages/realm-server/tests/publish-unpublish-realm-test.ts b/packages/realm-server/tests/publish-unpublish-realm-test.ts
index 961fb4ad988..145d02200bb 100644
--- a/packages/realm-server/tests/publish-unpublish-realm-test.ts
+++ b/packages/realm-server/tests/publish-unpublish-realm-test.ts
@@ -12,7 +12,7 @@ import {
   removeSync,
 } from 'fs-extra';
 import { basename, join } from 'path';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import { dirSync, type DirResult } from 'tmp';
 import type { Realm, VirtualNetwork } from '@cardstack/runtime-common';
 import {
diff --git a/packages/realm-server/tests/realm-endpoints-test.ts b/packages/realm-server/tests/realm-endpoints-test.ts
index 7454792eeec..8255c2fd784 100644
--- a/packages/realm-server/tests/realm-endpoints-test.ts
+++ b/packages/realm-server/tests/realm-endpoints-test.ts
@@ -2,7 +2,7 @@ import { module, test } from 'qunit';
 import type { Test, SuperTest } from 'supertest';
 import supertest from 'supertest';
 import { join, resolve, basename } from 'path';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import { dirSync, type DirResult } from 'tmp';
 import {
   copySync,
diff --git a/packages/realm-server/tests/realm-endpoints/dependencies-test.ts b/packages/realm-server/tests/realm-endpoints/dependencies-test.ts
index e8616d00c9c..ee49b093c49 100644
--- a/packages/realm-server/tests/realm-endpoints/dependencies-test.ts
+++ b/packages/realm-server/tests/realm-endpoints/dependencies-test.ts
@@ -3,7 +3,7 @@ import type { SuperTest, Test } from 'supertest';
 import { basename } from 'path';
 import type { Realm } from '@cardstack/runtime-common';
 import { SupportedMimeType } from '@cardstack/runtime-common';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import { closeServer, setupPermissionedRealmCached } from '../helpers';
 
 module(`realm-endpoints/${basename(__filename)}`, function (hooks) {
diff --git a/packages/realm-server/tests/realm-endpoints/info-test.ts b/packages/realm-server/tests/realm-endpoints/info-test.ts
index fee220e9e2d..deffbadf39d 100644
--- a/packages/realm-server/tests/realm-endpoints/info-test.ts
+++ b/packages/realm-server/tests/realm-endpoints/info-test.ts
@@ -1,7 +1,7 @@
 import { module, test } from 'qunit';
 import type { Test, SuperTest } from 'supertest';
 import { basename } from 'path';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import type { Realm } from '@cardstack/runtime-common';
 import {
   setupPermissionedRealmCached,
diff --git a/packages/realm-server/tests/realm-endpoints/markdown-test.ts b/packages/realm-server/tests/realm-endpoints/markdown-test.ts
index b242aed3ebf..cb82bea6d9b 100644
--- a/packages/realm-server/tests/realm-endpoints/markdown-test.ts
+++ b/packages/realm-server/tests/realm-endpoints/markdown-test.ts
@@ -4,7 +4,7 @@
 import { basename } from 'path';
 import type { Realm } from '@cardstack/runtime-common';
 import { rri } from '@cardstack/runtime-common';
 import { SupportedMimeType } from '@cardstack/runtime-common';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import { closeServer, setupPermissionedRealmCached } from '../helpers';
 
 // CS-10789 end-to-end tests for the markdown rendering pipeline (CS-10782
diff --git a/packages/realm-server/tests/realm-endpoints/reindex-test.ts b/packages/realm-server/tests/realm-endpoints/reindex-test.ts
index d0ca64e70f7..de5b43104a0 100644
--- a/packages/realm-server/tests/realm-endpoints/reindex-test.ts
+++ b/packages/realm-server/tests/realm-endpoints/reindex-test.ts
@@ -4,7 +4,7 @@ import { readFileSync, utimesSync, writeFileSync } from 'fs';
 import type { SuperTest, Test } from 'supertest';
 import type { Realm } from '@cardstack/runtime-common';
 import type { MatrixEvent } from 'https://cardstack.com/base/matrix-event';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import type { DirResult } from 'tmp';
 import {
   createJWT,
diff --git a/packages/realm-server/tests/realm-endpoints/user-test.ts b/packages/realm-server/tests/realm-endpoints/user-test.ts
index 8a22fa869ec..9e14f3b7757 100644
--- a/packages/realm-server/tests/realm-endpoints/user-test.ts
+++ b/packages/realm-server/tests/realm-endpoints/user-test.ts
@@ -1,7 +1,7 @@
 import { module, test } from 'qunit';
 import type { Test, SuperTest } from 'supertest';
 import { basename } from 'path';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import type { Realm } from '@cardstack/runtime-common';
 import {
   setupPermissionedRealmCached,
diff --git a/packages/realm-server/tests/request-forward-test.ts b/packages/realm-server/tests/request-forward-test.ts
index 57e217d8625..5653f4fe648 100644
--- a/packages/realm-server/tests/request-forward-test.ts
+++ b/packages/realm-server/tests/request-forward-test.ts
@@ -3,7 +3,7 @@ import sinon from 'sinon';
 import type { Test, SuperTest } from 'supertest';
 import supertest from 'supertest';
 import { basename, join } from 'path';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../server';
 import { dirSync, type DirResult } from 'tmp';
 import { copySync, ensureDirSync } from 'fs-extra';
 import {
diff --git a/packages/realm-server/tests/server-endpoints/authentication-test.ts b/packages/realm-server/tests/server-endpoints/authentication-test.ts
index fc5a8b8fd2f..6415b38c4de 100644
--- a/packages/realm-server/tests/server-endpoints/authentication-test.ts
+++ b/packages/realm-server/tests/server-endpoints/authentication-test.ts
@@ -1,7 +1,7 @@
 import { module, test } from 'qunit';
 import { basename } from 'path';
 import type { Test, SuperTest } from 'supertest';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import jwt from 'jsonwebtoken';
 import { MatrixClient } from '@cardstack/runtime-common/matrix-client';
 import type { RealmServerTokenClaim } from '../../utils/jwt';
diff --git a/packages/realm-server/tests/server-endpoints/federated-types-test.ts b/packages/realm-server/tests/server-endpoints/federated-types-test.ts
index 0527ad96aaf..c11b4148313 100644
--- a/packages/realm-server/tests/server-endpoints/federated-types-test.ts
+++ b/packages/realm-server/tests/server-endpoints/federated-types-test.ts
@@ -22,7 +22,7 @@ import {
   runTestRealmServerWithRealms,
 } from '../helpers';
 import { createJWT as createRealmServerJWT } from '../../utils/jwt';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 
 interface FederatedTypesResponse {
   data: FederatedCardTypeSummaryEntry[];
diff --git a/packages/realm-server/tests/server-endpoints/helpers.ts b/packages/realm-server/tests/server-endpoints/helpers.ts
index 1fa5e4c5706..616525243e4 100644
--- a/packages/realm-server/tests/server-endpoints/helpers.ts
+++ b/packages/realm-server/tests/server-endpoints/helpers.ts
@@ -6,7 +6,7 @@ import type {
   Realm,
   VirtualNetwork,
 } from '@cardstack/runtime-common';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import type { PgAdapter } from '@cardstack/postgres';
 import type { RealmServer } from '../../server';
 import {
diff --git a/packages/realm-server/tests/server-endpoints/index-responses-test.ts b/packages/realm-server/tests/server-endpoints/index-responses-test.ts
index 5a5cbfdcb33..bed9eead294 100644
--- a/packages/realm-server/tests/server-endpoints/index-responses-test.ts
+++ b/packages/realm-server/tests/server-endpoints/index-responses-test.ts
@@ -2,7 +2,7 @@ import { module, test } from 'qunit';
 import { join, basename } from 'path';
 import supertest from 'supertest';
 import type { Test, SuperTest } from 'supertest';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 import { dirSync, type DirResult } from 'tmp';
 import {
   DEFAULT_PERMISSIONS,
diff --git a/packages/realm-server/tests/server-endpoints/info-test.ts b/packages/realm-server/tests/server-endpoints/info-test.ts
index 7a69dd55aca..8666ba8867c 100644
--- a/packages/realm-server/tests/server-endpoints/info-test.ts
+++ b/packages/realm-server/tests/server-endpoints/info-test.ts
@@ -19,7 +19,7 @@ import {
   runTestRealmServerWithRealms,
 } from '../helpers';
 import { createJWT as createRealmServerJWT } from '../../utils/jwt';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 
 module(`server-endpoints/${basename(__filename)}`, function (_hooks) {
   module('Realm Server Endpoints | /_federated-info', function (hooks) {
diff --git a/packages/realm-server/tests/server-endpoints/search-prerendered-test.ts b/packages/realm-server/tests/server-endpoints/search-prerendered-test.ts
index a7103347915..7a87c529c64 100644
--- a/packages/realm-server/tests/server-endpoints/search-prerendered-test.ts
+++ b/packages/realm-server/tests/server-endpoints/search-prerendered-test.ts
@@ -26,7 +26,7 @@ import {
   runTestRealmServerWithRealms,
 } from '../helpers';
 import { createJWT as createRealmServerJWT } from '../../utils/jwt';
-import type { Server } from 'http';
+import type { RealmHttpServer as Server } from '../../server';
 
 module(`server-endpoints/${basename(__filename)}`, function (_hooks) {
   module(
diff --git
a/packages/realm-server/tests/server-endpoints/search-test.ts b/packages/realm-server/tests/server-endpoints/search-test.ts index ef283b7428e..b03bf2309fe 100644 --- a/packages/realm-server/tests/server-endpoints/search-test.ts +++ b/packages/realm-server/tests/server-endpoints/search-test.ts @@ -23,7 +23,7 @@ import { runTestRealmServerWithRealms, } from '../helpers'; import { createJWT as createRealmServerJWT } from '../../utils/jwt'; -import type { Server } from 'http'; +import type { RealmHttpServer as Server } from '../../server'; module(`server-endpoints/${basename(__filename)}`, function (_hooks) { module('Realm Server Endpoints | /_federated-search', function (hooks) { diff --git a/packages/realm-server/tests/types-endpoint-test.ts b/packages/realm-server/tests/types-endpoint-test.ts index f2ad7476e18..42acc8e5c3a 100644 --- a/packages/realm-server/tests/types-endpoint-test.ts +++ b/packages/realm-server/tests/types-endpoint-test.ts @@ -1,7 +1,7 @@ import { module, test } from 'qunit'; import type { Test, SuperTest } from 'supertest'; import { join, basename } from 'path'; -import type { Server } from 'http'; +import type { RealmHttpServer as Server } from '../server'; import type { DirResult } from 'tmp'; import { copySync, ensureDirSync } from 'fs-extra'; import type { Realm } from '@cardstack/runtime-common'; @@ -166,7 +166,7 @@ module(basename(__filename), function () { }, { type: 'card-type-summary', - id: 'http://localhost:4202/node-test/friend-with-used-link/FriendWithUsedLink', + id: 'https://localhost:4202/node-test/friend-with-used-link/FriendWithUsedLink', attributes: { displayName: 'FriendWithUsedLink', total: 2, diff --git a/packages/realm-test-harness/src/isolated-realm-stack.ts b/packages/realm-test-harness/src/isolated-realm-stack.ts index ac4c68b1f7c..212c54598f5 100644 --- a/packages/realm-test-harness/src/isolated-realm-stack.ts +++ b/packages/realm-test-harness/src/isolated-realm-stack.ts @@ -468,8 +468,23 @@ export async function 
startIsolatedRealmStack({ ); } + // Strip the dev TLS env vars exported by env-vars.sh when CI's init + // action provisions the cert. The harness drives plain + // `http://localhost:/...` URLs against the spawned + // realm-server; if the child inherits the cert env vars it binds + // the HTTPS+HTTP/2 dispatcher and every harness HTTP request gets + // 301-redirected, breaking benchmarks and tests that don't follow + // redirects through their auth handshake. + let { + REALM_SERVER_TLS_CERT_FILE: _certFile, + REALM_SERVER_TLS_KEY_FILE: _keyFile, + ...rest + } = process.env; + void _certFile; + void _keyFile; + let env = { - ...process.env, + ...rest, PGHOST: DEFAULT_PG_HOST, PGPORT: DEFAULT_PG_PORT, PGUSER: DEFAULT_PG_USER, diff --git a/packages/realm-test-harness/src/support-services.ts b/packages/realm-test-harness/src/support-services.ts index d06a0c143eb..894c7721f23 100644 --- a/packages/realm-test-harness/src/support-services.ts +++ b/packages/realm-test-harness/src/support-services.ts @@ -252,6 +252,20 @@ async function ensureHostReady(): Promise<{ ); await portReservation.release(); + // Strip REALM_SERVER_TLS_CERT_FILE / _KEY_FILE before spawning vite + // preview. packages/host/vite.config.mjs reads those env vars and, + // when present, terminates TLS in vite preview too. The harness + // probes readiness via `fetch('http://localhost:/')` and + // hands the same http URL to spawned realm-servers via HOST_URL, + // so an HTTPS preview server would make the readiness probe and + // every downstream HOST_URL fetch fail. The dev stack's HTTPS + // origin lives on a fixed port (4200); harness ports are dynamic + // and never browser-facing, so plain HTTP is the right scheme + // here. 
+ let { REALM_SERVER_TLS_CERT_FILE, REALM_SERVER_TLS_KEY_FILE, ...env } = + process.env; + void REALM_SERVER_TLS_CERT_FILE; + void REALM_SERVER_TLS_KEY_FILE; let child = spawn( 'npx', ['vite', 'preview', '--port', String(port), '--strictPort'], @@ -259,7 +273,7 @@ async function ensureHostReady(): Promise<{ cwd: hostPackageDir, detached: true, stdio: ['ignore', 'pipe', 'pipe'], - env: process.env, + env, }, ); diff --git a/packages/software-factory/README.md b/packages/software-factory/README.md index a0a05309dcd..8c2d0c1ae90 100644 --- a/packages/software-factory/README.md +++ b/packages/software-factory/README.md @@ -47,7 +47,7 @@ The orchestrator (`runIssueLoop`) is a thin scheduler that picks the next unbloc ## Running the Factory -Make sure the prerequisites above are met, and that you have a brief card published in the software-factory realm (e.g., `http://localhost:4201/software-factory/Wiki/sticky-note`). +Make sure the prerequisites above are met, and that you have a brief card published in the software-factory realm (e.g., `https://localhost:4201/software-factory/Wiki/sticky-note`). 
Set up your profile: @@ -61,8 +61,8 @@ Then run the factory (default backend is Claude via the Agent SDK): cd packages/software-factory pnpm factory:go \ - --brief-url http://localhost:4201/software-factory/Wiki/sticky-note \ - --target-realm http://localhost:4201/your-username/my-test-realm/ \ + --brief-url https://localhost:4201/software-factory/Wiki/sticky-note \ + --target-realm https://localhost:4201/your-username/my-test-realm/ \ --debug ``` @@ -90,15 +90,15 @@ To skip retrying blocked issues, use `--no-retry-blocked`: ```bash pnpm factory:go \ - --brief-url http://localhost:4201/software-factory/Wiki/sticky-note \ - --target-realm http://localhost:4201/your-username/my-test-realm/ \ + --brief-url https://localhost:4201/software-factory/Wiki/sticky-note \ + --target-realm https://localhost:4201/your-username/my-test-realm/ \ --no-retry-blocked ``` ### What to expect on the command line ``` -[factory:go] brief=http://localhost:4201/software-factory/Wiki/sticky-note +[factory:go] brief=https://localhost:4201/software-factory/Wiki/sticky-note [factory:go] Starting seed issue + issue-driven loop... [factory-seed] Creating seed issue at Issues/bootstrap-seed.json [issue-loop] Starting issue loop: targetRealm=..., maxIterationsPerIssue=5 @@ -162,7 +162,7 @@ Key modules: ## Notes -- **Realm card tests (`realm/*.test.gts`)** — QUnit tests co-located with source realm card definitions. These run inside the Boxel host app (via the host test suite), not via Playwright. To run them, use `pnpm test` in `packages/host` with the relevant test file pattern. They are separate from the Playwright specs in `tests/` which test the factory loop end-to-end. To run them interactively in the browser, go to: `http://localhost:4200/tests/index.html?liveTest=true&realmURL=http%3A%2F%2Flocalhost%3A4201%2Fsoftware-factory%2F` +- **Realm card tests (`realm/*.test.gts`)** — QUnit tests co-located with source realm card definitions. 
These run inside the Boxel host app (via the host test suite), not via Playwright. To run them, use `pnpm test` in `packages/host` with the relevant test file pattern. They are separate from the Playwright specs in `tests/` which test the factory loop end-to-end. To run them interactively in the browser, go to: `https://localhost:4200/tests/index.html?liveTest=true&realmURL=http%3A%2F%2Flocalhost%3A4201%2Fsoftware-factory%2F` - Template DBs are reused across runs while the seeded Postgres container stays up. - `serve:support` publishes a shared support context in `/tmp/software-factory-runtime/support.json`. - When that shared support context exists, `serve:realm` and `smoke:realm` reuse the running Synapse and prerender services instead of restarting them. diff --git a/packages/workspace-sync-cli/tests/helpers/start-test-realm.ts b/packages/workspace-sync-cli/tests/helpers/start-test-realm.ts index 172f5af264e..1fd44dfcb13 100644 --- a/packages/workspace-sync-cli/tests/helpers/start-test-realm.ts +++ b/packages/workspace-sync-cli/tests/helpers/start-test-realm.ts @@ -23,6 +23,15 @@ export async function startTestRealmServer( // Use unique test database name like isolated-realm-server const testDbName = `test_db_${Math.floor(10000000 * Math.random())}`; + // Inherit REALM_SERVER_TLS_CERT_FILE / _KEY_FILE so the spawned + // realm-server speaks HTTPS+HTTP/2 on :4205, matching the production + // wire and the matrix harness's isolated stack. The integration tests + // below now drive `https://localhost:4205/test/` URLs end-to-end; the + // mkcert leaf (provisioned by infra:ensure-dev-cert) covers + // `localhost` so cert validation works for the canonical realm URL, + // and the WSC CLI gets `NODE_TLS_REJECT_UNAUTHORIZED=0` via the + // process env below to keep this harness independent of the global + // NODE_EXTRA_CA_CERTS chain. 
const env = { ...process.env, PGHOST: 'localhost', @@ -37,6 +46,11 @@ export async function startTestRealmServer( NODE_ENV: 'test', NODE_NO_WARNINGS: '1', LOW_CREDIT_THRESHOLD: '2000', + // The WSC CLI under test makes its own Node-side fetches against + // the spawned realm-server. Disable strict TLS on this harness so + // the integration suite doesn't depend on whether mise's + // env-vars.sh has populated NODE_EXTRA_CA_CERTS in the test shell. + NODE_TLS_REJECT_UNAUTHORIZED: '0', }; // Minimal stub prerender server to satisfy required args without needing full prerender stack @@ -89,13 +103,13 @@ export async function startTestRealmServer( 'worker-manager', '--port=4212', '--matrixURL=http://localhost:8008', - `--distURL=${process.env.HOST_URL ?? 'http://localhost:4200'}`, + `--distURL=${process.env.HOST_URL ?? 'https://localhost:4200'}`, `--prerendererUrl=http://localhost:${prerenderPort}`, '--migrateDB', - '--fromUrl=http://localhost:4205/test/', - '--toUrl=http://localhost:4205/test/', + '--fromUrl=https://localhost:4205/test/', + '--toUrl=https://localhost:4205/test/', '--fromUrl=https://cardstack.com/base/', - '--toUrl=http://localhost:4201/base/', + '--toUrl=https://localhost:4201/base/', ]; const workerProcess = spawn('ts-node', workerArgs, { @@ -136,10 +150,10 @@ export async function startTestRealmServer( '--useRegistrationSecretFunction', `--path=${realmPath}`, '--username=test_realm', - '--fromUrl=http://localhost:4205/test/', - '--toUrl=http://localhost:4205/test/', + '--fromUrl=https://localhost:4205/test/', + '--toUrl=https://localhost:4205/test/', '--fromUrl=https://cardstack.com/base/', - '--toUrl=http://localhost:4201/base/', + '--toUrl=https://localhost:4201/base/', ]; const realmProcess = spawn('ts-node', serverArgs, { diff --git a/packages/workspace-sync-cli/tests/integration-test.ts b/packages/workspace-sync-cli/tests/integration-test.ts index f98c56c610d..a0370523b2e 100644 --- a/packages/workspace-sync-cli/tests/integration-test.ts +++ 
b/packages/workspace-sync-cli/tests/integration-test.ts @@ -229,7 +229,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { const result = await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, context.localDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, context.localDir], process.cwd(), ); @@ -271,7 +271,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { // First pull to get initial files await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, context.localDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, context.localDir], process.cwd(), ); @@ -293,7 +293,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { const result = await runCommand( 'node', - [pushCmd, context.localDir, `http://localhost:${REALM_PORT}/test/`], + [pushCmd, context.localDir, `https://localhost:${REALM_PORT}/test/`], process.cwd(), ); @@ -309,7 +309,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { const pullResult = await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, verifyDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, verifyDir], process.cwd(), ); @@ -339,7 +339,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { // First pull to get initial files await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, context.localDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, context.localDir], process.cwd(), ); @@ -353,7 +353,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { 'node', [ pullCmd, - `http://localhost:${REALM_PORT}/test/`, + `https://localhost:${REALM_PORT}/test/`, context.localDir, '--delete', ], @@ -383,7 +383,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { // First pull to get initial files await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, context.localDir], + [pullCmd, 
`https://localhost:${REALM_PORT}/test/`, context.localDir], process.cwd(), ); @@ -398,7 +398,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { [ pushCmd, context.localDir, - `http://localhost:${REALM_PORT}/test/`, + `https://localhost:${REALM_PORT}/test/`, '--dry-run', ], process.cwd(), @@ -416,7 +416,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, checkDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, checkDir], process.cwd(), ); @@ -434,7 +434,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { // Test pulling .realm.json const pullResult = await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, context.localDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, context.localDir], process.cwd(), ); @@ -474,7 +474,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { const pushResult = await runCommand( 'node', - [pushCmd, context.localDir, `http://localhost:${REALM_PORT}/test/`], + [pushCmd, context.localDir, `https://localhost:${REALM_PORT}/test/`], process.cwd(), ); @@ -490,7 +490,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { const verifyPullResult = await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, verifyDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, verifyDir], process.cwd(), ); @@ -532,7 +532,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { // Test using only REALM_SECRET_SEED instead of MATRIX_PASSWORD const pullResult = await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, context.localDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, context.localDir], process.cwd(), { // Remove MATRIX_PASSWORD and provide REALM_SECRET_SEED instead @@ -567,7 +567,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { // First pull to get initial 
files await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, context.localDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, context.localDir], process.cwd(), ); @@ -593,7 +593,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { const result = await runCommand( 'node', - [pushCmd, context.localDir, `http://localhost:${REALM_PORT}/test/`], + [pushCmd, context.localDir, `https://localhost:${REALM_PORT}/test/`], process.cwd(), ); @@ -609,7 +609,7 @@ module('Workspace Sync CLI Integration Tests', function (hooks) { await runCommand( 'node', - [pullCmd, `http://localhost:${REALM_PORT}/test/`, checkDir], + [pullCmd, `https://localhost:${REALM_PORT}/test/`, checkDir], process.cwd(), ); diff --git a/scripts/import-cached-index.sh b/scripts/import-cached-index.sh index 5fa37c267d6..3354181ba7d 100755 --- a/scripts/import-cached-index.sh +++ b/scripts/import-cached-index.sh @@ -67,9 +67,13 @@ PSQL_OPTS="-U postgres -d $DB_NAME --quiet --no-psqlrc -v ON_ERROR_STOP=1" if [ -n "${BOXEL_ENVIRONMENT:-}" ]; then SLUG=$(compute_env_slug "$BOXEL_ENVIRONMENT") echo "Remapping URLs for environment '${SLUG}'..." + # Match both http and https canonicals — local dev now stores + # https://localhost:4201/... in the index (CS-11114), but older + # cached snapshots still have http://. Either prefix in the snapshot + # gets remapped to the env-mode Traefik hostname. gunzip -c "$CACHE_FILE" \ | sed \ - -e "s|http://localhost:4201|http://realm-server.${SLUG}.localhost|g" \ + -e "s|https\\?://localhost:4201|http://realm-server.${SLUG}.localhost|g" \ -e "s|http://localhost:4206|http://icons.${SLUG}.localhost|g" \ | docker exec -i boxel-pg psql $PSQL_OPTS else