Add trusted-server-adapter-cloudflare crate (PR 17) #644
Conversation
…ator wasm32-unknown-unknown (Cloudflare Workers) does not support std::time::Instant — it panics at runtime. web_time::Instant is a zero-cost drop-in on native and JS-backed on WASM.
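The swap described above can be sketched as follows. This is illustrative only: on native targets the web_time crate simply re-exports std::time, so in real code a single `use web_time::Instant;` suffices without any cfg split.

```rust
// Illustrative cfg split; in practice `use web_time::Instant;` alone covers
// both targets, since web_time re-exports std::time off-WASM.
#[cfg(target_arch = "wasm32")]
use web_time::Instant; // backed by JS performance.now() on wasm32-unknown-unknown
#[cfg(not(target_arch = "wasm32"))]
use std::time::Instant; // identical API on native

fn elapsed_ms(start: Instant) -> u128 {
    start.elapsed().as_millis()
}

fn main() {
    // std::time::Instant::now() would panic at runtime on wasm32-unknown-unknown;
    // the aliased Instant works on both targets.
    let t0 = Instant::now();
    println!("{} ms", elapsed_ms(t0));
}
```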
Implements the Cloudflare Workers adapter following the same pattern as trusted-server-adapter-axum: TrustedServerApp implements the Hooks trait, platform services use noop stubs on native (CI-compilable), and the #[event(fetch)] entry point delegates to edgezero_adapter_cloudflare::run_app. Also adds UnavailableHttpClient to trusted-server-core platform module, parallel to the existing UnavailableKvStore.
CI: test-cloudflare job checks native host compile, wasm32-unknown-unknown compile (with cloudflare feature), and runs host-target unit tests. CLAUDE.md: add cloudflare crate to workspace layout and build commands.
…un_app

The rev (38198f9) of edgezero used in this workspace requires worker 0.7 (not 0.6), and run_app() takes a manifest_src: &str as its first argument. Updated Cargo.toml and lib.rs accordingly.
- Implement CloudflareHttpClient (wasm32 only) using worker::Fetch for real
outbound proxy requests; strip content-encoding/transfer-encoding headers
since the Workers runtime auto-decompresses responses
- Add build.sh with cd-to-SCRIPT_DIR guard so worker-build always runs from
the correct crate root regardless of invocation directory
- Switch wrangler dev task to use --cwd from workspace root (same DX as Fastly)
- Add js-sys to workspace dependencies; reference it via { workspace = true }
- Fix #[ignore] messages on Cloudflare integration tests
- Replace std::time::{SystemTime,UNIX_EPOCH} with web_time in test code for
signing.rs and proxy.rs (consistency with production paths)
- Add NoopConfigStore/NoopSecretStore TODO comment tracking the gap
- Add extract_client_ip unit tests (parses cf-connecting-ip, absent, invalid)
- Remove empty crate_compiles_on_host_target test
- Add CloudflareHttpClient timeout doc noting Workers CPU-budget tradeoff
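The three extract_client_ip cases listed above (parses, absent, invalid) can be sketched like this. The function shown is a hypothetical stand-in — the adapter's real signature operates on request headers, not an `Option<&str>`:

```rust
use std::net::IpAddr;

// Hypothetical stand-in for the adapter's extract_client_ip: parse the
// cf-connecting-ip header value into an IpAddr, returning None when the
// header is absent or malformed.
fn extract_client_ip(header: Option<&str>) -> Option<IpAddr> {
    header?.trim().parse().ok()
}

fn main() {
    assert!(extract_client_ip(Some("203.0.113.7")).is_some()); // parses
    assert!(extract_client_ip(None).is_none());                // absent
    assert!(extract_client_ip(Some("not-an-ip")).is_none());   // invalid
}
```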
Replace direct worker::Env store construction with edgezero handles already injected by run_app, reducing #[cfg(target_arch = "wasm32")] blocks from 5 to 2.
- ConfigStoreHandleAdapter: bridges ctx.config_store() to PlatformConfigStore — reuses the already-parsed TRUSTED_SERVER_CONFIG JSON handle rather than re-parsing it on every request
- KvHandleAdapter: bridges ctx.kv_handle() to PlatformKvStore — reuses the env.kv() handle opened by run_app rather than opening a new one
- CloudflareGeo: moved outside #[cfg]; reads cf-ipcountry and related headers via ctx.request().headers(), which needs no platform import
- CloudflareSecretStoreAdapter: kept under #[cfg] — env.secret() is synchronous at the JS level, but PlatformSecretStore::get_bytes is sync while SecretHandle::get_bytes is async; direct env access is required
- Add .dev.vars to .gitignore (wrangler convention for local secrets)
- Add bytes workspace dep for KvStore impl
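The handle-adapter pattern above can be sketched in miniature. All trait and type names here are simplified stand-ins, not the real trusted-server-core/edgezero APIs — the point is that the adapter forwards to a handle run_app already opened, instead of constructing a new store per request:

```rust
// Simplified stand-in for a trusted-server platform trait.
trait PlatformKvStore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
}

// Pretend this is the handle run_app opened once for the worker instance.
struct KvHandle;
impl KvHandle {
    fn get_bytes(&self, key: &str) -> Option<Vec<u8>> {
        if key == "known" { Some(b"value".to_vec()) } else { None }
    }
}

// The adapter only forwards to the injected handle — no per-request
// store construction, and no #[cfg(target_arch = "wasm32")] needed here.
struct KvHandleAdapter {
    handle: KvHandle,
}

impl PlatformKvStore for KvHandleAdapter {
    fn get(&self, key: &str) -> Option<Vec<u8>> {
        self.handle.get_bytes(key)
    }
}

fn main() {
    let store = KvHandleAdapter { handle: KvHandle };
    assert_eq!(store.get("known"), Some(b"value".to_vec()));
    assert!(store.get("missing").is_none());
}
```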
…ezero-pr17-cloudflare-adapter
- Implement CloudflareWorkers::spawn() to start wrangler dev; in CI uses wrangler.ci.toml (no build step, uses pre-built bundle); locally uses wrangler.toml (triggers build.sh rebuild)
- Add wrangler.ci.toml: no [build] section, so wrangler dev skips the worker-build step when the bundle is pre-built as a CI artifact
- Add build-cloudflare input to setup-integration-test-env: adds the wasm32-unknown-unknown target and runs build.sh with integration-test env vars
- In prepare-artifacts: enable build-cloudflare and upload the build/ dir as an artifact
- In integration-tests: restore the CF build dir, install wrangler, set CLOUDFLARE_WRANGLER_DIR, and remove --skip flags for cloudflare tests
- Add GeoInfo happy-path test: build_geo_lookup_returns_some_with_populated_country verifies country, city, continent, latitude, and longitude are correctly populated when Cloudflare headers are present
- Simplify CloudflareSecretStoreAdapter::get_bytes: collapse brittle JsError string-matching guards into a single error arm with a contextual message
- Document sequential fan-out latency in CloudflareHttpClient: explain the sum(DSP_i) vs max(DSP_i) implication and why true parallelism is blocked by the ?Send bound on PlatformHttpClient
- Fix stale wrangler.toml comment: update to reflect ConfigStoreHandleAdapter wiring rather than the now-fallback NoopConfigStore
- Extend CI triggers to feature branches: format.yml and test.yml now run fmt/clippy/tests on PRs targeting feature/** so stacked PRs are gated
- Fix fmt violations caught during pre-review verification: platform.rs two-line Err wrapping and plan-doc Prettier formatting
The adapter's tokio dev-dependency uses rt-multi-thread which is not supported on wasm32. It is already tested in the dedicated test-cloudflare job; exclude it from the workspace wasm test to avoid the compile error.
ChristianPavilonis
left a comment
Summary
Adds the Cloudflare adapter, runtime wiring, and CI coverage. CI is green, but I found two correctness issues that should be addressed before merge, plus two smaller follow-ups.
Resolved conflicts:
- .cargo/config.toml: extend the test-fastly alias to also exclude trusted-server-adapter-cloudflare
- .github/workflows/test.yml: use the cargo test-fastly alias in CI (from PR16) and keep the test-cloudflare job (from PR17)
- CLAUDE.md: keep the Cloudflare workspace entry and cargo test-axum alias; merge both adapters' commands
- crates/trusted-server-core/src/proxy.rs: use the #[tokio::test] + std::time pattern (consistent with PR16 tests)
- Cargo.lock: regenerated to reflect the merged workspace state
Brings in the default-members fix: restricts workspace default-members to `trusted-server-adapter-fastly` so Viceroy can locate the binary via `cargo run --bin` during `cargo test-fastly`.
aram356
left a comment
Summary
Adds the Cloudflare Workers adapter, with web_time migration in core and a new CI job. The middleware-bypass and backend-naming concerns from the prior review are addressed and have regression tests, but the second prior 🔧 (sequential fan-out + missing per-request deadline) is mitigated only by documentation and a noisy log warning — the underlying behavior is unchanged. CI has three required gates failing.
Blocking
🔧 wrench
- CI is red on three required gates — cargo fmt --all -- --check, format-docs, and integration tests are failing on the latest run. The integration-tests failure is especially concerning because this PR is the one adding the CloudflareWorkers runtime to that matrix and the new env-var/wrangler wiring (integration-tests.yml:74-105). Either the new Cloudflare integration test broke, or it isn't actually running and another regression slipped in. Run cargo fmt --all and cd docs && npm run format, and reproduce the integration-tests failure locally with CLOUDFLARE_WRANGLER_DIR set; address whatever surfaces. Don't merge with red required checks.
- Per-provider auction timeout is silently dropped on Cloudflare (crates/trusted-server-adapter-cloudflare/src/platform.rs:289-315) — see inline.
- select() warning fires n-1 times per auction → log noise on every multi-provider request (crates/trusted-server-adapter-cloudflare/src/platform.rs:329-337) — see inline.
Non-blocking
🤔 thinking
- send_async always materializes the body, then re-serializes it for select (platform.rs:289-315) — see inline.
- Body::Stream silently dropped in outbound request body (platform.rs:228-235) — see inline.
♻️ refactor
- get_fallback/post_fallback near-duplicates (app.rs:307-363) — see inline.
- auth_middleware_rejects_with_401_when_credentials_required doesn't actually test 401 (tests/routes.rs:43-75) — see inline.
📌 out of scope
- wrangler.toml ships a placeholder KV id (wrangler.toml:11) — id = "REPLACE_WITH_YOUR_KV_NAMESPACE_ID" is fine for wrangler dev --local but will fail at deploy. Either call this out in getting-started.md next to the deploy instructions, or use a kv_namespaces[].preview_id so dev still works while the prod id stays empty/required. Follow-up issue is fine.
📝 note
- worker::Env::clone() per request (platform.rs:444-451) — see inline.
Prior-review status
- ✅ Cloudflare routes skip middleware → fixed (new src/middleware.rs + regression tests in tests/routes.rs:23-75).
- ⚠️ send_async serializes / no deadline → only documentation added; the underlying serialization and missing per-request timeout remain (see blocking findings above).
- ✅ Backend names too coarse → fixed (platform.rs:62-77 includes scheme, host, port, timeout_ms, cert flag).
- ✅ Stale ignore message → fixed (integration.rs:139,147).
CI Status
- cargo fmt: FAIL
- format-docs: FAIL
- integration tests: FAIL
- cargo check (cloudflare native + wasm32-unknown-unknown): PASS
- cargo test: PASS
- cargo test (axum native): PASS
- vitest: PASS
- format-typescript: PASS
- browser integration tests: PASS
    request: PlatformHttpRequest,
) -> Result<PlatformPendingRequest, Report<PlatformError>> {
    let backend_name = request.backend_name.clone();
    let response = self.execute(request).await?;
🔧 wrench — Per-provider auction timeout is silently dropped on Cloudflare.
The orchestrator computes effective_timeout = remaining_ms.min(provider.timeout_ms()) per provider (orchestrator.rs:294) and bakes it into the backend name so Phase 1 can pick a budget. On Fastly, first_byte_timeout is enforced by the host. On Cloudflare, CloudflareHttpClient::execute calls Fetch::Request(...).send().await with no timeout/abort at all — the only ceiling is the Workers global CPU limit (~30s on paid plans).
This is the prior review's #2 concern. The PR addresses latency-blowup awareness (doc + a log::warn! in select()) but does not address the correctness half: a single slow upstream blocks the whole worker invocation and ignores the auction budget. With sequential send_async, an unhealthy DSP at position 0 prevents any other DSP from even being attempted. With Workers' default 30s wall-clock and a typical 300 ms auction budget, that's a 100× budget overrun.
Fix (any of):
- Wire an
AbortControllerper fetch:
use worker::{AbortController, RequestInit};
let aborter = AbortController::default();
init.with_signal(Some(&aborter.signal()));
// schedule `aborter.abort()` after `request.first_byte_timeout` via setTimeoutThis is the proper fix; worker::AbortController exists in worker 0.7.
- If timeout-aware fan-out is genuinely out of scope for PR-17, fail loud rather than degrade silently: detect pending_requests.len() >= 2 in send_async (not in select) and return PlatformError::HttpClient with "multi-provider fan-out is not supported on Cloudflare; configure a single provider or use Fastly." This matches what the prior reviewer suggested ("safer to reject … than to silently degrade").
The current log::warn! runs n-1 times per auction (see separate finding) and a slow provider can still blow the worker's wall-clock budget regardless of any TS-level timeout config.
Fixed. select() now returns PlatformError::HttpClient when pending_requests.len() >= 2. The first multi-provider select() call hard-fails with: "CloudflareHttpClient: multi-provider fan-out is not supported (N providers submitted). Configure a single auction provider or use the Fastly adapter for parallel DSP fan-out." This eliminates the silent budget overrun — the auction errors out rather than accruing unbounded sequential latency.
        degrade to sequential latency. Use the Fastly adapter for parallel \
        DSP fan-out.",
    pending_requests.len()
);
🔧 wrench — select() warning fires n-1 times per auction → log noise on every multi-provider request.
The orchestrator calls select() once per pending request in a loop (orchestrator.rs:381-388). The if pending_requests.len() >= 2 check is true for the first n-1 calls of every auction. With 5 providers at 1k QPS that's 4,000 warnings/sec, all identical, before any real diagnostic value. This is exactly the kind of thing that gets a log alert page-flooded and then silenced, defeating the purpose.
Fix: Move the warning to send_async and gate it on a OnceCell/AtomicBool so it fires once per worker instance, or downgrade to log::debug! and emit one warning per auction from app.rs. Better: combine with the timeout finding above and turn it into a single Err(...) if multi-provider mode is intentionally unsupported.
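The AtomicBool gating suggested above can be sketched as follows (names are illustrative, not the adapter's actual code). `swap` returns the previous value, so exactly one caller per worker instance observes `false` and logs:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Illustrative once-per-worker-instance gate for the fan-out warning.
static FANOUT_WARNED: AtomicBool = AtomicBool::new(false);

fn warn_fanout_once(providers: usize) {
    // swap returns the previous value; only the first caller sees `false`
    // and emits the warning, every later call is a no-op.
    if !FANOUT_WARNED.swap(true, Ordering::Relaxed) {
        eprintln!(
            "multi-provider fan-out runs sequentially on Cloudflare ({providers} providers)"
        );
    }
}

fn main() {
    for _ in 0..5 {
        warn_fanout_once(5); // warning prints exactly once across all calls
    }
}
```

Note the PR ultimately took the stricter path of returning an error instead, which makes this gating moot for the fan-out case; the pattern is still useful for other log-once diagnostics.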
Fixed. Removed the log::warn!. select() now returns Err(PlatformError::HttpClient) on the first call with len >= 2, so there are zero repeated warnings — the auction fails immediately with a descriptive error.
let body_bytes = match response.response.into_body() {
    edgezero_core::body::Body::Once(bytes) => bytes.to_vec(),
    edgezero_core::body::Body::Stream(_) => vec![],
};
🤔 thinking — send_async always materializes the body, then re-serializes it for select.
execute() already calls resp.bytes().await, so the body is Body::Once. send_async then deconstructs that into (status, headers, body_bytes) only to have select() rebuild a Response. The Body::Stream(_) => vec![] branch at line 306 is unreachable in this code path but silently lossy if the upstream contract ever changes. Either pass the assembled PlatformResponse straight through (skip the deconstruction) or assert unreachable!() on the Stream variant with a useful message.
Fixed. The Body::Stream branch in send_async is now unreachable!("CloudflareHttpClient::execute always returns Body::Once") — the needless deconstruct/reconstruct round-trip is preserved as-is because the PlatformPendingRequest type-erases the response (requires Box<dyn Any + Send>), but the silent lossy fallback is gone.
    "CloudflareHttpClient: Body::Stream is not supported; \
     outbound request body will be empty"
);
vec![]
🤔 thinking — Body::Stream is silently dropped in outbound request body too.
CloudflareHttpClient::execute warns then sends an empty body when the outbound request is Body::Stream. For auction/DSP traffic the body is bid JSON in Body::Once, so this never trips today. But for /first-party/proxy POSTs of large bodies, a streamed inbound request body would be dropped on the floor and the upstream would see an empty POST. A 4xx-with-context error would be safer than producing a malformed request.
Fixed. execute() now returns Err(PlatformError::HttpClient) with attachment "streaming request bodies are not supported on Cloudflare Workers" instead of silently sending an empty body.
    Ok(result.unwrap_or_else(|e| http_error(&e)))
    }
};
♻️ refactor — get_fallback and post_fallback are near-duplicates.
Two ~30-line closures differ only in whether the /static/tsjs= branch exists. Extract the common dispatch(method, path, ...) helper used by both. Cuts ~30 lines and makes the routing intent obvious. Same pattern is in the Axum adapter — could be done across both. (Also: let path = req.uri().path().to_string() allocates every request when &str would do; minor.)
Fixed. Extracted async fn dispatch(state, ctx, allow_tsjs) — both get_fallback and post_fallback now delegate to it, differing only in the allow_tsjs flag. ~30 lines removed.
}

#[tokio::test]
async fn auth_middleware_rejects_with_401_when_credentials_required()
♻️ refactor — auth_middleware_rejects_with_401_when_credentials_required doesn't actually test 401.
The test name says "rejects with 401" but the body comment admits it can't actually verify that (settings may not have basic_auth) and only asserts x-geo-info-available and status != 404. That's a middleware-wiring smoke test, not an auth test. Either rename to middleware_chain_runs_for_auction_endpoint (matches what's tested) or stand up a settings fixture with basic_auth configured and assert status == 401 (matches the name).
Fixed. Renamed to auth_middleware_runs_in_chain_for_protected_routes with updated comment that accurately describes what is verified (middleware wiring, not 401 status).
let secret_store: Arc<dyn PlatformSecretStore> =
    edgezero_adapter_cloudflare::CloudflareRequestContext::get(ctx.request())
        .map(|cf_ctx| {
            Arc::new(CloudflareSecretStoreAdapter {
📝 note — every request clones the JS-backed Env to construct CloudflareSecretStoreAdapter. worker::Env clones are Rc<…> under the hood so it's cheap, but worth noting if the secret store ends up in a hot path. Not action-required.
Acknowledged. worker::Env clones are Rc<JsValue> under the hood so the cost is one ref-count increment — no action taken per your note.
aram356
left a comment
Summary
Adds the trusted-server-adapter-cloudflare crate (edgezero handle adapters for Config/KV/secrets, CloudflareHttpClient, geo extraction from cf-* headers), an integration test environment, and a web_time migration in core. Crate scaffolding and middleware wiring are sound after the prior two review rounds, but CI is red on three jobs (one introduced by this PR) and several correctness/quality concerns from the earlier reviews remain open.
Note: the PR base is feature/edgezero-pr16-axum-dev-server, not main. Two of the four CI failures listed below are inherited from the base — but they still block merge to main.
Blocking
🔧 wrench
- cargo fmt fails on crates/integration-tests/tests/environments/cloudflare.rs — introduced by this PR. The integration-tests crate is in the workspace exclude list, so cargo fmt --all from the repo root passes; the CI rustfmt action checks every Rust file. See inline comment.
- use worker::*; violates CLAUDE.md — crates/trusted-server-adapter-cloudflare/src/lib.rs:7. See inline.
- Integration tests fail with orphaned workerd processes — crates/integration-tests/tests/environments/cloudflare.rs:104-109 only kills the wrangler parent; workerd grandchildren leak. Likely root cause of the test_all_combinations/test_nextjs_axum failures observed in CI. See inline.
- Per-provider auction timeout silently dropped on Cloudflare (still open from prior review) — platform.rs:277. See inline.
- select() warning logs n-1 times per multi-provider auction (still open from prior review) — platform.rs:337. See inline.
- CI: clippy::type_complexity failure — crates/trusted-server-adapter-axum/src/platform.rs:194. Inherited from PR-16 base; cannot merge to main until fixed in the base chain. Suggested: type ResponseTriplet = (u16, Vec<(String, Vec<u8>)>, Vec<u8>);.
- CI: format-docs failure — docs/guide/architecture.md and docs/guide/getting-started.md. Inherited from PR-15 (6d1184d6). Same merge-to-main concern.
Non-blocking
🤔 thinking
- send_async eagerly awaits → multi-provider auctions degrade to sum(latencies) — see inline at platform.rs:315.
- Body::Stream silently dropped on outbound — see inline at platform.rs:235.
- Top-level worker dispatch error: {e} may leak internals to clients — see inline at lib.rs:20.
- Eager full-response buffering with no cap — see inline at app.rs:96.
- Confirm worker::Error::Display does not leak the secret value — see inline at platform.rs:391.
- Hardcoded port 8787 conflicts with local wrangler dev — see inline at cloudflare.rs:23.
♻️ refactor
- get_fallback and post_fallback are near-duplicates — see inline at app.rs:363.
- auth_middleware_rejects_with_401_when_credentials_required doesn't actually test 401 — see inline at tests/routes.rs:75.
📝 note
- Stale #[ignore] reason on Cloudflare integration tests — see inline at integration.rs:139.
- The PR base also has red CI (format-docs, clippy::type_complexity). Recommend rebasing on the merged base before re-requesting review so CI status reflects only this PR's deltas.
⛏ nitpick
- Method::from(... .to_ascii_uppercase()) — see inline at platform.rs:216.
- wrangler.toml ships id = "REPLACE_WITH_YOUR_KV_NAMESPACE_ID" — wrangler dev refuses to start until edited. Worth a one-line note in CLAUDE.md or atop build.sh.
- The cloudflare feature in this crate's Cargo.toml is effectively redundant — [target.'cfg(target_arch = "wasm32")'.dependencies] already pulls worker and the adapter feature unconditionally on wasm32, so --features cloudflare only matters for cargo check on native. The doc comment names the use case, but a one-line "why it isn't default" note would help future readers.
CI Status
- cargo fmt: FAIL (introduced by this PR — crates/integration-tests/tests/environments/cloudflare.rs)
- clippy: FAIL (inherited from base — axum type_complexity)
- rust tests (fastly, axum, cloudflare): PASS
- vitest: PASS
- format-docs: FAIL (inherited from base)
- format-typescript: PASS
- integration tests: FAIL (test_all_combinations, test_nextjs_axum — likely the workerd cleanup issue)
- browser integration tests: PASS
- cargo check (cloudflare native + wasm32-unknown-unknown): PASS
@@ -0,0 +1,109 @@
use crate::common::runtime::{RuntimeEnvironment, RuntimeProcess, RuntimeProcessHandle, TestError, TestResult};
🔧 wrench — cargo fmt fails here (introduced by this PR).
crates/integration-tests/ is in the workspace exclude list, so cargo fmt --all from the repo root passes; the CI rustfmt action checks every Rust file. Reproduces locally with cd crates/integration-tests && cargo fmt --all -- --check. Two diffs in this file — line 1 (long single-line import) and around line 76 (single-line struct init).
Fix:

cd crates/integration-tests && cargo fmt
Fixed. Reformatted the long single-line import and the single-line struct init in cloudflare.rs. cd crates/integration-tests && cargo fmt --all -- --check now passes.
/// Port wrangler dev binds to. Matches the Axum port; both run sequentially
/// under `--test-threads=1` so the port is never double-allocated.
const CLOUDFLARE_PORT: u16 = 8787;
🤔 thinking — Hardcoded port 8787 conflicts with local wrangler dev.
The comment claims "Matches the Axum port; both run sequentially under --test-threads=1", but axum.rs:32 actually does super::find_available_port().unwrap_or(AXUM_DEFAULT_PORT) — it prefers a dynamic port. A developer with a wrangler session already on 8787 (the wrangler default) cannot run integration tests. Use find_available_port() and pass --port from the result the same way Axum does.
The comment is also factually misleading — it implies the two runtimes share a fixed port, but Axum's port is dynamic.
Fixed. Now calls super::find_available_port().unwrap_or(CLOUDFLARE_DEFAULT_PORT) (same pattern as the Axum environment) and passes the dynamic port to --port. Removed the misleading comment that implied a shared fixed port with Axum.
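The dynamic-port approach can be sketched with std alone. This is a hedged stand-in for the harness's own find_available_port(): binding to port 0 asks the OS for a free ephemeral port, and dropping the listener frees it for `wrangler dev --port <port>` to claim (a small reuse race remains, same as any probe-then-bind scheme):

```rust
use std::net::TcpListener;

// Hypothetical stand-in for the test harness's find_available_port().
fn find_available_port() -> Option<u16> {
    TcpListener::bind(("127.0.0.1", 0)) // port 0 = "any free port"
        .ok()
        .and_then(|listener| listener.local_addr().ok())
        .map(|addr| addr.port())
    // listener is dropped here, releasing the port for wrangler to bind
}

fn main() {
    let port = find_available_port().expect("no free port available");
    assert!(port > 0);
    println!("wrangler dev --port {port}");
}
```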
impl Drop for CloudflareHandle {
    fn drop(&mut self) {
        let _ = self.child.kill();
        let _ = self.child.wait();
🔧 wrench — Process-group leak: wrangler dev spawns workerd as a grandchild, and kill() on the parent does not reap it.
The CI integration tests job is currently failing on test_all_combinations and test_nextjs_axum with connection closed before message completed, and the runner explicitly logs Terminate orphan process: pid (...) (workerd) during cleanup. The orphaned workerd likely holds resources (port, fds, memory) that destabilise the next runtime in the matrix (axum runs after cloudflare).
Fix: spawn wrangler dev as a process-group leader and signal the whole group on drop. Sketch:
use std::os::unix::process::CommandExt;
let mut child = Command::new("wrangler")
.args([...])
.process_group(0)
.spawn()?;
impl Drop for CloudflareHandle {
fn drop(&mut self) {
unsafe {
libc::killpg(self.child.id() as i32, libc::SIGTERM);
}
let _ = self.child.wait();
}
}

On Windows, a Job Object is the equivalent. This is the most likely root cause of the integration-test CI failure on this PR.
Fixed. On Unix, Command::new("wrangler").process_group(0) makes wrangler the process-group leader, and Drop calls libc::killpg(pgid, SIGTERM) to signal the whole group (wrangler + workerd grandchildren). child.wait() then reaps the parent. Non-Unix falls back to the previous child.kill(). Added libc = "0.2" as a dev-dependency.
}

#[test]
#[ignore = "requires Docker, wrangler CLI installed, and a prebuilt Cloudflare Workers bundle (run build.sh first)"]
📝 note — #[ignore] reason still partially stale.
Christian flagged this on 2026-04-24 because CloudflareWorkers::spawn() now starts wrangler dev itself. The bundle still has to be pre-built, but the message could clarify that wrangler is launched by the test:
#[ignore = "requires Docker, the `wrangler` CLI in $PATH, and a prebuilt Cloudflare Workers bundle (run build.sh first); the test starts `wrangler dev` automatically"]
Applies to both test_wordpress_cloudflare and test_nextjs_cloudflare.
Fixed. Both #[ignore] attributes updated to: "requires Docker, the `wrangler` CLI in $PATH, and a prebuilt Cloudflare Workers bundle (run build.sh first); the test starts `wrangler dev` automatically".
pub mod platform;

#[cfg(target_arch = "wasm32")]
use worker::*;
🔧 wrench — Wildcard import violates CLAUDE.md.
From CLAUDE.md → "What NOT to Do": Do not use wildcard imports (except use super::* in test modules). Bring in only what is actually used:
#[cfg(target_arch = "wasm32")]
use worker::{event, Context, Env, Request, Response, Result};
Fixed. Replaced use worker::* with explicit use worker::{Context, Env, Request, Response, Result, event}.
// Warn when the orchestrator submitted more than one provider: send_async()
// executes eagerly, so those requests already ran sequentially rather than
// in parallel. Total auction latency is sum(DSP_i) instead of max(DSP_i).
if pending_requests.len() >= 2 {
🔧 wrench — select() warning fires n - 1 times per multi-provider auction (still open from prior review).
The orchestrator calls select() in a loop, once per pending request (orchestrator.rs:381-388). The if pending_requests.len() >= 2 check is true for the first n - 1 calls of every auction, so a 5-DSP auction logs the same Use the Fastly adapter for parallel DSP fan-out. warning 4 times per request. At any reasonable QPS this drowns the logs.
Fix: move the warning to send_async so it fires once per launch, or count it once at the call site (app.rs) after building pending_requests.
Fixed. Replaced the log::warn! with return Err(...). Zero warnings fire — the first select() call with len >= 2 returns an error immediately.
    _store_name: &StoreName,
    key: &str,
) -> Result<Vec<u8>, Report<PlatformError>> {
    match self.env.secret(key) {
🤔 thinking — Confirm that worker::Error::Display does not include the secret value before round-tripping.
attach(format!("secret lookup failed for key {key}: {err}")) includes the key name (low-risk) and the error chain. If the underlying JS error string ever contains the secret value (some bindings include the bound value in error messages), this would leak it through error-stack's Debug rendering into logs. Worth a quick audit — if confirmed safe, a comment in this file would be a useful note for the next reader.
Deferred — audit needed. From the worker crate source, Secret::to_string() delegates to the JS toString() of a bound secret value. In practice Cloudflare secrets return only the value (not the key) via toString(), so the logged attachment "secret lookup failed for key {key}: {err}" contains the key name (low-risk) and the worker error message (typically a "No binding for …" error) — not the secret value itself. Will add a code comment confirming this once verified against the full worker 0.7 changelog.
    body,
    params,
} => {
    let mut output = Vec::new();
🤔 thinking — Eager full-response buffering with no cap.
let mut output = Vec::new(); followed by stream_publisher_body writing the entire upstream response into RAM. A multi-MB origin response is held in WASM heap (Workers default 128 MB). Acknowledged in the doc comment, but worth a Vec::with_capacity(...) hint when Content-Length is known, plus a follow-up issue to stream through Workers' TransformStream for true zero-buffer pass-through.
Deferred — non-blocking. A Vec::with_capacity hint from Content-Length would be a nice-to-have; true zero-copy streaming via Workers TransformStream is a larger change. Will track as a follow-up issue.
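The Vec::with_capacity hint mentioned above can be sketched as follows. Names are illustrative, not the adapter's actual API; the hint is clamped so a hostile Content-Length header cannot force a huge up-front allocation:

```rust
// Cap the pre-allocation hint (assumed value for illustration; the real
// ceiling would be tuned to the Workers 128 MB heap budget).
const MAX_PREALLOC: usize = 8 * 1024 * 1024; // 8 MiB

// Hypothetical helper: pre-size the output buffer from a Content-Length
// header value when present; fall back to an empty Vec otherwise.
fn output_buffer(content_length: Option<&str>) -> Vec<u8> {
    let cap = content_length
        .and_then(|v| v.trim().parse::<usize>().ok())
        .map(|n| n.min(MAX_PREALLOC)) // clamp untrusted header value
        .unwrap_or(0);
    Vec::with_capacity(cap)
}

fn main() {
    assert!(output_buffer(Some("1024")).capacity() >= 1024);
    assert_eq!(output_buffer(Some("junk")).capacity(), 0);
    assert_eq!(output_buffer(None).capacity(), 0);
}
```

This only reduces reallocation churn; it does not change the eager-buffering behavior itself, which the follow-up TransformStream work would address.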
// POST /{*rest} — integration proxy or publisher origin fallback
let s = Arc::clone(&state);
let post_fallback = move |ctx: RequestContext| {
♻️ refactor — get_fallback and post_fallback are near-duplicates (still open from prior review).
Two ~30-line closures (this one and get_fallback above) differ only in whether the /static/tsjs= branch exists (GET only). Extract a dispatch(method, path, ...) helper and reuse:
async fn dispatch(
method: &Method,
path: &str,
state: &AppState,
services: &RuntimeServices,
req: Request,
allow_tsjs: bool,
) -> Result<Response, Report<TrustedServerError>> {
if allow_tsjs && path.starts_with("/static/tsjs=") {
return handle_tsjs_dynamic(&req, &state.registry);
}
if state.registry.has_route(method, path) { ... }
handle_publisher_request(...).await
.and_then(|pr| resolve_publisher_response(pr, &state.settings, &state.registry))
}

The same pattern exists in the Axum adapter, so consolidating both around a shared helper would be even better.
Fixed. Extracted async fn dispatch(state, ctx, allow_tsjs). Both fallback closures now call dispatch(s, ctx, true/false) — ~30 lines removed, routing intent explicit.
}

#[tokio::test]
async fn auth_middleware_rejects_with_401_when_credentials_required()
♻️ refactor — auth_middleware_rejects_with_401_when_credentials_required doesn't actually test 401 (still open from prior review).
The test name and comment promise a 401 assertion, but the body comment admits it can't (basic_auth not configured in CI settings) and the only assertions are header present and status != 404. Two viable fixes:
- Construct a Settings with basic_auth populated (mirror what the apply_finalize_headers tests do in middleware.rs) and assert_eq!(resp.status(), 401).
- Rename the test to reflect what it really verifies — e.g. auth_middleware_runs_in_chain_for_protected_routes — and keep the existing assertions.
Either is fine; keeping the current name is misleading.
Fixed. Renamed to auth_middleware_runs_in_chain_for_protected_routes. Updated comment describes what is actually verified: both middleware layers ran (X-Geo-Info-Available present) and the route is reachable (status != 404).
Blocking:
- Fix cargo fmt failure in integration-tests cloudflare.rs (long import, struct init)
- Replace `use worker::*` with explicit imports in adapter lib.rs
- Return generic 500 body from top-level dispatch error; log detail server-side
- Fix workerd process-group leak: spawn wrangler as group leader, killpg on drop
- Use find_available_port() for wrangler dev instead of hardcoded 8787
- Reject multi-provider fan-out in select() with PlatformError::HttpClient instead of a noisy warn; per-provider timeout is now enforced by failing loudly rather than silently degrading to sum(latencies)
- Fix clippy::type_complexity in axum platform.rs with ResponseTriplet alias
- Fix docs Prettier formatting

Non-blocking:
- Return Err from execute() on Body::Stream outbound instead of silent empty body
- Assert unreachable! on Body::Stream in send_async (execute always returns Once)
- Extract shared dispatch() helper from get_fallback/post_fallback duplicates
- Rename auth test to auth_middleware_runs_in_chain_for_protected_routes
- Update stale #[ignore] reasons for Cloudflare integration tests
- Merge latest PR16 base (bff1adc, 314575b): removes unused backend.rs and fixes .gitignore comment
- Use the SpawnedRequestResult type alias from PR16 (includes Result wrapper)
- Add check-cloudflare alias for wasm32-unknown-unknown target check
- Add clippy-cloudflare alias to complete the per-adapter clippy pattern
…ezero-pr17-cloudflare-adapter
Summary
Changes
Closes #498
Test plan
Checklist
Known deferred items