merge: temp#368
📝 Walkthrough

This PR upgrades the Node.js runtime to version 22.22.0, updates core Credo Framework dependencies from 0.5.15 to 0.6.2, and refactors the agent architecture from core credential/proof modules to DIDComm equivalents. It adds comprehensive OpenID4VC support (issuer, verifier, holder), introduces a purge infrastructure with NATS/Cron schedulers, implements X.509 certificate and authentication flows, and migrates webhook/event handling to be tenant-aware.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Inline review context (quoted from the diff):

```ts
console.log('[getTrustedCertificatesForVerification] tenantId from agentContext:', tenantId)

const authType = getAuthType()
console.log('[getTrustedCertificatesForVerification] authType:', authType)
```
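The quoted lines above use bare `console.log` for debug output; a later comment in this review flags similar calls under the `no-console` lint rule. A minimal, hypothetical sketch (none of these names come from the PR; the review points at the project's own logger, e.g. `agent.config.logger`) of funneling such debug statements through one leveled logger:

```typescript
// A minimal leveled-logger sketch. Everything here is illustrative; the real
// codebase would inject its existing logger instead of this in-memory one.
type LogLevel = 'debug' | 'info' | 'warn' | 'error'

interface Logger {
  log(level: LogLevel, message: string, context?: Record<string, unknown>): void
}

// Collect lines in memory; a real sink would write to stdout or a transport.
const logLines: string[] = []

const logger: Logger = {
  log(level, message, context) {
    logLines.push(`[${level}] ${message}${context ? ' ' + JSON.stringify(context) : ''}`)
  },
}

// The quoted console.log debug statements, rewritten against the logger:
function logVerificationContext(tenantId: string | undefined, authType: string): void {
  logger.log('debug', '[getTrustedCertificatesForVerification] resolved context', {
    tenantId,
    authType,
  })
}

logVerificationContext('tenant-123', 'OAuth2')
```

A single chokepoint like this is easy to silence or redirect later, unlike scattered `console.log` calls.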
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: RinkalBhojani <rinkal.bhojani@ayanworks.com>
…de credential payload
Signed-off-by: Rinkal Bhojani <rinkal.bhojani@ayanworks.com>
* updated creat x509 method and few types
* corrected mapper function for mdoc payload
Signed-off-by: Krishna Waske <krishna.waske@ayanworks.com>
…ests
Actionable comments posted: 6
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/controllers/polygon/PolygonController.ts (1)
110-117: ⚠️ Potential issue | 🟠 Major

Handle all `DidOperation` cases or restrict validation to supported operations.

The guard accepts all four enum members (`Create`, `Update`, `Deactivate`, `AddResource`) but only handles `Create` and `Update`. If `Deactivate` or `AddResource` are passed, validation succeeds but the function implicitly returns `undefined` instead of raising an error or providing a response. Either add handlers for the remaining operations or restrict the validation to only `Create` and `Update` to prevent unexpected behavior.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/controllers/polygon/PolygonController.ts` around lines 110-117: the guard currently allows all `DidOperation` enum values but only handles `Create` and `Update`. Update the logic in `PolygonController` (references: `DidOperation`, `estimateFeeForDidOperation`, `estimateTransactionRequest`) so it doesn't implicitly return `undefined`: either narrow the validation to only allow `DidOperation.Create` and `DidOperation.Update` (change the `Object.values(...)` check to include only those two), or add explicit handlers for `DidOperation.Deactivate` and `DidOperation.AddResource` that call `request.agent.modules.polygon.estimateFeeForDidOperation` with the appropriate payloads; if you choose not to support those ops, replace the current guard with one that throws `BadRequestError` for unsupported ops and ensure there is a final `else` that throws for any unhandled operation.

src/controllers/anoncreds/endorser-transaction/EndorserTransactionController.ts (1)
1-204: ⚠️ Potential issue | 🟠 Major

Controller is fully disabled via comments, removing its API surface.

This effectively drops all endorser-transaction endpoints while keeping dead code in-tree. If the deprecation is intentional, delete the file and document the API removal; otherwise restore and migrate it.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/controllers/anoncreds/endorser-transaction/EndorserTransactionController.ts` around lines 1-204: the `EndorserTransactionController` class and all its routes (methods `endorserTransaction`, `didNymTransaction`, `writeSchemaAndCredDefOnLedger`, `submitSchemaOnLedger`, `submitCredDefOnLedger`) are commented out, which removes the API surface. Either delete the file and add a release note / changelog entry documenting the removal, or restore the controller by uncommenting the class, its imports/decorators (`Tags`, `Route`, `Security`, `injectable`, `Request`/`Body`/`Post`) and all helper imports (`parseIndySchemaId`, `getUnqualifiedSchemaId`, `parseIndyCredentialDefinitionId`, `getUnqualifiedCredentialDefinitionId`, `IndyVdrDidCreateOptions`), then run the typechecker and fix any typing/name changes (`AgentType`, `EndorserMode`, `CredentialEnum`, `ErrorHandlingService`, `BadRequestError`) to match the current codebase so the endpoints compile and are registered.

package.json (1)
80-102: ⚠️ Potential issue | 🟠 Major

Remove `@types/node-cron@3`, since `node-cron@4` ships with built-in type definitions.

`node-cron` v4 includes its own types and is fully TypeScript-native. The `@types/node-cron@3` package targets the v3 API and is incompatible with v4 (e.g., `job.running` → `job.isActive`), which will cause type errors and break IntelliSense.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `package.json` around lines 80-102: remove the incompatible dev dependency `@types/node-cron` from `devDependencies`, because `node-cron@4` includes its own types; delete the `"@types/node-cron"` entry and then run the package manager (npm/yarn/pnpm) to update lockfiles so TypeScript picks up the built-in types for `node-cron` (no code changes required in functions using `job.isActive`).

src/cli.ts (1)
158-164: ⚠️ Potential issue | 🟠 Major

Harden API key coercion for empty/whitespace values.

Line 159 returns falsy input as-is, and whitespace-only values pass through as `''` after trim. That bypasses the minimum-length guard.

💡 Suggested fix

```diff
-    coerce: (input: string | undefined) => {
-      if (!input) return input
-      input = input.trim()
-      if (input && input.length < 16) {
+    coerce: (input: string | undefined) => {
+      if (typeof input !== 'string') return undefined
+      const normalized = input.trim()
+      if (!normalized) return undefined
+      if (normalized.length < 16) {
         throw new Error('API key must be at least 16 characters long')
       }
-      return input
+      return normalized
     },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli.ts` around lines 158 - 164, The coerce function currently returns falsy input before trimming which allows whitespace-only values to become '' and bypass the length check; update coerce so it first checks for null/undefined, then trims input, then treat an empty trimmed string as invalid (return undefined or throw) and apply the minimum-length guard against the trimmed value, and finally return the trimmed API key; target the coerce implementation in src/cli.ts to make these changes.
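Taken out of the yargs option definition, the hardened coercion from the suggested fix can be exercised on its own. This standalone sketch mirrors that fix (the 16-character minimum and error message come from the original code; the function name `coerceApiKey` is illustrative):

```typescript
// Standalone version of the hardened API-key coercion: undefined for
// missing/blank input, an error for short keys, the trimmed key otherwise.
function coerceApiKey(input: string | undefined): string | undefined {
  if (typeof input !== 'string') return undefined
  const normalized = input.trim()
  if (!normalized) return undefined // whitespace-only no longer sneaks past the guard
  if (normalized.length < 16) {
    throw new Error('API key must be at least 16 characters long')
  }
  return normalized
}
```

The important behavioral change is that `'   '` now yields `undefined` (treated as "no key supplied") instead of an empty string that skips the length check.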
🟠 Major comments (33)
src/enums/enum.ts-105-113 (1)
105-113: ⚠️ Potential issue | 🟠 Major

Change the BLS curve identifier to use the registered casing.

The IETF COSE/JOSE specifications (draft-ietf-cose-bls-key-representations) define the BLS12381 G2 curve identifier as `BLS12381G2`, in all uppercase. When this enum value is serialized into JWK `crv` fields, the current value `bls12381g2` will not match the registered identifier, causing interoperability issues with standards-compliant systems.

🔧 Proposed fix

```diff
 export enum KeyAlgorithmCurve {
   Ed25519 = 'Ed25519',
   X25519 = 'X25519',
   P256 = 'P-256',
   P384 = 'P-384',
   P521 = 'P-521',
   secp256k1 = 'secp256k1',
-  Bls12381G2 = 'bls12381g2',
+  Bls12381G2 = 'BLS12381G2',
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/enums/enum.ts` around lines 105-113: the enum value for the BLS curve in `KeyAlgorithmCurve` uses lowercase `'bls12381g2'`, which doesn't match the IETF-registered identifier; update the `KeyAlgorithmCurve` member `Bls12381G2` to use the registered casing `'BLS12381G2'` so serialized JWK `crv` fields are standards-compliant and interoperable.

src/enums/enum.ts-115-121 (1)
115-121: ⚠️ Potential issue | 🟠 Major

Remove the `VcSdJwt` enum entry and update all references to use `DcSdJwt` instead.

The OpenID4VCI and OID4VP specifications use `dc+sd-jwt` as the standard SD-JWT VC format identifier. Keeping `vc+sd-jwt` will cause interoperability issues with spec-compliant implementations. This requires updating the active usage in `src/controllers/openid4vc/issuance-sessions/issuance-sessions.service.ts`, where `CredentialFormat.VcSdJwt` is currently referenced, plus removing hardcoded `'vc+sd-jwt'` references from type definitions and examples throughout the codebase.

🔧 Proposed changes

Remove from the enum:

```diff
 export enum CredentialFormat {
-  VcSdJwt = 'vc+sd-jwt',
   DcSdJwt = 'dc+sd-jwt',
   JwtVcJson = 'jwt_vc_json',
   JwtVcJsonLd = 'jwt_vc_json-ld',
   LdpVc = 'ldp_vc',
 }
```

Update the service reference:

```diff
-if (![CredentialFormat.VcSdJwt, CredentialFormat.DcSdJwt].includes(cred.format as unknown as CredentialFormat)) {
+if (![CredentialFormat.DcSdJwt].includes(cred.format as unknown as CredentialFormat)) {
   throw new BadRequestError(
-    `Revocation is only supported for SD-JWT formats (vc+sd-jwt, dc+sd-jwt), got '${cred.format}'`,
+    `Revocation is only supported for SD-JWT format (dc+sd-jwt), got '${cred.format}'`,
   )
 }
```

Also update type interfaces and examples to remove `'vc+sd-jwt'` references.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/enums/enum.ts` around lines 115-121: remove the non-spec-compliant enum entry by deleting `CredentialFormat.VcSdJwt` from the `CredentialFormat` enum and update all usages to use `CredentialFormat.DcSdJwt` instead; specifically change the reference in `IssuanceSessionsService` (`issuance-sessions.service.ts`) where `CredentialFormat.VcSdJwt` is used, and search for and replace any hardcoded `'vc+sd-jwt'` string literals in type definitions, interfaces, examples, and tests so they use `'dc+sd-jwt'` or `CredentialFormat.DcSdJwt`; ensure any related type unions or example payloads are updated to remove `'vc+sd-jwt'` occurrences and keep the codebase consistent with the enum change.

src/controllers/didcomm/basic-messages/BasicMessageController.ts-1-58 (1)
1-58: ⚠️ Potential issue | 🟠 Major

Basic message routes are effectively removed by commenting out the controller.
This disables both retrieval and send endpoints. If this is intentional, remove the file and publish a breaking-change note; otherwise keep the controller active and complete the migration.
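If the controller is restored rather than deleted, the two operations it exposes can be exercised against a stub before wiring the real agent. Only the module path `request.agent.modules.basicMessages` and the fetch/send operations come from this review; the interface and stub below are illustrative, not Credo's actual API:

```typescript
// Illustrative shape of a DIDComm basic-messages module: one query by
// connection, one send. The real Credo module has a richer surface.
interface BasicMessageRecord {
  connectionId: string
  content: string
}

interface BasicMessagesModule {
  findAllByConnectionId(connectionId: string): Promise<BasicMessageRecord[]>
  sendMessage(connectionId: string, content: string): Promise<BasicMessageRecord>
}

// In-memory stand-in so controller-level logic can be tested without an agent.
function createStubModule(): BasicMessagesModule {
  const store: BasicMessageRecord[] = []
  return {
    async findAllByConnectionId(connectionId) {
      return store.filter((r) => r.connectionId === connectionId)
    },
    async sendMessage(connectionId, content) {
      const record = { connectionId, content }
      store.push(record)
      return record
    },
  }
}

async function demo(): Promise<number> {
  const basicMessages = createStubModule()
  await basicMessages.sendMessage('conn-1', 'hello')
  const records = await basicMessages.findAllByConnectionId('conn-1')
  return records.length
}
```

A restored controller would call the same two operations on `request.agent.modules.basicMessages` instead of the stub.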
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/controllers/didcomm/basic-messages/BasicMessageController.ts` around lines 1-58: the `BasicMessageController` class and its methods (`getBasicMessages`, `sendMessage`) are fully commented out, effectively removing the DIDComm basic-messages endpoints. Either delete the file and add a breaking-change note if removal is intended, or restore and complete the migration by uncommenting the controller and all imports (`DidCommBasicMessageRecord`/`DidCommBasicMessageStorageProps`, `Req`, tsoa decorators, `injectable`, `SCOPES`, `ErrorHandlingService`, examples); ensure the route/security decorators (`@Route('/didcomm/basic-messages')`, `@Security(...)`), method decorators (`@Get`, `@Post`, `@Example`) and parameter decorators (`@Request`, `@Path`, `@Body`) are present, rewire calls to `request.agent.modules.basicMessages` in `getBasicMessages` and `sendMessage`, and run tests/compile to fix any type or import errors introduced by the migration.

patches/@digitalcredentials+jsonld-signatures+12.0.1.patch-9-10 (1)
9-10: ⚠️ Potential issue | 🟠 Major

Re-enable JSON-LD safe mode (or gate unsafe mode behind explicit opt-in).

Switching to `safe: false` removes strict failure on dropped/undefined JSON-LD terms during canonization and verification-method framing, which can silently alter signed/verified payload semantics. This weakens signature integrity guarantees; safe mode should be enabled for security-sensitive operations per jsonld.js best practices.

Suggested fix

```diff
-  safe: false,
+  safe: true,
 ...
-  }, {documentLoader, compactToRelative: false, safe: false});
+  }, {documentLoader, compactToRelative: false, safe: true});
```

Also applies to: lines 18-19

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `patches/@digitalcredentials+jsonld-signatures+12.0.1.patch` around lines 9-10: the patch flips the jsonld option `safe` to `false`, which disables strict handling of dropped/undefined terms; restore secure defaults by setting `safe: true` (or require an explicit opt-in flag like `USE_UNSAFE_JSONLD`/`allowUnsafeJsonLd` before setting `safe: false`) and apply this change to both occurrences noted (the two `safe` option sites in the diff). Locate the option object where `safe` is set (the JSON-LD canonization/verification-method framing calls) and either set `safe: true` or add a guarded conditional that only sets `safe: false` when an explicit environment/config flag is present, ensuring default behavior remains safe.

samples/cliConfig.json-49-49 (1)
49-49: ⚠️ Potential issue | 🟠 Major

Empty `fileServerToken` makes the sample config non-runnable without environment variable setup.

The sample config passes an empty token to `PolygonModule`, which requires this value. While the module validates and fails gracefully on missing values, an empty string in a sample config defeats its purpose as a working reference. Use a placeholder instead and document how to override it with an actual token.

Suggested fix

```diff
-  "fileServerToken": ""
+  "fileServerToken": "REPLACE_WITH_FILE_SERVER_TOKEN"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `samples/cliConfig.json` at line 49: the sample config's `"fileServerToken"` is an empty string, which prevents running `PolygonModule` with a sensible sample; replace the empty value with a clear placeholder (e.g., `"YOUR_FILE_SERVER_TOKEN_HERE"`) and add a short comment or adjacent sample field explaining that `PolygonModule` expects a real token that can be overridden via environment variable (or runtime config), so users know to substitute a real token before running.

src/controllers/x509/crypto-util.ts-35-39 (1)
35-39: ⚠️ Potential issue | 🟠 Major

Validate Ed25519 by `kty`/`crv`, not by `d` presence alone.

`jwk.d` exists in both Ed25519 and RSA private JWKs. Without checking `kty` and `crv`, this function will incorrectly process non-Ed25519 keys passed in PKCS8 DER format and return invalid key material.

Suggested fix

```diff
 const jwk = keyObj.export({ format: 'jwk' })
-if (!jwk.d) throw new Error('Not an Ed25519 private key')
+if (jwk.kty !== 'OKP' || jwk.crv !== 'Ed25519' || !jwk.d) {
+  throw new Error('Not an Ed25519 private key')
+}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/controllers/x509/crypto-util.ts` around lines 35-39: the function currently trusts that `jwk.d` exists and uses it as the Ed25519 seed; instead, validate that the JWK is actually Ed25519 by checking `jwk.kty === 'OKP'` and `jwk.crv === 'Ed25519'` (after calling `keyObj.export({ format: 'jwk' })`), and if those fields don't match throw an error like 'Not an Ed25519 private key'; only then convert `jwk.d` with `Buffer.from(jwk.d, 'base64').toString('hex')` and return it.

src/utils/auth.ts-41-56 (1)
41-56: ⚠️ Potential issue | 🟠 Major

Replace `console.*` here or lint will keep failing.

This hunk still trips `no-console` on lines 41 and 56, and line 28 is already doing the same in `getAuthType()`. As written, `yarn lint`/`yarn validate` will not pass.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/auth.ts` around lines 41-56: replace the `console.log` calls in `validateAuthConfig` (and any other console usage in `getAuthType`) with the project's logger; specifically, change the two `console.log` statements around `authType` validation to use the centralized logger (e.g., `processLogger.info`/`debug` or the imported logger instance) so the `no-console` lint errors are resolved, keeping the messages identical and leaving the validators/`authType` logic untouched.

src/utils/auth.ts-25-30 (1)
25-30: ⚠️ Potential issue | 🟠 Major

Fail closed when `TRUST_SERVICE_AUTH_TYPE` is missing.

Right now a missing env var silently falls back to `NoAuth`, which can disable client auth in production by mistake. This should require an explicit value, or only allow the fallback in a clearly gated local/dev mode.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/auth.ts` around lines 25-30: `getAuthType` currently falls back to `AuthTypes.NoAuth` when `TRUST_SERVICE_AUTH_TYPE` is missing; change it to fail closed by throwing an error if the env var is not set, unless explicitly allowed by a gated local/dev condition (e.g., `NODE_ENV === 'development'` or a specific `LOCAL_DEV` flag). Update `getAuthType` to read `process.env.TRUST_SERVICE_AUTH_TYPE`, validate it against known `AuthTypes`, and throw a descriptive error if it is missing or invalid; only when the gated local/dev condition is true should it log a warning and return `AuthTypes.NoAuth`.

src/purge/schedulers/CronPurgeScheduler.ts-124-157 (1)
124-157: ⚠️ Potential issue | 🟠 Major

Avoid full-record scans on every cron run.

Lines 126, 134, 143, and 152 query all records and filter by `createdAt` in memory. This will degrade badly as data grows. Please move cutoff filtering into the storage queries (and/or batch/paginate ID retrieval) so each tick only touches expired records.
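The recommendation above, pushing the cutoff into the query and paging IDs instead of loading whole records, might look like this sketch. The repository interface is hypothetical (Credo's actual query API differs); the point is that the `createdAt` cutoff and the page size both apply in storage:

```typescript
// Hypothetical repository slice: the storage layer applies the cutoff and
// returns at most `limit` expired IDs per call, so a tick never loads all records.
interface PurgeableRepository {
  findExpiredIds(cutoffMs: number, limit: number): Promise<string[]>
  deleteById(id: string): Promise<void>
}

async function purgeExpired(repo: PurgeableRepository, ttlSeconds: number, pageSize = 100): Promise<number> {
  const cutoffMs = Date.now() - ttlSeconds * 1000
  let purged = 0
  // Always re-query from the start: deletions shrink the result set, so
  // offset-based paging over it would silently skip records.
  for (;;) {
    const ids = await repo.findExpiredIds(cutoffMs, pageSize)
    if (ids.length === 0) break
    for (const id of ids) {
      await repo.deleteById(id)
      purged++
    }
  }
  return purged
}
```

Note that this loop terminates only if `deleteById` actually removes records from the queried set; that invariant belongs to the storage implementation.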
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/purge/schedulers/CronPurgeScheduler.ts` around lines 124-157: `CronPurgeScheduler` performs full-record scans for `DIDCOMM_CREDENTIAL`, `DIDCOMM_PROOF`, `OID4VC_ISSUANCE` and `OID4VC_VERIFICATION` by calling `findAllByQuery`/`findByQuery` and filtering `createdAt` in memory; replace these in-memory filters with storage-level queries that include a cutoff (e.g., `createdAt < cutoffMs`) and return only IDs, and implement batching/pagination to iterate pages of IDs (rather than loading all records) for the methods handling `PurgeRecordType.DIDCOMM_CREDENTIAL`, `PurgeRecordType.DIDCOMM_PROOF` and the repositories `OpenId4VcIssuanceSessionRepository` and `OpenId4VcVerificationSessionRepository`, so each cron tick only touches expired record IDs.

src/utils/NatsAuthenticator.ts-11-29 (1)
11-29: ⚠️ Potential issue | 🟠 Major

Fail fast on unsupported `NATS_AUTH_TYPE` values.

Line 28 currently falls back to unauthenticated mode for any unknown auth type. A typo/misconfig can silently disable NATS auth.

Proposed fix

```diff
 export function buildNatsAuthenticator(nats: NatsConfig): { authenticator?: Authenticator } {
-  const authType = (process.env.NATS_AUTH_TYPE as NatsAuthType) || 'none'
+  const rawAuthType = process.env.NATS_AUTH_TYPE ?? 'none'
+  const allowedAuthTypes: NatsAuthType[] = ['nkey', 'creds', 'usernamePassword', 'none']
+  if (!allowedAuthTypes.includes(rawAuthType as NatsAuthType)) {
+    throw new Error(`[NATS] Unsupported NATS_AUTH_TYPE="${rawAuthType}"`)
+  }
+  const authType = rawAuthType as NatsAuthType
   switch (authType) {
 @@
     case 'none':
-    default:
       return {}
   }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/NatsAuthenticator.ts` around lines 11-29: the switch on `authType` currently defaults to returning an empty object for any unknown value; change the default branch so that it only returns `{}` when `authType === 'none'` and otherwise throws a clear error for unsupported values (e.g., `[NATS] Unsupported NATS_AUTH_TYPE=...`); update the switch/default logic around the `authType` constant so typos or misconfigurations (values other than `'nkey'`, `'creds'`, `'usernamePassword'`, `'none'`) fail fast instead of silently disabling authentication.

src/purge/schedulers/NatsPurgeScheduler.ts-37-64 (1)
37-64: ⚠️ Potential issue | 🟠 Major

Clean up the NATS connection if startup fails midway.

If provisioning fails after line 42, the connection remains open. Wrap startup in `try/catch` and close/drain on failure.

Proposed fix

```diff
 async start(agent: Agent, config: PurgeConfig, webhookUrl: string | undefined): Promise<void> {
-  const { natsConfig } = config
-  this.ttlSeconds = natsConfig.ttlSeconds
-  this.recordTypes = natsConfig.recordTypes
-
-  this.nc = await connect({
-    servers: natsConfig.nats.servers,
-    ...buildNatsAuthenticator(natsConfig.nats),
-    maxReconnectAttempts: NATS_MAX_RECONNECT_ATTEMPTS,
-    reconnectTimeWait: NATS_RECONNECT_TIME_WAIT_MS,
-  })
-  this.js = this.nc.jetstream()
-  this.jsm = await this.nc.jetstreamManager()
+  const { natsConfig } = config
+  this.ttlSeconds = natsConfig.ttlSeconds
+  this.recordTypes = natsConfig.recordTypes
+
+  try {
+    this.nc = await connect({
+      servers: natsConfig.nats.servers,
+      ...buildNatsAuthenticator(natsConfig.nats),
+      maxReconnectAttempts: NATS_MAX_RECONNECT_ATTEMPTS,
+      reconnectTimeWait: NATS_RECONNECT_TIME_WAIT_MS,
+    })
+    this.js = this.nc.jetstream()
+    this.jsm = await this.nc.jetstreamManager()
@@
-  await this.startWorkers(agent, webhookUrl)
+    await this.startWorkers(agent, webhookUrl)
@@
-  agent.config.logger.info('[Purge] NatsPurgeScheduler started', { ttlSeconds: this.ttlSeconds })
+    agent.config.logger.info('[Purge] NatsPurgeScheduler started', { ttlSeconds: this.ttlSeconds })
+  } catch (error) {
+    if (this.nc) await this.nc.close()
+    this.nc = null
+    this.js = null
+    this.jsm = null
+    throw error
+  }
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/purge/schedulers/NatsPurgeScheduler.ts` around lines 37-64: the `start` method may leave an open NATS connection (`this.nc`) if provisioning (`provisionStreams`/`provisionConsumers`) or `startWorkers` fails; wrap the body of `start` in a `try/catch` so that on any error you call `await this.nc?.drain()` or `await this.nc?.close()` (and clear `this.nc`, `this.js`, `this.jsm`) before rethrowing; ensure you still set up `this.js` and `this.jsm` after `connect`, and that errors thrown by `provisionStreams`, `provisionConsumers`, or `startWorkers` trigger the cleanup path in `start`.

src/purge/PurgeWorker.ts-52-64 (1)
52-64: ⚠️ Potential issue | 🟠 Major

Validate the job payload shape before processing.

After line 45, the code trusts the payload fields. A missing or invalid `recordId`, `agentMode`, or `tenantId` (for shared mode) will fail downstream and consume retries unnecessarily.

Proposed fix

```diff
   const { recordId, recordType, tenantId, agentMode } = job
   const logger = agent.config.logger
   const deliveryCount: number = msg.info.deliveryCount
+
+  const validAgentMode = agentMode === 'shared' || agentMode === 'dedicated'
+  if (!recordId || typeof recordId !== 'string' || !validAgentMode) {
+    logger.error('[Purge] Invalid job payload — discarding', { consumer: this.consumerName })
+    msg.ack()
+    return
+  }
+  if (agentMode === 'shared' && (!tenantId || typeof tenantId !== 'string')) {
+    logger.error('[Purge] Shared-mode job missing tenantId — discarding', { consumer: this.consumerName, recordId })
+    msg.ack()
+    return
+  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/purge/PurgeWorker.ts` around lines 52-64: validate the job payload fields before processing: check that `job.recordId` is a non-empty string, that `job.recordType` matches `this.recordType` (already done), and that `job.agentMode` is present and one of the allowed modes; if `agentMode` indicates shared operation, require a non-empty `job.tenantId`. If any validation fails, call `logger.error` with the expected vs. received values (include `recordId`, `recordType`, `agentMode`, `tenantId` and `deliveryCount`), then call `msg.ack()` and return to avoid consuming retries unnecessarily.

src/purge/decorators/SchedulePurge.ts-15-20 (1)
15-20: ⚠️ Potential issue | 🟠 Major

Cron fallback is only logged, never executed.

Lines 17-20 skip scheduling as soon as NATS is unavailable, and lines 42-47 only report that a cron fallback is "active" after a NATS failure. This decorator never calls the cron scheduler, so purges are dropped in cron-only deployments and on NATS publish failures.

Suggested fix

```diff
-  const scheduler = getNatsPurgeScheduler()
+  const natsScheduler = getNatsPurgeScheduler()
+  const cronScheduler = getCronPurgeScheduler()
+  const scheduler = natsScheduler ?? cronScheduler
   if (!scheduler) {
     console.warn(`[Purge] @SchedulePurge(${recordType}): NATS scheduler not initialized — skipping`)
     return result
   }
@@
-  scheduler.schedulePurge(recordType, recordId, tenantId, agentMode).catch((err: Error) => {
-    const hasCronFallback = getCronPurgeScheduler() !== null
-    const level = hasCronFallback ? 'warn' : 'error'
-    console[level](
-      `[Purge] Failed to schedule NATS purge for ${recordType}:${recordId} — ${hasCronFallback ? 'cron fallback active' : 'NO cron fallback, record may leak'}: ${err?.message}`,
-    )
-  })
+  if (natsScheduler) {
+    natsScheduler.schedulePurge(recordType, recordId, tenantId, agentMode).catch(async (err: Error) => {
+      if (cronScheduler) {
+        await cronScheduler.schedulePurge(recordType, recordId, tenantId, agentMode)
+        return
+      }
+
+      console.error(
+        `[Purge] Failed to schedule purge for ${recordType}:${recordId}: ${err?.message}`,
+      )
+    })
+  } else {
+    void cronScheduler!.schedulePurge(recordType, recordId, tenantId, agentMode)
+  }
```

Also applies to: 42-47

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/purge/decorators/SchedulePurge.ts` around lines 15-20: the decorator currently aborts when `getNatsPurgeScheduler()` returns a falsy value and only logs that a cron fallback is "active" later, but never actually schedules the cron job; update the `SchedulePurge` decorator so that when `getNatsPurgeScheduler()` is falsy you call the cron scheduler (e.g., `getCronPurgeScheduler()`) and register the purge using the same record parameters before returning `result`; likewise, in the NATS publish/error path where you detect failure (the block that logs the cron fallback as active), invoke the cron scheduler as a fallback, log the action, and still return the original `result`. Reuse the same scheduling parameters used for NATS scheduling so behavior is identical between the NATS and cron paths.

src/events/ProofEvents.ts-12-16 (1)
12-16: ⚠️ Potential issue | 🟠 Major

Make tenant ID extraction robust before calling `getTenantAgent`.

Line 14 assumes `contextCorrelationId` always contains `tenant-`. If the format differs, `tenantId` becomes `undefined`/empty and tenant resolution fails.

💡 Suggested fix

```diff
-  if (event.metadata.contextCorrelationId && event.metadata.contextCorrelationId !== 'default') {
+  if (event.metadata.contextCorrelationId && event.metadata.contextCorrelationId !== 'default') {
+    const contextCorrelationId = event.metadata.contextCorrelationId
+    const tenantId = contextCorrelationId.startsWith('tenant-')
+      ? contextCorrelationId.slice('tenant-'.length)
+      : contextCorrelationId
+
+    if (!tenantId) {
+      throw new Error(`Invalid tenant correlation id: ${contextCorrelationId}`)
+    }
+
     const tenantAgent = await agent.modules.tenants.getTenantAgent({
-      tenantId: event.metadata.contextCorrelationId.split('tenant-')[1],
+      tenantId,
     })
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/events/ProofEvents.ts` around lines 12-16: the code reads `event.metadata.contextCorrelationId` and blindly splits on `'tenant-'` before calling `getTenantAgent`, which can produce an undefined `tenantId`; update the logic to robustly extract and validate the tenant id (e.g., check that `contextCorrelationId` exists, then match/extract with a regex or `startsWith('tenant-')` and take the substring after the prefix), handle the failure case by logging/erroring or falling back (do not call `agent.modules.tenants.getTenantAgent` with an undefined id), and only call `tenantAgent.modules.didcomm.proofs.getFormatData(record.id)` when a valid `tenantId` and tenant agent are obtained.

src/utils/customDocumentLoader.ts-14-23 (1)
14-23: ⚠️ Potential issue | 🟠 Major

Avoid non-null env assertions in the URL matching/replacement path.

Lines 14 and 22 depend on external gating. Add locally validated constants so this loader fails fast if misconfigured.

💡 Suggested fix

```diff
-function isW3CDeprecatedUrl(url: string, agentContext: AgentContext): boolean {
-  return url.startsWith(process.env.DEPRECATED_DOMAIN!)
+function isW3CDeprecatedUrl(url: string, deprecatedDomain: string): boolean {
+  return url.startsWith(deprecatedDomain)
 }

-function replaceUrl(url: string, agent: AgentContext): string {
+function replaceUrl(url: string, agent: AgentContext, deprecatedDomain: string, currentDomain: string): string {
   agent.config.logger.debug(`Replacing deprecated domain with updated domain`)
-  return url.replace(process.env.DEPRECATED_DOMAIN!, process.env.CURRENT_DOMAIN!)
+  return url.replace(deprecatedDomain, currentDomain)
 }
@@
 export const CustomDocumentLoader = (agentContext: AgentContext): DocumentLoader => {
+  const deprecatedDomain = process.env.DEPRECATED_DOMAIN?.trim()
+  const currentDomain = process.env.CURRENT_DOMAIN?.trim()
+  if (!deprecatedDomain || !currentDomain) {
+    throw new CredoError('CustomDocumentLoader requires DEPRECATED_DOMAIN and CURRENT_DOMAIN')
+  }
   const defaultLoader = defaultDocumentLoader(agentContext)
@@
-  if (isW3CDeprecatedUrl(url, agentContext)) {
+  if (isW3CDeprecatedUrl(url, deprecatedDomain)) {
@@
-    url = replaceUrl(url, agentContext)
+    url = replaceUrl(url, agentContext, deprecatedDomain, currentDomain)
   }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/customDocumentLoader.ts` around lines 14-23: remove the non-null assertions on the `process.env` variables and validate the environment values up front: create local constants (e.g., `const DEPRECATED_DOMAIN = process.env.DEPRECATED_DOMAIN; const CURRENT_DOMAIN = process.env.CURRENT_DOMAIN`) validated at function entry or module init and throw a clear error if either is missing; then use those validated constants in the URL check and in `replaceUrl` (which receives the `AgentContext`) instead of `process.env.DEPRECATED_DOMAIN!` and `process.env.CURRENT_DOMAIN!`, so the loader fails fast and avoids runtime surprises.

src/utils/customDocumentLoader.ts-2-5 (1)
2-5: ⚠️ Potential issue | 🟠 Major

Replace internal `@credo-ts/core/build/...` imports with public exports from `@credo-ts/core`.
`defaultDocumentLoader` has a confirmed public export in v0.6.2. Import it from `@credo-ts/core` instead of the internal build path. `DocumentLoaderResult` should be imported from a public export if available. If no public export exists, define it inline as `Promise<{ document: any; contextUrl?: string }>` or check if the `DocumentLoader` type definition already describes this result shape, allowing you to derive it from that type instead of using the internal build path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/customDocumentLoader.ts` around lines 2 - 5: the file imports internal build paths; replace the internal imports by importing defaultDocumentLoader from the public package entry (use defaultDocumentLoader from `@credo-ts/core`) and remove the internal path import; for DocumentLoaderResult either import a public type from `@credo-ts/core` if available or replace its use with an inline type alias Promise<{ document: any; contextUrl?: string }> (or derive the shape from the public DocumentLoader type), and update usages of DocumentLoaderResult accordingly in this module (look for defaultDocumentLoader and DocumentLoaderResult references).

src/controllers/didcomm/connections/ConnectionController.ts (1)
132-140: ⚠️ Potential issue | 🟠 Major

`/didcomm/url/:invitationId` now returns a full record instead of URL-shaped data

Line 139 returns `outOfBandRecord` directly. For a `/url/` endpoint, this is a contract change and can leak internal fields to clients expecting invitation URL data.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/didcomm/connections/ConnectionController.ts` around lines 132 - 140: the getInvitation method in ConnectionController currently returns the full outOfBandRecord, which leaks internal fields; change getInvitation to map/sanitize the found record (from request.agent.modules.didcomm.connections.findByInvitationDid) and return only the invitation URL-shaped payload expected by the /didcomm/url/:invitationId contract (e.g., the invitation URL and minimal metadata such as invitationId or createdAt), not the entire outOfBandRecord; ensure the NotFoundError behavior remains and that you reference the same method (getInvitation) and symbol outOfBandRecord when implementing the mapping/sanitization.

src/controllers/auth/AuthController.ts (1)
26-39: ⚠️ Potential issue | 🟠 Major

orgId is unused in an org-scoped token route

Line 28 takes orgId, but Lines 36-39 ignore it. This lets /orgs/{orgId}/token behave identically across orgs and can break org scoping.

Suggested fix
   const response = await axios.post<OrgTokenResponse>(
     `${trustServiceTokenUrl}`,
-    { clientId: body.clientId, clientSecret: body.clientSecret },
+    { orgId, clientId: body.clientId, clientSecret: body.clientSecret },
     { headers: { 'Content-Type': 'application/json', accept: 'application/json' } },
   )

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/auth/AuthController.ts` around lines 26 - 39: getOrgToken currently accepts orgId but never uses it, causing org-scoped token requests to be identical across orgs; update the getOrgToken implementation to include orgId in the call to the trust service and/or validate it against request data. Modify the axios.post call in getOrgToken to either send orgId in the request payload (e.g. include orgId alongside clientId/clientSecret) or include it as a path/query param or header to `${trustServiceTokenUrl}`, and add validation logic in getOrgToken to ensure the returned token is scoped to the provided orgId (or reject if the trust service response lacks the expected orgId). Ensure you update references in getOrgToken to use the orgId parameter and adjust any request/response typing if needed.

src/cliAgent.ts (1)
208-212: ⚠️ Potential issue | 🟠 Major

Fix inverted custom document-loader toggle.

isCustomDocumentLoaderEnabled() currently returns the opposite branch from what the name implies, so the custom loader is applied when disabled and skipped when enabled.

Suggested fix
-        w3cCredentials: isCustomDocumentLoaderEnabled()
-          ? new W3cCredentialsModule()
-          : new W3cCredentialsModule({
-              documentLoader: CustomDocumentLoader,
-            }),
+        w3cCredentials: isCustomDocumentLoaderEnabled()
+          ? new W3cCredentialsModule({
+              documentLoader: CustomDocumentLoader,
+            })
+          : new W3cCredentialsModule(),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cliAgent.ts` around lines 208 - 212: the conditional for w3cCredentials uses isCustomDocumentLoaderEnabled() inverted; when the flag is true we should pass the custom loader to W3cCredentialsModule, otherwise use the default constructor. Update the expression around isCustomDocumentLoaderEnabled() so that when it returns true you instantiate new W3cCredentialsModule({ documentLoader: CustomDocumentLoader }) and when false you instantiate new W3cCredentialsModule() — changes affect the w3cCredentials assignment that calls W3cCredentialsModule and references CustomDocumentLoader.

src/cliAgent.ts (1)
301-304: ⚠️ Potential issue | 🟠 Major

Remove clear-text logging of tenant/auth context.
Line 301 and Line 304 log sensitive runtime context directly. This is a security/privacy regression and aligns with the CodeQL finding.
Suggested fix
-  console.log('[getTrustedCertificatesForVerification] tenantId from agentContext:', tenantId)
   ...
-  console.log('[getTrustedCertificatesForVerification] authType:', authType)
+  // Avoid logging tenant/auth-sensitive values in clear text

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cliAgent.ts` around lines 301 - 304: remove the clear-text console.log statements that print sensitive tenant/auth context in getTrustedCertificatesForVerification (references: function getTrustedCertificatesForVerification, variables tenantId and authType) and replace them with either no logging or a non-sensitive, masked/debug-level log via the centralized logger (e.g., processLogger.debug or similar) that does not include actual tenantId or authType values; ensure any retained log uses redaction/masking (e.g., hash or prefix-only) and follow the project's logger usage patterns instead of console.log.

src/cliAgent.ts (1)
222-239: ⚠️ Potential issue | 🟠 Major

|| defaults are overriding explicit config values.

At Line 222 and Line 474/489, autoAcceptConnections || true always resolves to true when false is intentionally configured. Use nullish coalescing (??) for defaults.

Suggested fix
-        autoAcceptConnections: autoAcceptConnections || true,
+        autoAcceptConnections: autoAcceptConnections ?? true,
         ...
-        autoAcceptProofs: autoAcceptProofs || DidCommAutoAcceptProof.ContentApproved,
+        autoAcceptProofs: autoAcceptProofs ?? DidCommAutoAcceptProof.ContentApproved,
         ...
-        autoAcceptCredentials: autoAcceptCredentials || DidCommAutoAcceptCredential.Always,
+        autoAcceptCredentials: autoAcceptCredentials ?? DidCommAutoAcceptCredential.Always,

-    autoAcceptConnections || true,
-    autoAcceptCredentials || DidCommAutoAcceptCredential.Always,
-    autoAcceptProofs || DidCommAutoAcceptProof.ContentApproved,
+    autoAcceptConnections ?? true,
+    autoAcceptCredentials ?? DidCommAutoAcceptCredential.Always,
+    autoAcceptProofs ?? DidCommAutoAcceptProof.ContentApproved,

Also applies to: 474-491
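The behavioral difference is easy to verify in isolation. This is a standalone sketch in plain TypeScript, independent of the agent config; `withDefaults` is an illustrative name, not part of cliAgent.ts:

```typescript
// `||` substitutes the default for every falsy value, so an explicit `false`
// is silently overridden; `??` substitutes only for null/undefined.
function withDefaults(autoAcceptConnections?: boolean) {
  return {
    viaOr: autoAcceptConnections || true, // loses an explicit `false`
    viaNullish: autoAcceptConnections ?? true, // preserves an explicit `false`
  }
}

// withDefaults(false) → { viaOr: true, viaNullish: false }
// withDefaults(undefined) → { viaOr: true, viaNullish: true }
```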
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cliAgent.ts` around lines 222 - 239: the defaults using the logical OR operator are overriding explicit false values; update the config to use nullish coalescing instead of || so only null/undefined fall back to defaults — for example replace occurrences like autoAcceptConnections || true with autoAcceptConnections ?? true, autoAcceptProofs || DidCommAutoAcceptProof.ContentApproved with autoAcceptProofs ?? DidCommAutoAcceptProof.ContentApproved, and autoAcceptCredentials || DidCommAutoAcceptCredential.Always with autoAcceptCredentials ?? DidCommAutoAcceptCredential.Always in the CLI agent configuration (look for the object containing autoAcceptConnections, proofs.autoAcceptProofs, and credentials.autoAcceptCredentials in cliAgent.ts).

src/controllers/openid4vc/holder/credentialBindingResolver.ts (1)
38-43: ⚠️ Potential issue | 🟠 Major

Clamp and validate batchSize before key generation.

requestBatch can produce 0, negative, or non-finite values, which can break new Array(batchSize) or silently request no bindings.

Proposed fix
-    const batchSize =
-      requestBatch === true
-        ? Math.min(issuerMaxBatchSize, 10)
-        : typeof requestBatch === 'number'
-          ? Math.min(issuerMaxBatchSize, requestBatch)
-          : 1
+    const requested =
+      requestBatch === true ? Math.min(issuerMaxBatchSize, 10) : typeof requestBatch === 'number' ? requestBatch : 1
+    const batchSize = Math.max(1, Math.min(issuerMaxBatchSize, Math.floor(requested)))

Also applies to: 54-56
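Extracted as a standalone helper, the clamping rule can be exercised directly. `clampBatchSize` is an illustrative name, and the extra `Number.isFinite` guard is an addition to the fix above, since NaN would otherwise propagate through `Math.max`/`Math.min`:

```typescript
// Clamps the requested batch size to an integer in [1, issuerMaxBatchSize].
function clampBatchSize(requestBatch: boolean | number | undefined, issuerMaxBatchSize: number): number {
  const requested =
    requestBatch === true ? Math.min(issuerMaxBatchSize, 10) : typeof requestBatch === 'number' ? requestBatch : 1
  // NaN survives Math.max/Math.min unchanged, so reject non-finite input explicitly.
  if (!Number.isFinite(requested)) return 1
  return Math.max(1, Math.min(issuerMaxBatchSize, Math.floor(requested)))
}

// clampBatchSize(true, 50) → 10; clampBatchSize(0, 50) → 1
// clampBatchSize(25, 10) → 10; clampBatchSize(NaN, 10) → 1
```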
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/holder/credentialBindingResolver.ts` around lines 38 - 43: the batchSize calculation must validate and clamp inputs to avoid 0, negative, NaN, or non-finite values before any array/key generation. When computing batchSize (the existing ternary using requestBatch and issuerMaxBatchSize), coerce numeric requestBatch to a finite integer via Math.floor(Number(requestBatch)) (or parseInt), then clamp with Math.max(1, Math.min(issuerMaxBatchSize, ...)), and for the boolean-true branch use Math.min(issuerMaxBatchSize, 10); ensure the default branch yields 1. Apply the same validated/clamped logic to the other identical batchSize computation later in this file so new Array(batchSize) or binding generation always receives an integer >= 1 and <= issuerMaxBatchSize.

src/controllers/did/DidController.ts (1)
500-504: ⚠️ Potential issue | 🟠 Major

Guard against missing DID document before import.

In the existing-DID branch, didDocument can be undefined if getCreatedDids returns empty. Importing in that state is unsafe.

Proposed fix
   const createdDid = await agent.dids.getCreatedDids({
     method: DidMethod.Key,
     did: didOptions.did,
   })
   didDocument = createdDid[0]?.didDocument
+  if (!didDocument) {
+    throw new BadRequestError(`DID not found in wallet: ${didOptions.did}`)
+  }
   await agent.dids.import({
     did,
     overwrite: true,
     didDocument,
   })

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/did/DidController.ts` around lines 500 - 504: the import call to agent.dids.import can receive an undefined didDocument when getCreatedDids returns empty; add a guard before calling agent.dids.import (in the existing-DID branch around the didDocument variable) to verify didDocument is defined and either throw a clear error or skip the import, e.g., check the result of getCreatedDids or the didDocument variable and only call agent.dids.import({ did, overwrite: true, didDocument }) when didDocument is non-null/defined to avoid unsafe imports.

src/utils/statusListService.ts (1)
91-93: ⚠️ Potential issue | 🟠 Major

Validate status-list size before allocating the array.

size can become NaN, 0, or negative when env/default input is bad, which will throw at runtime (new Array(size)) or create an invalid list.

Proposed fix
-    const size = listSize || Number(process.env.STATUS_LIST_DEFAULT_SIZE);
-    const statusList = new StatusList(new Array(size).fill(0), 1);
+    const defaultSize = Number(process.env.STATUS_LIST_DEFAULT_SIZE)
+    const size = listSize ?? defaultSize
+    if (!Number.isInteger(size) || size <= 0) {
+      throw new Error(`Invalid status list size: ${size}`)
+    }
+    const statusList = new StatusList(new Array(size).fill(0), 1)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/statusListService.ts` around lines 91 - 93: validate and sanitize the computed size before constructing the array. Ensure the derived size (from listSize or Number(process.env.STATUS_LIST_DEFAULT_SIZE)) is a finite integer >= 1 and <= a reasonable max (e.g., a MAX_STATUS_LIST_SIZE constant), and fall back to a safe default or throw a clear error if validation fails; replace the current assignment/usage of size and the call new Array(size) in the status-list creation (the code that uses listSize, process.env.STATUS_LIST_DEFAULT_SIZE, the size variable, and new StatusList(...)) with a validatedSize computed via Number.parseInt/Number.isInteger checks and bounds enforcement so you never call new Array with NaN, 0, negative, or excessively large values.

src/controllers/openid4vc/issuers/issuer.service.ts (1)
36-40: ⚠️ Potential issue | 🟠 Major

Use agent.modules.openid4vc consistently in getIssuersByQuery.

Lines 38-39 use an incorrect access pattern that differs from the rest of the file and other methods in this service. All other methods (lines 13, 29, 44, 54) correctly access modules via agentReq.agent.modules.openid4vc, but getIssuersByQuery uses (agentReq.agent as Agent<RestAgentModules>).openid4vc, which omits the .modules property.

Proposed fix
   public async getIssuersByQuery(agentReq: Req, publicIssuerId?: string) {
-    const result = publicIssuerId
-      ? (agentReq.agent as Agent<RestAgentModules>).openid4vc.issuer?.getIssuerByIssuerId(publicIssuerId)
-      : (agentReq.agent as Agent<RestAgentModules>).openid4vc.issuer?.getAllIssuers()
+    const result = publicIssuerId
+      ? agentReq.agent.modules.openid4vc.issuer?.getIssuerByIssuerId(publicIssuerId)
+      : agentReq.agent.modules.openid4vc.issuer?.getAllIssuers()
     return result
   }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/issuers/issuer.service.ts` around lines 36 - 40: in getIssuersByQuery, replace the incorrect access to (agentReq.agent as Agent<RestAgentModules>).openid4vc with the consistent modules path agentReq.agent.modules.openid4vc so both branches call agentReq.agent.modules.openid4vc.issuer?.getIssuerByIssuerId(publicIssuerId) or agentReq.agent.modules.openid4vc.issuer?.getAllIssuers(); update the function to use agentReq.agent.modules.openid4vc everywhere to match other methods and return the resulting promise as before.

src/controllers/x509/x509.service.ts (1)
100-126: ⚠️ Potential issue | 🟠 Major

subjectPublicKey input is currently ignored during certificate creation.

subjectPublicKeyID is derived, but agent.x509.createCertificate(...) only receives authorityKey. As written, caller-supplied subject key material has no effect.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/x509/x509.service.ts` around lines 100 - 126: the derived subject key (subjectPublicKeyID) is computed but never passed into agent.x509.createCertificate, so the provided subjectPublicKey has no effect; update the call to agent.x509.createCertificate to include the subjectPublicKey parameter (e.g. subjectPublicKey: Kms.PublicJwk.fromPublicJwk(subjectPublicKeyID)) when subjectPublicKeyID is set, ensuring you use the same PublicJwk conversion used for the authorityKey (Kms.PublicJwk.fromPublicJwk) and only pass it if subjectPublicKeyID is defined; also remove or adjust the temporary info log if unnecessary.

src/utils/helpers.ts (1)
261-267: ⚠️ Potential issue | 🟠 Major

Do not log raw token payloads or token lifecycle details.

tokenResponse.data and cache-expiry logs can expose bearer credentials and auth metadata in logs. Redact/remove these fields.

🔐 Suggested hardening
-    console.log(`[${label}] token response data:`, JSON.stringify(tokenResponse.data, null, 2))
+    console.log(`[${label}] token response received`)

-    console.log(
-      `[${label}] token cached for clientId:`,
-      clientId,
-      '| expires at:',
-      new Date(expiresAt * 1000).toISOString(),
-    )
+    console.log(`[${label}] token cached`)

Also applies to: 272-277
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/helpers.ts` around lines 261 - 267: remove logging of raw token payloads and any fields that may contain bearer tokens or auth metadata. In the code that handles tokenResponse (use the variables label, tokenResponse, and token in this block and the similar block at the later occurrence), stop calling JSON.stringify on tokenResponse.data and do not log access_token, refresh_token, expires_at, or similar fields. Instead log only non-sensitive metadata (e.g., tokenResponse.status and a boolean indicating presence of an access token) or a redacted shape where token fields are replaced with "<REDACTED>". When throwing the error, include the label and a minimal diagnostic (e.g., "access_token not found") without printing the full tokenResponse.data. Ensure both occurrences (this block and the later 272-277 block) are updated consistently.

src/utils/helpers.ts (1)
240-244: ⚠️ Potential issue | 🟠 Major

Add explicit timeouts to both axios.post calls to prevent request stalling.

Both axios.post calls (lines 240 and 295) lack timeout configuration in their config objects. Under network issues or service latency, these external calls can hang indefinitely and stall request handling. Add a timeout property to the config object (e.g., timeout: 10000 for 10 seconds) for both calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/helpers.ts` around lines 240 - 244: the two external axios.post calls that obtain tokens (the one assigning tokenResponse and the other token-fetch call later around the refresh/secondary flow) lack timeouts and can hang; update both axios.post config objects to include a timeout (e.g., timeout: 10000) alongside the existing headers so the requests fail fast on network/service latency, keeping the rest of the config (headers: {'Content-Type': 'application/json', accept: 'application/json'}) intact.

src/controllers/openid4vc/issuance-sessions/issuance-sessions.service.ts (1)
119-123: ⚠️ Potential issue | 🟠 Major

Require a non-empty X5C chain for X5C signer mode.

At Line 119, !cred.signerOptions.x5c allows [], which then breaks later when x5c[0] is consumed.

Suggested fix
-    if (cred.signerOptions.method === SignerMethod.X5c && !cred.signerOptions.x5c) {
+    if (
+      cred.signerOptions.method === SignerMethod.X5c &&
+      (!Array.isArray(cred.signerOptions.x5c) || cred.signerOptions.x5c.length === 0)
+    ) {
       throw new BadRequestError(
         `For ${cred.credentialSupportedId} : x5c must be present inside signerOptions if SignerMethod is 'x5c' `,
       )
     }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/issuance-sessions/issuance-sessions.service.ts` around lines 119 - 123: the current guard only checks falsiness of cred.signerOptions.x5c, which allows an empty array and later causes failures when x5c[0] is accessed; update the validation for the X5C signer mode (where you check cred.signerOptions.method === SignerMethod.X5c) to assert that cred.signerOptions.x5c is an array with at least one element (e.g., Array.isArray(cred.signerOptions.x5c) && cred.signerOptions.x5c.length > 0) and throw the existing BadRequestError with the same message if that check fails so x5c[0] is safe to consume later.

src/utils/oid4vc-agent.ts (1)
59-79: ⚠️ Potential issue | 🟠 Major

Enforce signer material presence before branching.

For DID/X5C modes, missing did/x5c currently slips through and fails later with misleading errors. Validate required signer fields immediately.

Suggested fix
-  if (credential.signerOptions.method === SignerMethod.Did) {
-    if (credential.signerOptions.did) {
+  if (credential.signerOptions.method === SignerMethod.Did) {
+    if (!credential.signerOptions?.did) {
+      throw new Error('did must be provided when using Did as signer method')
+    }
     const didsApi = agentContext.dependencyManager.resolve(DidsApi)
     const didDocument = await didsApi.resolveDidDocument(credential.signerOptions.did)
     // Set the first verificationMethod as backup, in case we won't find a match
     if (didDocument.verificationMethod?.[0].id) {
       issuerDidVerificationMethod = didDocument.verificationMethod?.[0].id
     }
     if (!issuerDidVerificationMethod) {
       throw new Error('DID must be provided when using Did as signer method')
     }
-    }
   } else if (credential.signerOptions.method === SignerMethod.X5c) {
-    if (credential.signerOptions.x5c) {
-      issuerx509certificate = credential.signerOptions.x5c // as string[] | undefined;
-
-      if (!issuerx509certificate) {
-        throw new Error('x509certificate must be provided when using x5c as signer method')
-      }
-    }
+    if (!Array.isArray(credential.signerOptions?.x5c) || credential.signerOptions.x5c.length === 0) {
+      throw new Error('x509certificate must be provided when using x5c as signer method')
+    }
+    issuerx509certificate = credential.signerOptions.x5c
   }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/oid4vc-agent.ts` around lines 59 - 79: the branch logic lets missing signer material (credential.signerOptions.did or credential.signerOptions.x5c) slip through and surface later as confusing errors; update the code in the SignerMethod.Did and SignerMethod.X5c handling to validate required fields up front. When credential.signerOptions.method === SignerMethod.Did, immediately check credential.signerOptions.did and throw a clear Error if missing before calling agentContext.dependencyManager.resolve(DidsApi) / resolveDidDocument and before using issuerDidVerificationMethod; similarly, when method === SignerMethod.X5c, immediately check credential.signerOptions.x5c and throw a clear Error if missing before assigning issuerx509certificate or proceeding further.

src/utils/oid4vc-agent.ts (1)
161-172: ⚠️ Potential issue | 🟠 Major

Do not fallback to an empty JWK for unsupported holder bindings.

At Line 171, defaulting to {} can produce invalid credentials silently. Fail fast on unsupported binding methods.

Suggested fix
-        holder:
-          binding.method === 'did'
-            ? ({
-                method: 'did' as const,
-                didUrl: binding.didUrl,
-              } as SdJwtVcHolderBinding)
-            : ({
-                method: 'jwk' as const,
-                jwk: binding.method === 'jwk' ? binding.jwk : {},
-              } as SdJwtVcHolderBinding),
+        holder:
+          binding.method === 'did'
+            ? ({
+                method: 'did' as const,
+                didUrl: binding.didUrl,
+              } as SdJwtVcHolderBinding)
+            : binding.method === 'jwk'
+              ? ({
+                  method: 'jwk' as const,
+                  jwk: binding.jwk,
+                } as SdJwtVcHolderBinding)
+              : (() => {
+                  throw new Error(`Unsupported holder binding method: ${String(binding.method)}`)
+                })(),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/oid4vc-agent.ts` around lines 161 - 172: the credentials mapping silently falls back to an empty JWK for unsupported holder bindings; update the holder construction inside the credentials mapping (where holderBinding.keys.map is used and SdJwtVcHolderBinding is created) to fail fast instead of defaulting to {} — when binding.method === 'jwk' use binding.jwk, when binding.method === 'did' use binding.didUrl as currently done, and for any other binding.method throw a descriptive error (or return a rejection) so invalid/unsupported holder bindings are surfaced immediately.

src/controllers/openid4vc/issuance-sessions/issuance-sessions.service.ts (1)
153-177: ⚠️ Potential issue | 🟠 Major

Validate status-list fields before creating metadata and URIs.

listId, index, and listSize are used without shape/range checks. Invalid values can create unusable status entries and later revocation failures.

Suggested fix
   if (!effectiveStatusList) {
     throw new BadRequestError('Status list details must be provided for revocable credentials')
   }
+  if (typeof effectiveStatusList.listId !== 'string' || effectiveStatusList.listId.trim() === '') {
+    throw new BadRequestError('statusListDetails.listId must be a non-empty string')
+  }
+  if (!Number.isInteger(effectiveStatusList.listSize) || effectiveStatusList.listSize <= 0) {
+    throw new BadRequestError('statusListDetails.listSize must be a positive integer')
+  }
+  if (
+    !Number.isInteger(effectiveStatusList.index) ||
+    effectiveStatusList.index < 0 ||
+    effectiveStatusList.index >= effectiveStatusList.listSize
+  ) {
+    throw new BadRequestError('statusListDetails.index must be within [0, listSize)')
+  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/issuance-sessions/issuance-sessions.service.ts` around lines 153 - 177: the code uses effectiveStatusList.listId, .index, and .listSize without validation, which can produce invalid metadata and URIs; before calling checkAndCreateStatusList and before constructing listUri/pushing to offerStatusInfo, validate that effectiveStatusList exists and that listId is a non-empty string (and matches expected format if applicable), listSize is a positive integer, and index is an integer 0 <= index < listSize; if any check fails, throw a BadRequestError with a clear message. Add these validations right above the call to checkAndCreateStatusList (affecting effectiveStatusList, checkAndCreateStatusList, listUri, offerStatusInfo, and the returned status_list) so you never call checkAndCreateStatusList or build URIs with invalid values.

src/utils/oid4vc-agent.ts (1)
318-324: ⚠️ Potential issue | 🟠 Major

Add timeout protection around trust-list fetch.
This external call has no timeout; a hung endpoint can block request processing and degrade availability.
Suggested fix
-    const response = await fetch(trustListUrl)
+    const controller = new AbortController()
+    const timeout = setTimeout(() => controller.abort(), 10_000)
+    let response: Response
+    try {
+      response = await fetch(trustListUrl, { signal: controller.signal })
+    } catch (error) {
+      const message = error instanceof Error ? error.message : String(error)
+      throw new Error(`[getX509CertsByUrl] failed to fetch trust list: ${message}`)
+    } finally {
+      clearTimeout(timeout)
+    }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/oid4vc-agent.ts` around lines 318 - 324, The fetch call in getX509CertsByUrl has no timeout and can hang; wrap the fetch(trustListUrl) with an AbortController-based timeout: create an AbortController, start a setTimeout that calls controller.abort() after a configurable timeout (e.g., 5s), pass controller.signal into fetch, and clear the timer after fetch completes; also catch the abort error and rethrow a clear Error indicating the trust-list fetch timed out. Update getX509CertsByUrl to use this pattern so the existing response.ok and response.json() logic remains unchanged but protected by the abort timeout.
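The controller/timer pairing in the fix above can be factored into a tiny reusable helper; `abortAfter` is a hypothetical name for illustration, not part of oid4vc-agent.ts:

```typescript
// Creates an AbortSignal that fires after `ms`, plus a `done()` callback the
// caller invokes once the guarded operation settles so the timer is cleared.
function abortAfter(ms: number): { signal: AbortSignal; done: () => void } {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), ms)
  return {
    signal: controller.signal,
    done: () => clearTimeout(timer),
  }
}

// Intended usage (network call shown for shape only, not executed here):
// const { signal, done } = abortAfter(10_000)
// try {
//   const response = await fetch(trustListUrl, { signal })
// } finally {
//   done()
// }
```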
const response = await axios.post<OrgTokenResponse>(
  `${trustServiceTokenUrl}`,
  { clientId: body.clientId, clientSecret: body.clientSecret },
  { headers: { 'Content-Type': 'application/json', accept: 'application/json' } },
)
Add timeout for the trust-service HTTP call
Line 36 performs an external POST without timeout. A slow upstream can pin request threads and degrade API availability.
Suggested fix
+ const timeoutMs = Number(process.env.TRUST_SERVICE_TOKEN_TIMEOUT_MS ?? 5000)
+
const response = await axios.post<OrgTokenResponse>(
`${trustServiceTokenUrl}`,
{ clientId: body.clientId, clientSecret: body.clientSecret },
- { headers: { 'Content-Type': 'application/json', accept: 'application/json' } },
+ {
+ headers: { 'Content-Type': 'application/json', accept: 'application/json' },
+ timeout: Number.isFinite(timeoutMs) && timeoutMs > 0 ? timeoutMs : 5000,
+ },
)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-  const response = await axios.post<OrgTokenResponse>(
-    `${trustServiceTokenUrl}`,
-    { clientId: body.clientId, clientSecret: body.clientSecret },
-    { headers: { 'Content-Type': 'application/json', accept: 'application/json' } },
-  )
+  const timeoutMs = Number(process.env.TRUST_SERVICE_TOKEN_TIMEOUT_MS ?? 5000)
+
+  const response = await axios.post<OrgTokenResponse>(
+    `${trustServiceTokenUrl}`,
+    { clientId: body.clientId, clientSecret: body.clientSecret },
+    {
+      headers: { 'Content-Type': 'application/json', accept: 'application/json' },
+      timeout: Number.isFinite(timeoutMs) && timeoutMs > 0 ? timeoutMs : 5000,
+    },
+  )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/controllers/auth/AuthController.ts` around lines 36 - 40, The axios POST
to obtain OrgTokenResponse currently has no timeout and can hang; update the
call that uses axios.post<OrgTokenResponse> with trustServiceTokenUrl to include
a request timeout (e.g., add a timeout property in the third-argument config
object alongside headers) or use a configurable constant/env var instead of a
hardcoded value; ensure the timeout is added to the same config passed to
axios.post and adjust any surrounding error handling in AuthController.ts to
surface timeout errors appropriately.
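The timeout-resolution expression in the suggestion can also be checked in isolation; `resolveTimeoutMs` is an illustrative helper, and the `TRUST_SERVICE_TOKEN_TIMEOUT_MS` name and 5000 ms default come from the suggested diff above, not from merged code:

```typescript
// Parses an env-style string into a positive, finite millisecond value,
// falling back to the default for missing, non-numeric, or non-positive input.
function resolveTimeoutMs(raw: string | undefined, fallback = 5000): number {
  const parsed = Number(raw ?? fallback)
  return Number.isFinite(parsed) && parsed > 0 ? parsed : fallback
}

// resolveTimeoutMs('2500') → 2500
// resolveTimeoutMs('abc') → 5000
// resolveTimeoutMs(undefined) → 5000
// resolveTimeoutMs('-1') → 5000
```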
parsedCertificate = X509Service.parseCertificate(agentReq.agent.context, {
  encodedCertificate: requestSigner.x5c[0],
})
requestSigner.issuer = parsedCertificate.sanUriNames[0]
requestSigner.clientIdPrefix = dto.requestSigner.clientIdPrefix ?? ClientIdPrefix.X509Hash
Guard x5c input and only attach parsed cert in x5c mode.
requestSigner.x5c[0] is read without validation, and options.requestSigner.x5c = [parsedCertificate] executes even for DID signers. This can break authorization request creation.
🐛 Suggested fix
- } else {
+ } else {
requestSigner = dto.requestSigner as OpenId4VcIssuerX5cOptions
+ if (!requestSigner.x5c?.[0]) {
+ throw new Error('requestSigner.x5c[0] is required for x5c signer')
+ }
parsedCertificate = X509Service.parseCertificate(agentReq.agent.context, {
encodedCertificate: requestSigner.x5c[0],
})
requestSigner.issuer = parsedCertificate.sanUriNames[0]
requestSigner.clientIdPrefix = dto.requestSigner.clientIdPrefix ?? ClientIdPrefix.X509Hash
}
@@
- if (parsedCertificate) {
- parsedCertificate.publicJwk.keyId = requestSigner.keyId
- }
- options.requestSigner.x5c = [parsedCertificate]
+ if (parsedCertificate) {
+ parsedCertificate.publicJwk.keyId = requestSigner.keyId
+ options.requestSigner.x5c = [parsedCertificate]
+  }

Also applies to: 77-81
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/controllers/openid4vc/verifier-sessions/verification-sessions.service.ts`
around lines 57 - 61, Guard access to requestSigner.x5c and only parse/attach
the certificate when the signer is in x5c mode: check that dto.requestSigner.x5c
is an array with at least one element (and the first element is a string/encoded
cert) before calling X509Service.parseCertificate and before setting
options.requestSigner.x5c = [parsedCertificate]; also only set
requestSigner.issuer and requestSigner.clientIdPrefix (fallback to
ClientIdPrefix.X509Hash) when x5c mode is detected. Apply the same guarded logic
to the analogous block around the code that runs at the later 77-81 section so
DID signers never get x5c parsing/attachment.
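The precondition described above can be captured as a small type guard; `hasEncodedLeafCert` is a hypothetical helper for illustration, not part of the service:

```typescript
// True only when x5c is a non-empty array whose first entry is a non-empty
// string, i.e. safe to pass as `encodedCertificate: x5c[0]`.
function hasEncodedLeafCert(x5c: unknown): x5c is [string, ...string[]] {
  return Array.isArray(x5c) && x5c.length > 0 && typeof x5c[0] === 'string' && x5c[0].length > 0
}

// hasEncodedLeafCert(['MIIB...']) → true
// hasEncodedLeafCert([]) → false
// hasEncodedLeafCert(undefined) → false
```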
let authorityKeyID, subjectPublicKeyID, authorityKeyKmsId

...

agent.config.logger.debug(`createCertificate options:`, options)
Remove secret-bearing logs from certificate/key paths.
Logging full options and generated seed risks leaking key material and sensitive inputs to log sinks.
🔐 Suggested fix
- agent.config.logger.debug(`createCertificate options:`, options)
+ agent.config.logger.debug('createCertificate called')
- agent.config.logger.debug(`createKey: got seed ${seed}`)
+ agent.config.logger.debug('createKey: seed generated')

Also applies to: 223-223
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/controllers/x509/x509.service.ts` at line 82, Remove logging of
secret-bearing data by not dumping the full options object to the debug log in
the createCertificate flow: stop calling
agent.config.logger.debug(`createCertificate options:`, options) and instead log
only non-sensitive metadata (e.g., option keys or boolean flags) or redact
sensitive fields like seed, privateKey, certificate, certificatePath, and any
passphrases; apply the same treatment to the other debug call at the later
occurrence (around line 223). Locate the logger calls
(agent.config.logger.debug) in the createCertificate-related code paths and
replace them with a redaction-aware log or explicit non-secret fields only.
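A minimal sketch of what redaction-aware logging could look like (the field list and the `redactForLog` helper are assumptions for illustration, not the controller's actual API):

```typescript
// Fields that must never reach a log sink; extend as needed for this codebase.
const SECRET_FIELDS = new Set(['seed', 'privateKey', 'passphrase', 'secretKey'])

// Shallow-redacts known secret-bearing fields before logging.
function redactForLog(options: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(options)) {
    safe[key] = SECRET_FIELDS.has(key) ? '[REDACTED]' : value
  }
  return safe
}

const options = { commonName: 'example.org', notAfter: '2030-01-01', seed: 'a1b2c3d4' }
console.log(redactForLog(options)) // seed is masked, non-secret metadata survives
```

The same helper could wrap both debug calls, so non-secret metadata stays available for troubleshooting while key material never leaves the process.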
import { Server } from 'ws'

-export const sendWebSocketEvent = async (server: WebSocket.Server, data: unknown) => {
+export const sendWebSocketEvent = async (server: Server, data: unknown) => {
   server.clients.forEach((client) => {
     if (client.readyState === WebSocket.OPEN) {
       typeof data === 'string' ? client.send(data) : client.send(JSON.stringify(data))
🧩 Analysis chain

🏁 Script executed:

cat -n src/events/WebSocketEvents.ts

Repository: credebl/agent-controller
Length of output: 424

🏁 Script executed:

# Check what the 'ws' library exports
# First, let's see if there's a package.json and what version of 'ws' is used
find . -name "package.json" -type f | head -5

Repository: credebl/agent-controller
Length of output: 276

🏁 Script executed:

# Check the ws package type definitions or documentation
fd "package.json" | head -1 | xargs cat | jq '.dependencies, .devDependencies' | grep -A2 -B2 '"ws"'

Repository: credebl/agent-controller
Length of output: 156

🏁 Script executed:

# Check the ws library's main exports and types
# Look at node_modules/ws to understand the exports
cat node_modules/ws/package.json | jq '.exports, .main, .types' | head -20

Repository: credebl/agent-controller
Length of output: 129

🏁 Script executed:

# Check the actual TypeScript definitions or main entry
head -50 node_modules/ws/index.js 2>/dev/null || head -50 node_modules/ws/lib/receiver.js 2>/dev/null || ls -la node_modules/ws/ | head -20

Repository: credebl/agent-controller
Length of output: 132

🏁 Script executed:

# More directly: check what's actually exported from ws
python3 << 'EOF'
import json
import os
ws_package = json.load(open('./node_modules/ws/package.json'))
print("Main entry:", ws_package.get('main'))
print("Types entry:", ws_package.get('types'))
print("Exports config:", json.dumps(ws_package.get('exports'), indent=2))
EOF

Repository: credebl/agent-controller
Length of output: 230

🏁 Script executed:

# Check the actual index.d.ts or type definitions for ws
find node_modules/ws -name "*.d.ts" -type f | head -5

Repository: credebl/agent-controller
Length of output: 119

🏁 Script executed:

# Search for WebSocket usage/imports throughout the codebase
rg "WebSocket" --type ts --type tsx -B 2 -A 2

Repository: credebl/agent-controller
Length of output: 96

🏁 Script executed:

# Check if WebSocket is defined globally or in type definitions
rg "declare.*WebSocket|type.*WebSocket|interface.*WebSocket" --type ts --type tsx

Repository: credebl/agent-controller
Length of output: 96

🏁 Script executed:

# Search for ws library imports to see how it's used elsewhere
rg "from\s+['\"]ws['\"]" --type ts --type tsx -A 3

Repository: credebl/agent-controller
Length of output: 96

🏁 Script executed:

# Check tsconfig.json to see if there are DOM libs included
cat tsconfig.json 2>/dev/null || cat tsconfig.*.json 2>/dev/null

Repository: credebl/agent-controller
Length of output: 597

🏁 Script executed:

# Try different file type flags for rg
rg "WebSocket" -g "*.ts" -B 2 -A 2

Repository: credebl/agent-controller
Length of output: 7006

🏁 Script executed:

# Check all imports from ws in the codebase
rg "from\s+['\"]ws['\"]" -g "*.ts" -A 3

Repository: credebl/agent-controller
Length of output: 654

🏁 Script executed:

# Check if WebSocket is available from ws package or needs to come from elsewhere
rg "import.*WebSocket" -g "*.ts" -B 1 -A 1

Repository: credebl/agent-controller
Length of output: 1876

🏁 Script executed:

# Check if there's a types directory with WebSocket definitions
find src -name "*.ts" -type f | xargs grep -l "WebSocket" 2>/dev/null

Repository: credebl/agent-controller
Length of output: 397

🏁 Script executed:

# Let's check the ws library documentation and its actual exports
# Since node_modules isn't available, let's search for how WebSocket is used elsewhere
rg "WebSocket\." -g "*.ts" -B 2 -A 2

Repository: credebl/agent-controller
Length of output: 475

🏁 Script executed:

# Check if there's any global WebSocket setup or polyfill
rg "global.*WebSocket|declare.*WebSocket" -g "*.ts" -g "*.d.ts"

Repository: credebl/agent-controller
Length of output: 50

🏁 Script executed:

# Check src/types directory for any custom type definitions
ls -la src/types/ 2>/dev/null || echo "No src/types directory"

Repository: credebl/agent-controller
Length of output: 299

🏁 Script executed:

# Look at how clients are iterated in the server context
rg "server\.clients|\.clients\." -g "*.ts" -B 3 -A 3

Repository: credebl/agent-controller
Length of output: 564

🏁 Script executed:

# Search for WebSocket constant/value usage patterns
rg "readyState" -g "*.ts" -B 3 -A 1

Repository: credebl/agent-controller
Length of output: 470

🏁 Script executed:

# Check if there are any other type definitions or ambient declarations
find . -name "*.d.ts" -type f | xargs cat | grep -i "websocket\|readystate" 2>/dev/null

Repository: credebl/agent-controller
Length of output: 25300

🏁 Script executed:

# Let's check the exact ws library version and its typical exports by looking at common patterns
# See if ws is used elsewhere with WebSocket import
rg "import.*from\s+['\"]ws['\"]" -g "*.ts" -A 5 -B 1

Repository: credebl/agent-controller
Length of output: 1200

🏁 Script executed:

# Search web for ws library v8.18.2 exports to verify the correct import
# Let's check if we can find any ws library type information or examples
web_search "ws npm library WebSocket export default named"

Repository: credebl/agent-controller
Length of output: 119
🌐 Web query:
ws npm package v8.18.2 WebSocket OPEN constant export
💡 Result:
No, the 'ws' npm package version 8.18.2 does not export a WebSocket.OPEN constant. The WebSocket class uses numeric readyState values (0 for CONNECTING, 1 for OPEN, 2 for CLOSING, 3 for CLOSED), as shown in all examples like if (client.readyState === WebSocket.OPEN), but WebSocket.OPEN is not defined as an exported constant on the WebSocket class or module. The lib/constants.js file exists (503B) but is used internally, not exported for WebSocket.OPEN. This matches the WebSocket spec where ready states are numbers, and browser WebSocket exposes constants, but 'ws' for Node.js uses direct numbers in code.
Citations:
- 1: https://www.npmjs.com/package/ws?activeTab=versions
- 2: https://npmx.dev/package/ws/v/%5E8.18.0
- 3: https://npm.io/package/ws
- 4: https://github.com/websockets/ws/tree/8.19.0
- 5: https://registry.npmjs.org/ws
- 6: https://github.com/websockets/ws
- 7: https://websocket.org/reference/websocket-api
- 8: https://github.com/websockets/ws/tree/master/lib
- 9: https://github.com/websockets/ws/blob/8.19.0/doc/ws.md
- 10: https://developer.mozilla.org/en-US/docs/Web/API/WebSocket/readyState
Fix runtime error: WebSocket.OPEN is undefined and ws library doesn't export it.
Line 5 references WebSocket.OPEN, but WebSocket is not imported and the ws library (v8.18.2) does not export a WebSocket class or OPEN constant. The ws library uses numeric readyState values instead: 1 for OPEN, 2 for CLOSING, 3 for CLOSED. Also, Server on line 1 should be a type-only import.
💡 Suggested fix
-import { Server } from 'ws'
+import { type Server } from 'ws'
export const sendWebSocketEvent = async (server: Server, data: unknown) => {
server.clients.forEach((client) => {
- if (client.readyState === WebSocket.OPEN) {
+ if (client.readyState === 1) { // 1 = OPEN
typeof data === 'string' ? client.send(data) : client.send(JSON.stringify(data))
}
})
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

import { Server } from 'ws'
export const sendWebSocketEvent = async (server: WebSocket.Server, data: unknown) => {
export const sendWebSocketEvent = async (server: Server, data: unknown) => {
  server.clients.forEach((client) => {
    if (client.readyState === WebSocket.OPEN) {
      typeof data === 'string' ? client.send(data) : client.send(JSON.stringify(data))

Suggested:

import { type Server } from 'ws'
export const sendWebSocketEvent = async (server: Server, data: unknown) => {
  server.clients.forEach((client) => {
    if (client.readyState === 1) { // 1 = OPEN
      typeof data === 'string' ? client.send(data) : client.send(JSON.stringify(data))
    }
  })
}
🧰 Tools
🪛 ESLint
[error] 1-1: All imports in the declaration are only used as types. Use import type.
(@typescript-eslint/consistent-type-imports)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/events/WebSocketEvents.ts` around lines 1 - 6, The runtime error comes
from referencing an undefined WebSocket.OPEN; change the import to a type-only
import (import type { Server } from 'ws') and update the readyState check in
sendWebSocketEvent to compare against the numeric OPEN value (1) instead of
WebSocket.OPEN; keep the rest of the logic (iterating server.clients and sending
stringified data) unchanged so client.readyState === 1 is used in the
server.clients.forEach callback.
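The numeric-readyState version of the helper can be exercised without a live socket by mocking `server.clients`; the `ClientLike`/`ServerLike` shapes below are simplified stand-ins for illustration (real ws clients expose the same `readyState` and `send` members):

```typescript
const OPEN = 1 // RFC 6455 ready states: 0 CONNECTING, 1 OPEN, 2 CLOSING, 3 CLOSED

interface ClientLike { readyState: number; send: (data: string) => void }
interface ServerLike { clients: Set<ClientLike> }

// Broadcasts only to clients whose connection is currently open.
function sendWebSocketEvent(server: ServerLike, data: unknown): void {
  const payload = typeof data === 'string' ? data : JSON.stringify(data)
  for (const client of server.clients) {
    if (client.readyState === OPEN) client.send(payload)
  }
}

const delivered: string[] = []
const server: ServerLike = {
  clients: new Set([
    { readyState: 1, send: (d) => { delivered.push(d) } }, // open — receives the event
    { readyState: 3, send: (d) => { delivered.push(d) } }, // closed — skipped
  ]),
}
sendWebSocketEvent(server, { type: 'ping' })
console.log(delivered) // → [ '{"type":"ping"}' ]
```

Naming the constant (`OPEN = 1`) keeps the numeric comparison readable without depending on any export from the ws package.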
} else if (err?.message?.includes('subjects overlap')) {
  // Stale streams from a previous version — delete and retry
  console.warn('[Purge] Subject overlap detected — purging stale streams and retrying')
  await this.deleteStaleStreams(config.subjects)
  await this.jsm.streams.add(config)
} else {
  throw err
}
}
}

private async deleteStaleStreams(_subjects: string[]): Promise<void> {
  if (!this.jsm) return
  const list = await this.jsm.streams.list().next()
  for (const stream of list) {
    if (stream.config.name === PURGE_STREAM) continue
    // Only delete streams that explicitly claim purge.* subjects — never touch unrelated streams
    const isPurgeStream = stream.config.subjects?.some(
      (s: string) => s.startsWith('purge.schedule.') || s.startsWith('purge.execute.'),
    )
    if (isPurgeStream) {
      console.warn(`[Purge] Deleting stale purge stream: ${stream.config.name}`)
      await this.jsm.streams.delete(stream.config.name)
    }
Do not auto-delete other streams on overlap
Line 128-149 can delete any stream that uses purge.schedule.* / purge.execute.*. In a shared NATS account, this risks deleting unrelated workloads.
Safer direction
- } else if (err?.message?.includes('subjects overlap')) {
- // Stale streams from a previous version — delete and retry
- console.warn('[Purge] Subject overlap detected — purging stale streams and retrying')
- await this.deleteStaleStreams(config.subjects)
- await this.jsm.streams.add(config)
+ } else if (err?.message?.includes('subjects overlap')) {
+ throw new Error(
+ '[Purge] Subject overlap detected for purge stream subjects. ' +
+ 'Refusing automatic stream deletion; clean up conflicting streams manually.',
+ )
} else {
throw err
}

🧰 Tools
🪛 ESLint
[error] 128-128: Unexpected console statement.
(no-console)
[error] 147-147: Unexpected console statement.
(no-console)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/purge/schedulers/NatsPurgeScheduler.ts` around lines 126 - 149, The
current deleteStaleStreams method can remove any stream that merely contains
purge.schedule.* or purge.execute.* subjects; change it to only delete streams
that we definitively own by tightening the check in deleteStaleStreams: instead
of isPurgeStream = stream.config.subjects?.some(...), require either (a) the
stream.config.name matches our PURGE_STREAM naming convention (e.g., startsWith
or equals PURGE_STREAM) or (b) the stream's subjects exactly match the _subjects
array passed in (compare as sets so order doesn't matter) and/or a
metadata/annotation that proves ownership (e.g., stream.config.metadata.owner
=== this.instanceId) before calling this.jsm.streams.delete(stream.config.name);
update the function to make this ownership check and skip deletion for any
stream that doesn't meet it.
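The exact-subject-set comparison suggested above can be sketched as a pure predicate (names are illustrative; a real implementation would also consult stream metadata for ownership):

```typescript
// True only when both subject lists contain exactly the same subjects, order-independent.
function sameSubjectSet(a: string[], b: string[]): boolean {
  if (a.length !== b.length) return false
  const set = new Set(a)
  return b.every((s) => set.has(s))
}

// Owned: identical subject set, different order.
console.log(sameSubjectSet(['purge.schedule.*', 'purge.execute.*'], ['purge.execute.*', 'purge.schedule.*'])) // true
// Not owned: an unrelated stream that merely overlaps on one purge subject.
console.log(sameSubjectSet(['purge.schedule.*', 'orders.>'], ['purge.schedule.*', 'purge.execute.*'])) // false
```

Gating `streams.delete` behind a predicate like this (plus a metadata check) keeps a shared NATS account safe: partial overlap alone never triggers deletion.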
httpInbound.app.get(
  '/invitation',
  async (req: { query: { d_m: any; c_i: any }; url: string }, res: { send: (arg0: any) => void }) => {
    if (typeof req.query.d_m === 'string') {
      const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url.replace('d_m=', 'c_i='))
      res.send(invitation.toJSON())
    }
    if (typeof req.query.c_i === 'string') {
      const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url)
      res.send(invitation.toJSON())
    } else {
      const { outOfBandInvitation } = await agent.modules.didcomm.oob.createInvitation()

      res.send(outOfBandInvitation.toUrl({ domain: endpoints + '/invitation' }))
    }
  })
res.send(outOfBandInvitation.toUrl({ domain: endpoints + '/invitation' }))
}
Fix double-response bug and malformed invitation domain.
Line 152-163 can send multiple responses for d_m requests, and Line 162 concatenates a string[] into URL text. This will cause runtime errors and broken invitation URLs.
Suggested fix
httpInbound.app.get(
'/invitation',
async (req: { query: { d_m: any; c_i: any }; url: string }, res: { send: (arg0: any) => void }) => {
- if (typeof req.query.d_m === 'string') {
+ if (typeof req.query.d_m === 'string') {
const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url.replace('d_m=', 'c_i='))
- res.send(invitation.toJSON())
- }
- if (typeof req.query.c_i === 'string') {
+ return res.send(invitation.toJSON())
+ } else if (typeof req.query.c_i === 'string') {
const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url)
- res.send(invitation.toJSON())
+ return res.send(invitation.toJSON())
} else {
const { outOfBandInvitation } = await agent.modules.didcomm.oob.createInvitation()
-
- res.send(outOfBandInvitation.toUrl({ domain: endpoints + '/invitation' }))
+ const domain = `${endpoints[0]}/invitation`
+ return res.send(outOfBandInvitation.toUrl({ domain }))
}
},
)

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

httpInbound.app.get(
  '/invitation',
  async (req: { query: { d_m: any; c_i: any }; url: string }, res: { send: (arg0: any) => void }) => {
    if (typeof req.query.d_m === 'string') {
      const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url.replace('d_m=', 'c_i='))
      res.send(invitation.toJSON())
    }
    if (typeof req.query.c_i === 'string') {
      const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url)
      res.send(invitation.toJSON())
    } else {
      const { outOfBandInvitation } = await agent.modules.didcomm.oob.createInvitation()
      res.send(outOfBandInvitation.toUrl({ domain: endpoints + '/invitation' }))
    }
  })
res.send(outOfBandInvitation.toUrl({ domain: endpoints + '/invitation' }))
}

Suggested:

httpInbound.app.get(
  '/invitation',
  async (req: { query: { d_m: any; c_i: any }; url: string }, res: { send: (arg0: any) => void }) => {
    if (typeof req.query.d_m === 'string') {
      const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url.replace('d_m=', 'c_i='))
      return res.send(invitation.toJSON())
    } else if (typeof req.query.c_i === 'string') {
      const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url)
      return res.send(invitation.toJSON())
    } else {
      const { outOfBandInvitation } = await agent.modules.didcomm.oob.createInvitation()
      const domain = `${endpoints[0]}/invitation`
      return res.send(outOfBandInvitation.toUrl({ domain }))
    }
  },
)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/agent.ts` around lines 149 - 163, The route handler in
httpInbound.app.get can send multiple responses when req.query.d_m is present
(both d_m and the fallback run) and also concatenates a string[] into a URL when
building the invitation domain; fix by making the d_m and c_i branches mutually
exclusive (use if/else if and return immediately after res.send) and ensure the
domain passed into outOfBandInvitation.toUrl is a single string (convert or
select the correct entry from endpoints rather than concatenating an array,
e.g., use a single endpoint string or join/format it properly) — update the
handler around req.query.d_m, req.query.c_i,
DidCommConnectionInvitationMessage.fromUrl, and outOfBandInvitation.toUrl
accordingly.
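The control-flow part of this fix can be demonstrated with plain mocks (the `Req`/`Res` shapes below are simplified stand-ins for illustration, not Express types): an if/else-if chain with early returns guarantees exactly one response per request.

```typescript
interface Req { query: { d_m?: string; c_i?: string } }
interface Res { sent: string[]; send: (body: string) => void }

// Mutually exclusive branches: at most one res.send per request.
function handleInvitation(req: Req, res: Res): void {
  if (typeof req.query.d_m === 'string') {
    res.send('legacy d_m invitation')
  } else if (typeof req.query.c_i === 'string') {
    res.send('c_i invitation')
  } else {
    res.send('fresh out-of-band invitation')
  }
}

function makeRes(): Res {
  const sent: string[] = []
  return { sent, send: (body) => { sent.push(body) } }
}

const res = makeRes()
handleInvitation({ query: { d_m: 'abc' } }, res)
console.log(res.sent.length) // → 1 (never a double response)
```

With the original two independent `if` blocks, a `d_m` request would fall into the second block's `else` and call `send` twice; the else-if chain removes that path structurally.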
…e sessions and integrate SdJwtVcModule. Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
…lection for status list operations Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
Signed-off-by: Sagar Khole <sagar.khole@ayanworks.com>
Signed-off-by: KambleSahil3 <sahil.kamble@ayanworks.com>
* fix: events typings
* fix: remove unnecessary comments
* fix: add non-throwing method
* fix: edge case handling
* fix: formatting

Signed-off-by: Krishna Waske <krishna.waske@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
* feat: add agent custom document loader
* fix: formatting
* fix: formatting
* feat: add appropriate warning for incorrect custom document loader initiation
* fix: formatting
* fix: formatting
* fix: error handling
* fix: formatting
* fix: formatting
* fix: coderabbit suggestions
* fix: reviewer suggestions

Signed-off-by: Krishna Waske <krishna.waske@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
e0ffd16 to 597987b Compare
Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
…yanworks.com> On behalf of Deepak Nemade <deepak.nemade@ayanworks.com>, I, Tipu_Singh <tipu.singh@ayanworks.com>, hereby add my Signed-off-by to this commit: bcbc82e Signed-off-by: Tipu_Singh <tipu.singh@ayanworks.com>
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/server.ts (1)
21-34: ⚠️ Potential issue | 🟡 Minor

Fix import-order lint errors in newly added imports.
The additions around Line 21 and Line 32-34 violate `import/order` grouping/ordering and will fail lint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/server.ts` around lines 21 - 34, The imports violate ESLint import/order: move the external package import TenantAgent (from '@credo-ts/tenants') into the external/third-party import group and reorder groups so built-ins/external come before local modules; then group and, if your project requires, alphabetize within groups for symbols like validateAuthConfig, ErrorMessages, BaseError, basicMessageEvents, connectionEvents, credentialEvents, proofEvents, questionAnswerEvents, reuseConnectionEvents, RegisterRoutes, SecurityMiddleware, openId4VcIssuanceSessionEvents, and openId4VcVerificationSessionEvents to satisfy the import/order rule.
♻️ Duplicate comments (5)
src/events/WebSocketEvents.ts (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Use a type-only import for Server.

At Line 1, Server is only used in type position, so this should be import type to satisfy the lint rule and avoid runtime import overhead.

Suggested change

-import { Server } from 'ws'
+import type { Server } from 'ws'

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/events/WebSocketEvents.ts` at line 1, Change the runtime import of Server to a type-only import since Server is only used in type positions: replace the current import of Server from 'ws' with a type import (i.e., use "import type { Server } from 'ws'") in WebSocketEvents.ts so the linter warning is resolved and no runtime import overhead occurs; update any related import usages that reference Server as a type if necessary.

src/utils/agent.ts (1)
152-163: ⚠️ Potential issue | 🔴 Critical

Fix double-response flow and invitation domain construction.
Line 152-163 can send multiple responses for one request, and Line 162 concatenates a string[] into URL text. This can trigger runtime response errors and broken invitation URLs.

Suggested fix

 if (typeof req.query.d_m === 'string') {
   const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url.replace('d_m=', 'c_i='))
-  res.send(invitation.toJSON())
- }
- if (typeof req.query.c_i === 'string') {
+  return res.send(invitation.toJSON())
+ } else if (typeof req.query.c_i === 'string') {
   const invitation = await DidCommConnectionInvitationMessage.fromUrl(req.url)
-  res.send(invitation.toJSON())
+  return res.send(invitation.toJSON())
 } else {
   const { outOfBandInvitation } = await agent.modules.didcomm.oob.createInvitation()
-
-  res.send(outOfBandInvitation.toUrl({ domain: endpoints + '/invitation' }))
+  const domain = `${endpoints[0]}/invitation`
+  return res.send(outOfBandInvitation.toUrl({ domain }))
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/agent.ts` around lines 152 - 163, The handler can send multiple responses because the two separate if blocks for req.query.d_m and req.query.c_i both call res.send; change the second "if (typeof req.query.c_i === 'string')" to "else if" so only one branch runs, and ensure you return or otherwise stop after res.send to avoid fallthrough. Also fix invitation URL construction: endpoints may be a string[] — use a single string (e.g., Array.isArray(endpoints) ? endpoints[0] : endpoints) or join it appropriately before calling outOfBandInvitation.toUrl, i.e., compute a string baseUrl and pass baseUrl + '/invitation' to outOfBandInvitation.toUrl; update references to DidCommConnectionInvitationMessage.fromUrl, agent.modules.didcomm.oob.createInvitation, outOfBandInvitation.toUrl, endpoints, and res.send accordingly.

src/purge/schedulers/NatsPurgeScheduler.ts (1)
137-149: ⚠️ Potential issue | 🔴 Critical

Do not delete streams based only on subject prefix
Current logic can remove unrelated streams in shared NATS if they use purge.* subjects. Restrict deletion to streams you can prove are owned (e.g., exact name/metadata/exact-subject-set match).

🛡️ Safer ownership check direction

- const isPurgeStream = stream.config.subjects?.some(
-   (s: string) => s.startsWith('purge.schedule.') || s.startsWith('purge.execute.'),
- )
- if (isPurgeStream) {
+ const sameName = stream.config.name === PURGE_STREAM
+ const sameSubjects =
+   Array.isArray(stream.config.subjects) &&
+   Array.isArray(_subjects) &&
+   stream.config.subjects.length === _subjects.length &&
+   stream.config.subjects.every((s: string) => _subjects.includes(s))
+ const ownedByUs = sameName || sameSubjects
+ if (ownedByUs) {
    await this.jsm.streams.delete(stream.config.name)
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/purge/schedulers/NatsPurgeScheduler.ts` around lines 137 - 149, deleteStaleStreams currently deletes any stream that has subjects starting with 'purge.schedule.' or 'purge.execute.' which can remove unrelated streams; update deleteStaleStreams in NatsPurgeScheduler to only delete when you can prove ownership by (1) checking a stream metadata flag (e.g., stream.config?.meta?.managedBy === 'purge-scheduler' or similar project-specific key) and (2) verifying the subjects exactly match the expected set (compare stream.config.subjects array to an expectedSubjects array or ensure there are no extra subjects) before calling this.jsm.streams.delete(stream.config.name); keep the existing skip for PURGE_STREAM and only perform deletion when both ownership metadata and exact-subject-set checks pass.

src/controllers/x509/x509.service.ts (1)
82-82: ⚠️ Potential issue | 🔴 Critical

Stop logging secret-bearing material in certificate/key paths.
Line 82 logs full certificate options (can include seed/private inputs), and Line 223 logs generated seed directly. This is a direct secret-leak risk.
🔐 Suggested patch
- agent.config.logger.debug(`createCertificate options:`, options)
+ agent.config.logger.debug('createCertificate called')
- agent.config.logger.debug(`createKey: got seed ${seed}`)
+ agent.config.logger.debug('createKey: seed generated')

Also applies to: 223-223
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/x509/x509.service.ts` at line 82, The debug statements in createCertificate are logging secret-bearing values: remove or redact any sensitive fields when calling agent.config.logger.debug (do not log full `options`), and stop logging the generated seed value directly (where the seed variable is logged around line 223); instead log only non-sensitive metadata (e.g., presence/length or masked values) or a success message. Locate the debug call to agent.config.logger.debug that prints `options` and the log that prints the generated seed, and replace them with redacted-safe logging that omits private keys, seeds, file paths containing secrets, or prints a masked representation.

src/controllers/openid4vc/verifier-sessions/verification-sessions.service.ts (1)
57-61: ⚠️ Potential issue | 🔴 Critical

Guard x5c access and only attach parsed cert in x5c mode.
Line 58 dereferences requestSigner.x5c[0] without validation, and Line 80 sets options.requestSigner.x5c even for DID signers. This can break authorization-request creation.

🐛 Suggested patch

 } else {
   requestSigner = dto.requestSigner as OpenId4VcIssuerX5cOptions
+  if (!Array.isArray(requestSigner.x5c) || typeof requestSigner.x5c[0] !== 'string') {
+    throw new Error('requestSigner.x5c[0] is required for x5c signer')
+  }
   parsedCertificate = X509Service.parseCertificate(agentReq.agent.context, {
     encodedCertificate: requestSigner.x5c[0],
   })
   requestSigner.issuer = parsedCertificate.sanUriNames[0]
   requestSigner.clientIdPrefix = dto.requestSigner.clientIdPrefix ?? ClientIdPrefix.X509Hash
 }
@@
-  if (parsedCertificate) {
-    parsedCertificate.publicJwk.keyId = requestSigner.keyId
-  }
-  options.requestSigner.x5c = [parsedCertificate]
+  if (parsedCertificate) {
+    parsedCertificate.publicJwk.keyId = requestSigner.keyId
+    options.requestSigner.x5c = [parsedCertificate]
+  }

Also applies to: 77-81

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/verifier-sessions/verification-sessions.service.ts` around lines 57 - 61, Guard access to requestSigner.x5c and only attach x5c-related data when running in X509 mode: before calling X509Service.parseCertificate and reading requestSigner.x5c[0], verify requestSigner.x5c is an array and has at least one element; only then call X509Service.parseCertificate and set requestSigner.issuer and parsedCertificate-based fields (e.g., clientIdPrefix defaulting to ClientIdPrefix.X509Hash). Conversely, when handling DID signers, do not set options.requestSigner.x5c — ensure any assignment to options.requestSigner.x5c happens only in the branch where x5c exists. Update code paths around parsedCertificate, X509Service.parseCertificate, requestSigner.x5c, dto.requestSigner.clientIdPrefix and options.requestSigner.x5c accordingly.
🟠 Major comments (31)
src/controllers/didcomm/basic-messages/BasicMessageController.ts-11-58 (1)
11-58: ⚠️ Potential issue | 🟠 Major

Controller is fully disabled, effectively removing the API.
Lines 11–58 comment out all route decorators and handlers, so these endpoints are no longer served. If this is intentional, it should be an explicit API deprecation/removal change; otherwise restore the controller before merge.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/didcomm/basic-messages/BasicMessageController.ts` around lines 11 - 58, The BasicMessageController class and its route handlers (BasicMessageController, getBasicMessages, sendMessage) were commented out, removing the DIDComm basic-messages API; restore the controller by uncommenting the class declaration, decorators (`@Tags`, `@Route`, `@Security`, `@injectable`) and the two methods (getBasicMessages and sendMessage) along with their route decorators (`@Get`('/:connectionId') and `@Post`('/:connectionId')), request parameter decorators (`@Request`, `@Path`, `@Body`), and error handling so the endpoints are served again, or if removal was intended, replace the commented code with an explicit deprecation notice and remove route decorators accordingly to make the change intentional and visible in the codebase.

src/cli.ts-5-9 (1)
5-9: ⚠️ Potential issue | 🟠 Major

Load dotenv before any imports from modules that access environment variables.
In ESM, all imports are hoisted to module top and execute before module body code. Even though `dotenv.config()` appears at line 7 in source, the import of `./cliAgent.js` at line 9 is also hoisted and executes first. Since cliAgent.ts accesses `process.env` at module level (lines 151–152 in express app setup), dotenv runs too late.
Use `import 'dotenv/config'` as a side-effect import at the very top to ensure environment variables load before any module imports.
Suggested change
```diff
+import 'dotenv/config'
 import type { AriesRestConfig } from './cliAgent.js'
 import yargs from 'yargs'
 import { hideBin } from 'yargs/helpers'
-import dotenv from 'dotenv'
-
-dotenv.config()
 import { runRestAgent } from './cliAgent.js'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli.ts` around lines 5 - 9, Move dotenv initialization to run before any module imports by replacing the current dotenv.config() usage with a top-level side-effect import; ensure the very first import is the side-effect import of dotenv (so environment variables are set before cliAgent.js executes), since cliAgent.js (and its runRestAgent/exported setup) reads process.env at module scope (e.g., the module-level express app config). Update the file so the side-effect import occurs before importing runRestAgent/cliAgent.js and remove or stop using dotenv.config() later in this module.
src/controllers/did/DidController.ts-494-504 (1)
494-504: ⚠️ Potential issue | 🟠 Major
Guard missing DID record before import.
When `getCreatedDids` returns empty, `didDocument` is undefined and the subsequent import call is unsafe.
Proposed fix
```diff
       const createdDid = await agent.dids.getCreatedDids({
         method: DidMethod.Key,
         did: didOptions.did,
       })
       didDocument = createdDid[0]?.didDocument
+      if (!didDocument) {
+        throw new BadRequestError(`DID not found in wallet: ${didOptions.did}`)
+      }
       await agent.dids.import({
         did,
         overwrite: true,
         didDocument,
       })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/did/DidController.ts` around lines 494 - 504, getCreatedDids can return an empty array so didDocument may be undefined before calling agent.dids.import; update the code around getCreatedDids/createdDid/didDocument to check that createdDid[0] and createdDid[0].didDocument exist (or throw a clear error) and only call agent.dids.import when didDocument is present. Specifically, add a guard after the call to agent.dids.getCreatedDids({ method: DidMethod.Key, did: didOptions.did }) to validate createdDid[0]?.didDocument (or return/raise a descriptive error) before invoking agent.dids.import({ did, overwrite: true, didDocument }).
src/controllers/did/DidController.ts-276-277 (1)
276-277: ⚠️ Potential issue | 🟠 Major
Validate `INDICIO_NYM_URL` before calling Axios.
This path currently assumes env presence and can fail with a low-signal runtime error.
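A minimal fail-fast sketch using Node's WHATWG `URL` constructor; the helper name `requireUrlEnv` is illustrative, only the env-var name comes from the finding:

```typescript
// Illustrative helper: validate an env-provided URL before handing it to an
// HTTP client. Throws with a clear message instead of failing later inside
// axios with a low-signal error.
function requireUrlEnv(name: string, value: string | undefined): string {
  if (!value) {
    throw new Error(`${name} is not set in environment variables`)
  }
  try {
    // The URL constructor throws a TypeError on malformed URLs.
    new URL(value)
  } catch {
    throw new Error(`${name} is not a valid URL: ${value}`)
  }
  return value
}
```

The controller would call something like `requireUrlEnv('INDICIO_NYM_URL', process.env.INDICIO_NYM_URL)` once, before the `axios.post` call.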
Proposed fix
```diff
-  const INDICIO_NYM_URL = process.env.INDICIO_NYM_URL as string
+  if (!process.env.INDICIO_NYM_URL) {
+    throw new InternalServerError('INDICIO_NYM_URL is not set in environment variables')
+  }
+  const INDICIO_NYM_URL = process.env.INDICIO_NYM_URL as string
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/did/DidController.ts` around lines 276 - 277, The code uses INDICIO_NYM_URL directly and calls axios.post(INDICIO_NYM_URL, key) without validating the environment variable; update the DidController logic to check that process.env.INDICIO_NYM_URL is present and is a valid URL before calling axios.post, and handle the missing/invalid case by returning/throwing a clear error or logging and aborting the request; locate the constant assignment INDICIO_NYM_URL and the axios.post call in DidController.ts and add the guard/validation (e.g., ensure non-empty string and URL.parse/RegExp or URL constructor validation) and use the validated value for the axios.post call.
src/controllers/openid4vc/holder/credentialBindingResolver.ts-38-43 (1)
38-43: ⚠️ Potential issue | 🟠 Major
Clamp and validate `batchSize` before array allocation.
Current logic can exceed the stated max-10 rule when `requestBatch` is numeric, and invalid numeric values can break `new Array(batchSize)`.
Proposed fix
```diff
-    const batchSize =
-      requestBatch === true
-        ? Math.min(issuerMaxBatchSize, 10)
-        : typeof requestBatch === 'number'
-          ? Math.min(issuerMaxBatchSize, requestBatch)
-          : 1
+    const requestedBatch =
+      requestBatch === true ? 10 : typeof requestBatch === 'number' ? requestBatch : 1
+    const batchSize = Math.max(1, Math.min(10, issuerMaxBatchSize, requestedBatch))
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/holder/credentialBindingResolver.ts` around lines 38 - 43, The batchSize calculation can produce values >10 and non-integer/invalid values which will break array allocation; update the logic around batchSize (the calculation using requestBatch and issuerMaxBatchSize) to: coerce numeric requestBatch to a safe integer, clamp it to a minimum of 1 and a maximum of Math.min(issuerMaxBatchSize, 10), and fallback to 1 for non-numeric or out-of-range inputs before any call to new Array(batchSize) so allocation always receives a valid positive integer.
src/events/ProofEvents.ts-12-15 (1)
12-15: ⚠️ Potential issue | 🟠 Major
Harden tenant-id extraction from correlation id.
`split('tenant-')[1]` can produce `undefined` for unexpected formats, which can break tenant agent resolution.
Proposed fix
```diff
-    if (event.metadata.contextCorrelationId && event.metadata.contextCorrelationId !== 'default') {
+    if (
+      event.metadata.contextCorrelationId &&
+      event.metadata.contextCorrelationId !== 'default' &&
+      event.metadata.contextCorrelationId.startsWith('tenant-')
+    ) {
       const tenantAgent = await agent.modules.tenants.getTenantAgent({
         tenantId: event.metadata.contextCorrelationId.split('tenant-')[1],
       })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/events/ProofEvents.ts` around lines 12 - 15, The extraction of tenant id using event.metadata.contextCorrelationId.split('tenant-')[1] is fragile; update the logic in the ProofEvents handling code to safely parse and validate the tenant id before calling agent.modules.tenants.getTenantAgent: check that event.metadata.contextCorrelationId exists, that it contains the expected prefix (e.g., startsWith('tenant-') or matches /tenant-([^]+)/), extract the capture group or substring, verify it is not undefined/empty, and if invalid log or bail out gracefully (or throw a clear error) instead of passing undefined to getTenantAgent.
src/events/ProofEvents.ts-2-3 (1)
2-3: ⚠️ Potential issue | 🟠 Major
Split mixed import and use type-only imports.
Line 3 mixes type and runtime imports. `DidCommProofStateChangedEvent` is used only as a type annotation (line 9) and must use the `type` keyword. Separate it from `DidCommProofEventTypes`, which is a runtime value. This violates both `@typescript-eslint/consistent-type-imports` and `import/order` rules.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/events/ProofEvents.ts` around lines 2 - 3, The import mixes a type-only symbol and a runtime value; change the import so DidCommProofStateChangedEvent is imported as a type-only import (e.g., "import type { DidCommProofStateChangedEvent } from '@credo-ts/didcomm'") while keeping DidCommProofEventTypes as a normal (runtime) import, and ensure imports are ordered consistently with other imports (Agent remains as is) so the file uses type-only imports for DidCommProofStateChangedEvent and value import for DidCommProofEventTypes.
src/controllers/openid4vc/types/issuer.types.ts-4-7 (1)
4-7: ⚠️ Potential issue | 🟠 Major
Remove unused imports and fix type-only import syntax.
These imports violate the type-import rule and include three entirely unused symbols. This blocks CI.
Proposed fix
```diff
-import { Kms } from '@credo-ts/core'
-import { OpenId4VciCreateCredentialOfferOptions, OpenId4VciSignCredentials } from '@credo-ts/openid4vc'
-
-import { SignerMethod } from '../../../enums/enum'
+import type { SignerMethod } from '../../../enums/enum'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/types/issuer.types.ts` around lines 4 - 7, Remove the unused Kms import and convert the OpenId4VciCreateCredentialOfferOptions and OpenId4VciSignCredentials imports to type-only imports to satisfy the type-import rule: delete "Kms" from the imports, change the import from '@credo-ts/openid4vc' to "import type { OpenId4VciCreateCredentialOfferOptions, OpenId4VciSignCredentials } from '@credo-ts/openid4vc'", and keep the existing import of SignerMethod as-is (SignerMethod) so only the required symbols remain.
src/authentication.ts-11-13 (1)
11-13: ⚠️ Potential issue | 🟠 Major
Fix import ordering and type-only import to pass CI.
The import block violates `@typescript-eslint/consistent-type-imports` and `import/order` rules. `TenantAgent` is used only as a type (in type annotations and function signatures) and must be a type-only import. Additionally, `randomUUID` from `node:crypto` (a builtin import) is currently positioned after sibling imports but should be grouped with external imports.
Proposed fix
```diff
 import type { RestAgentModules, RestMultiTenantAgentModules } from './cliAgent'
 import type { Request } from 'express'
+import type { TenantAgent } from '@credo-ts/tenants'
 import { Agent, LogLevel } from '@credo-ts/core'
 import jwt, { decode } from 'jsonwebtoken'
+import { randomUUID as uuid } from 'node:crypto'
 import { container } from 'tsyringe'
 import { AgentRole, ErrorMessages, SCOPES } from './enums'
 import { StatusException } from './errors'
 import { TsLogger } from './utils/logger'
-import { TenantAgent } from '@credo-ts/tenants'
-import { randomUUID as uuid } from 'node:crypto'
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/authentication.ts` around lines 11 - 13, Change the import of TenantAgent to a type-only import and reorder the imports so the built-in crypto import is grouped with other builtins/externals: replace "import { TenantAgent } from '@credo-ts/tenants'" with a type-only form ("import type { TenantAgent } ...") and move "import { randomUUID as uuid } from 'node:crypto'" up into the external/builtin import group ahead of package imports so it satisfies import/order and `@typescript-eslint/consistent-type-imports`; ensure all references to TenantAgent and uuid remain unchanged.
src/events/openId4VcIssuanceSessionEvents.ts-17-29 (1)
17-29: ⚠️ Potential issue | 🟠 Major
Decouple webhook failure from websocket emission.
At Line 18, if webhook delivery throws, websocket publishing at Line 22 is skipped. This can silently drop session updates on socket clients.
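The decoupling can be sketched as a small delivery helper; the two delivery functions here are stand-ins, not the module's actual `sendWebhookEvent`/`sendWebSocketEvent` signatures:

```typescript
type ErrorLogger = { error: (message: string, err?: unknown) => void }

// Deliver to both channels. A webhook failure is logged and swallowed so it
// can never block the websocket emission.
async function deliverBoth(
  sendWebhook: () => Promise<void>,
  sendWebSocket: () => Promise<void>,
  logger: ErrorLogger,
): Promise<void> {
  try {
    await sendWebhook()
  } catch (err) {
    logger.error('webhook delivery failed', err)
  }
  await sendWebSocket()
}
```

The event handler would wrap its existing webhook call this way and then always emit to the socket server when one is configured.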
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/events/openId4VcIssuanceSessionEvents.ts` around lines 17 - 29, The webhook send (sendWebhookEvent) can throw and prevent the websocket send (sendWebSocketEvent) from running; wrap the sendWebhookEvent call in a try/catch that logs the error with agent.config.logger.error but does not rethrow, then always proceed to call sendWebSocketEvent when config.socketServer is set; keep the same payload shape (use body and event) and ensure errors from sendWebhookEvent are recorded (include error details) while not affecting websocket emission.
src/events/openId4VcVerificationSessionEvents.ts-15-27 (1)
15-27: ⚠️ Potential issue | 🟠 Major
Don’t let webhook failures suppress websocket delivery.
At Line 16, a thrown webhook error stops execution, so the websocket event at Line 20 won’t fire. This couples two delivery channels and can cause dropped notifications.
Suggested hardening
```diff
   if (config.webhookUrl) {
-    await sendWebhookEvent(config.webhookUrl + '/openid4vc-verification', body, agent.config.logger)
+    try {
+      await sendWebhookEvent(config.webhookUrl + '/openid4vc-verification', body, agent.config.logger)
+    } catch (error) {
+      agent.config.logger.error('[OpenID4VC] Failed to deliver verification webhook', error)
+    }
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/events/openId4VcVerificationSessionEvents.ts` around lines 15 - 27, The webhook call (sendWebhookEvent) can throw and currently prevents the subsequent sendWebSocketEvent from running; wrap the webhook delivery in a try/catch (or fire-and-forget with error handling) so any exception is caught and logged via agent.config.logger without rethrowing, then proceed to call sendWebSocketEvent when config.socketServer is set; reference sendWebhookEvent, sendWebSocketEvent, config.webhookUrl, config.socketServer, agent.config.logger, event, and body to locate and modify the code.
src/utils/NatsAuthenticator.ts-11-29 (1)
11-29: ⚠️ Potential issue | 🟠 Major
Fail closed on invalid `NATS_AUTH_TYPE` values.
Current default branch returns `{}` for unknown values, so a typo can silently drop auth config and attempt unauthenticated connection.
Suggested fix
```diff
 export function buildNatsAuthenticator(nats: NatsConfig): { authenticator?: Authenticator } {
-  const authType = (process.env.NATS_AUTH_TYPE as NatsAuthType) || 'none'
+  const rawAuthType = process.env.NATS_AUTH_TYPE ?? 'none'
+  const allowed: ReadonlySet<string> = new Set(['nkey', 'creds', 'usernamePassword', 'none'])
+  if (!allowed.has(rawAuthType)) {
+    throw new Error(`[NATS] Unsupported NATS_AUTH_TYPE=${rawAuthType}`)
+  }
+  const authType = rawAuthType as NatsAuthType
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/NatsAuthenticator.ts` around lines 11 - 29, The switch on authType currently falls through to the default returning {} and can silently ignore typos; update the logic that evaluates authType (the authType variable and switch in NatsAuthenticator.ts) so unknown values throw a clear Error instead of returning an empty object—keep the explicit 'none' case returning {} but replace the default branch with an Error that includes the invalid authType value (e.g. referencing authType) so callers immediately fail on invalid NATS_AUTH_TYPE; no change to the nkeyAuthenticator, credsAuthenticator, or usernamePasswordAuthenticator calls other than preserving their existing checks.
src/cliAgent.ts-221-223 (1)
221-223: ⚠️ Potential issue | 🟠 Major
Explicit `false` can never disable auto-accept connections here.
Both the module config and the `getModules`/`getWithTenantModules` call sites use `|| true`, so a valid `false` value is overwritten to `true`. That makes the REST config impossible to honor.
🔧 Suggested fix
```diff
-      autoAcceptConnections: autoAcceptConnections || true,
+      autoAcceptConnections: autoAcceptConnections ?? true,
```

```diff
-        autoAcceptConnections || true,
+        autoAcceptConnections ?? true,
```

Also applies to: 466-495
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cliAgent.ts` around lines 221 - 223, The config currently forces autoAcceptConnections true because "autoAcceptConnections || true" overwrites explicit false; change assignments to respect a boolean false by using the nullish coalescing or explicit boolean check: replace "autoAcceptConnections: autoAcceptConnections || true" with "autoAcceptConnections: autoAcceptConnections ?? true" (or "autoAcceptConnections: typeof autoAcceptConnections === 'boolean' ? autoAcceptConnections : true") and apply the same fix to the other occurrences mentioned (the other getModules/getWithTenantModules assignments around the referenced block).
src/events/CredentialEvents.ts-15-20 (1)
15-20: ⚠️ Potential issue | 🟠 Major
Preserve non-prefixed tenant IDs when normalizing `contextCorrelationId`.
`split('tenant-')[1]` returns `undefined` when the metadata already contains a bare tenant id, so the later lookups fall back to the base agent and `outOfBandId`/`credentialData` can be resolved from the wrong wallet. Keep the original value unless the `tenant-` prefix is actually present.
🔧 Suggested fix
```diff
-  const tenantId = (!event.metadata.contextCorrelationId || event.metadata.contextCorrelationId === 'default') ? event.metadata.contextCorrelationId : event.metadata.contextCorrelationId.split('tenant-')[1]
+  const rawContextCorrelationId = event.metadata.contextCorrelationId
+  const tenantId =
+    !rawContextCorrelationId || rawContextCorrelationId === 'default'
+      ? rawContextCorrelationId
+      : rawContextCorrelationId.startsWith('tenant-')
+        ? rawContextCorrelationId.slice('tenant-'.length)
+        : rawContextCorrelationId
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/events/CredentialEvents.ts` around lines 15 - 20, The normalization of contextCorrelationId in CredentialEvents.ts incorrectly turns bare tenant IDs into undefined because split('tenant-')[1] is used unconditionally; update the tenantId computation (the variable you set from event.metadata.contextCorrelationId) to: if contextCorrelationId is falsy or exactly 'default' keep it as-is, otherwise if it startsWith('tenant-') extract the portion after the prefix, else preserve the original value; update any usage relying on tenantId (e.g., the body creation that sets contextCorrelationId) so lookups use the preserved or extracted tenant id.
src/cliAgent.ts-208-212 (1)
208-212: ⚠️ Potential issue | 🟠 Major
The custom document-loader flag is wired backwards.
When `isCustomDocumentLoaderEnabled()` returns `true`, this branch creates the default `W3cCredentialsModule` and skips `CustomDocumentLoader`. The flag currently disables the custom loader instead of enabling it.
🔧 Suggested fix
```diff
-      w3cCredentials: isCustomDocumentLoaderEnabled()
-        ? new W3cCredentialsModule()
-        : new W3cCredentialsModule({
-            documentLoader: CustomDocumentLoader,
-          }),
+      w3cCredentials: isCustomDocumentLoaderEnabled()
+        ? new W3cCredentialsModule({
+            documentLoader: CustomDocumentLoader,
+          })
+        : new W3cCredentialsModule(),
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cliAgent.ts` around lines 208 - 212, The ternary that initializes w3cCredentials is wired backwards: isCustomDocumentLoaderEnabled() currently returns true but constructs the default W3cCredentialsModule, skipping CustomDocumentLoader; fix it by inverting the branches so that when isCustomDocumentLoaderEnabled() is true you instantiate new W3cCredentialsModule({ documentLoader: CustomDocumentLoader }) and when false you instantiate the default new W3cCredentialsModule(); update the expression that references isCustomDocumentLoaderEnabled(), W3cCredentialsModule, and CustomDocumentLoader accordingly.
src/controllers/agent/AgentController.ts-23-29 (1)
23-29: ⚠️ Potential issue | 🟠 Major
Don't repurpose `label` as the tenant correlation id.
Line 25 changes a stable display/config field into tenancy context. For the default agent this can become `'default'` or `undefined`, and existing clients lose the actual configured agent label. Keep `label` sourced from agent config and expose tenant context under a separate field if you need both.
🔧 Suggested fix
```diff
     return {
-      label: request.agent.context.contextCorrelationId,
+      label: request.agent.config.label,
       endpoints: request.agent.modules.didcomm.config.endpoints,
       isInitialized: request.agent.isInitialized,
       publicDid: undefined,
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/agent/AgentController.ts` around lines 23 - 29, The return payload is repurposing label to hold tenancy context; restore label to the agent's configured display label (use the agent config property, e.g. request.agent.config.label or request.agent.label) and stop assigning request.agent.context.contextCorrelationId to label, and if you need the tenant correlation id include it as a separate field such as tenantCorrelationId: request.agent.context.contextCorrelationId while keeping endpoints (request.agent.modules.didcomm.config.endpoints), isInitialized (request.agent.isInitialized) and publicDid unchanged.
src/utils/auth.ts-25-30 (1)
25-30: ⚠️ Potential issue | 🟠 Major
Require an explicit auth mode instead of defaulting to `NoAuth`.
If `TRUST_SERVICE_AUTH_TYPE` is missing, startup silently drops into unauthenticated mode. For a security-sensitive trust-service flow, that is an unsafe misconfiguration path; fail fast unless this fallback is deliberately restricted to local development.
🔧 Suggested fix
```diff
 export function getAuthType(): AuthType {
-  const authType = process.env.TRUST_SERVICE_AUTH_TYPE as AuthType
+  const authType = process.env.TRUST_SERVICE_AUTH_TYPE as AuthType | undefined
   if (!authType) {
-    console.warn('[getAuthType] TRUST_SERVICE_AUTH_TYPE is not set — defaulting to NoAuth')
-    return AuthTypes.NoAuth
+    throw new Error('TRUST_SERVICE_AUTH_TYPE must be set explicitly')
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/auth.ts` around lines 25 - 30, The getAuthType function currently defaults to AuthTypes.NoAuth when TRUST_SERVICE_AUTH_TYPE is unset; change it to fail fast by throwing an Error if the env var is missing, except in an explicit development override (e.g. allow when process.env.NODE_ENV === 'development' or process.env.ALLOW_NO_AUTH === 'true'); update getAuthType to read process.env.TRUST_SERVICE_AUTH_TYPE as AuthType, validate it against known AuthTypes, and throw a descriptive error (including the variable name) when absent or invalid unless the explicit dev override is present.
src/cliAgent.ts-592-599 (1)
592-599: ⚠️ Potential issue | 🟠 Major
Close the HTTP server and gracefully shut down the agent before forcing process exit.
The shutdown handler only stops the purge schedulers. It misses closing the HTTP server (`app.close()`), which cuts off in-flight requests, and omits any wallet/database cleanup or transport teardown. The immediate `process.exit(0)` prevents async cleanup operations from completing, making shutdown nondeterministic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cliAgent.ts` around lines 592 - 599, The shutdown handler currently only calls stopPurgeSchedulers and immediately process.exit(0), which prevents graceful teardown; update the shutdown function (shutdown) to sequentially and asynchronously: log the shutdown start via agent.config.logger, stop purge schedulers (stopPurgeSchedulers), await closing the HTTP server (await app.close() or app.shutdown equivalent), invoke and await any wallet/database/transport cleanup helpers (e.g., wallet.close(), db.close(), transport.stop() or their actual teardown functions), catch and log errors, and only call process.exit(0) after those awaitable cleanups complete (or use a small timeout fallback) so in-flight requests and resources are properly closed.
src/purge/schedulers/CronPurgeScheduler.ts-190-193 (1)
190-193: ⚠️ Potential issue | 🟠 Major
Do not let webhook failures abort the purge batch.
At Line 191, webhook errors bubble out of `deleteAndNotify`, which causes the surrounding scan block to fail and skip remaining records for that type.
Suggested fix
```diff
       if (webhookUrl) {
-        await sendPurgeWebhook(webhookUrl, recordId, recordType, tenantId, status, logger)
+        try {
+          await sendPurgeWebhook(webhookUrl, recordId, recordType, tenantId, status, logger)
+        } catch (err: any) {
+          logger.error('[Purge] Webhook dispatch failed after delete', {
+            recordId,
+            recordType,
+            tenantId,
+            error: err?.message,
+          })
+        }
       }
       return true
```

Also applies to: 97-103
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/purge/schedulers/CronPurgeScheduler.ts` around lines 190 - 193, The webhook call in CronPurgeScheduler (calls to sendPurgeWebhook inside deleteAndNotify) must not allow failures to propagate and abort the purge batch; wrap each await sendPurgeWebhook(webhookUrl, recordId, recordType, tenantId, status, logger) in a try/catch, log the error via the existing logger (including context: recordId, recordType, tenantId, status) and continue without rethrowing so the surrounding scan loop can proceed; apply the same change to both occurrences (the block around lines where deleteAndNotify calls sendPurgeWebhook and the earlier occurrence around lines 97-103) to ensure webhook failures don't skip remaining records.
src/purge/schedulers/CronPurgeScheduler.ts-17-35 (1)
17-35: ⚠️ Potential issue | 🟠 Major
Prevent duplicate cron jobs on repeated starts.
Line 20 overwrites `this.job` without stopping an existing task. Re-initialization can run duplicate scans and duplicate deletions/webhooks.
Suggested fix
```diff
   async start(agent: Agent, config: PurgeConfig, webhookUrl: string | undefined): Promise<void> {
     const { cronConfig } = config
+    if (this.job) {
+      this.job.stop()
+      this.job = null
+      agent.config.logger.warn('[Purge] Existing cron job was running and has been replaced')
+    }
     this.job = cron.schedule(cronConfig.cronSchedule, () => {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/purge/schedulers/CronPurgeScheduler.ts` around lines 17 - 35, The start method in CronPurgeScheduler overwrites this.job on repeated starts and can create duplicate scheduled tasks; before creating a new cron.schedule in start (method CronPurgeScheduler.start) check if this.job exists and gracefully stop/destroy it (e.g. call this.job.stop() or this.job.destroy()), clear any running flags if needed (this.isRunning = false only if appropriate), then assign the new scheduled job and proceed to call runScan as before to avoid duplicate scans/deletions/webhooks.
src/purge/PurgeWorker.ts-30-38 (1)
30-38: ⚠️ Potential issue | 🟠 Major
Keep the worker alive when `consumer.consume()` throws.
If Line 31 throws (e.g., transient broker/network issues), `start()` exits and this worker stops permanently.
Suggested fix
```diff
     while (true) {
-      const messages = await consumer.consume()
-      console.log(`[Purge][Worker] Consuming messages — consumer=${this.consumerName}`)
-      for await (const msg of messages) {
-        await this.processMessage(msg, agent)
-      }
-      console.warn(`[Purge][Worker] Consume loop ended — restarting consumer=${this.consumerName}`)
-      agent.config.logger.warn('[Purge] Consume loop ended — restarting', { consumer: this.consumerName })
+      try {
+        const messages = await consumer.consume()
+        console.log(`[Purge][Worker] Consuming messages — consumer=${this.consumerName}`)
+        for await (const msg of messages) {
+          await this.processMessage(msg, agent)
+        }
+        console.warn(`[Purge][Worker] Consume loop ended — restarting consumer=${this.consumerName}`)
+        agent.config.logger.warn('[Purge] Consume loop ended — restarting', { consumer: this.consumerName })
+      } catch (err: any) {
+        agent.config.logger.error('[Purge] consume() failed; retrying', {
+          consumer: this.consumerName,
+          error: err?.message,
+        })
+      }
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/purge/PurgeWorker.ts` around lines 30 - 38, The consume call can throw and currently will exit the infinite loop; wrap the await this.consumer.consume() (inside the start()/consume loop in PurgeWorker.ts) in a try/catch so exceptions are logged (use agent.config.logger.error or console.error) and do not break the loop, optionally implement a short backoff/retry delay before continuing; keep the inner for-await (for await (const msg of messages) { await this.processMessage(msg, agent) }) unchanged so successful consumes still process messages, and include the consumerName in the error log for context.
src/utils/statusListService.ts-91-93 (1)
91-93: ⚠️ Potential issue | 🟠 Major
Validate status-list size before allocating the backing array.
Line 91 can produce `NaN`/non-positive values (e.g., unset or invalid `STATUS_LIST_DEFAULT_SIZE`), and Line 92 then throws at runtime via `new Array(size)`.
Suggested fix
```diff
-  const size = listSize || Number(process.env.STATUS_LIST_DEFAULT_SIZE);
-  const statusList = new StatusList(new Array(size).fill(0), 1);
+  const envDefaultSize = Number(process.env.STATUS_LIST_DEFAULT_SIZE)
+  const size = listSize ?? envDefaultSize
+  if (!Number.isInteger(size) || size <= 0) {
+    throw new Error(`Invalid status list size: ${size}`)
+  }
+  const statusList = new StatusList(new Array(size).fill(0), 1);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/statusListService.ts` around lines 91 - 93, The allocation uses size = listSize || Number(process.env.STATUS_LIST_DEFAULT_SIZE) which can be NaN, zero, negative, or non-integer and will blow up at new Array(size); validate and normalize the size before allocating: parse and validate process.env.STATUS_LIST_DEFAULT_SIZE (use parseInt/Number and isFinite), ensure listSize (input) is an integer > 0, fall back to a defined safe default constant (e.g., DEFAULT_STATUS_LIST_SIZE) if invalid, and optionally clamp or Math.floor the value to an integer; update the code paths around the variables size, listSize, and STATUS_LIST_DEFAULT_SIZE and then call new StatusList(new Array(validSize).fill(0), 1) with the validated value.
src/controllers/openid4vc/issuance-sessions/issuance-sessions.service.ts-97-105 (1)
97-105: ⚠️ Potential issue | 🟠 Major
Use `BadRequestError` for credential config input failures.
Line 99 and Line 102 throw generic `Error` for client-provided invalid payloads, which will typically surface as 500 instead of 4xx.
Suggested fix
```diff
   private validateCredentialConfig(cred: any, supported: any) {
     if (!supported) {
-      throw new Error(`CredentialSupportedId '${cred.credentialSupportedId}' is not supported by issuer`)
+      throw new BadRequestError(`CredentialSupportedId '${cred.credentialSupportedId}' is not supported by issuer`)
     }
     if (supported.format !== cred.format) {
-      throw new Error(
+      throw new BadRequestError(
         `Format mismatch for '${cred.credentialSupportedId}': expected '${supported.format}', got '${cred.format}'`,
       )
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/issuance-sessions/issuance-sessions.service.ts` around lines 97 - 105, The validateCredentialConfig function currently throws generic Error for client input problems (cred and supported checks); change those throws to BadRequestError so the failures return 4xx, e.g., replace the two Error throws in validateCredentialConfig with BadRequestError containing the same messages and add the necessary import for BadRequestError at the top of the file; keep the existing messages for credentialSupportedId and format mismatch to preserve context.
src/purge/PurgeTypes.ts-55-70 (1)
55-70: ⚠️ Potential issue | 🟠 Major
Only validate TTL env vars for enabled purge modes.
Line 57 and Line 69 parse TTL values for both modes even when one mode is disabled. A bad disabled-mode env value can fail startup unexpectedly.
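The enabled-only validation can be isolated in a small helper; `resolveTtlSeconds` below is an illustrative stand-in for the finding's `parseTtlSeconds`, not the module's actual function:

```typescript
// Only parse and validate a TTL env value when its purge mode is enabled;
// a disabled mode falls back to a default without ever touching its env var,
// so a bad value for a disabled mode cannot fail startup.
function resolveTtlSeconds(enabled: boolean, raw: string | undefined, fallback: number): number {
  if (!enabled) return fallback
  const parsed = Number(raw)
  if (!Number.isInteger(parsed) || parsed <= 0) {
    throw new Error(`Invalid TTL value: ${raw}`)
  }
  return parsed
}
```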
Suggested fix
```diff
 export function buildPurgeConfig(): PurgeConfig | undefined {
   if (process.env.PURGE_ENABLED !== 'true') return undefined
   const natsEnabled = process.env.PURGE_NATS_ENABLED === 'true'
   const cronEnabled = process.env.PURGE_CRON_ENABLED === 'true'
+  const defaultTtlSeconds = 2592000
+  const recordTypes = buildPurgeRecordTypes()
   if (!natsEnabled && !cronEnabled) return undefined
   return {
     natsConfig: {
       enabled: natsEnabled,
-      ttlSeconds: parseTtlSeconds(process.env.PURGE_NATS_TTL_SECONDS, 'PURGE_NATS_TTL_SECONDS'),
+      ttlSeconds: natsEnabled
+        ? parseTtlSeconds(process.env.PURGE_NATS_TTL_SECONDS, 'PURGE_NATS_TTL_SECONDS', defaultTtlSeconds)
+        : defaultTtlSeconds,
       nats: {
         servers: (process.env.NATS_SERVERS || 'nats://localhost:4222').split(',').map((s) => s.trim()).filter(Boolean),
         nkeySeed: process.env.NATS_NKEY_SEED,
         credentialsFile: process.env.NATS_CREDENTIALS_FILE,
         username: process.env.NATS_USER,
         password: process.env.NATS_PASSWORD,
       },
-      recordTypes: buildPurgeRecordTypes(),
+      recordTypes,
     },
     cronConfig: {
       enabled: cronEnabled,
-      ttlSeconds: parseTtlSeconds(process.env.PURGE_CRON_TTL_SECONDS, 'PURGE_CRON_TTL_SECONDS'),
+      ttlSeconds: cronEnabled
+        ? parseTtlSeconds(process.env.PURGE_CRON_TTL_SECONDS, 'PURGE_CRON_TTL_SECONDS', defaultTtlSeconds)
+        : defaultTtlSeconds,
       cronSchedule: process.env.PURGE_CRON_SCHEDULE || '0 * * * *',
-      recordTypes: buildPurgeRecordTypes(),
+      recordTypes,
     },
     webhookEnabled: process.env.PURGE_WEBHOOK_ENABLED !== 'false',
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/purge/PurgeTypes.ts` around lines 55-70: the code is parsing TTL env vars for both purge modes unconditionally (calls to parseTtlSeconds for natsConfig and cronConfig), which can cause startup failures when a mode is disabled; update the construction of natsConfig and cronConfig so that ttlSeconds is only computed when the corresponding flag (natsEnabled for natsConfig, cronEnabled for cronConfig) is true (otherwise set ttlSeconds to undefined or a safe default), i.e., wrap or conditionally call parseTtlSeconds(process.env.PURGE_NATS_TTL_SECONDS, 'PURGE_NATS_TTL_SECONDS') and parseTtlSeconds(process.env.PURGE_CRON_TTL_SECONDS, 'PURGE_CRON_TTL_SECONDS') based on natsEnabled/cronEnabled while leaving other fields like nats, recordTypes (buildPurgeRecordTypes()) and cronSchedule unchanged.

src/purge/schedulers/NatsPurgeScheduler.ts (1)
51-66: ⚠️ Potential issue | 🟠 Major — Replace `console.*` with structured logger

These lines trigger lint errors (`no-console`) and bypass tenant-aware structured logs. Please route all logs through the existing logger used in this class.

Also applies to: 93-93, 128-128, 147-147, 177-177, 201-205
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/purge/schedulers/NatsPurgeScheduler.ts` around lines 51-66: replace all console.* calls in NatsPurgeScheduler with the class's structured logger to fix linting and ensure tenant-aware logs: use agent.config.logger.info/warn/error (or this.logger.* if that is the instance logger used in this class) instead of console.log in the startup sequence around provisionStreams(), provisionConsumers(), startWorkers(), and any other console usages (including the occurrences near the provision/consumer/worker calls and the messages that reference this.recordTypes and this.ttlSeconds). Ensure message text remains the same and include contextual fields where appropriate (e.g., recordTypes, ttlSeconds, natsConfig.nats.servers) so logs remain structured and consistent with existing logging calls like agent.config.logger.info('[Purge] NATS streams ready').

src/controllers/openid4vc/issuers/issuer.service.ts (1)
29-34: ⚠️ Potential issue | 🟠 Major — Fail fast if issuer module is unavailable (avoid silent no-op)
At Line 29, optional chaining can silently skip the metadata update and still return a response. This should throw, consistent with `createIssuerAgent`.

✅ Suggested fail-fast pattern
```diff
 public async updateIssuerMetadata(
   agentReq: Req,
   publicIssuerId: string,
   updateIssuerRecordOptions: any, // TODO: Replace with OpenId4VcUpdateIssuerRecordOptions
 ) {
-  await agentReq.agent.modules.openid4vc.issuer?.updateIssuerMetadata({
+  const issuer = agentReq.agent.modules.openid4vc.issuer
+  if (!issuer) {
+    throw new Error('OID4VC issuer module not initialized')
+  }
+  await issuer.updateIssuerMetadata({
     issuerId: publicIssuerId,
     ...updateIssuerRecordOptions,
   })
   return await this.getIssuer(agentReq, publicIssuerId)
 }
```

🤖 Prompt for AI Agents
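The same fail-fast guard recurs elsewhere in this review (the verifier query endpoint has the identical problem), so a tiny shared helper may be worth it. A hedged sketch — `requireModule` and the module shape are invented for illustration, not existing project code:

```typescript
// Illustrative helper: turn an optional module reference into a hard failure.
function requireModule<T>(mod: T | undefined | null, name: string): T {
  if (mod === undefined || mod === null) {
    throw new Error(`${name} module not initialized`)
  }
  return mod
}

// Usage shape (this object literal stands in for agent.modules.openid4vc.issuer):
const issuer = requireModule({ updateIssuerMetadata: async () => {} }, 'OID4VC issuer')
```

Callers then get one consistent error message for every optional module instead of a scattering of `undefined` returns and silent no-ops.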
Verify each finding against the current code and only fix it if needed. In `@src/utils/helpers.ts` around lines 240-244: the axios.post calls that set tokenResponse (variable tokenResponse) and the second trust-service POST (the other axios.post around lines 295-299) need a request timeout to avoid hanging; introduce a configurable timeout constant (e.g., TRUST_SERVICE_TIMEOUT_MS or REQUEST_TIMEOUT_MS with a sensible default from env) and pass it into the axios config object (merge with existing headers) for both axios.post calls so the requests fail fast on network stalls.

src/utils/oid4vc-agent.ts (1)
240-244: ⚠️ Potential issue | 🟠 Major — Add request timeouts to axios calls.
Both trust-service POST calls can hang indefinitely under network stalls, which can tie up request handlers and degrade availability.
⏱️ Suggested patch
```diff
-  tokenResponse = await axios.post<any>(
+  tokenResponse = await axios.post<any>(
     tokenUrl,
     { clientId, clientSecret },
-    { headers: { 'Content-Type': 'application/json', accept: 'application/json' } },
+    {
+      timeout: 10_000,
+      headers: { 'Content-Type': 'application/json', accept: 'application/json' },
+    },
   )

-  const matchResponse = await axios.post<{ matched: boolean }>(
+  const matchResponse = await axios.post<{ matched: boolean }>(
     matchUrl,
     { x509, ...(tenantId && { tenantId }) },
-    { headers: { 'Content-Type': 'application/json', accept: 'application/json', ...authHeaders } },
+    {
+      timeout: 10_000,
+      headers: { 'Content-Type': 'application/json', accept: 'application/json', ...authHeaders },
+    },
   )
```

Also applies to: 295-299
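If sprinkling `timeout` per call feels repetitive, a generic promise bound works for any HTTP client, not just axios. A sketch under the assumption that callers treat a timeout like any other request error — `withTimeout` is an illustrative name, not an existing utility in this codebase:

```typescript
// Illustrative wrapper: reject any promise that takes longer than `ms` milliseconds.
function withTimeout<T>(p: Promise<T>, ms: number, label = 'request'): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timed out after ${ms}ms`)), ms)
  })
  // Clear the timer whichever branch wins, so it cannot keep the process alive.
  return Promise.race([p, timeout]).finally(() => {
    if (timer !== undefined) clearTimeout(timer)
  })
}
```

Note that `Promise.race` only abandons the slow promise; unlike axios's own `timeout` option it does not cancel the underlying socket, so it is a fallback, not a replacement, where real cancellation is available.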
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/oid4vc-agent.ts` around lines 64-65: the access of didDocument.verificationMethod?.[0].id in src/utils/oid4vc-agent.ts is unsafe when verificationMethod is an empty array; change the assignment for issuerDidVerificationMethod to first verify that didDocument.verificationMethod has at least one element (e.g., check length or that verificationMethod[0] is defined) or use safe optional chaining on the element (verificationMethod?.[0]?.id) before reading .id, ensuring issuerDidVerificationMethod only receives a value when the first verificationMethod entry exists.

src/controllers/openid4vc/verifier-sessions/verification-sessions.service.ts (1)
64-65: ⚠️ Potential issue | 🟠 Major — Fix unsafe DID verification-method access.
Lines 64-65 can throw when `verificationMethod` exists but is empty (`[0]` is `undefined`).

🐛 Suggested patch
```diff
-  if (didDocument.verificationMethod?.[0].id) {
-    issuerDidVerificationMethod = didDocument.verificationMethod?.[0].id
+  const firstVerificationMethodId = didDocument.verificationMethod?.[0]?.id
+  if (firstVerificationMethodId) {
+    issuerDidVerificationMethod = firstVerificationMethodId
   }
```

🤖 Prompt for AI Agents
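The difference between `?.[0].id` and `?.[0]?.id` is easy to miss, so here is a self-contained illustration; the interfaces are minimal stand-ins for the framework's DID document types, not its real definitions:

```typescript
interface VerificationMethod { id: string }
interface DidDocumentLike { verificationMethod?: VerificationMethod[] }

function firstVerificationMethodId(doc: DidDocumentLike): string | undefined {
  // `?.[0]?.id` guards both a missing array and an empty one;
  // `?.[0].id` would throw a TypeError when the array exists but is empty.
  return doc.verificationMethod?.[0]?.id
}
```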
Verify each finding against the current code and only fix it if needed. In `@src/controllers/openid4vc/verifier-sessions/verification-sessions.service.ts` around lines 95-103: the current call to agentReq.agent.modules.openid4vc.verifier?.findVerificationSessionsByQuery uses optional chaining and may silently return undefined; update the containing function to explicitly check that agentReq.agent.modules.openid4vc.verifier exists and throw a clear Error (or return a rejected Promise) if it's missing before calling findVerificationSessionsByQuery, so the endpoint fails fast and matches the behavior of the other service methods that assume the verifier module is present.

src/utils/oid4vc-agent.ts (1)
95-103: ⚠️ Potential issue | 🟠 Major — Fail fast when verifier module is missing in query endpoint.
Line 95 uses optional chaining and can return `undefined` instead of a clear error, unlike other methods in this service.

✅ Suggested patch
```diff
 ) {
-  return await agentReq.agent.modules.openid4vc.verifier?.findVerificationSessionsByQuery({
+  const verifier = agentReq.agent.modules.openid4vc.verifier
+  if (!verifier) {
+    throw new Error('OID4VC verifier module not initialized')
+  }
+  return await verifier.findVerificationSessionsByQuery({
     verifierId: publicVerifierId,
     payloadState,
     state,
     authorizationRequestUri,
     nonce,
     authorizationRequestId,
   })
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/oid4vc-agent.ts` at line 318: the fetch(trustListUrl) call must use an AbortController-based timeout to avoid hanging requests: create an AbortController, pass controller.signal into fetch(trustListUrl, { signal }), set a timer (e.g., via setTimeout) to call controller.abort() after a configurable timeout, and clear the timer when the fetch resolves; catch the abort error and surface a clear error/timeout log before proceeding. Update the code around the existing response handling (the fetch/trustListUrl usage and the variable response) to use this pattern so slow upstreams won't stall processing.

src/utils/oid4vc-agent.ts (1)
318-318: ⚠️ Potential issue | 🟠 Major — Add timeout handling when fetching trust list URL.
Line 318 uses plain `fetch` without timeout/abort, so slow upstreams can stall request processing.

⏱️ Suggested patch
```diff
-  const response = await fetch(trustListUrl)
+  const controller = new AbortController()
+  const timeout = setTimeout(() => controller.abort(), 10_000)
+  const response = await fetch(trustListUrl, { signal: controller.signal })
+  clearTimeout(timeout)
```

🤖 Prompt for AI Agents
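The same pattern generalizes into a small helper, assuming the global `fetch` of Node 18+; recent Node versions also ship `AbortSignal.timeout(ms)`, which collapses this to a one-liner. The helper name is illustrative:

```typescript
// Illustrative helper: abort a fetch that exceeds `ms` milliseconds.
async function fetchWithTimeout(url: string, ms = 10_000) {
  const controller = new AbortController()
  const timer = setTimeout(() => controller.abort(), ms)
  try {
    return await fetch(url, { signal: controller.signal })
  } finally {
    clearTimeout(timer) // runs on success and on error/abort alike
  }
}
```

Putting the `clearTimeout` in a `finally` block matters: the inline patch only clears the timer on the success path, so a rejected fetch would leave a live timer behind.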
Verify each finding against the current code and only fix it if needed. In `@src/utils/oid4vc-agent.ts` around lines 100-104: the issuer field currently hard-codes 'ISSUER_HOST' in the issuer object (the PresentationAuthorization flow where issuer: { method: 'x5c', x5c: trustedCertificates.map(...), issuer: 'ISSUER_HOST' }), which can yield invalid credential metadata; replace that literal with the real issuer host pulled from configuration or environment (e.g., an existing issuerHost/config.issuer variable or a getIssuer() helper) so the issuer value reflects the runtime issuer endpoint instead of the fixed string.

src/utils/oid4vc-agent.ts (1)
100-104: ⚠️ Potential issue | 🟠 Major — Replace hard-coded issuer value in PresentationAuthorization flow.
Line 103 uses `'ISSUER_HOST'`, which can produce invalid issuer metadata in issued credentials.

✅ Suggested patch
```diff
-  issuer: {
+  const leafCert = X509Certificate.fromEncodedCertificate(trustedCertificates[0])
+  issuer: {
     method: 'x5c',
     x5c: trustedCertificates.map((cert) => X509Certificate.fromEncodedCertificate(cert)),
-    issuer: 'ISSUER_HOST',
+    issuer: leafCert.sanUriNames?.[0] ?? leafCert.subjectName,
   },
```

🤖 Prompt for AI Agents
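If deriving the issuer from the certificate is not desired, the prompt's alternative — reading it from configuration — also removes the literal. A hedged sketch; `OID4VC_ISSUER_HOST` is an invented variable name, not one this project necessarily defines:

```typescript
// Illustrative config lookup: fail fast instead of silently issuing bad metadata.
function resolveIssuerHost(env: Record<string, string | undefined>): string {
  const host = env.OID4VC_ISSUER_HOST // hypothetical env var name
  if (!host) throw new Error('OID4VC_ISSUER_HOST is not configured')
  return host
}

// e.g. const issuerHost = resolveIssuerHost(process.env)
```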
Verify each finding against the current code and only fix it if needed. In `@src/utils/oid4vc-agent.ts` around lines 100 - 104, The issuer field currently hard-codes 'ISSUER_HOST' in the issuer object (the PresentationAuthorization flow where issuer: { method: 'x5c', x5c: trustedCertificates.map(...), issuer: 'ISSUER_HOST' }), which can yield invalid credential metadata; replace that literal with the real issuer host pulled from configuration or environment (e.g., an existing issuerHost/config.issuer variable or a getIssuer() helper) so the issuer value reflects the runtime issuer endpoint instead of the fixed string.
```ts
        msg.ack()

        if (this.webhookUrl) {
          await sendPurgeWebhook(this.webhookUrl, recordId, this.recordType, tenantId, PurgeDeletionStatus.DELETED, logger)
        }
      } catch (err: any) {
        if (err instanceof RecordNotFoundError) {
          console.warn(`[Purge][Worker] Record already absent — recordType=${recordType} recordId=${recordId}`)
          logger.warn('[Purge] Record already absent — treating as success', { recordId, recordType })
          msg.ack()

          if (this.webhookUrl) {
            await sendPurgeWebhook(this.webhookUrl, recordId, this.recordType, tenantId, PurgeDeletionStatus.ALREADY_ABSENT, logger)
          }
          return
        }

        console.warn(`[Purge][Worker] Job failed — recordType=${recordType} recordId=${recordId} deliveryCount=${deliveryCount}`, err?.message)
        logger.warn('[Purge] Job failed', { recordId, recordType, deliveryCount, error: err?.message })

        if (deliveryCount >= PURGE_CONSUMER_MAX_DELIVER) {
          console.error(`[Purge][Worker] Job dropped after max retries — recordType=${recordType} recordId=${recordId} tenantId="${tenantId}"`)
          logger.error('[Purge] Job dropped after max retries', { recordId, recordType, tenantId, deliveryCount })
          msg.ack()
        } else {
          console.log(`[Purge][Worker] Nacking job for retry — recordType=${recordType} recordId=${recordId} deliveryCount=${deliveryCount}`)
          msg.nak()
        }
```
Separate webhook errors from ack/retry logic.
After Line 80/89, the message is already ACKed. If webhook delivery throws, execution enters failure handling and may attempt nak() on an ACKed message (or bubble and break processing in the RecordNotFoundError path).
Suggested fix
logger.info('[Purge] Record deleted', { recordId, recordType, tenantId })
msg.ack()
if (this.webhookUrl) {
- await sendPurgeWebhook(this.webhookUrl, recordId, this.recordType, tenantId, PurgeDeletionStatus.DELETED, logger)
+ try {
+ await sendPurgeWebhook(this.webhookUrl, recordId, this.recordType, tenantId, PurgeDeletionStatus.DELETED, logger)
+ } catch (webhookErr: any) {
+ logger.error('[Purge] Webhook failed after successful deletion', {
+ recordId,
+ recordType,
+ tenantId,
+ error: webhookErr?.message,
+ })
+ }
}
+ return
} catch (err: any) {
if (err instanceof RecordNotFoundError) {
console.warn(`[Purge][Worker] Record already absent — recordType=${recordType} recordId=${recordId}`)
logger.warn('[Purge] Record already absent — treating as success', { recordId, recordType })
msg.ack()
if (this.webhookUrl) {
- await sendPurgeWebhook(this.webhookUrl, recordId, this.recordType, tenantId, PurgeDeletionStatus.ALREADY_ABSENT, logger)
+ try {
+ await sendPurgeWebhook(this.webhookUrl, recordId, this.recordType, tenantId, PurgeDeletionStatus.ALREADY_ABSENT, logger)
+ } catch (webhookErr: any) {
+ logger.error('[Purge] Webhook failed after already-absent outcome', {
+ recordId,
+ recordType,
+ tenantId,
+ error: webhookErr?.message,
+ })
+ }
}
return
  }

Also applies to: 91-93
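The underlying rule — once a message is acked, no later failure may re-enter the ack/nak path — can be expressed as a tiny wrapper so the try/catch above does not need to be repeated at every call site. Illustrative names, not the project's API:

```typescript
// Run a post-ack side effect (e.g. a webhook) so its failure is logged but never rethrown.
async function runDetachedSideEffect(
  effect: () => Promise<void>,
  onError: (err: unknown) => void,
): Promise<void> {
  try {
    await effect()
  } catch (err) {
    onError(err) // swallow: the message is already acked, retry logic must not see this
  }
}
```

With this shape, both the DELETED and ALREADY_ABSENT webhook sends become one-liners that cannot disturb JetStream acknowledgement state.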
🧰 Tools
🪛 ESLint
[error] 83-83: Replace this.webhookUrl,·recordId,·this.recordType,·tenantId,·PurgeDeletionStatus.DELETED,·logger with ⏎··········this.webhookUrl,⏎··········recordId,⏎··········this.recordType,⏎··········tenantId,⏎··········PurgeDeletionStatus.DELETED,⏎··········logger,⏎········
(prettier/prettier)
[error] 87-87: Unexpected console statement.
(no-console)
[error] 92-92: Replace this.webhookUrl,·recordId,·this.recordType,·tenantId,·PurgeDeletionStatus.ALREADY_ABSENT,·logger with ⏎············this.webhookUrl,⏎············recordId,⏎············this.recordType,⏎············tenantId,⏎············PurgeDeletionStatus.ALREADY_ABSENT,⏎············logger,⏎··········
(prettier/prettier)
[error] 97-97: Unexpected console statement.
(no-console)
[error] 97-97: Replace ``[Purge][Worker]·Job·failed·—·recordType=${recordType}·recordId=${recordId}·deliveryCount=${deliveryCount},·err?.message with `⏎········`[Purge][Worker]·Job·failed·—·recordType=${recordType}·recordId=${recordId}·deliveryCount=${deliveryCount}`,⏎········err?.message,⏎······`
(prettier/prettier)
[error] 101-101: Unexpected console statement.
(no-console)
[error] 101-101: Replace [Purge][Worker]·Job·dropped·after·max·retries·—·recordType=${recordType}·recordId=${recordId}·tenantId="${tenantId}" with ⏎··········[Purge][Worker]·Job·dropped·after·max·retries·—·recordType=${recordType}·recordId=${recordId}·tenantId="${tenantId}",⏎········
(prettier/prettier)
[error] 105-105: Unexpected console statement.
(no-console)
[error] 105-105: Replace [Purge][Worker]·Nacking·job·for·retry·—·recordType=${recordType}·recordId=${recordId}·deliveryCount=${deliveryCount} with ⏎··········[Purge][Worker]·Nacking·job·for·retry·—·recordType=${recordType}·recordId=${recordId}·deliveryCount=${deliveryCount},⏎········
(prettier/prettier)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/purge/PurgeWorker.ts` around lines 80 - 107, The webhook send
(sendPurgeWebhook) is being awaited after msg.ack(), so if it throws it falls
into the outer catch and can cause msg.nak()/other retry/ack logic to run on an
already-acked message; wrap each sendPurgeWebhook call (both the success path
after msg.ack() and the RecordNotFoundError ALREADY_ABSENT path) in its own
try/catch and handle/log webhook errors locally without rethrowing so the outer
catch never reacts to webhook failures, ensuring msg.ack(), msg.nak(),
deliveryCount checks and RecordNotFoundError handling remain unaffected.
```ts
console.log(`[${label}] using cached token for clientId:`, clientId)
return cachedToken
```
Remove secret/PII-bearing console logs in trust-service flows.
Line 262 and Line 266 can log raw token payloads (including access tokens). Line 356 logs full certificate chains and Line 232 logs identifiers. These are high-risk leaks in production logs.
🔐 Suggested patch
- console.log(`[${label}] using cached token for clientId:`, clientId)
+ // Avoid logging token identifiers in plaintext
- console.log(`[${label}] fetching token from:`, tokenUrl)
+ // Avoid verbose auth endpoint logs in production
- console.error(`[${label}] token request failed:`, {
+ console.error(`[${label}] token request failed`, {
url: tokenUrl,
status: error.response?.status,
statusText: error.response?.statusText,
- data: error.response?.data,
message: error.message,
})
- console.log(`[${label}] token response status:`, tokenResponse.status)
- console.log(`[${label}] token response data:`, JSON.stringify(tokenResponse.data, null, 2))
+ // Do not log token response body (may contain secrets)
- console.error(`[${label}] unexpected token response shape:`, JSON.stringify(tokenResponse.data, null, 2))
+ console.error(`[${label}] unexpected token response shape`)
- console.log(
- `[${label}] token cached for clientId:`,
- clientId,
- '| expires at:',
- new Date(expiresAt * 1000).toISOString(),
- )
+ // Optional: log only non-sensitive cache state
- console.error(`[${label}] match request failed:`, {
+ console.error(`[${label}] match request failed`, {
url: matchUrl,
status: error.response?.status,
statusText: error.response?.statusText,
- data: error.response?.data,
message: error.message,
})
- console.log(`[${label}] agent type: ${isDedicated ? 'dedicated' : 'shared'}, certificates:`, x509Certificates)
+    // Avoid logging full cert chain and tenant-related values

Also applies to: 247-253, 261-267, 272-277, 313-321, 356-357
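The redaction helper the prompt mentions could look like this — `maskLastChars` is a hypothetical name, shown only to make the redaction idea concrete:

```typescript
// Illustrative redaction: keep at most the last `visible` characters of a secret.
function maskLastChars(value: string, visible = 4): string {
  if (value.length <= visible) return '*'.repeat(value.length)
  return '*'.repeat(value.length - visible) + value.slice(-visible)
}

// e.g. log maskLastChars(clientId) instead of the raw identifier,
// and never log token or certificate bodies at all.
```

A masked suffix is usually enough to correlate log lines with a known client while keeping the identifier out of plaintext logs.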
🧰 Tools
🪛 ESLint
[error] 232-232: Unexpected console statement.
(no-console)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/helpers.ts` around lines 232 - 233, Remove any console.log that
emits raw identifiers, tokens, or certificate bodies (e.g., the console.log
using label, clientId and printing cachedToken) and replace them with
non-sensitive logging: either log only existence/status (e.g., "cached token
found" / "no cached token") or log redacted identifiers (mask clientId and never
print token/certificate contents). Update places that currently print token
payloads or cert chains to use a redaction helper (e.g., redactSecret(value) or
maskLastChars) and the app's structured logger (processLogger or logger.debug)
instead of console.log; target the statements referencing label, clientId,
cachedToken and any console.log statements in this module that output
tokens/certs. Ensure no access_token, full JWT payload, or certificate chain is
written to logs.



