feat: add Mistral provider adapter for TanStack AI #462
cstrnt wants to merge 8 commits into TanStack:main
Conversation
- Implemented MistralTextAdapter and related functions for chat completions.
- Defined Mistral-specific message types and metadata structures.
- Created model metadata for various Mistral models with pricing and capabilities.
- Added text provider options and validation for Mistral text models.
- Developed utility functions for Mistral client configuration and schema conversion.
- Implemented function tool conversion for Mistral-specific formats.
- Added tests for Mistral adapters, including event emissions and error handling.
- Configured TypeScript and Vite for the new package.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough: Adds a new package
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Adapter as MistralTextAdapter
    participant Fetch as Fetch/SSE
    participant SDK as @mistralai/mistralai
    Client->>Adapter: generate() / chatStream(messages, options)
    Adapter->>SDK: prepare request payload / map messages & tools
    Adapter->>Fetch: POST /v1/chat/completions?stream=true (snake_case payload)
    Fetch-->>Adapter: SSE chunks (data: ...), [DONE]
    Adapter-->>Client: yield StreamChunk events (RUN_STARTED → TEXT_MESSAGE_START → TEXT_MESSAGE_CONTENT* → TEXT_MESSAGE_END, TOOL_CALL_START/ARGS/END, RUN_FINISHED/RUN_ERROR)
    Adapter->>SDK: (non-stream) call parse for structuredOutput → parse JSON → transform nulls→undefined
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 14
🧹 Nitpick comments (10)
packages/typescript/ai-mistral/src/utils/client.ts (1)
17-24: Treat `timeoutMs: 0` explicitly and forward all relevant SDK options.

`timeoutMs ? { timeoutMs }` drops the value when it's `0`. Although `0` is rarely a useful timeout, the check is truthiness-based and should be an explicit `!== undefined` to preserve user intent. The same nit applies to `serverURL` if callers pass an empty string (though that's less meaningful).

Proposed fix:

```diff
- const { apiKey, serverURL, timeoutMs } = config
- return new Mistral({
-   apiKey,
-   ...(serverURL ? { serverURL } : {}),
-   ...(timeoutMs ? { timeoutMs } : {}),
- })
+ const { apiKey, serverURL, timeoutMs } = config
+ return new Mistral({
+   apiKey,
+   ...(serverURL !== undefined ? { serverURL } : {}),
+   ...(timeoutMs !== undefined ? { timeoutMs } : {}),
+ })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/utils/client.ts` around lines 17-24: The createMistralClient function currently uses truthy checks which drop intentional values like timeoutMs: 0 and possibly empty serverURL; update createMistralClient to explicitly check for undefined (e.g., timeoutMs !== undefined and serverURL !== undefined) when spreading options so zero or empty-string values are preserved, and also ensure any other SDK options present on MistralClientConfig are forwarded into the new Mistral({...}) call (refer to createMistralClient and the Mistral constructor to add those properties).

packages/typescript/ai-mistral/src/utils/schema-converter.ts (1)
8-29: `transformNullsToUndefined` will mangle `Date`, `Map`, `Set`, class instances, etc.

Because `typeof` returns `'object'` for those, they'll be rebuilt as plain `Record<string, unknown>` via `Object.entries`. This is likely fine for JSON-parsed Mistral responses (which won't contain such values), but worth a brief code comment or a plain-object guard (`Object.getPrototypeOf(obj) === Object.prototype`) to avoid surprises if this util gets reused.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts` around lines 8-29: transformNullsToUndefined currently treats any typeof === 'object' as a plain object and will rebuild Dates, Maps, Sets and class instances via Object.entries; add a plain-object guard (e.g. check Object.getPrototypeOf(obj) === Object.prototype or an isPlainObject helper) before iterating entries and only recurse/construct a Record for true plain objects, otherwise return the original object unchanged, and add a short comment explaining the guard to prevent mangling Date/Map/Set/class instances; reference the transformNullsToUndefined function and the block that uses Object.entries(obj as Record<string, unknown>) for where to apply the change.

packages/typescript/ai-mistral/src/model-meta.ts (2)
33-207: Minor: consider a single data-driven table instead of per-model constants.

Eleven nearly-identical `const X = { ... } as const satisfies ModelMeta<…>` blocks, plus three parallel lists/maps (`MISTRAL_CHAT_MODELS`, `MistralModelInputModalitiesByName`, `MistralChatModelProviderOptionsByName`), make adding/removing a model a multi-place edit that's easy to get out of sync. Consider defining a single `MISTRAL_MODELS = { 'mistral-large-latest': {...}, ... } as const satisfies Record<string, ModelMeta<MistralTextProviderOptions>>` and deriving the union / modality map from it. Not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 33 - 207, Replace the many per-model constants (e.g., MISTRAL_LARGE_LATEST, MISTRAL_MEDIUM_LATEST, ... OPEN_MISTRAL_NEMO) and the parallel collections (MISTRAL_CHAT_MODELS, MistralModelInputModalitiesByName, MistralChatModelProviderOptionsByName) with a single data-driven map like MISTRAL_MODELS typed as const satisfies Record<string, ModelMeta<MistralTextProviderOptions>>; then derive the chat model list and the modality/provider-option maps from that single MISTRAL_MODELS object so adding/removing a model only requires updating one place and keeps types (and unions) inferred from the map.
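The single-map shape this comment suggests could look like the sketch below. `ModelMetaLite` is a minimal stand-in for the package's real `ModelMeta` type, and the context-window values for the non-large models are placeholders, not confirmed Mistral specs (only the 256k figure for `mistral-large-latest` comes from the review):

```typescript
// Minimal stand-in for the package's ModelMeta type (assumed shape).
interface ModelMetaLite {
  name: string
  context_window: number
}

// One source of truth: the union type, the model list, and any per-model
// maps are all derived from this object instead of being maintained by hand.
const MISTRAL_MODELS = {
  'mistral-large-latest': { name: 'mistral-large-latest', context_window: 256_000 },
  // Context windows below are illustrative placeholders.
  'mistral-medium-latest': { name: 'mistral-medium-latest', context_window: 128_000 },
  'open-mistral-nemo': { name: 'open-mistral-nemo', context_window: 128_000 },
} as const satisfies Record<string, ModelMetaLite>

// Derived union and list: adding or removing a model is a one-place edit.
type MistralChatModel = keyof typeof MISTRAL_MODELS
const MISTRAL_CHAT_MODELS = Object.keys(MISTRAL_MODELS) as Array<MistralChatModel>
```

The `satisfies` clause keeps literal key inference (for the derived union) while still checking every entry against the meta shape.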
251-261: Per-model provider options don't actually differ.
`MistralChatModelProviderOptionsByName` maps every model to the same `MistralTextProviderOptions`, so `ResolveProviderOptions<TModel>` is effectively a constant. This means the model-specific provider-options machinery adds indirection without providing any per-model type safety. If Mistral truly has homogeneous request params today, consider collapsing `ResolveProviderOptions` down to just `MistralTextProviderOptions` (and dropping the map) — you can reintroduce per-model specialization later if a model gains unique options.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 251-261: The current MistralChatModelProviderOptionsByName maps every entry of MISTRAL_CHAT_MODELS to the same MistralTextProviderOptions, making ResolveProviderOptions<TModel> redundant; replace the indirection by changing ResolveProviderOptions to simply be MistralTextProviderOptions (and remove or deprecate MistralChatModelProviderOptionsByName), i.e., eliminate the map and export ResolveProviderOptions = MistralTextProviderOptions so the code uses a single concrete provider options type while preserving the ability to reintroduce per-model mappings later if needed (refer to the symbols MistralChatModelProviderOptionsByName, ResolveProviderOptions, MistralTextProviderOptions, and MISTRAL_CHAT_MODELS to locate and update the types).

packages/typescript/ai-mistral/src/text/text-provider-options.ts (1)
110-114: No-op validator.

`validateTextProviderOptions` is a stub, with a comment acknowledging that Mistral handles validation server-side. That's fine, but consider at least validating obvious client-catchable issues to fail fast (e.g., `n && n < 1`, `temperature` range, `messages.length > 0`) — these avoid a round-trip to the API for trivial mistakes. Deferring entirely to Mistral means users get slower, less actionable errors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/text/text-provider-options.ts` around lines 110-114: The validator validateTextProviderOptions currently does nothing; add fast client-side checks inside that function (referencing validateTextProviderOptions and type InternalTextProviderOptions) to throw clear errors for obvious misconfigurations: ensure numeric counts like n are integers >= 1, temperature is within the allowed range (e.g. 0–1 or your provider's accepted bounds), messages (or prompt) is present and messages.length > 0, and any other obvious bounds (e.g. max_tokens/top_p if present) are within valid ranges; throw descriptive Error instances so callers fail fast instead of waiting for the Mistral API round-trip.

packages/typescript/ai-mistral/src/tools/function-tool.ts (1)
27-42: Remove the redundant `additionalProperties: false` assignment at line 32.

`makeMistralStructuredOutputCompatible` already sets `additionalProperties = false` inside the `if (result.type === 'object')` block (schema-converter.ts line 91). The assignment here is redundant for objects and incorrectly applied to non-object types, where it has no semantic meaning.

The type compatibility is fine — `FunctionParameters` is `{ [key: string]: unknown }`, which accepts the `Record<string, any>` return type of `makeMistralStructuredOutputCompatible` without requiring a cast.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/tools/function-tool.ts` around lines 27-42: Remove the redundant assignment to jsonSchema.additionalProperties = false in the function-tool return path: the helper makeMistralStructuredOutputCompatible already sets additionalProperties = false for object schemas (see makeMistralStructuredOutputCompatible), and setting it here can incorrectly apply to non-object types; simply delete the line that assigns additionalProperties and leave the returned object (type: 'function', function: { name: tool.name, description: tool.description, parameters: jsonSchema, strict: true }) as-is (it remains compatible with FunctionTool/FunctionParameters).

packages/typescript/ai-mistral/tests/mistral-adapter.test.ts (1)
378-406: Chunk shape is correct, but test setup is inconsistent.

The pre-wrapped `{ data: {...} }` shape at lines 378-406 is correct — the Mistral SDK wraps stream events with a `.data` property. However, this test manually wraps chunks while most others (lines 119, 222, 428) pass raw chunks to `setupMockStream`, which does the wrapping. Both approaches work and produce the correct shape, but for consistency, use `setupMockStream` here as well instead of manually creating `errorIterable` with pre-wrapped chunks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts` around lines 378-406: The test manually constructs an async iterable named errorIterable with pre-wrapped streamChunks, which is inconsistent with other tests; replace the manual async iterable with a call to setupMockStream(streamChunks) so the helper performs the required .data wrapping and error behavior, updating references to errorIterable and reusing the existing setupMockStream helper used elsewhere in this test suite.

packages/typescript/ai-mistral/src/adapters/text.ts (2)
93-101: Redundant `kind` override and dropped config.

`readonly kind = 'text' as const` is already declared on `BaseTextAdapter` (see packages/typescript/ai-adapters/.../adapter.ts line 116 in the provided snippet), so the override at Line 93 is dead code. More importantly, `super({}, model)` at Line 99 discards the caller's `config` — any future base-class behavior that depends on `TextAdapterConfig` (timeouts, headers, etc.) will silently ignore user configuration.

```diff
- readonly kind = 'text' as const
  readonly name = 'mistral' as const
  private client: Mistral

  constructor(config: MistralTextConfig, model: TModel) {
-   super({}, model)
+   super(config, model)
    this.client = createMistralClient(config)
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 93 - 101, The constructor in MistralTextAdapter wrongly overrides an already-declared kind and discards the passed config by calling super({}, model); update the class to remove the redundant readonly kind declaration (since BaseTextAdapter provides it) and pass the incoming config into the base constructor (call super(config, model)) so TextAdapterConfig (timeouts/headers/etc.) is preserved; keep the client initialization via createMistralClient(config) and ensure the private client: Mistral and readonly name = 'mistral' remain unchanged.
282-338: Tool-call args are serialized twice per delta.
`JSON.stringify(toolCallDelta.function.arguments)` runs at Line 308 to accumulate, and again at Line 328 to emit `TOOL_CALL_ARGS`. For object-valued delta args this is wasted work on the hot streaming path. (The first tool-call chunk is handled correctly: `started` is set at Line 313 before the `toolCall.started` check at Line 324 is evaluated, so the first chunk's args do get emitted.) Still, the double stringify is worth consolidating.

♻️ Compute `argsDelta` once per delta

```diff
- if (toolCallDelta.function.arguments !== undefined) {
-   const argsDelta =
-     typeof toolCallDelta.function.arguments === 'string'
-       ? toolCallDelta.function.arguments
-       : JSON.stringify(toolCallDelta.function.arguments)
-   toolCall.arguments += argsDelta
- }
+ const argsDelta =
+   toolCallDelta.function.arguments === undefined
+     ? undefined
+     : typeof toolCallDelta.function.arguments === 'string'
+       ? toolCallDelta.function.arguments
+       : JSON.stringify(toolCallDelta.function.arguments)
+ if (argsDelta !== undefined) {
+   toolCall.arguments += argsDelta
+ }

  if (toolCall.id && toolCall.name && !toolCall.started) {
    toolCall.started = true
    yield { /* TOOL_CALL_START */ ... }
  }

- if (toolCallDelta.function.arguments !== undefined && toolCall.started) {
-   const argsDelta =
-     typeof toolCallDelta.function.arguments === 'string'
-       ? toolCallDelta.function.arguments
-       : JSON.stringify(toolCallDelta.function.arguments)
+ if (argsDelta !== undefined && toolCall.started) {
    yield {
      type: 'TOOL_CALL_ARGS',
      ...
      delta: argsDelta,
    }
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 282-338: The loop over deltaToolCalls double-serializes object args; compute the argument delta once per iteration and reuse it for both accumulating into toolCallsInProgress and for emitting the TOOL_CALL_ARGS event. Inside the for-loop handling toolCallDelta (referencing toolCallDelta, toolCallsInProgress, toolCall, TOOL_CALL_START and TOOL_CALL_ARGS), when toolCallDelta.function.arguments !== undefined, create a single const argsDelta = typeof ... ? ... : JSON.stringify(...), use argsDelta to append to toolCall.arguments, and use the same argsDelta in the yield for the TOOL_CALL_ARGS event instead of re-running JSON.stringify.

packages/typescript/ai-mistral/src/message-types.ts (1)
141-149: `schemaDefinition` should be required for `ResponseFormatJsonSchema`.

The adapter's `structuredOutput` always sets `schemaDefinition` and `strict` (text.ts Lines 173-176), and a JSON-schema response format without a schema is meaningless. Making `schemaDefinition` required reflects actual usage and catches call sites that forget it.

```diff
  export interface ResponseFormatJsonSchema {
    type: 'json_schema'
    jsonSchema: {
      name: string
      description?: string
-     schemaDefinition?: { [key: string]: unknown }
+     schemaDefinition: { [key: string]: unknown }
      strict?: boolean
    }
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/message-types.ts` around lines 141 - 149, The ResponseFormatJsonSchema interface defines schemaDefinition as optional but the adapter (structuredOutput in text.ts) always provides schemaDefinition and strict; make schemaDefinition required to match actual usage by changing ResponseFormatJsonSchema so jsonSchema.schemaDefinition is non-optional, update any types/usages that expect it optional, and run typechecks to fix call sites that now must pass schemaDefinition explicitly (look for ResponseFormatJsonSchema, structuredOutput, and any callers constructing json_schema responses).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-mistral/package.json`:
- Around line 15-20: Add explicit /adapters subpath exports so consumers can
import tree-shakeable adapters instead of the whole package; update the
"exports" object to include entries like "./adapters/text" that map "types" to
"./dist/esm/adapters/text.d.ts" and "import" to "./dist/esm/adapters/text.js"
(and add any other planned adapter entrypoints similarly), ensuring the entry
names match the built files under dist/esm and follow the existing "./": {
"types", "import" } shape used for ".".
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 146-150: Remove the leftover console.* debug logging in the
adapter (e.g., the console.error calls inside the chatStream error handler and
the other similar spots around lines shown); instead rely on emitting the
existing RUN_ERROR event with the error payload (error.message/code) so errors
are propagated without writing to stdout/stderr. Locate the error handlers in
the chatStream function and the other two instances and delete the
console.error/console.log statements, ensuring only the RUN_ERROR emission (or
existing telemetry call) remains.
- Around line 116-151: The catch in chatStream currently yields a RUN_ERROR and
then swallows the exception; update chatStream (the try/catch around
this.client.chat.stream(...) and the catch block) to rethrow the original error
after emitting RUN_ERROR so callers receive a rejection; specifically keep the
hasEmittedRunStarted logic and the RUN_STARTED/RUN_ERROR yields as-is, then add
a `throw err` (rethrow the same Error & { code?: string } captured as `err`) at
the end of that catch block so consumers iterating chatStream observe the
failure; alternatively, if you prefer a single error surface, remove the
internal try/catch inside processMistralStreamChunks so errors bubble to
chatStream instead (choose one approach and implement consistently).
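The emit-then-rethrow option can be sketched with a synchronous generator standing in for the real async `chatStream` (chunk and error shapes here are simplified assumptions):

```typescript
type StreamChunk = { type: 'RUN_STARTED' | 'RUN_ERROR'; error?: string }

// Stand-in for chatStream: emit RUN_ERROR for consumers watching the
// chunk stream, then rethrow so iterating callers also observe the failure.
function* chatStreamSketch(source: Iterable<string>): Generator<StreamChunk> {
  yield { type: 'RUN_STARTED' }
  try {
    for (const _chunk of source) {
      // ...map provider chunks to StreamChunk events...
    }
  } catch (err) {
    yield { type: 'RUN_ERROR', error: (err as Error).message }
    throw err // callers iterating the stream now see the rejection too
  }
}
```

A consumer's `for...of` (or `for await...of` in the real async version) receives the `RUN_ERROR` chunk first, then the loop itself throws on the next pull.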
- Around line 340-403: The adapter can misreport computedFinishReason as
'tool_calls' based solely on toolCallsInProgress.size; add a boolean flag (e.g.,
hasEmittedToolCall) initialized false and set it true whenever you yield a
TOOL_CALL_END (inside the loop that inspects toolCallsInProgress), then compute
computedFinishReason using (choice.finishReason === 'tool_calls' ||
hasEmittedToolCall) instead of checking toolCallsInProgress.size > 0 so that
RUN_FINISHED only reports 'tool_calls' when a TOOL_CALL_END was actually
emitted.
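The flag-based computation described above can be sketched as follows (event names from the review; the surrounding stream loop is reduced to a plain function over already-collected events):

```typescript
type StreamEvent = { type: string }

// Report 'tool_calls' only when a TOOL_CALL_END was actually emitted,
// or when the provider itself says so; never from in-progress state alone.
function computeFinishReason(
  events: Array<StreamEvent>,
  providerFinishReason: string | null,
): string {
  let hasEmittedToolCall = false
  for (const event of events) {
    if (event.type === 'TOOL_CALL_END') hasEmittedToolCall = true
  }
  return providerFinishReason === 'tool_calls' || hasEmittedToolCall
    ? 'tool_calls'
    : providerFinishReason ?? 'stop'
}
```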
- Around line 440-489: mapTextOptionsToMistral currently validates modelOptions
via validateTextProviderOptions but never includes them in the returned payload,
dropping all Mistral-specific fields; modify mapTextOptionsToMistral to merge
the validated modelOptions into the returned object (after converting camelCase
keys to the snake_case names expected by the Mistral SDK) so fields like
response_format, frequency_penalty, presence_penalty, stop, random_seed,
safe_prompt, tool_choice, parallel_tool_calls, n, prediction, top_p and
max_tokens are forwarded; keep existing behavior for tools
(convertToolsToProviderFormat) and messages, and ensure the merged object still
matches the expected return shape of mapTextOptionsToMistral and is used by
chatStream/structuredOutput callers.
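The key-mapping step can be sketched like this; `withProviderOptions` and `toSnakeCase` are illustrative helper names (field names come from the review, and a real mapping may need per-field exceptions rather than a blanket rename):

```typescript
// Convert a camelCase key to snake_case, e.g. frequencyPenalty -> frequency_penalty.
function toSnakeCase(key: string): string {
  return key.replace(/[A-Z]/g, (ch) => `_${ch.toLowerCase()}`)
}

// Merge validated model options into the request payload under snake_case
// names, dropping undefined values so they are not sent to the API.
function withProviderOptions(
  payload: Record<string, unknown>,
  modelOptions: Record<string, unknown>,
): Record<string, unknown> {
  const mapped: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(modelOptions)) {
    if (value !== undefined) mapped[toSnakeCase(key)] = value
  }
  return { ...payload, ...mapped }
}
```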
- Around line 117-119: Replace the unsafe "as any" casts on the request
parameter objects passed to this.client.chat.stream and
this.client.chat.complete with proper inferred parameter types so SDK signature
changes surface at compile time; specifically, type the stream request as
Parameters<typeof this.client.chat.stream>[0] and the complete request as
Parameters<typeof this.client.chat.complete>[0] (remove the casts at the stream
and complete call sites), while still using your custom message-types where
appropriate and mapping them into these strongly-typed parameter objects.
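The `Parameters<...>[0]` pattern can be illustrated with a stand-in client (in the real adapter the type comes from the Mistral SDK's `chat.stream` / `chat.complete` methods):

```typescript
// Stand-in with the same shape as this.client.chat.stream.
const client = {
  chat: {
    stream(request: { model: string; messages: Array<{ role: string; content: string }> }) {
      return request
    },
  },
}

// Infer the request type from the method itself: if the SDK's signature
// changes, this alias changes with it and mismatches surface at compile time.
type StreamRequest = Parameters<typeof client.chat.stream>[0]

const request: StreamRequest = {
  model: 'mistral-large-latest',
  messages: [{ role: 'user', content: 'hi' }],
}
```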
In `@packages/typescript/ai-mistral/src/message-types.ts`:
- Around line 151-183: Replace the empty interface declarations that currently
make `{}` overly permissive — specifically MistralTextMetadata,
MistralAudioMetadata, MistralVideoMetadata, and MistralDocumentMetadata — with a
truly empty-object type such as Record<string, never> (or add a private
placeholder field) so consumers of MistralMessageMetadataByModality get proper
type safety; update the four interface declarations to use Record<string, never>
(or include a clearly named placeholder property) to prevent values like 42 or
string from type-checking as valid metadata.
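A small sketch of why `Record<string, never>` is stricter than an empty interface (`LooseMetadata` and `MistralTextMetadataSketch` are illustrative names):

```typescript
// An empty interface is structurally satisfied by almost any value:
interface LooseMetadata {}
const oops: LooseMetadata = 42 // compiles: no type safety at all

// Record<string, never> only accepts an object with no populated keys:
type MistralTextMetadataSketch = Record<string, never>
const ok: MistralTextMetadataSketch = {}
// const bad: MistralTextMetadataSketch = { a: 1 } // would not compile
```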
In `@packages/typescript/ai-mistral/src/model-meta.ts`:
- Around line 33-47: The MISTRAL_LARGE_LATEST model meta has outdated context
window and pricing; update the constant MISTRAL_LARGE_LATEST (the object
satisfying ModelMeta<MistralTextProviderOptions>) so context_window is 256_000
and set pricing.input.normal to 0.5 and pricing.output.normal to 1.5 to match
current Mistral specs; keep the other fields (name, max_completion_tokens,
supports) unchanged and ensure the object still uses "as const satisfies
ModelMeta<MistralTextProviderOptions>".
In `@packages/typescript/ai-mistral/src/tools/function-tool.ts`:
- Around line 17-32: The code mutates caller-provided tool.inputSchema via
inputSchema.properties = {}, which can leak state across uses; instead, create a
non-mutating copy before modifying: build a new schema object (e.g., copy
inputSchema and set properties to inputSchema.properties ?? {}) and pass that
copy into makeMistralStructuredOutputCompatible, then set
jsonSchema.additionalProperties = false; reference symbols: tool.inputSchema,
inputSchema, makeMistralStructuredOutputCompatible, jsonSchema.
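The non-mutating copy can be sketched as below; `ToolSchema` and `prepareSchema` are simplified stand-ins for the real tool schema types in the package:

```typescript
interface ToolSchema {
  type: string
  properties?: Record<string, unknown>
}

// Copy before defaulting, so the caller's tool.inputSchema is never mutated
// and no state leaks across repeated conversions of the same tool.
function prepareSchema(inputSchema: ToolSchema): ToolSchema {
  return {
    ...inputSchema,
    properties: inputSchema.properties ?? {},
  }
}
```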
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 51-53: The generateId function produces a short, variable-length,
collision-prone suffix by using Math.random().toString(36).substring(7); update
generateId to produce a stable, high-entropy ID: either replace the random tail
with crypto.randomUUID() (available in Node 14.17+/browsers) and format as
`${prefix}-${crypto.randomUUID()}`, or if you must keep Math.random(), use a
fixed-length slice such as Math.random().toString(36).substring(2,10) (or
repeat/concat to reach desired length) so the suffix is deterministic in length;
locate and update the generateId function in client.ts accordingly.
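The `crypto.randomUUID()` variant could look like this (the `prefix` parameter shape is assumed from the review's description of `generateId`):

```typescript
import { randomUUID } from 'node:crypto'

// Stable-length, high-entropy IDs: "<prefix>-<uuid v4>".
function generateId(prefix: string): string {
  return `${prefix}-${randomUUID()}`
}
```

Unlike `Math.random().toString(36).substring(7)`, the suffix here is always 36 characters and collision-resistant.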
In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts`:
- Around line 53-83: The loop in makeMistralStructuredOutputCompatible recurses
into object/array properties but skips adding 'null' for optional nested fields;
update the branches handling prop.type === 'object' and prop.type === 'array'
(and their assignments to properties[propName]) so that after you recurse you
also apply the same nullability logic used in the else branch when wasOptional
is true: if the resulting property has a single string type, convert it to a
union [type, 'null']; if it already has an array type and doesn't include
'null', append 'null'. Ensure this is applied for the object branch (after
calling makeMistralStructuredOutputCompatible) and the array branch (after
wrapping items) while preserving existing type arrays.
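The nullability step described above, isolated over a minimal JSON-Schema `type` shape (`addNull` is a hypothetical helper, not a function from the package):

```typescript
type JsonSchemaType = string | Array<string>

// For an optional nested field: a single type becomes [type, 'null'];
// an existing type array gains 'null' only if it is not already present.
function addNull(type: JsonSchemaType): Array<string> {
  if (typeof type === 'string') return [type, 'null']
  return type.includes('null') ? type.slice() : [...type, 'null']
}
```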
In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts`:
- Around line 371-374: The test currently narrows runFinishedChunk with
chunks.find((c) => c.type === 'RUN_FINISHED') then checks
runFinishedChunk.finishReason, which can silently skip assertions if
runFinishedChunk is undefined; add an explicit assertion
expect(runFinishedChunk).toBeDefined() immediately after computing
runFinishedChunk (before the if / property access) to ensure the test fails when
no RUN_FINISHED chunk is emitted and then assert
expect(runFinishedChunk!.finishReason).toBe('tool_calls').
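The guard-then-assert pattern, shown with plain checks standing in for the vitest `expect` API (chunk shape is a simplified assumption):

```typescript
type Chunk = { type: string; finishReason?: string }

// Fail loudly when the chunk is missing instead of silently skipping
// the finishReason assertion.
function assertRunFinished(chunks: Array<Chunk>): void {
  const runFinishedChunk = chunks.find((c) => c.type === 'RUN_FINISHED')
  if (runFinishedChunk === undefined) throw new Error('no RUN_FINISHED chunk emitted')
  if (runFinishedChunk.finishReason !== 'tool_calls') {
    throw new Error(`expected finishReason 'tool_calls', got ${runFinishedChunk.finishReason}`)
  }
}
```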
In `@packages/typescript/ai-mistral/vite.config.ts`:
- Around line 5-27: The Vitest test block inside the defineConfig object in
vite.config.ts is redundant with the dedicated vitest.config.ts and should be
removed: delete the entire test: { ... } block (the object assigned under config
in the defineConfig call) so vite.config.ts only contains non-test Vite config.
Also update any test include patterns to follow project guidelines by using
src/**/*.test.ts (adjust references that previously used tests/**/*.test.ts) and
move existing test files from tests/ into the corresponding src/ locations;
ensure vitest.config.ts remains the single source of truth for test
configuration.
In `@packages/typescript/ai-mistral/vitest.config.ts`:
- Around line 1-22: Remove the redundant vitest.config.ts file (which exports
defineConfig / default config) and consolidate its test block into the existing
vite.config.ts by keeping the test configuration (globals, environment, include,
coverage) only in vite.config.ts; delete the vitest.config.ts file so Vitest
will use the configuration from vite.config.ts like other adapter packages.
---
Nitpick comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 93-101: The constructor in MistralTextAdapter wrongly overrides an
already-declared kind and discards the passed config by calling super({},
model); update the class to remove the redundant readonly kind declaration
(since BaseTextAdapter provides it) and pass the incoming config into the base
constructor (call super(config, model)) so TextAdapterConfig
(timeouts/headers/etc.) is preserved; keep the client initialization via
createMistralClient(config) and ensure the private client: Mistral and readonly
name = 'mistral' remain unchanged.
- Around line 282-338: The loop over deltaToolCalls double-serializes object
args; compute the argument delta once per iteration and reuse it for both
accumulating into toolCallsInProgress and for emitting the TOOL_CALL_ARGS event.
Inside the for-loop handling toolCallDelta (referencing toolCallDelta,
toolCallsInProgress, toolCall, TOOL_CALL_START and TOOL_CALL_ARGS), when
toolCallDelta.function.arguments !== undefined, create a single const argsDelta
= typeof ... ? ... : JSON.stringify(...), use argsDelta to append to
toolCall.arguments, and use the same argsDelta in the yield for the
TOOL_CALL_ARGS event instead of re-running JSON.stringify.
In `@packages/typescript/ai-mistral/src/message-types.ts`:
- Around line 141-149: The ResponseFormatJsonSchema interface defines
schemaDefinition as optional but the adapter (structuredOutput in text.ts)
always provides schemaDefinition and strict; make schemaDefinition required to
match actual usage by changing ResponseFormatJsonSchema so
jsonSchema.schemaDefinition is non-optional, update any types/usages that expect
it optional, and run typechecks to fix call sites that now must pass
schemaDefinition explicitly (look for ResponseFormatJsonSchema,
structuredOutput, and any callers constructing json_schema responses).
In `@packages/typescript/ai-mistral/src/model-meta.ts`:
- Around line 33-207: Replace the many per-model constants (e.g.,
MISTRAL_LARGE_LATEST, MISTRAL_MEDIUM_LATEST, ... OPEN_MISTRAL_NEMO) and the
parallel collections (MISTRAL_CHAT_MODELS, MistralModelInputModalitiesByName,
MistralChatModelProviderOptionsByName) with a single data-driven map like
MISTRAL_MODELS typed as const satisfies Record<string,
ModelMeta<MistralTextProviderOptions>>; then derive the chat model list and the
modality/provider-option maps from that single MISTRAL_MODELS object so
adding/removing a model only requires updating one place and keeps types (and
unions) inferred from the map.
- Around line 251-261: The current MistralChatModelProviderOptionsByName maps
every entry of MISTRAL_CHAT_MODELS to the same MistralTextProviderOptions,
making ResolveProviderOptions<TModel> redundant; replace the indirection by
changing ResolveProviderOptions to simply be MistralTextProviderOptions (and
remove or deprecate MistralChatModelProviderOptionsByName), i.e., eliminate the
map and export ResolveProviderOptions = MistralTextProviderOptions so the code
uses a single concrete provider options type while preserving the ability to
reintroduce per-model mappings later if needed (refer to the symbols
MistralChatModelProviderOptionsByName, ResolveProviderOptions,
MistralTextProviderOptions, and MISTRAL_CHAT_MODELS to locate and update the
types).
In `@packages/typescript/ai-mistral/src/text/text-provider-options.ts`:
- Around line 110-114: The validator validateTextProviderOptions currently does
nothing; add fast client-side checks inside that function (referencing
validateTextProviderOptions and type InternalTextProviderOptions) to throw clear
errors for obvious misconfigurations: ensure numeric counts like n are integers
>= 1, temperature is within the allowed range (e.g. 0–1 or your provider's
accepted bounds), messages (or prompt) is present and messages.length > 0, and
any other obvious bounds (e.g. max_tokens/top_p if present) are within valid
ranges; throw descriptive Error instances so callers fail fast instead of
waiting for the Mistral API round-trip.
In `@packages/typescript/ai-mistral/src/tools/function-tool.ts`:
- Around line 27-42: Remove the redundant assignment to
jsonSchema.additionalProperties = false in the function-tool return path: the
helper makeMistralStructuredOutputCompatible already sets additionalProperties =
false for object schemas (see makeMistralStructuredOutputCompatible), and
setting it here can incorrectly apply to non-object types; simply delete the
line that assigns additionalProperties and leave the returned object (type:
'function', function: { name: tool.name, description: tool.description,
parameters: jsonSchema, strict: true }) as-is (it remains compatible with
FunctionTool/FunctionParameters).
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 17-24: The createMistralClient function currently uses truthy
checks which drop intentional values like timeoutMs: 0 and possibly empty
serverURL; update createMistralClient to explicitly check for undefined (e.g.,
timeoutMs !== undefined and serverURL !== undefined) when spreading options so
zero or empty-string values are preserved, and also ensure any other SDK options
present on MistralClientConfig are forwarded into the new Mistral({...}) call
(refer to createMistralClient and the Mistral constructor to add those
properties).
In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts`:
- Around line 8-29: transformNullsToUndefined currently treats any typeof ===
'object' as a plain object and will rebuild Dates, Maps, Sets and class
instances via Object.entries; add a plain-object guard (e.g. check
Object.getPrototypeOf(obj) === Object.prototype or an isPlainObject helper)
before iterating entries and only recurse/construct a Record for true plain
objects, otherwise return the original object unchanged, and add a short comment
explaining the guard to prevent mangling Date/Map/Set/class instances; reference
the transformNullsToUndefined function and the block that uses
Object.entries(obj as Record<string, unknown>) for where to apply the change.
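The plain-object guard described above might look like this sketch; `isPlainObject` is an assumed helper name, not existing code in the package:

```typescript
// Guard: only objects whose prototype is Object.prototype (or null) are
// treated as plain objects; Date/Map/Set/class instances fall through.
function isPlainObject(value: unknown): value is Record<string, unknown> {
  if (typeof value !== 'object' || value === null) return false
  const proto = Object.getPrototypeOf(value)
  return proto === Object.prototype || proto === null
}

function transformNullsToUndefined(value: unknown): unknown {
  if (value === null) return undefined
  if (Array.isArray(value)) return value.map(transformNullsToUndefined)
  // Only rebuild true plain objects; anything else (Date, Map, Set, class
  // instances) is returned unchanged to avoid mangling it.
  if (!isPlainObject(value)) return value
  const out: Record<string, unknown> = {}
  for (const [key, inner] of Object.entries(value)) {
    out[key] = transformNullsToUndefined(inner)
  }
  return out
}
```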
In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts`:
- Around line 378-406: The test manually constructs an async iterable named
errorIterable with pre-wrapped streamChunks, which is inconsistent with other
tests; replace the manual async iterable with a call to
setupMockStream(streamChunks) so the helper performs the required .data wrapping
and error behavior, updating references to errorIterable and reusing the
existing setupMockStream helper used elsewhere in this test suite.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4902c3d3-c076-442e-b548-0ec58eee37bf
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (18)
- .changeset/add-ai-mistral.md
- packages/typescript/ai-mistral/README.md
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/adapters/text.ts
- packages/typescript/ai-mistral/src/index.ts
- packages/typescript/ai-mistral/src/message-types.ts
- packages/typescript/ai-mistral/src/model-meta.ts
- packages/typescript/ai-mistral/src/text/text-provider-options.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/tools/index.ts
- packages/typescript/ai-mistral/src/tools/tool-converter.ts
- packages/typescript/ai-mistral/src/utils/client.ts
- packages/typescript/ai-mistral/src/utils/index.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
- packages/typescript/ai-mistral/tsconfig.json
- packages/typescript/ai-mistral/vite.config.ts
- packages/typescript/ai-mistral/vitest.config.ts
const config = defineConfig({
  test: {
    name: packageJson.name,
    dir: './',
    watch: false,
    globals: true,
    environment: 'node',
    include: ['tests/**/*.test.ts'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
      exclude: [
        'node_modules/',
        'dist/',
        'tests/',
        '**/*.test.ts',
        '**/*.config.ts',
        '**/types.ts',
      ],
      include: ['src/**/*.ts'],
    },
  },
})
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
fd -t f 'vitest.config.ts|vite.config.ts' packages/typescript/ai-mistral
cat packages/typescript/ai-mistral/vitest.config.ts 2>/dev/null || true
Repository: TanStack/ai
Length of output: 609
Remove duplicate Vitest configuration and align test location with project guidelines.
Both vite.config.ts and vitest.config.ts contain overlapping test configuration. The dedicated vitest.config.ts is the proper entry point for Vitest; the test block in vite.config.ts is redundant and should be removed. Additionally, the test include pattern uses tests/**/*.test.ts, but project guidelines require unit tests to be placed alongside source code at src/**/*.test.ts. Update the include pattern and move existing test files to match this structure.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-mistral/vite.config.ts` around lines 5 - 27, The
Vitest test block inside the defineConfig object in vite.config.ts is redundant
with the dedicated vitest.config.ts and should be removed: delete the entire
test: { ... } block (the object assigned under config in the defineConfig call)
so vite.config.ts only contains non-test Vite config. Also update any test
include patterns to follow project guidelines by using src/**/*.test.ts (adjust
references that previously used tests/**/*.test.ts) and move existing test files
from tests/ into the corresponding src/ locations; ensure vitest.config.ts
remains the single source of truth for test configuration.
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    globals: true,
    environment: 'node',
    include: ['tests/**/*.test.ts'],
    coverage: {
      provider: 'v8',
      reporter: ['text', 'json', 'html', 'lcov'],
      exclude: [
        'node_modules/',
        'dist/',
        'tests/',
        '**/*.test.ts',
        '**/*.config.ts',
        '**/types.ts',
      ],
      include: ['src/**/*.ts'],
    },
  },
})
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
fd -t f 'vite(st)?\.config\.ts' packages/typescript | sort
for f in $(fd -t f 'vite(st)?\.config\.ts' packages/typescript); do
echo "=== $f ==="
cat "$f"
done
Repository: TanStack/ai
Length of output: 32275
Remove the redundant vitest.config.ts file.
The vitest.config.ts duplicates the test configuration already present in vite.config.ts. The test block in vite.config.ts (globals, environment, include, coverage) is identical to the standalone vitest.config.ts. Since Vitest prefers vitest.config.* when present, the simpler approach is to consolidate test configuration into vite.config.ts only and remove the redundant file, matching the pattern used by other adapter packages (ai-code-mode, ai-gemini, ai-groq, ai-openai, etc.).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-mistral/vitest.config.ts` around lines 1 - 22, Remove
the redundant vitest.config.ts file (which exports defineConfig / default
config) and consolidate its test block into the existing vite.config.ts by
keeping the test configuration (globals, environment, include, coverage) only in
vite.config.ts; delete the vitest.config.ts file so Vitest will use the
configuration from vite.config.ts like other adapter packages.
|
Would you mind adding this to the e2e test suite as an additional provider? I don't feel comfortable merging new adapters if they don't pass our e2e suite!
Good shout! Will do :)
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/typescript/ai-mistral/src/adapters/text.ts (1)
226-255: ⚠️ Potential issue | 🟡 Minor

Error-handling asymmetry between chatStream's outer catch and processMistralStreamChunks.

Pre-stream failures (e.g., fetchRawMistralStream throwing before yielding on a bad request/401) flow through this outer catch, emit RUN_ERROR, and then throw err at line 253 — consumers must handle rejection. Mid-stream failures, however, are swallowed by the inner try/catch in processMistralStreamChunks (lines 502-515), which yields RUN_ERROR without rethrowing, so the iterator completes normally. Consumers therefore cannot rely on for await … of chatStream(...) rejecting or not rejecting consistently on failure.

Pick one policy and apply it uniformly: either remove the inner catch and let all errors bubble to this outer handler, or have the inner catch also rethrow after yielding RUN_ERROR.

🤖 Prompt for AI Agents
RUN_ERROR.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 226 - 255, The error handling is inconsistent: pre-stream errors are caught by the outer catch in chatStream and rethrown, while processMistralStreamChunks currently catches mid-stream errors and only yields RUN_ERROR without rethrowing; update processMistralStreamChunks so that its inner catch (the try/catch inside processMistralStreamChunks) still yields the RUN_ERROR event like it does now but then rethrows the error (throw err) after yielding, so all errors uniformly propagate to the outer handler (and consumers of chatStream) — locate processMistralStreamChunks and add a rethrow in its catch block after yielding the RUN_ERROR.
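The "yield RUN_ERROR, then rethrow" policy recommended above can be sketched with generic names; the chunk type and function names here are simplified stand-ins, not the adapter's real types:

```typescript
// Simplified stand-ins for the adapter's stream chunk types (assumed shapes).
type StreamChunk =
  | { type: 'CONTENT'; text: string }
  | { type: 'RUN_ERROR'; message: string }

// Mid-stream errors emit a RUN_ERROR chunk AND rethrow, so the consumer's
// `for await` loop rejects the same way pre-stream failures do.
async function* processChunks(
  source: AsyncIterable<string>,
): AsyncGenerator<StreamChunk> {
  try {
    for await (const text of source) {
      yield { type: 'CONTENT', text }
    }
  } catch (err) {
    yield { type: 'RUN_ERROR', message: String(err) }
    throw err // rethrow so the outer handler / consumer sees the rejection
  }
}
```

With this shape, a consumer always observes the RUN_ERROR event first and then a rejection, regardless of where in the stream the failure happened.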
🧹 Nitpick comments (6)
packages/typescript/ai-mistral/tests/mistral-adapter.test.ts (1)
90-136: Minor: no mockComplete/fetch reset between tests in this block.

Tests in describe('Text adapter') don't invoke streaming, so the missing beforeEach reset is harmless today — but if someone later adds a streaming assertion here without noticing the outer block has no reset, prior vi.stubGlobal('fetch', …) state could leak. Adding a beforeEach(() => { vi.clearAllMocks() }) here mirrors the Mistral AG-UI event emission block and is defensive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts` around lines 90 - 136, Add a beforeEach to the "Text adapter" describe block to clear any mocked globals or fetch state to prevent test leakage: inside the describe('Text adapter') block add beforeEach(() => { vi.clearAllMocks() }) (or vi.resetAllMocks()/vi.restoreAllMocks() if you prefer) to mirror the Mistral AG-UI event emission block and ensure prior vi.stubGlobal('fetch', ...) or mockComplete state cannot leak into these tests.

packages/typescript/ai-mistral/src/adapters/text.ts (2)
36-36: Array-type style violations flagged by ESLint.

ChatCompletionMessageParam[] / unknown[] should use the Array<T> form used throughout this file. Static analysis flags both on lines 36 and 126.

🧹 Proposed change

-function messagesToSnakeCase(messages: ChatCompletionMessageParam[]): unknown[] {
+function messagesToSnakeCase(
+  messages: Array<ChatCompletionMessageParam>,
+): Array<unknown> {
@@
-  messages: ChatCompletionMessageParam[]
+  messages: Array<ChatCompletionMessageParam>

Also applies to: 126-126
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` at line 36, ESLint flags array-type style violations: replace the bracket-style types with the Array<T> form in this file; specifically change the function signature of messagesToSnakeCase from using ChatCompletionMessageParam[] and unknown[] to use Array<ChatCompletionMessageParam> and Array<unknown>, and likewise update the other occurrence around the code referenced at line 126 to use Array<...> instead of the bracket notation so the file consistently uses Array<T>.
148-148: Empty interface that only extends.

export interface MistralTextConfig extends MistralClientConfig {} adds no members and will trip @typescript-eslint/no-empty-object-type. Prefer a type alias so the intent (public re-export under a more specific name) is explicit.

🧹 Proposed change

-/**
- * Configuration for Mistral text adapter.
- */
-export interface MistralTextConfig extends MistralClientConfig {}
+/**
+ * Configuration for Mistral text adapter.
+ */
+export type MistralTextConfig = MistralClientConfig

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` at line 148, The exported empty interface MistralTextConfig that only extends MistralClientConfig should be replaced with a type alias to avoid the `@typescript-eslint/no-empty-object-type` lint rule; change the declaration for MistralTextConfig to a type alias (e.g., export type MistralTextConfig = MistralClientConfig) so the file still publicly re-exports the same shape without an empty interface.

packages/typescript/ai-mistral/src/utils/client.ts (1)
46-62: Browser window.env fallback uses as any and reads a non-standard global.

(globalThis as any).window?.env is not a standard browser surface — it only works if a build step injects window.env (e.g., some custom Vite/CRA setups). In a plain browser, window.env is undefined and you fall through to process.env, which doesn't exist, so the error message ("Please set it in your environment variables…") is thrown in a context where "environment variables" isn't meaningful. Consider either documenting this convention or narrowing the fallback to Node-only, and typing the env lookup without as any (e.g., via (globalThis as { window?: { env?: Record<string, string> } }).window?.env).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/utils/client.ts` around lines 46 - 62, The getMistralApiKeyFromEnv function currently uses an untyped (globalThis as any).window?.env fallback and then tries process.env, which misbehaves in plain browsers; change the lookup to be Node-only by checking typeof process !== 'undefined' && typeof process.env !== 'undefined' first and read process.env.MISTRAL_API_KEY there, or if you must support injected window.env explicitly cast globalThis to a safe type like (globalThis as { window?: { env?: Record<string,string> } }) and read window.env.MISTRAL_API_KEY; also update the thrown message around MISTRAL_API_KEY to not instruct browser users to set "environment variables" (clarify Node vs injected window.env) so the error is accurate for the execution environment.

testing/e2e/tests/test-matrix.ts (1)
14-140: Drift risk: this support matrix fully duplicates testing/e2e/src/lib/feature-support.ts.

Not introduced by this PR, but now that two files need 'mistral' added in lockstep (and 11 individual features each), the duplication is a concrete maintenance hazard — a future provider or feature change will silently skew between them. Consider importing/reusing the matrix from feature-support.ts in a follow-up so there is a single source of truth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/test-matrix.ts` around lines 14 - 140, The file defines a local supportMatrix that duplicates the canonical matrix in feature-support.ts; remove the local supportMatrix and instead import and use the single exported support matrix from feature-support.ts (the module that currently holds the source-of-truth matrix), keeping the same type Record<Feature, Set<Provider>> and replacing all references to the local supportMatrix with the imported symbol (e.g., supportMatrix or getFeatureSupport) so there is one source of truth for provider/feature support.

packages/typescript/ai-mistral/src/model-meta.ts (1)
248-269: Per-model provider options map is effectively uniform — intentional?

MistralChatModelProviderOptionsByName maps every model to the same MistralTextProviderOptions, so ResolveProviderOptions<TModel> always yields the same type. That still satisfies the tree-shakeable per-model typing mechanism, but if you plan to surface capability differences later (e.g., vision-only fields on pixtral-*, reasoning-specific options on magistral-*, or prohibiting tools on models that lack it), consider branching this map now so it actually narrows per model.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 248 - 269, MistralChatModelProviderOptionsByName currently maps every model key to the same MistralTextProviderOptions so ResolveProviderOptions<TModel> never narrows per-model; update the map to assign model-specific option types (e.g., map pixtral-* to a MistralVisionProviderOptions, magistral-* to a MistralReasoningProviderOptions, or specific entries for each (typeof MISTRAL_CHAT_MODELS)[number]) so ResolveProviderOptions will resolve to the correct per-model type; adjust or add the new provider option types and ensure ResolveProviderOptions and any uses still fall back to MistralTextProviderOptions for unknown models.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 527-531: The current assembly of serverURL in the adapter
unconditionally appends "/v1/chat/completions" and will produce double "/v1/v1"
if a user passes a base that already includes a version segment; update the
logic around config.serverURL (used to build serverURL and url) to normalize and
strip any trailing "/v1" or "/v1/" (in addition to trailing slash) before
appending "/v1/chat/completions", so both bare bases and bases that already
include the version produce a correct single "/v1" in the final url.
- Around line 523-616: Add a brief inline comment next to the Mistral SDK client
initialization (the client created around line 209) stating that the SDK client
is intentionally retained because it's required by the structuredOutput logic
(see structuredOutput function around line 272) and is not used for streaming
paths; mention that e2e tests route Mistral through llmock via providers.ts
(serverURL: base) so the custom SSE streaming path remains covered, to prevent
future contributors from removing the client when refactoring streaming code.
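The /v1 normalization suggested in the first point above could be a small helper; the function name here is hypothetical:

```typescript
// Normalize a user-supplied base URL: strip a trailing slash and a trailing
// "/v1" so both "https://host" and "https://host/v1/" produce exactly one
// "/v1" segment in the final endpoint.
function buildChatCompletionsUrl(serverURL: string): string {
  const base = serverURL.replace(/\/$/, '').replace(/\/v1$/, '')
  return `${base}/v1/chat/completions`
}
```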
---
Duplicate comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 226-255: The error handling is inconsistent: pre-stream errors are
caught by the outer catch in chatStream and rethrown, while
processMistralStreamChunks currently catches mid-stream errors and only yields
RUN_ERROR without rethrowing; update processMistralStreamChunks so that its
inner catch (the try/catch inside processMistralStreamChunks) still yields the
RUN_ERROR event like it does now but then rethrows the error (throw err) after
yielding, so all errors uniformly propagate to the outer handler (and consumers
of chatStream) — locate processMistralStreamChunks and add a rethrow in its
catch block after yielding the RUN_ERROR.
---
Nitpick comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Line 36: ESLint flags array-type style violations: replace the bracket-style
types with the Array<T> form in this file; specifically change the function
signature of messagesToSnakeCase from using ChatCompletionMessageParam[] and
unknown[] to use Array<ChatCompletionMessageParam> and Array<unknown>, and
likewise update the other occurrence around the code referenced at line 126 to
use Array<...> instead of the bracket notation so the file consistently uses
Array<T>.
- Line 148: The exported empty interface MistralTextConfig that only extends
MistralClientConfig should be replaced with a type alias to avoid the
`@typescript-eslint/no-empty-object-type` lint rule; change the declaration for
MistralTextConfig to a type alias (e.g., export type MistralTextConfig =
MistralClientConfig) so the file still publicly re-exports the same shape
without an empty interface.
In `@packages/typescript/ai-mistral/src/model-meta.ts`:
- Around line 248-269: MistralChatModelProviderOptionsByName currently maps
every model key to the same MistralTextProviderOptions so
ResolveProviderOptions<TModel> never narrows per-model; update the map to assign
model-specific option types (e.g., map pixtral-* to a
MistralVisionProviderOptions, magistral-* to a MistralReasoningProviderOptions,
or specific entries for each (typeof MISTRAL_CHAT_MODELS)[number]) so
ResolveProviderOptions will resolve to the correct per-model type; adjust or add
the new provider option types and ensure ResolveProviderOptions and any uses
still fall back to MistralTextProviderOptions for unknown models.
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 46-62: The getMistralApiKeyFromEnv function currently uses an
untyped (globalThis as any).window?.env fallback and then tries process.env,
which misbehaves in plain browsers; change the lookup to be Node-only by
checking typeof process !== 'undefined' && typeof process.env !== 'undefined'
first and read process.env.MISTRAL_API_KEY there, or if you must support
injected window.env explicitly cast globalThis to a safe type like (globalThis
as { window?: { env?: Record<string,string> } }) and read
window.env.MISTRAL_API_KEY; also update the thrown message around
MISTRAL_API_KEY to not instruct browser users to set "environment variables"
(clarify Node vs injected window.env) so the error is accurate for the execution
environment.
In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts`:
- Around line 90-136: Add a beforeEach to the "Text adapter" describe block to
clear any mocked globals or fetch state to prevent test leakage: inside the
describe('Text adapter') block add beforeEach(() => { vi.clearAllMocks() }) (or
vi.resetAllMocks()/vi.restoreAllMocks() if you prefer) to mirror the Mistral
AG-UI event emission block and ensure prior vi.stubGlobal('fetch', ...) or
mockComplete state cannot leak into these tests.
In `@testing/e2e/tests/test-matrix.ts`:
- Around line 14-140: The file defines a local supportMatrix that duplicates the
canonical matrix in feature-support.ts; remove the local supportMatrix and
instead import and use the single exported support matrix from
feature-support.ts (the module that currently holds the source-of-truth matrix),
keeping the same type Record<Feature, Set<Provider>> and replacing all
references to the local supportMatrix with the imported symbol (e.g.,
supportMatrix or getFeatureSupport) so there is one source of truth for
provider/feature support.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3653946e-37c6-4081-89cf-360a99c9e041
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (13)
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/adapters/text.ts
- packages/typescript/ai-mistral/src/message-types.ts
- packages/typescript/ai-mistral/src/model-meta.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/utils/client.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
- testing/e2e/package.json
- testing/e2e/src/lib/feature-support.ts
- testing/e2e/src/lib/providers.ts
- testing/e2e/src/lib/types.ts
- testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (3)
- testing/e2e/package.json
- testing/e2e/src/lib/types.ts
- packages/typescript/ai-mistral/package.json
🚧 Files skipped from review as they are similar to previous changes (3)
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/src/message-types.ts
Actionable comments posted: 6
🧹 Nitpick comments (5)
packages/typescript/ai-mistral/src/adapters/text.ts (4)
566-578: Unknown extra modelOptions keys bypass the snake_case conversion.

mapTextOptionsToMistral (lines 687-698) only copies a fixed allowlist from modelOptions into the request (stop, random_seed, response_format, …, safe_prompt, n, prediction). Here in fetchRawMistralStream, the destructure pulls out the camelCase names produced by the mapper, and ...rest spreads the remainder. If the mapper ever forwards a new camelCase field (or a future edit to mapTextOptionsToMistral spreads raw modelOptions), those keys will land in the JSON body unchanged (e.g. randomSeed instead of random_seed) and silently be ignored by the Mistral API. Consider applying a generic camelCase → snake_case transform on the final body (or centralizing the request builder) so new fields don't silently regress.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 566 - 578, The request body built in fetchRawMistralStream is allowing unknown camelCase keys from mapTextOptionsToMistral to slip through via the rest spread, causing Mistral to ignore them; update fetchRawMistralStream (or centralize the request builder) to normalize all keys to snake_case before sending—e.g., after destructuring the known camelCase props (responseFormat, toolChoice, parallelToolCalls, frequencyPenalty, presencePenalty, safePrompt, etc.) convert the remaining rest object's keys from camelCase to snake_case and merge that normalized object into the final body so any future or unknown modelOptions are correctly named for the Mistral API.
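The generic key transform suggested above is small enough to sketch; this is an illustrative helper, not code from the adapter:

```typescript
// Convert camelCase top-level keys to snake_case so newly added options
// can't silently ship with the wrong casing in the request body.
function toSnakeCaseKeys(obj: Record<string, unknown>): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(obj)) {
    const snake = key.replace(/([A-Z])/g, (m) => `_${m.toLowerCase()}`)
    out[snake] = value
  }
  return out
}
```

Keys that are already snake_case (or contain no uppercase letters) pass through unchanged, so the transform is safe to apply to the whole body just before serialization.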
280-306: structuredOutput forwards responseFormat from modelOptions and then overrides it — confirm precedence is intentional.

mapTextOptionsToMistral copies modelOptions.response_format into requestParams.responseFormat (line 690). In structuredOutput, you then spread ...nonStreamParams into the request and immediately re-set responseFormat: { type: 'json_schema', ... }, so the user-provided response_format is silently overridden. That's probably the right semantics for structuredOutput (callers shouldn't be able to fight the adapter), but it means users who pass a custom response_format via modelOptions get different behavior between chatStream and structuredOutput without any warning. Consider either dropping response_format from nonStreamParams explicitly or documenting the override.

Additionally, the raw JSON error path at lines 303-305 slices the first 200 chars of the returned content for the error message — if Mistral returns an error envelope with leading metadata, that truncation may hide the useful part. Including response.id/finishReason in the error would aid debugging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 280 - 306, structuredOutput currently spreads nonStreamParams (populated by mapTextOptionsToMistral which copies modelOptions.response_format) and then unconditionally overrides responseFormat, silently discarding user-supplied response_format; remove responseFormat from nonStreamParams before spreading (or explicitly delete requestParams.responseFormat in structuredOutput) so callers can't unintentionally override, and add response metadata to the JSON-parse error by including response.id and response.finishReason (and avoid hiding useful info by not truncating or by trimming intelligently) in the thrown Error to aid debugging; locate symbols: mapTextOptionsToMistral, structuredOutput, nonStreamParams, responseFormat, and response (response.id/finishReason).
606-622: SSE parsing splits on \n rather than event boundaries — multi-line data: payloads will be misread.

The SSE spec delimits events by blank lines and allows a single event to span multiple data: lines (each concatenated with \n). The current parser treats every newline-separated line independently and only keeps lines starting with data:. For Mistral today this happens to be fine (single-line data: per chunk), but any future chunk that spans multiple lines (e.g. pretty-printed JSON, or a data: line followed by a data: continuation) will be split mid-JSON and silently dropped by the catch {} in lines 619-621, which also hides real parse errors.

Consider buffering on event boundaries (\n\n) and joining all consecutive data: lines within an event before JSON.parse. At minimum, log (or surface via RUN_ERROR) when JSON.parse fails so silent data loss is observable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 606 - 622, The SSE loop currently splits on '\n' and parses each 'data:' line independently causing multi-line SSE events to be broken and parse errors to be swallowed; change the logic around buffer/decoder and the loop that uses buffer.split('\n') so you accumulate until an event boundary (blank line '\n\n'), then for each complete event collect and join all lines that start with 'data:' (preserving '\n' between them) into one payload string, JSON.parse that joined payload and pass the result into rawChunkToCamelCase before yielding; also replace the empty catch in the JSON.parse block with explicit error handling that surfaces the parsing error (e.g., log via RUN_ERROR or processLogger) so malformed JSON is visible rather than silently dropped.
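The event-boundary buffering described above can be sketched as a pure function over the accumulated buffer; the helper name and return shape are assumptions for illustration:

```typescript
// Split the accumulated SSE buffer on blank-line event boundaries, join all
// consecutive `data:` lines within each event, and return the incomplete
// tail so the caller can keep buffering.
function parseSseEvents(buffer: string): { events: string[]; rest: string } {
  const events: string[] = []
  const parts = buffer.split('\n\n')
  const rest = parts.pop() ?? '' // last part may be an incomplete event
  for (const part of parts) {
    const data = part
      .split('\n')
      .filter((line) => line.startsWith('data:'))
      .map((line) => line.slice(5).trimStart())
      .join('\n')
    if (data && data !== '[DONE]') events.push(data)
  }
  return { events, rest }
}
```

The caller would `JSON.parse` each returned event string and surface parse failures (e.g., via a RUN_ERROR chunk) instead of swallowing them.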
647-700: mapTextOptionsToMistral drops the TModel generic on options.

The signature is mapTextOptionsToMistral(options: TextOptions) — unparameterized — so inside the method options.modelOptions widens to the base TextOptions['modelOptions'] type, which is why the as Omit<InternalTextProviderOptions, ...> cast is needed at lines 648-653. Threading the generic through (mapTextOptionsToMistral(options: TextOptions<ResolveProviderOptions<TModel>>)) would let the type system enforce the shape and let you drop the cast, keeping the per-model provider-options guarantee advertised by the class.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 647 - 700, The mapTextOptionsToMistral function drops the TModel generic by accepting an unparameterized TextOptions, causing model-specific provider options to widen and forcing the unsafe cast; change the signature to accept the genericed type (e.g., mapTextOptionsToMistral<TModel>(options: TextOptions<ResolveProviderOptions<TModel>>)) so options.modelOptions retains the per-model shape and remove the Omit/unsafe cast, then update any callers to pass through the same TModel generic from the surrounding class/method where appropriate (referencing mapTextOptionsToMistral, TextOptions, ResolveProviderOptions, TModel, and InternalTextProviderOptions to locate the implementation).

packages/typescript/ai-mistral/src/model-meta.ts (1)
4-7: MistralVisionProviderOptions and MistralReasoningProviderOptions are pure aliases and add no type safety.

Both are declared as = MistralTextProviderOptions with no additional or narrowed fields, so MistralChatModelProviderOptionsByName (lines 257-269) ends up mapping every model to the same provider-options shape. This defeats the purpose of per-model provider options. If Mistral truly exposes vision/reasoning-specific knobs (e.g. image detail, reasoning toggles, safePrompt), model these as distinct types (e.g. MistralTextProviderOptions & { reasoning?: ... } / & { vision?: ... }) so consumers get type-level differentiation. Otherwise, consider dropping the aliases and mapping everything to MistralTextProviderOptions directly.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 4 - 7, MistralVisionProviderOptions and MistralReasoningProviderOptions are just aliases of MistralTextProviderOptions so they provide no model-specific type safety; update them to either (A) declare distinct option shapes that extend MistralTextProviderOptions with vision- or reasoning-specific fields (e.g., MistralVisionProviderOptions = MistralTextProviderOptions & { vision?: { detail?: string; safePrompt?: boolean } } and MistralReasoningProviderOptions = MistralTextProviderOptions & { reasoning?: { enableChainOfThought?: boolean; maxSteps?: number } }), or (B) remove the aliases and map models directly to MistralTextProviderOptions in MistralChatModelProviderOptionsByName if no extra fields exist; adjust any usages of MistralVisionProviderOptions, MistralReasoningProviderOptions, and MistralChatModelProviderOptionsByName accordingly so each model has the intended distinct provider option type.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 447-510: The finishReason block currently emits TOOL_CALL_END for
every started toolCall in toolCallsInProgress each time a chunk carries
choice.finishReason, causing duplicates; modify the code to mark individual tool
calls as ended (e.g., set toolCall.ended = true or remove the entry from
toolCallsInProgress) when you yield TOOL_CALL_END so subsequent chunks skip
already-ended calls, and ensure hasEmittedToolCall is only set once per tool
call; update references in the loop over toolCallsInProgress and the emission of
TOOL_CALL_END and RUN_FINISHED (symbols: toolCallsInProgress, toolCall,
hasEmittedToolCall, TOOL_CALL_END, RUN_FINISHED, aguiState) accordingly.
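The guard described above can be sketched as follows. This is illustrative only: `ToolCallState` and the yielded event shape are simplified stand-ins for the adapter's actual types.

```typescript
// Hypothetical shape for an in-progress tool call; the real adapter's
// state type may carry more fields.
interface ToolCallState {
  id: string
  ended?: boolean
}

// Emit TOOL_CALL_END at most once per tool call: mark each call as ended
// when its event is yielded, so later terminal chunks skip it.
function* emitToolCallEnds(
  toolCallsInProgress: Map<number, ToolCallState>,
): Generator<{ type: 'TOOL_CALL_END'; toolCallId: string }> {
  for (const toolCall of toolCallsInProgress.values()) {
    if (toolCall.ended) continue // already emitted on an earlier chunk
    toolCall.ended = true
    yield { type: 'TOOL_CALL_END', toolCallId: toolCall.id }
  }
}
```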
- Around line 708-717: The code currently coerces a missing toolCallId to an
empty string when handling a tool message (the branch where message.role ===
'tool'), which can corrupt tool-response roundtrips; update that branch (the
object returned for tool messages) to validate message.toolCallId and throw a
clear Error (e.g., "Missing toolCallId for tool message") when it's falsy,
instead of emitting toolCallId: '', and retain the existing content
serialization logic for message.content.
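A minimal sketch of the stricter mapping for tool-role messages; the `ToolMessage` shape here is a simplified stand-in, not the package's real message type.

```typescript
interface ToolMessage {
  role: 'tool'
  toolCallId?: string
  content: string
}

function mapToolMessage(message: ToolMessage): {
  role: 'tool'
  tool_call_id: string
  content: string
} {
  if (!message.toolCallId) {
    // Fail loudly instead of sending an empty id that corrupts
    // tool-response roundtrips.
    throw new Error('Missing toolCallId for tool message')
  }
  return {
    role: 'tool',
    tool_call_id: message.toolCallId,
    content: message.content,
  }
}
```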
- Around line 535-627: fetchRawMistralStream leaks HTTP connections because it
doesn't accept or forward an AbortSignal and only calls reader.releaseLock() in
finally; update the function and its callers to accept an AbortSignal (add it to
MistralClientConfig or TextOptions), pass that signal into fetch({ signal }),
and in the finally block call await reader.cancel().catch(() => {}) before
reader.releaseLock() so aborting the async generator or throwing from a consumer
tears down the server stream; update any call sites (e.g., chatStream) that
create the generator to forward the provided signal.
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 68-70: The generateId function uses crypto.randomUUID() which may
be undefined on older Node versions; update the package to either require
Node>=19 by adding an "engines": { "node": ">=19" } entry to
packages/typescript/ai-mistral/package.json, or change generateId to use a safe
import/fallback: import { randomUUID } from 'node:crypto' and call randomUUID(),
or call globalThis.crypto?.randomUUID() and fall back to a polyfill (e.g., UUID
v4) when undefined; modify the generateId function accordingly to use the chosen
approach and ensure no ReferenceError occurs on Node 18 and below.
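One way the fallback could look, assuming the crypto-first option: prefer `crypto.randomUUID` when the runtime provides it, and otherwise build a UUID v4 string by hand so Node 18 and older environments never hit a ReferenceError. The `Math.random()` branch is non-cryptographic and suitable for correlation IDs only.

```typescript
function generateId(): string {
  const cryptoObj = (globalThis as { crypto?: { randomUUID?: () => string } })
    .crypto
  if (cryptoObj && typeof cryptoObj.randomUUID === 'function') {
    return cryptoObj.randomUUID()
  }
  // Non-cryptographic UUID v4 fallback for runtimes without Web Crypto.
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (ch) => {
    const r = (Math.random() * 16) | 0
    const v = ch === 'x' ? r : (r & 0x3) | 0x8
    return v.toString(16)
  })
}
```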
In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts`:
- Around line 94-106: The optional-property branch only handles cases where
prop.type exists, so when wasOptional is true but prop lacks a top-level type
(e.g., uses anyOf/oneOf/$ref/enum-only/const) the property is not made nullable;
update the else branch for wasOptional to detect these cases and wrap the schema
in a nullable composition, e.g. replace the prop with a schema that expresses
"prop OR null" (using anyOf or oneOf) so properties[propName] becomes something
like { anyOf: [prop, { type: 'null' }] } when prop.type is absent or not usable;
ensure you preserve existing keys on prop and reuse the same
properties[propName] assignment pattern so downstream required logic (properties
and required handling) treats the field as nullable.
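A sketch of the nullable wrapper this comment asks for, under the assumption that the converter works on plain JSON-Schema objects: when the property has a usable top-level `type`, extend it with `'null'`; otherwise (anyOf/oneOf/$ref/const-only schemas) wrap the whole schema in an `anyOf`, preserving the existing keys.

```typescript
type JsonSchema = Record<string, unknown>

function makeNullable(prop: JsonSchema): JsonSchema {
  if (typeof prop.type === 'string') {
    // Simple case: widen the declared type with 'null'.
    return { ...prop, type: [prop.type, 'null'] }
  }
  // No usable top-level type (anyOf/oneOf/$ref/const): express "prop OR null".
  return { anyOf: [prop, { type: 'null' }] }
}
```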
- Around line 13-15: The array branch in transformNullsToUndefined maps nulls to
undefined but does not remove them, leaving holes; update the Array.isArray
branch in transformNullsToUndefined to both map and then filter out undefined
entries (e.g., map each item with transformNullsToUndefined(item) and then
filter(item => item !== undefined)) so arrays do not contain undefined holes and
maintain the same "drop nullable" behavior as the object branch; ensure the
returned value is still cast to T or the appropriate array type after filtering.
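A minimal sketch of the fixed transform described above: nulls become undefined, objects drop undefined values, and arrays are filtered after mapping so no undefined holes remain.

```typescript
function transformNullsToUndefined<T>(value: T): T | undefined {
  if (value === null) return undefined
  if (Array.isArray(value)) {
    // Map recursively, then filter so arrays contain no undefined holes.
    return value
      .map((item) => transformNullsToUndefined(item))
      .filter((item) => item !== undefined) as unknown as T
  }
  if (typeof value === 'object') {
    // Object branch: drop keys whose values transform to undefined.
    const out: Record<string, unknown> = {}
    for (const [key, v] of Object.entries(value as Record<string, unknown>)) {
      const transformed = transformNullsToUndefined(v)
      if (transformed !== undefined) out[key] = transformed
    }
    return out as unknown as T
  }
  return value
}
```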
---
Nitpick comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 566-578: The request body built in fetchRawMistralStream is
allowing unknown camelCase keys from mapTextOptionsToMistral to slip through via
the rest spread, causing Mistral to ignore them; update fetchRawMistralStream
(or centralize the request builder) to normalize all keys to snake_case before
sending—e.g., after destructuring the known camelCase props (responseFormat,
toolChoice, parallelToolCalls, frequencyPenalty, presencePenalty, safePrompt,
etc.) convert the remaining rest object's keys from camelCase to snake_case and
merge that normalized object into the final body so any future or unknown
modelOptions are correctly named for the Mistral API.
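An illustrative normalizer for the `rest` spread; `normalizeRestKeys` and `camelToSnake` are hypothetical helper names, not functions in the PR:

```typescript
// Convert a single camelCase key to snake_case.
const camelToSnake = (key: string): string =>
  key.replace(/([A-Z])/g, (upper) => `_${upper.toLowerCase()}`)

// Normalize any remaining camelCase keys so unknown modelOptions reach
// the API under snake_case names.
function normalizeRestKeys(
  rest: Record<string, unknown>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(rest).map(([key, value]) => [camelToSnake(key), value]),
  )
}
```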
- Around line 280-306: structuredOutput currently spreads nonStreamParams
(populated by mapTextOptionsToMistral which copies modelOptions.response_format)
and then unconditionally overrides responseFormat, silently discarding
user-supplied response_format; remove responseFormat from nonStreamParams before
spreading (or explicitly delete requestParams.responseFormat in
structuredOutput) so callers can't unintentionally override, and add response
metadata to the JSON-parse error by including response.id and
response.finishReason (and avoid hiding useful info by not truncating or by
trimming intelligently) in the thrown Error to aid debugging; locate symbols:
mapTextOptionsToMistral, structuredOutput, nonStreamParams, responseFormat, and
response (response.id/finishReason).
- Around line 606-622: The SSE loop currently splits on '\n' and parses each
'data:' line independently causing multi-line SSE events to be broken and parse
errors to be swallowed; change the logic around buffer/decoder and the loop that
uses buffer.split('\n') so you accumulate until an event boundary (blank line
'\n\n'), then for each complete event collect and join all lines that start with
'data:' (preserving '\n' between them) into one payload string, JSON.parse that
joined payload and pass the result into rawChunkToCamelCase before yielding;
also replace the empty catch in the JSON.parse block with explicit error
handling that surfaces the parsing error (e.g., log via RUN_ERROR or
processLogger) so malformed JSON is visible rather than silently dropped.
- Around line 647-700: The mapTextOptionsToMistral function drops the TModel
generic by accepting an unparameterized TextOptions, causing model-specific
provider options to widen and forcing the unsafe cast; change the signature to
accept the genericed type (e.g., mapTextOptionsToMistral<TModel>(options:
TextOptions<ResolveProviderOptions<TModel>>)) so options.modelOptions retains
the per-model shape and remove the Omit/unsafe cast, then update any callers to
pass through the same TModel generic from the surrounding class/method where
appropriate (referencing mapTextOptionsToMistral, TextOptions,
ResolveProviderOptions, TModel, and InternalTextProviderOptions to locate the
implementation).
In `@packages/typescript/ai-mistral/src/model-meta.ts`:
- Around line 4-7: MistralVisionProviderOptions and
MistralReasoningProviderOptions are just aliases of MistralTextProviderOptions
so they provide no model-specific type safety; update them to either (A) declare
distinct option shapes that extend MistralTextProviderOptions with vision- or
reasoning-specific fields (e.g., MistralVisionProviderOptions =
MistralTextProviderOptions & { vision?: { detail?: string; safePrompt?: boolean
} } and MistralReasoningProviderOptions = MistralTextProviderOptions & {
reasoning?: { enableChainOfThought?: boolean; maxSteps?: number } }), or (B)
remove the aliases and map models directly to MistralTextProviderOptions in
MistralChatModelProviderOptionsByName if no extra fields exist; adjust any
usages of MistralVisionProviderOptions, MistralReasoningProviderOptions, and
MistralChatModelProviderOptionsByName accordingly so each model has the intended
distinct provider option type.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c74b9bcc-dcea-4926-b4c2-66ab39cbc9ea
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (13)
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/adapters/text.ts
- packages/typescript/ai-mistral/src/message-types.ts
- packages/typescript/ai-mistral/src/model-meta.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/utils/client.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
- testing/e2e/package.json
- testing/e2e/src/lib/feature-support.ts
- testing/e2e/src/lib/providers.ts
- testing/e2e/src/lib/types.ts
- testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (4)
- testing/e2e/src/lib/types.ts
- testing/e2e/package.json
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/message-types.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- testing/e2e/src/lib/providers.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- testing/e2e/tests/test-matrix.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
```typescript
private async *fetchRawMistralStream(
  params: RawStreamParams,
  config: MistralClientConfig,
): AsyncGenerator<MistralStreamEvent> {
  const serverURL = (config.serverURL ?? 'https://api.mistral.ai')
    .replace(/\/+$/, '')
    .replace(/\/v1$/, '')
  const url = `${serverURL}/v1/chat/completions`

  const {
    stream: _stream,
    messages,
    maxTokens,
    topP,
    randomSeed,
    responseFormat,
    toolChoice,
    parallelToolCalls,
    frequencyPenalty,
    presencePenalty,
    safePrompt,
    ...rest
  } = params

  const body: Record<string, unknown> = {
    ...rest,
    messages: messagesToSnakeCase(messages),
    stream: true,
    ...(maxTokens != null && { max_tokens: maxTokens }),
    ...(topP != null && { top_p: topP }),
    ...(randomSeed != null && { random_seed: randomSeed }),
    ...(responseFormat != null && { response_format: responseFormat }),
    ...(toolChoice != null && { tool_choice: toolChoice }),
    ...(parallelToolCalls != null && {
      parallel_tool_calls: parallelToolCalls,
    }),
    ...(frequencyPenalty != null && {
      frequency_penalty: frequencyPenalty,
    }),
    ...(presencePenalty != null && {
      presence_penalty: presencePenalty,
    }),
    ...(safePrompt != null && { safe_prompt: safePrompt }),
  }

  const headers: Record<string, string> = {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${config.apiKey}`,
    ...config.defaultHeaders,
  }

  const response = await fetch(url, {
    method: 'POST',
    headers,
    body: JSON.stringify(body),
  })

  if (!response.ok || !response.body) {
    const errorText = await response.text()
    throw new Error(`Mistral API error ${response.status}: ${errorText}`)
  }

  const reader = response.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ''

  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break

      buffer += decoder.decode(value, { stream: true })
      const lines = buffer.split('\n')
      buffer = lines.pop()!

      for (const line of lines) {
        const trimmed = line.trim()
        if (!trimmed.startsWith('data:')) continue
        const data = trimmed.slice(5).trimStart()
        if (data === '[DONE]') return

        try {
          const raw = JSON.parse(data) as Record<string, unknown>
          yield { data: rawChunkToCamelCase(raw) }
        } catch {
          // skip malformed chunks
        }
      }
    }
  } finally {
    reader.releaseLock()
  }
}
```
fetchRawMistralStream has no cancellation path — abandoned consumers leak HTTP connections.
The manual fetch omits AbortSignal support and the finally block only calls reader.releaseLock(). If the consumer of chatStream stops iterating (e.g. user navigates away, downstream for await throws, outer run is aborted), the underlying fetch request and server-side stream keep running until Mistral closes them. This is a reliability/resource leak that compounds under load.
Suggested fix:
- Accept an `AbortSignal` through `TextOptions`/`MistralClientConfig` and pass it to `fetch({ signal })`.
- In the `finally` branch, call `reader.cancel().catch(() => {})` before `releaseLock()` so aborting the generator (or the consumer throwing mid-iteration) tears down the HTTP connection.
🛠 Sketch
```diff
 const response = await fetch(url, {
   method: 'POST',
   headers,
   body: JSON.stringify(body),
+  signal: config.signal,
 })
@@
 try {
   while (true) {
     const { done, value } = await reader.read()
     if (done) break
     ...
   }
 } finally {
-  reader.releaseLock()
+  try {
+    await reader.cancel()
+  } catch {
+    /* ignore */
+  }
+  reader.releaseLock()
 }
```
}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 535 - 627,
fetchRawMistralStream leaks HTTP connections because it doesn't accept or
forward an AbortSignal and only calls reader.releaseLock() in finally; update
the function and its callers to accept an AbortSignal (add it to
MistralClientConfig or TextOptions), pass that signal into fetch({ signal }),
and in the finally block call await reader.cancel().catch(() => {}) before
reader.releaseLock() so aborting the async generator or throwing from a consumer
tears down the server stream; update any call sites (e.g., chatStream) that
create the generator to forward the provided signal.
Actionable comments posted: 2
♻️ Duplicate comments (1)
packages/typescript/ai-mistral/src/adapters/text.ts (1)
538-631: ⚠️ Potential issue | 🟠 Major

`fetchRawMistralStream` still has no `AbortSignal` wiring — connection leak on abandoned consumers.

The `finally` now correctly calls `reader.cancel().catch(...)` (good), but the `fetch(url, { method, headers, body })` call still omits `signal`. If the adapter consumer aborts the outer run (`AbortController.abort()`), the HTTP request keeps running to completion server-side because no abort signal reaches `fetch`. Plumb an `AbortSignal` through `TextOptions`/`MistralClientConfig` (or read from `options.signal`) and pass it to both `fetch({ signal })` and the async iterator loop so producers can actually be cancelled.

Sketch

```diff
 const response = await fetch(url, {
   method: 'POST',
   headers,
   body: JSON.stringify(body),
+  signal: config.signal,
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 538 - 631, fetchRawMistralStream is not wiring an AbortSignal into the fetch call or iterator, causing HTTP requests to keep running when consumers abort; update RawStreamParams and/or MistralClientConfig to accept an AbortSignal (e.g., signal) and read it in fetchRawMistralStream, then pass that signal into fetch({... signal }) and check signal.aborted inside the read loop (or attach signal.addEventListener('abort') to cancel/throw and ensure reader.cancel() runs). Ensure references in your change point to fetchRawMistralStream, RawStreamParams, MistralClientConfig (or the options.signal field) so the HTTP request and async iterator properly cancel when aborted.
🧹 Nitpick comments (2)
packages/typescript/ai-mistral/src/utils/client.ts (1)
65-70: Minor: fallback uses `Math.random()`, which is not cryptographically secure.

For correlation IDs this is fine, but if `generateId` is ever used for anything security-sensitive (e.g. run IDs that end up in auth contexts), prefer a crypto-backed fallback. As a defensive improvement, you could fall back to `node:crypto.randomUUID` before the `Math.random()` version, since the primary path already covers modern environments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/utils/client.ts` around lines 65 - 70, the uuidv4Fallback currently generates UUIDs using Math.random, which is not crypto-secure; update uuidv4Fallback to try a crypto-backed generator first (e.g., call node:crypto.randomUUID or globalThis.crypto.randomUUID if available) and only fall back to the existing Math.random implementation if no crypto API exists, ensuring generateId and any other callers use the stronger source when possible.

packages/typescript/ai-mistral/src/adapters/text.ts (1)

619-625: Silent JSON-parse skip can mask real protocol errors.

Malformed SSE payloads are silently dropped (`catch {}`). That's fine for truly partial frames, but in practice it also hides provider-side protocol changes or proxy corruption that would otherwise surface as noisy user errors. Consider at minimum logging at `debug` level (gated) or surfacing after N consecutive failures, so a full stream of unparseable data doesn't present as a silent empty response.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 619 - 625, The SSE chunk parsing currently swallows JSON parse errors (catch { }) which can hide protocol issues; update the parsing loop that uses rawChunkToCamelCase(raw) to (1) log parse failures at debug level including the raw chunk and error (gated by logger.debug), (2) track consecutive parse failures with a counter (reset on success), and (3) after a small threshold (e.g. N consecutive failures) surface an error/emit a failure event instead of silently skipping so a full stream of unparseable data is not treated as an empty response. Ensure the log and counter live alongside the existing parsing loop and that successful yields call rawChunkToCamelCase(raw) as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 449-513: The code currently yields TEXT_MESSAGE_END and
RUN_FINISHED every time a chunk has choice.finishReason, causing duplicates; add
guards to prevent repeated emits by introducing hasEmittedTextMessageEnd and
hasEmittedRunFinished (similar to
hasEmittedToolCall/hasEmittedTextMessageStart), check hasEmittedTextMessageEnd
before yielding the TEXT_MESSAGE_END (and set it true after yielding), and check
hasEmittedRunFinished before yielding RUN_FINISHED (and set it true after
yielding); alternatively you can break out of the outer for-await loop after the
first terminal chunk is processed—update references around the
choice.finishReason handling, the TEXT_MESSAGE_END yield, and the RUN_FINISHED
yield to use these guards.
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Line 1: Change the incorrect combined import so HTTPClient is imported from
'@mistralai/mistralai/lib/http' while Mistral remains from
'@mistralai/mistralai'; update the import statements to use separate imports and
order them alphabetically (HTTPClient before Mistral) to satisfy ESLint. Locate
the existing import line that references Mistral and HTTPClient and replace it
with two imports that reference the correct modules for the symbols HTTPClient
and Mistral.
---
Duplicate comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 538-631: fetchRawMistralStream is not wiring an AbortSignal into
the fetch call or iterator, causing HTTP requests to keep running when consumers
abort; update RawStreamParams and/or MistralClientConfig to accept an
AbortSignal (e.g., signal) and read it in fetchRawMistralStream, then pass that
signal into fetch({... signal }) and check signal.aborted inside the read loop
(or attach signal.addEventListener('abort') to cancel/throw and ensure
reader.cancel() runs). Ensure references in your change point to
fetchRawMistralStream, RawStreamParams, MistralClientConfig (or the
options.signal field) so the HTTP request and async iterator properly cancel
when aborted.
---
Nitpick comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 619-625: The SSE chunk parsing currently swallows JSON parse
errors (catch { }) which can hide protocol issues; update the parsing loop that
uses rawChunkToCamelCase(raw) to (1) log parse failures at debug level including
the raw chunk and error (gated by logger.debug), (2) track consecutive parse
failures with a counter (reset on success), and (3) after a small threshold
(e.g. N consecutive failures) surface an error/emit a failure event instead of
silently skipping so a full stream of unparseable data is not treated as an
empty response. Ensure the log and counter live alongside the existing parsing
loop and that successful yields call rawChunkToCamelCase(raw) as before.
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 65-70: The uuidv4Fallback currently generates UUIDs using
Math.random (in uuidv4Fallback), which is not crypto-secure; update
uuidv4Fallback to try using a crypto-backed generator first (e.g., call
node:crypto.randomUUID or globalThis.crypto.randomUUID if available) and only
fall back to the existing Math.random implementation if no crypto API exists,
ensuring generateId and any other callers use the stronger source when possible.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ea2be8ef-108b-43e5-a0ab-bfd74da759f2
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (13)
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/adapters/text.ts
- packages/typescript/ai-mistral/src/message-types.ts
- packages/typescript/ai-mistral/src/model-meta.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/utils/client.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
- testing/e2e/package.json
- testing/e2e/src/lib/feature-support.ts
- testing/e2e/src/lib/providers.ts
- testing/e2e/src/lib/types.ts
- testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (3)
- testing/e2e/package.json
- testing/e2e/src/lib/types.ts
- packages/typescript/ai-mistral/package.json
🚧 Files skipped from review as they are similar to previous changes (6)
- testing/e2e/src/lib/providers.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- testing/e2e/src/lib/feature-support.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
- packages/typescript/ai-mistral/src/model-meta.ts
```diff
@@ -0,0 +1,78 @@
+import { Mistral, HTTPClient } from '@mistralai/mistralai'
```
🧩 Analysis chain
🌐 Web query:
Does the @mistralai/mistralai npm package export HTTPClient from the package root, and does HTTPClient support addHook('beforeRequest', ...)?
💡 Result:
No, the @mistralai/mistralai npm package does not export HTTPClient from the package root. It is exported from a subpath "@mistralai/mistralai/lib/http". However, HTTPClient does support addHook('beforeRequest', ...), as demonstrated in the official documentation and README examples where httpClient.addHook("beforeRequest", (request) => { ... }) is used to modify requests, such as adding headers or timeouts.
Citations:
- 1: https://www.npmjs.com/package/@mistralai/mistralai/
- 2: https://registry.npmjs.org/%40mistralai%2Fmistralai
- 3: https://www.npmjs.com/package/@mistralai/mistralai?activeTab=readme
- 4: https://www.npmjs.com/package/@mistralai/mistralai
- 5: https://github.com/mistralai/client-ts
- 6: https://github.com/mistralai/client-ts/tree/main
- 7: https://github.com/mistralai/client-ts/blob/main/README.md
Fix HTTPClient import path — currently importing from the wrong module.
HTTPClient is not exported from the @mistralai/mistralai package root; it must be imported from @mistralai/mistralai/lib/http. The current import will fail at runtime. Additionally, sort imports alphabetically per ESLint.
Fix
```diff
-import { Mistral, HTTPClient } from '@mistralai/mistralai'
+import { Mistral } from '@mistralai/mistralai'
+import { HTTPClient } from '@mistralai/mistralai/lib/http'
```
+import { HTTPClient } from '@mistralai/mistralai/lib/http'📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
-import { Mistral, HTTPClient } from '@mistralai/mistralai'
+import { Mistral } from '@mistralai/mistralai'
+import { HTTPClient } from '@mistralai/mistralai/lib/http'
```
🧰 Tools
🪛 ESLint
[error] 1-1: Member 'HTTPClient' of the import declaration should be sorted alphabetically.
(sort-imports)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-mistral/src/utils/client.ts` at line 1, Change the
incorrect combined import so HTTPClient is imported from
'@mistralai/mistralai/lib/http' while Mistral remains from
'@mistralai/mistralai'; update the import statements to use separate imports and
order them alphabetically (HTTPClient before Mistral) to satisfy ESLint. Locate
the existing import line that references Mistral and HTTPClient and replace it
with two imports that reference the correct modules for the symbols HTTPClient
and Mistral.
♻️ Duplicate comments (3)
packages/typescript/ai-mistral/src/adapters/text.ts (2)
538-631: ⚠️ Potential issue | 🟠 Major

`fetchRawMistralStream` still lacks `AbortSignal` support.

The fetch call on Line 589 does not accept or forward a signal, so if the consumer of `chatStream` stops iterating (user navigation, outer run abort, downstream throw), the underlying HTTP request keeps running until the server closes it. The `reader.cancel()` in `finally` helps once the generator is actually disposed, but there is no way for callers to externally abort the request in flight.

Consider threading an `AbortSignal` through `TextOptions`/`MistralClientConfig` into `fetch({ signal })` so cancellation propagates to the server.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 538 - 631, The fetchRawMistralStream generator doesn't accept an AbortSignal; update the types (e.g., add signal?: AbortSignal to RawStreamParams or MistralClientConfig / TextOptions) and thread that signal into the fetch call (call fetch(url, { method: 'POST', headers, body: JSON.stringify(body), signal })) so the HTTP request can be cancelled externally; keep the existing finally block that calls reader.cancel() and releaseLock(), and ensure any callers of fetchRawMistralStream / chatStream propagate their AbortSignal into the new parameter.
449-513: ⚠️ Potential issue | 🟡 Minor

`RUN_FINISHED`/`TEXT_MESSAGE_END` can still be emitted multiple times.

The `ended` flag prevents duplicate `TOOL_CALL_END`, but the `TEXT_MESSAGE_END` (Line 489) and `RUN_FINISHED` (Line 499) emissions are still unconditional for every chunk where `choice.finishReason` is truthy. If Mistral (or a proxy) ever sends more than one terminal frame, downstream consumers will observe duplicate run-completion events.

Proposed guard

```diff
 let hasEmittedTextMessageStart = false
 let hasEmittedToolCall = false
+let hasEmittedTextMessageEnd = false
+let hasEmittedRunFinished = false
@@
-    if (choice.finishReason) {
+    if (choice.finishReason && !hasEmittedRunFinished) {
@@
-      if (hasEmittedTextMessageStart) {
+      if (hasEmittedTextMessageStart && !hasEmittedTextMessageEnd) {
+        hasEmittedTextMessageEnd = true
         yield { type: 'TEXT_MESSAGE_END', ... }
       }
@@
+      hasEmittedRunFinished = true
       yield { type: 'RUN_FINISHED', ... }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 449 - 513, the code can emit TEXT_MESSAGE_END and RUN_FINISHED repeatedly when multiple terminal frames arrive; add idempotency guards: introduce a boolean like hasEmittedTextMessageEnd and another like hasEmittedRunFinished (or a single hasEmittedRunFinished used to gate both) and check them before yielding TEXT_MESSAGE_END and RUN_FINISHED (use existing aguiState.messageId and aguiState.runId to identify the run/message), set the flag(s) to true immediately after emitting so subsequent chunks with choice.finishReason do not emit duplicates; keep the TOOL_CALL_END behavior unchanged.

packages/typescript/ai-mistral/src/utils/client.ts (1)

1-1: ⚠️ Potential issue | 🔴 Critical

HTTPClient import path still appears incorrect.

`HTTPClient` is not exported from the `@mistralai/mistralai` package root — it must be imported from `@mistralai/mistralai/lib/http`. As written, the destructured `HTTPClient` will resolve to `undefined` and `new HTTPClient()` on Line 25 will throw at runtime whenever `defaultHeaders` is provided.

Proposed fix

```diff
-import { Mistral, HTTPClient } from '@mistralai/mistralai'
+import { Mistral } from '@mistralai/mistralai'
+import { HTTPClient } from '@mistralai/mistralai/lib/http'
```

Does `@mistralai/mistralai` 2.2.0 export HTTPClient from the package root, or only from `@mistralai/mistralai/lib/http`?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/utils/client.ts` at line 1, The import for HTTPClient is wrong: change the module import so Mistral still comes from '@mistralai/mistralai' but HTTPClient is imported from '@mistralai/mistralai/lib/http' (so that the constructor used by new HTTPClient(...) in this file resolves correctly); update the import statement that currently destructures HTTPClient from the package root to instead import HTTPClient from the lib/http path and keep the Mistral import as-is, then verify usages like new HTTPClient(...) and any defaultHeaders handling still work with the corrected class.
🧹 Nitpick comments (1)
packages/typescript/ai-mistral/src/adapters/text.ts (1)
295-297: Redundant string coercion.

`rawText` is already narrowed to `string` by the `|| ''` fallback on Line 295, so the `typeof rawText === 'string' ? rawText : String(rawText)` ternary on Lines 296-297 is dead code. Just use `rawText` directly (or drop the `|| ''` and keep the `String(...)` path — but not both).

Proposed simplification

```diff
-    const rawText = response.choices?.[0]?.message?.content || ''
-    const textContent =
-      typeof rawText === 'string' ? rawText : String(rawText)
+    const textContent = response.choices?.[0]?.message?.content ?? ''
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 295 - 297, The code performs redundant coercion: `rawText` is already a string due to the `|| ''` fallback, so the ternary that produces `textContent` is dead code; replace the ternary with a direct assignment `const textContent = rawText` (or alternatively remove the `|| ''` fallback and keep `const textContent = String(response.choices?.[0]?.message?.content)`), updating the variables `rawText`, `textContent`, and the `response.choices?.[0]?.message?.content` expression accordingly to remove the unnecessary conversion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 538-631: The fetchRawMistralStream generator doesn't accept an
AbortSignal; update the types (e.g., add signal?: AbortSignal to RawStreamParams
or MistralClientConfig / TextOptions) and thread that signal into the fetch call
(call fetch(url, { method: 'POST', headers, body: JSON.stringify(body), signal
})) so the HTTP request can be cancelled externally; keep the existing finally
block that calls reader.cancel() and releaseLock(), and ensure any callers of
fetchRawMistralStream / chatStream propagate their AbortSignal into the new
parameter.
- Around line 449-513: The code can emit TEXT_MESSAGE_END and RUN_FINISHED
repeatedly when multiple terminal frames arrive; add idempotency guards:
introduce a boolean like hasEmittedTextMessageEnd and another like
hasEmittedRunFinished (or a single hasEmittedRunFinished used to gate both) and
check them before yielding TEXT_MESSAGE_END and RUN_FINISHED (use existing
aguiState.messageId and aguiState.runId to identify the run/message), set the
flag(s) to true immediately after emitting so subsequent chunks with
choice.finishReason do not emit duplicates; keep the TOOL_CALL_END behavior
unchanged.
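The AbortSignal threading described above can be sketched as follows. `RawStreamParams` and `fetchRawStream` are illustrative stand-ins for the PR's actual types and helper, not its real API; the point is only that the signal travels from the params into the `fetch` call while the existing `finally` cleanup stays in place:

```typescript
// Illustrative sketch: thread an optional AbortSignal into a streaming fetch.
// Names here (RawStreamParams, fetchRawStream) are placeholders, not the PR's API.
interface RawStreamParams {
  url: string
  headers?: Record<string, string>
  body: unknown
  signal?: AbortSignal // new: lets callers cancel the in-flight request
}

async function* fetchRawStream({
  url,
  headers,
  body,
  signal,
}: RawStreamParams): AsyncGenerator<Uint8Array> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'content-type': 'application/json', ...headers },
    body: JSON.stringify(body),
    signal, // forwarded, so controller.abort() rejects the pending request/read
  })
  if (!res.body) return
  const reader = res.body.getReader()
  try {
    for (;;) {
      const { done, value } = await reader.read()
      if (done) break
      yield value
    }
  } finally {
    // mirrors the existing cleanup: cancel the stream and release the lock
    await reader.cancel().catch(() => {})
    reader.releaseLock()
  }
}
```

A caller would then construct an `AbortController` and pass `controller.signal` through `chatStream` down to this helper; aborting rejects the pending read with an `AbortError`.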
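The idempotency guard suggested above can be sketched with assumed chunk and event shapes (these are simplified placeholders, not the PR's real `StreamChunk` types): a boolean per terminal event gates the emission, so repeated frames carrying a `finishReason` cannot emit duplicates.

```typescript
// Simplified sketch of guarding terminal emissions; types are illustrative.
type Chunk = { delta?: string; finishReason?: string }
type StreamEvent =
  | { type: 'TEXT_MESSAGE_CONTENT'; text: string }
  | { type: 'TEXT_MESSAGE_END' }
  | { type: 'RUN_FINISHED' }

function* emitTerminalOnce(chunks: Iterable<Chunk>): Generator<StreamEvent> {
  let hasEmittedTextMessageEnd = false
  let hasEmittedRunFinished = false
  for (const chunk of chunks) {
    if (chunk.delta) {
      yield { type: 'TEXT_MESSAGE_CONTENT', text: chunk.delta }
    }
    if (chunk.finishReason) {
      // flags are set immediately after emitting, making terminal events
      // idempotent across multiple frames that carry a finishReason
      if (!hasEmittedTextMessageEnd) {
        hasEmittedTextMessageEnd = true
        yield { type: 'TEXT_MESSAGE_END' }
      }
      if (!hasEmittedRunFinished) {
        hasEmittedRunFinished = true
        yield { type: 'RUN_FINISHED' }
      }
    }
  }
}
```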
In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Line 1: The import for HTTPClient is wrong: change the module import so
Mistral still comes from '@mistralai/mistralai' but HTTPClient is imported from
'@mistralai/mistralai/lib/http' (so that the constructor used by new
HTTPClient(...) in this file resolves correctly); update the import statement
that currently destructures HTTPClient from the package root to instead import
HTTPClient from the lib/http path and keep the Mistral import as-is, then verify
usages like new HTTPClient(...) and any defaultHeaders handling still work with
the corrected class.
---
Nitpick comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 295-297: The code performs redundant coercion: `rawText` is
already a string due to the `|| ''` fallback, so the ternary that produces
`textContent` is dead code; replace the ternary with a direct assignment `const
textContent = rawText` (or alternatively remove the `|| ''` fallback and keep
`const textContent = String(response.choices?.[0]?.message?.content)`), updating
the variables `rawText`, `textContent`, and the
`response.choices?.[0]?.message?.content` expression accordingly to remove the
unnecessary conversion.
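The simplified extraction from the nitpick above can be sketched like this, assuming `content` is `string | null | undefined` (the `ChatResponse` shape here is hypothetical, not the SDK's actual type):

```typescript
// Hypothetical response shape for illustration only.
interface ChatResponse {
  choices?: Array<{ message?: { content?: string | null } }>
}

function extractText(response: ChatResponse): string {
  // `??` only falls back on null/undefined, so an intentional empty
  // string passes through unchanged (unlike the old `|| ''` fallback)
  return response.choices?.[0]?.message?.content ?? ''
}
```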
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 4446ad0a-e71c-4b1d-bf33-4890ae9ec538
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (13)
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/adapters/text.ts
- packages/typescript/ai-mistral/src/message-types.ts
- packages/typescript/ai-mistral/src/model-meta.ts
- packages/typescript/ai-mistral/src/tools/function-tool.ts
- packages/typescript/ai-mistral/src/utils/client.ts
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
- testing/e2e/package.json
- testing/e2e/src/lib/feature-support.ts
- testing/e2e/src/lib/providers.ts
- testing/e2e/src/lib/types.ts
- testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (6)
- testing/e2e/package.json
- packages/typescript/ai-mistral/src/utils/schema-converter.ts
- testing/e2e/src/lib/types.ts
- packages/typescript/ai-mistral/package.json
- packages/typescript/ai-mistral/src/model-meta.ts
- packages/typescript/ai-mistral/src/message-types.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- testing/e2e/tests/test-matrix.ts
- testing/e2e/src/lib/providers.ts
- packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
---
@AlemTuzlak what do you think?
---
View your CI Pipeline Execution ↗ for commit 23ad8e0

☁️ Nx Cloud last updated this comment at
---
View your CI Pipeline Execution ↗ for commit 018ed5e

☁️ Nx Cloud last updated this comment at
---
@AlemTuzlak would love to get started using the mistral provider :)
---
@cstrnt mind making sure the ci checks are green first?
Co-authored-by: Copilot <copilot@github.com>
🎯 Changes
We're working with Mistral's tooling and love the ai package, so I added an adapter for Mistral's API using their client package.
✅ Checklist
the contributing.md file doesn't exist anymore :(
`pnpm run test:pr`.

🚀 Release Impact
Summary by CodeRabbit