
feat: add Mistral provider adapter for TanStack AI#462

Open
cstrnt wants to merge 8 commits into TanStack:main from cstrnt:main

Conversation


@cstrnt cstrnt commented Apr 17, 2026

🎯 Changes

We're working with Mistral's tooling but love the ai package, so I added an adapter for Mistral's API using their official client package.

✅ Checklist

  • I have followed the steps in the Contributing guide.
    (Note: the contributing.md file doesn't exist anymore :( )
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • New Features
    • Added a Mistral adapter: streaming chat, tool/function calling, multimodal text+image input, structured JSON Schema output, and support for multiple Mistral model variants.
  • Documentation
    • New README with installation, env setup, usage examples, supported models, and feature details.
  • Tests
    • New unit tests for streaming/tool-call behaviors and E2E updates: Mistral added to test matrix and provider tooling.

- Implemented MistralTextAdapter and related functions for chat completions.
- Defined Mistral-specific message types and metadata structures.
- Created model metadata for various Mistral models with pricing and capabilities.
- Added text provider options and validation for Mistral text models.
- Developed utility functions for Mistral client configuration and schema conversion.
- Implemented function tool conversion for Mistral-specific formats.
- Added tests for Mistral adapters, including event emissions and error handling.
- Configured TypeScript and Vite for the new package.

coderabbitai Bot commented Apr 17, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Walkthrough

Adds a new package @tanstack/ai-mistral implementing a Mistral adapter for TanStack AI: streaming chat (SSE), structured JSON-schema output, tool/function calling, model metadata, utilities, tests, build/config, README, and E2E integration.

Changes

• Release Note (.changeset/add-ai-mistral.md)
  Adds a changeset marking @tanstack/ai-mistral as a minor release.
• Package Manifest & README (packages/typescript/ai-mistral/package.json, packages/typescript/ai-mistral/README.md)
  New package manifest, exports, scripts, runtime dependency on @mistralai/mistralai, peer deps, and README with install, env, usage, supported models and features.
• Core Adapter & Entrypoint (packages/typescript/ai-mistral/src/adapters/text.ts, packages/typescript/ai-mistral/src/index.ts)
  Adds MistralTextAdapter with streaming chatStream() (manual SSE parsing) and structuredOutput() (JSON-schema mode), plus createMistralText/mistralText factories and exported types.
• Message Types & Provider Options (packages/typescript/ai-mistral/src/message-types.ts, packages/typescript/ai-mistral/src/text/text-provider-options.ts)
  Defines Mistral request/response types, content part unions, tool/function schemas, response-format descriptors, and provider option types with a validate entrypoint.
• Model Metadata (packages/typescript/ai-mistral/src/model-meta.ts)
  Per-model metadata objects, the MISTRAL_CHAT_MODELS list, modality/provider-option maps, and conditional utility types for resolving options/modalities.
• Tools Conversion (packages/typescript/ai-mistral/src/tools/*: function-tool.ts, tool-converter.ts, index.ts)
  Converters to map generic Tool definitions to Mistral function-tool payloads (JSON Schema normalization) and tools index re-exports.
• Utilities (packages/typescript/ai-mistral/src/utils/*: client.ts, schema-converter.ts, index.ts)
  Client factory, env API-key resolver, ID generator, schema converters (makeMistralStructuredOutputCompatible, transformNullsToUndefined) and re-exports.
• Tests (packages/typescript/ai-mistral/tests/mistral-adapter.test.ts)
  Vitest suite mocking @mistralai/mistralai and fetch SSE; verifies adapter factories, env handling, streaming events (RUN_STARTED, TEXT_MESSAGE_*, TOOL_CALL_*, RUN_FINISHED, RUN_ERROR), tool-call parsing, and accumulation semantics.
• Build & Test Config (packages/typescript/ai-mistral/tsconfig.json, vite.config.ts, vitest.config.ts)
  Adds TypeScript, Vite, and Vitest configs (build entry, test patterns, coverage).
• E2E Test Integration (testing/e2e/package.json, testing/e2e/src/lib/feature-support.ts, testing/e2e/src/lib/providers.ts, testing/e2e/src/lib/types.ts, testing/e2e/tests/test-matrix.ts)
  Adds a workspace dependency on @tanstack/ai-mistral; registers the mistral provider and default model; updates the feature-support matrix and test-matrix inclusion.

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client
  participant Adapter as MistralTextAdapter
  participant Fetch as Fetch/SSE
  participant SDK as `@mistralai/mistralai`

  Client->>Adapter: generate() / chatStream(messages, options)
  Adapter->>SDK: prepare request payload / map messages & tools
  Adapter->>Fetch: POST /v1/chat/completions?stream=true (snake_case payload)
  Fetch-->>Adapter: SSE chunks (data: ...), [DONE]
  Adapter-->>Client: yield StreamChunk events (RUN_STARTED → TEXT_MESSAGE_START → TEXT_MESSAGE_CONTENT* → TEXT_MESSAGE_END, TOOL_CALL_START/ARGS/END, RUN_FINISHED/RUN_ERROR)
  Adapter->>SDK: (non-stream) call parse for structuredOutput → parse JSON → transform nulls→undefined
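The streaming flow above hinges on parsing Server-Sent Events by hand. A minimal, illustrative sketch of that parsing step in TypeScript (parseSseLines is a hypothetical name; the adapter's actual implementation may differ):

```typescript
// Sketch: split an SSE text buffer into lines, keep only "data:" lines,
// stop at the [DONE] sentinel, and JSON-parse each remaining payload.
function* parseSseLines(buffer: string): Generator<Record<string, unknown>> {
  for (const line of buffer.split('\n')) {
    const trimmed = line.trim()
    if (!trimmed.startsWith('data:')) continue
    const payload = trimmed.slice('data:'.length).trim()
    if (payload === '[DONE]') return // end-of-stream sentinel
    yield JSON.parse(payload) as Record<string, unknown>
  }
}
```

In the real adapter each parsed payload would then be mapped to the StreamChunk events listed in the diagram.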

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🐰 A Mistral breeze across the branch,
New adapters hop and streams advance,
Tools hum JSON, chunks arrive in time,
Schemas strict, events in rhyme,
A rabbit nods: "Release—savour the climb!"

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 64.71%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The title 'feat: add Mistral provider adapter for TanStack AI' clearly and specifically describes the main change: adding a new Mistral provider adapter to the TanStack AI package.
  • Description check: ✅ Passed. The PR description addresses the 'Changes' section with adequate explanation and includes the Release Impact checklist with the changeset marked as completed, but does not fully complete all contributing guide steps (noting the file no longer exists).

✏️ Tip: You can configure your own custom pre-merge checks in the settings.




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 14

🧹 Nitpick comments (10)
packages/typescript/ai-mistral/src/utils/client.ts (1)

17-24: Treat timeoutMs: 0 explicitly and forward all relevant SDK options.

timeoutMs ? { timeoutMs } drops the value when it's 0. Although 0 is rarely a useful timeout, the check is truthiness-based and should be an explicit !== undefined to preserve user intent. Same nit applies to serverURL if callers pass an empty string (though that's less meaningful).

Proposed fix
-  const { apiKey, serverURL, timeoutMs } = config
-  return new Mistral({
-    apiKey,
-    ...(serverURL ? { serverURL } : {}),
-    ...(timeoutMs ? { timeoutMs } : {}),
-  })
+  const { apiKey, serverURL, timeoutMs } = config
+  return new Mistral({
+    apiKey,
+    ...(serverURL !== undefined ? { serverURL } : {}),
+    ...(timeoutMs !== undefined ? { timeoutMs } : {}),
+  })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/utils/client.ts` around lines 17 - 24, The
createMistralClient function currently uses truthy checks which drop intentional
values like timeoutMs: 0 and possibly empty serverURL; update
createMistralClient to explicitly check for undefined (e.g., timeoutMs !==
undefined and serverURL !== undefined) when spreading options so zero or
empty-string values are preserved, and also ensure any other SDK options present
on MistralClientConfig are forwarded into the new Mistral({...}) call (refer to
createMistralClient and the Mistral constructor to add those properties).
packages/typescript/ai-mistral/src/utils/schema-converter.ts (1)

8-29: transformNullsToUndefined will mangle Date, Map, Set, class instances, etc.

Because typeof returns 'object' for those, they'll be rebuilt as plain Record<string, unknown> via Object.entries. This is likely fine for JSON-parsed Mistral responses (which won't contain such values), but worth a brief code comment or a plain-object guard (Object.getPrototypeOf(obj) === Object.prototype) to avoid surprises if this util gets reused.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts` around lines 8
- 29, transformNullsToUndefined currently treats any typeof === 'object' as a
plain object and will rebuild Dates, Maps, Sets and class instances via
Object.entries; add a plain-object guard (e.g. check Object.getPrototypeOf(obj)
=== Object.prototype or an isPlainObject helper) before iterating entries and
only recurse/construct a Record for true plain objects, otherwise return the
original object unchanged, and add a short comment explaining the guard to
prevent mangling Date/Map/Set/class instances; reference the
transformNullsToUndefined function and the block that uses Object.entries(obj as
Record<string, unknown>) for where to apply the change.
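The suggested plain-object guard could look like this sketch (isPlainObject is a hypothetical helper name, and the signature of transformNullsToUndefined is simplified; the real utility may differ):

```typescript
// Only treat values whose prototype is Object.prototype as plain objects,
// so Date, Map, Set, and class instances pass through untouched.
function isPlainObject(value: unknown): value is Record<string, unknown> {
  return (
    typeof value === 'object' &&
    value !== null &&
    Object.getPrototypeOf(value) === Object.prototype
  )
}

function transformNullsToUndefined(value: unknown): unknown {
  if (value === null) return undefined
  if (Array.isArray(value)) return value.map(transformNullsToUndefined)
  if (isPlainObject(value)) {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) => [k, transformNullsToUndefined(v)]),
    )
  }
  // Date, Map, Set, class instances, primitives: return unchanged
  return value
}
```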
packages/typescript/ai-mistral/src/model-meta.ts (2)

33-207: Minor: consider a single data-driven table instead of per-model constants.

Eleven nearly-identical const X = { ... } as const satisfies ModelMeta<…> blocks, plus three parallel lists/maps (MISTRAL_CHAT_MODELS, MistralModelInputModalitiesByName, MistralChatModelProviderOptionsByName) make adding/removing a model a multi-place edit that's easy to get out of sync. Consider defining a single MISTRAL_MODELS = { 'mistral-large-latest': {...}, ... } as const satisfies Record<string, ModelMeta<MistralTextProviderOptions>> and deriving the union / modality map from it. Not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 33 - 207,
Replace the many per-model constants (e.g., MISTRAL_LARGE_LATEST,
MISTRAL_MEDIUM_LATEST, ... OPEN_MISTRAL_NEMO) and the parallel collections
(MISTRAL_CHAT_MODELS, MistralModelInputModalitiesByName,
MistralChatModelProviderOptionsByName) with a single data-driven map like
MISTRAL_MODELS typed as const satisfies Record<string,
ModelMeta<MistralTextProviderOptions>>; then derive the chat model list and the
modality/provider-option maps from that single MISTRAL_MODELS object so
adding/removing a model only requires updating one place and keeps types (and
unions) inferred from the map.
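A sketch of the suggested single-table shape (ModelMeta here is a minimal stand-in for the package's real type, and the model entries and context-window numbers are placeholders, not Mistral's actual specs):

```typescript
// Minimal stand-in for the package's ModelMeta type.
interface ModelMeta {
  name: string
  contextWindow: number
}

// One data-driven table: adding or removing a model is a single edit here.
const MISTRAL_MODELS = {
  'mistral-large-latest': { name: 'mistral-large-latest', contextWindow: 131_000 },
  'mistral-small-latest': { name: 'mistral-small-latest', contextWindow: 131_000 },
} as const satisfies Record<string, ModelMeta>

// The model-name union and the list are derived from the table,
// so they can never drift out of sync with it.
type MistralChatModel = keyof typeof MISTRAL_MODELS
const MISTRAL_CHAT_MODELS = Object.keys(MISTRAL_MODELS) as Array<MistralChatModel>
```

The modality and provider-option maps could be derived the same way, keeping the literal types via `as const satisfies`.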

251-261: Per-model provider options don't actually differ.

MistralChatModelProviderOptionsByName maps every model to the same MistralTextProviderOptions, so ResolveProviderOptions<TModel> is effectively a constant. This means the model-specific provider-options machinery adds indirection without providing any per-model type safety. If Mistral truly has homogeneous request params today, consider collapsing ResolveProviderOptions down to just MistralTextProviderOptions (and dropping the map) — you can reintroduce per-model specialization later if a model gains unique options.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 251 - 261, The
current MistralChatModelProviderOptionsByName maps every entry of
MISTRAL_CHAT_MODELS to the same MistralTextProviderOptions, making
ResolveProviderOptions<TModel> redundant; replace the indirection by changing
ResolveProviderOptions to simply be MistralTextProviderOptions (and remove or
deprecate MistralChatModelProviderOptionsByName), i.e., eliminate the map and
export ResolveProviderOptions = MistralTextProviderOptions so the code uses a
single concrete provider options type while preserving the ability to
reintroduce per-model mappings later if needed (refer to the symbols
MistralChatModelProviderOptionsByName, ResolveProviderOptions,
MistralTextProviderOptions, and MISTRAL_CHAT_MODELS to locate and update the
types).
packages/typescript/ai-mistral/src/text/text-provider-options.ts (1)

110-114: No-op validator.

validateTextProviderOptions is a stub with the comment acknowledging Mistral handles validation server-side. That's fine, but consider at least validating obvious client-catchable issues to fail fast (e.g., n && n < 1, temperature range, messages.length > 0) — these avoid a round-trip to the API for trivial mistakes. Deferring entirely to Mistral means users get slower, less actionable errors.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/text/text-provider-options.ts` around
lines 110 - 114, The validator validateTextProviderOptions currently does
nothing; add fast client-side checks inside that function (referencing
validateTextProviderOptions and type InternalTextProviderOptions) to throw clear
errors for obvious misconfigurations: ensure numeric counts like n are integers
>= 1, temperature is within the allowed range (e.g. 0–1 or your provider's
accepted bounds), messages (or prompt) is present and messages.length > 0, and
any other obvious bounds (e.g. max_tokens/top_p if present) are within valid
ranges; throw descriptive Error instances so callers fail fast instead of
waiting for the Mistral API round-trip.
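A minimal sketch of such fail-fast checks (the field names and bounds follow the comment above; they are illustrative, not the package's actual option types):

```typescript
// Illustrative option shape; the real InternalTextProviderOptions differs.
interface TextOptions {
  n?: number
  temperature?: number
  messages: Array<unknown>
}

// Catch trivially invalid requests client-side instead of paying an API round-trip.
function validateTextProviderOptions(options: TextOptions): void {
  if (options.n !== undefined && (!Number.isInteger(options.n) || options.n < 1)) {
    throw new Error(`"n" must be an integer >= 1, got ${options.n}`)
  }
  if (
    options.temperature !== undefined &&
    (options.temperature < 0 || options.temperature > 1)
  ) {
    throw new Error(`"temperature" must be between 0 and 1, got ${options.temperature}`)
  }
  if (options.messages.length === 0) {
    throw new Error('"messages" must contain at least one message')
  }
}
```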
packages/typescript/ai-mistral/src/tools/function-tool.ts (1)

27-42: Remove the redundant additionalProperties: false assignment at line 32.

makeMistralStructuredOutputCompatible already sets additionalProperties = false inside the if (result.type === 'object') block (schema-converter.ts line 91). The assignment here is redundant for objects and incorrectly applied to non-object types where it has no semantic meaning.

The type compatibility is fine—FunctionParameters is { [key: string]: unknown }, which accepts the Record<string, any> return type of makeMistralStructuredOutputCompatible without requiring a cast.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/tools/function-tool.ts` around lines 27 -
42, Remove the redundant assignment to jsonSchema.additionalProperties = false
in the function-tool return path: the helper
makeMistralStructuredOutputCompatible already sets additionalProperties = false
for object schemas (see makeMistralStructuredOutputCompatible), and setting it
here can incorrectly apply to non-object types; simply delete the line that
assigns additionalProperties and leave the returned object (type: 'function',
function: { name: tool.name, description: tool.description, parameters:
jsonSchema, strict: true }) as-is (it remains compatible with
FunctionTool/FunctionParameters).
packages/typescript/ai-mistral/tests/mistral-adapter.test.ts (1)

378-406: Chunk shape is correct, but test setup is inconsistent.

The pre-wrapped { data: {...} } shape at lines 378-406 is correct—the Mistral SDK wraps stream events with a .data property. However, this test manually wraps chunks while most others (lines 119, 222, 428) pass raw chunks to setupMockStream, which does the wrapping. Both approaches work and produce the correct shape, but for consistency, use setupMockStream here as well instead of manually creating errorIterable with pre-wrapped chunks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts` around lines
378 - 406, The test manually constructs an async iterable named errorIterable
with pre-wrapped streamChunks, which is inconsistent with other tests; replace
the manual async iterable with a call to setupMockStream(streamChunks) so the
helper performs the required .data wrapping and error behavior, updating
references to errorIterable and reusing the existing setupMockStream helper used
elsewhere in this test suite.
packages/typescript/ai-mistral/src/adapters/text.ts (2)

93-101: Redundant kind override and dropped config.

readonly kind = 'text' as const is already declared on BaseTextAdapter (see packages/typescript/ai-adapters/.../adapter.ts line 116 in the provided snippet), so the override at Line 93 is dead code. More importantly, super({}, model) at Line 99 discards the caller's config — any future base-class behavior that depends on TextAdapterConfig (timeouts, headers, etc.) will silently ignore user configuration.

-  readonly kind = 'text' as const
   readonly name = 'mistral' as const

   private client: Mistral

   constructor(config: MistralTextConfig, model: TModel) {
-    super({}, model)
+    super(config, model)
     this.client = createMistralClient(config)
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 93 - 101,
The constructor in MistralTextAdapter wrongly overrides an already-declared kind
and discards the passed config by calling super({}, model); update the class to
remove the redundant readonly kind declaration (since BaseTextAdapter provides
it) and pass the incoming config into the base constructor (call super(config,
model)) so TextAdapterConfig (timeouts/headers/etc.) is preserved; keep the
client initialization via createMistralClient(config) and ensure the private
client: Mistral and readonly name = 'mistral' remain unchanged.

282-338: Tool-call args are serialized twice per delta.

JSON.stringify(toolCallDelta.function.arguments) runs at Line 308 to accumulate, and again at Line 328 to emit TOOL_CALL_ARGS. For object-valued delta args this is wasted work on the hot streaming path. (Correctness is fine on the first tool-call chunk: started is set at Line 313 before the toolCall.started check at Line 324 is evaluated, so the first chunk's args do get emitted.) Still, the double stringify is worth consolidating.

♻️ Compute `argsDelta` once per delta
-            if (toolCallDelta.function.arguments !== undefined) {
-              const argsDelta =
-                typeof toolCallDelta.function.arguments === 'string'
-                  ? toolCallDelta.function.arguments
-                  : JSON.stringify(toolCallDelta.function.arguments)
-              toolCall.arguments += argsDelta
-            }
+            const argsDelta =
+              toolCallDelta.function.arguments === undefined
+                ? undefined
+                : typeof toolCallDelta.function.arguments === 'string'
+                  ? toolCallDelta.function.arguments
+                  : JSON.stringify(toolCallDelta.function.arguments)
+            if (argsDelta !== undefined) {
+              toolCall.arguments += argsDelta
+            }

             if (toolCall.id && toolCall.name && !toolCall.started) {
               toolCall.started = true
               yield { /* TOOL_CALL_START */ ... }
             }

-            if (toolCallDelta.function.arguments !== undefined && toolCall.started) {
-              const argsDelta =
-                typeof toolCallDelta.function.arguments === 'string'
-                  ? toolCallDelta.function.arguments
-                  : JSON.stringify(toolCallDelta.function.arguments)
+            if (argsDelta !== undefined && toolCall.started) {
               yield {
                 type: 'TOOL_CALL_ARGS',
                 ...
                 delta: argsDelta,
               }
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 282 - 338,
The loop over deltaToolCalls double-serializes object args; compute the argument
delta once per iteration and reuse it for both accumulating into
toolCallsInProgress and for emitting the TOOL_CALL_ARGS event. Inside the
for-loop handling toolCallDelta (referencing toolCallDelta, toolCallsInProgress,
toolCall, TOOL_CALL_START and TOOL_CALL_ARGS), when
toolCallDelta.function.arguments !== undefined, create a single const argsDelta
= typeof ... ? ... : JSON.stringify(...), use argsDelta to append to
toolCall.arguments, and use the same argsDelta in the yield for the
TOOL_CALL_ARGS event instead of re-running JSON.stringify.
packages/typescript/ai-mistral/src/message-types.ts (1)

141-149: schemaDefinition should be required for ResponseFormatJsonSchema.

The adapter's structuredOutput always sets schemaDefinition and strict (text.ts Line 173-176), and a JSON-schema response format without a schema is meaningless. Making schemaDefinition required reflects actual usage and catches call sites that forget it.

 export interface ResponseFormatJsonSchema {
   type: 'json_schema'
   jsonSchema: {
     name: string
     description?: string
-    schemaDefinition?: { [key: string]: unknown }
+    schemaDefinition: { [key: string]: unknown }
     strict?: boolean
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/message-types.ts` around lines 141 - 149,
The ResponseFormatJsonSchema interface defines schemaDefinition as optional but
the adapter (structuredOutput in text.ts) always provides schemaDefinition and
strict; make schemaDefinition required to match actual usage by changing
ResponseFormatJsonSchema so jsonSchema.schemaDefinition is non-optional, update
any types/usages that expect it optional, and run typechecks to fix call sites
that now must pass schemaDefinition explicitly (look for
ResponseFormatJsonSchema, structuredOutput, and any callers constructing
json_schema responses).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai-mistral/package.json`:
- Around line 15-20: Add explicit /adapters subpath exports so consumers can
import tree-shakeable adapters instead of the whole package; update the
"exports" object to include entries like "./adapters/text" that map "types" to
"./dist/esm/adapters/text.d.ts" and "import" to "./dist/esm/adapters/text.js"
(and add any other planned adapter entrypoints similarly), ensuring the entry
names match the built files under dist/esm and follow the existing "./": {
"types", "import" } shape used for ".".

In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 146-150: Remove the leftover console.* debug logging in the
adapter (e.g., the console.error calls inside the chatStream error handler and
the other similar spots around lines shown); instead rely on emitting the
existing RUN_ERROR event with the error payload (error.message/code) so errors
are propagated without writing to stdout/stderr. Locate the error handlers in
the chatStream function and the other two instances and delete the
console.error/console.log statements, ensuring only the RUN_ERROR emission (or
existing telemetry call) remains.
- Around line 116-151: The catch in chatStream currently yields a RUN_ERROR and
then swallows the exception; update chatStream (the try/catch around
this.client.chat.stream(...) and the catch block) to rethrow the original error
after emitting RUN_ERROR so callers receive a rejection; specifically keep the
hasEmittedRunStarted logic and the RUN_STARTED/RUN_ERROR yields as-is, then add
a `throw err` (rethrow the same Error & { code?: string } captured as `err`) at
the end of that catch block so consumers iterating chatStream observe the
failure; alternatively, if you prefer a single error surface, remove the
internal try/catch inside processMistralStreamChunks so errors bubble to
chatStream instead (choose one approach and implement consistently).
- Around line 340-403: The adapter can misreport computedFinishReason as
'tool_calls' based solely on toolCallsInProgress.size; add a boolean flag (e.g.,
hasEmittedToolCall) initialized false and set it true whenever you yield a
TOOL_CALL_END (inside the loop that inspects toolCallsInProgress), then compute
computedFinishReason using (choice.finishReason === 'tool_calls' ||
hasEmittedToolCall) instead of checking toolCallsInProgress.size > 0 so that
RUN_FINISHED only reports 'tool_calls' when a TOOL_CALL_END was actually
emitted.
- Around line 440-489: mapTextOptionsToMistral currently validates modelOptions
via validateTextProviderOptions but never includes them in the returned payload,
dropping all Mistral-specific fields; modify mapTextOptionsToMistral to merge
the validated modelOptions into the returned object (after converting camelCase
keys to the snake_case names expected by the Mistral SDK) so fields like
response_format, frequency_penalty, presence_penalty, stop, random_seed,
safe_prompt, tool_choice, parallel_tool_calls, n, prediction, top_p and
max_tokens are forwarded; keep existing behavior for tools
(convertToolsToProviderFormat) and messages, and ensure the merged object still
matches the expected return shape of mapTextOptionsToMistral and is used by
chatStream/structuredOutput callers.
- Around line 117-119: Replace the unsafe "as any" casts on the request
parameter objects passed to this.client.chat.stream and
this.client.chat.complete with proper inferred parameter types so SDK signature
changes surface at compile time; specifically, type the stream request as
Parameters<typeof this.client.chat.stream>[0] and the complete request as
Parameters<typeof this.client.chat.complete>[0] (remove the casts at the stream
and complete call sites), while still using your custom message-types where
appropriate and mapping them into these strongly-typed parameter objects.

In `@packages/typescript/ai-mistral/src/message-types.ts`:
- Around line 151-183: Replace the empty interface declarations that currently
make `{}` overly permissive — specifically MistralTextMetadata,
MistralAudioMetadata, MistralVideoMetadata, and MistralDocumentMetadata — with a
truly empty-object type such as Record<string, never> (or add a private
placeholder field) so consumers of MistralMessageMetadataByModality get proper
type safety; update the four interface declarations to use Record<string, never>
(or include a clearly named placeholder property) to prevent values like 42 or
string from type-checking as valid metadata.

In `@packages/typescript/ai-mistral/src/model-meta.ts`:
- Around line 33-47: The MISTRAL_LARGE_LATEST model meta has outdated context
window and pricing; update the constant MISTRAL_LARGE_LATEST (the object
satisfying ModelMeta<MistralTextProviderOptions>) so context_window is 256_000
and set pricing.input.normal to 0.5 and pricing.output.normal to 1.5 to match
current Mistral specs; keep the other fields (name, max_completion_tokens,
supports) unchanged and ensure the object still uses "as const satisfies
ModelMeta<MistralTextProviderOptions>".

In `@packages/typescript/ai-mistral/src/tools/function-tool.ts`:
- Around line 17-32: The code mutates caller-provided tool.inputSchema via
inputSchema.properties = {}, which can leak state across uses; instead, create a
non-mutating copy before modifying: build a new schema object (e.g., copy
inputSchema and set properties to inputSchema.properties ?? {}) and pass that
copy into makeMistralStructuredOutputCompatible, then set
jsonSchema.additionalProperties = false; reference symbols: tool.inputSchema,
inputSchema, makeMistralStructuredOutputCompatible, jsonSchema.

In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 51-53: The generateId function produces a short, variable-length,
collision-prone suffix by using Math.random().toString(36).substring(7); update
generateId to produce a stable, high-entropy ID: either replace the random tail
with crypto.randomUUID() (available in Node 14.17+/browsers) and format as
`${prefix}-${crypto.randomUUID()}`, or if you must keep Math.random(), use a
fixed-length slice such as Math.random().toString(36).substring(2,10) (or
repeat/concat to reach desired length) so the suffix is deterministic in length;
locate and update the generateId function in client.ts accordingly.
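A sketch of the suggested fix using Node's built-in crypto.randomUUID, which yields a fixed-length, high-entropy suffix:

```typescript
import { randomUUID } from 'node:crypto'

// UUIDv4 suffix: always 36 characters, far more entropy than a
// Math.random().toString(36) tail, and never variable-length.
function generateId(prefix: string): string {
  return `${prefix}-${randomUUID()}`
}
```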

In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts`:
- Around line 53-83: The loop in makeMistralStructuredOutputCompatible recurses
into object/array properties but skips adding 'null' for optional nested fields;
update the branches handling prop.type === 'object' and prop.type === 'array'
(and their assignments to properties[propName]) so that after you recurse you
also apply the same nullability logic used in the else branch when wasOptional
is true: if the resulting property has a single string type, convert it to a
union [type, 'null']; if it already has an array type and doesn't include
'null', append 'null'. Ensure this is applied for the object branch (after
calling makeMistralStructuredOutputCompatible) and the array branch (after
wrapping items) while preserving existing type arrays.

In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts`:
- Around line 371-374: The test currently narrows runFinishedChunk with
chunks.find((c) => c.type === 'RUN_FINISHED') then checks
runFinishedChunk.finishReason, which can silently skip assertions if
runFinishedChunk is undefined; add an explicit assertion
expect(runFinishedChunk).toBeDefined() immediately after computing
runFinishedChunk (before the if / property access) to ensure the test fails when
no RUN_FINISHED chunk is emitted and then assert
expect(runFinishedChunk!.finishReason).toBe('tool_calls').

In `@packages/typescript/ai-mistral/vite.config.ts`:
- Around line 5-27: The Vitest test block inside the defineConfig object in
vite.config.ts is redundant with the dedicated vitest.config.ts and should be
removed: delete the entire test: { ... } block (the object assigned under config
in the defineConfig call) so vite.config.ts only contains non-test Vite config.
Also update any test include patterns to follow project guidelines by using
src/**/*.test.ts (adjust references that previously used tests/**/*.test.ts) and
move existing test files from tests/ into the corresponding src/ locations;
ensure vitest.config.ts remains the single source of truth for test
configuration.

In `@packages/typescript/ai-mistral/vitest.config.ts`:
- Around line 1-22: Remove the redundant vitest.config.ts file (which exports
defineConfig / default config) and consolidate its test block into the existing
vite.config.ts by keeping the test configuration (globals, environment, include,
coverage) only in vite.config.ts; delete the vitest.config.ts file so Vitest
will use the configuration from vite.config.ts like other adapter packages.

---

Nitpick comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 93-101: The constructor in MistralTextAdapter wrongly overrides an
already-declared kind and discards the passed config by calling super({},
model); update the class to remove the redundant readonly kind declaration
(since BaseTextAdapter provides it) and pass the incoming config into the base
constructor (call super(config, model)) so TextAdapterConfig
(timeouts/headers/etc.) is preserved; keep the client initialization via
createMistralClient(config) and ensure the private client: Mistral and readonly
name = 'mistral' remain unchanged.
- Around line 282-338: The loop over deltaToolCalls double-serializes object
args; compute the argument delta once per iteration and reuse it for both
accumulating into toolCallsInProgress and for emitting the TOOL_CALL_ARGS event.
Inside the for-loop handling toolCallDelta (referencing toolCallDelta,
toolCallsInProgress, toolCall, TOOL_CALL_START and TOOL_CALL_ARGS), when
toolCallDelta.function.arguments !== undefined, create a single const argsDelta
= typeof ... ? ... : JSON.stringify(...), use argsDelta to append to
toolCall.arguments, and use the same argsDelta in the yield for the
TOOL_CALL_ARGS event instead of re-running JSON.stringify.
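A minimal sketch of the single-serialization fix (the `toolCallDelta` shape is an assumption based on the comment above, not the SDK's exact type):

```typescript
// Compute the argument delta once per iteration and reuse it for both
// accumulation and the emitted TOOL_CALL_ARGS event.
interface ToolCallDelta {
  function: { arguments?: string | Record<string, unknown> }
}

function argumentDelta(delta: ToolCallDelta): string | undefined {
  const args = delta.function.arguments
  if (args === undefined) return undefined
  return typeof args === 'string' ? args : JSON.stringify(args)
}

// Inside the loop, one value feeds both sinks:
//   const argsDelta = argumentDelta(toolCallDelta)
//   if (argsDelta !== undefined) {
//     toolCall.arguments += argsDelta
//     yield { type: 'TOOL_CALL_ARGS', delta: argsDelta }
//   }
```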

In `@packages/typescript/ai-mistral/src/message-types.ts`:
- Around line 141-149: The ResponseFormatJsonSchema interface defines
schemaDefinition as optional but the adapter (structuredOutput in text.ts)
always provides schemaDefinition and strict; make schemaDefinition required to
match actual usage by changing ResponseFormatJsonSchema so
jsonSchema.schemaDefinition is non-optional, update any types/usages that expect
it optional, and run typechecks to fix call sites that now must pass
schemaDefinition explicitly (look for ResponseFormatJsonSchema,
structuredOutput, and any callers constructing json_schema responses).

In `@packages/typescript/ai-mistral/src/model-meta.ts`:
- Around line 33-207: Replace the many per-model constants (e.g.,
MISTRAL_LARGE_LATEST, MISTRAL_MEDIUM_LATEST, ... OPEN_MISTRAL_NEMO) and the
parallel collections (MISTRAL_CHAT_MODELS, MistralModelInputModalitiesByName,
MistralChatModelProviderOptionsByName) with a single data-driven map like
MISTRAL_MODELS typed as const satisfies Record<string,
ModelMeta<MistralTextProviderOptions>>; then derive the chat model list and the
modality/provider-option maps from that single MISTRAL_MODELS object so
adding/removing a model only requires updating one place and keeps types (and
unions) inferred from the map.
- Around line 251-261: The current MistralChatModelProviderOptionsByName maps
every entry of MISTRAL_CHAT_MODELS to the same MistralTextProviderOptions,
making ResolveProviderOptions<TModel> redundant; replace the indirection by
changing ResolveProviderOptions to simply be MistralTextProviderOptions (and
remove or deprecate MistralChatModelProviderOptionsByName), i.e., eliminate the
map and export ResolveProviderOptions = MistralTextProviderOptions so the code
uses a single concrete provider options type while preserving the ability to
reintroduce per-model mappings later if needed (refer to the symbols
MistralChatModelProviderOptionsByName, ResolveProviderOptions,
MistralTextProviderOptions, and MISTRAL_CHAT_MODELS to locate and update the
types).

In `@packages/typescript/ai-mistral/src/text/text-provider-options.ts`:
- Around line 110-114: The validator validateTextProviderOptions currently does
nothing; add fast client-side checks inside that function (referencing
validateTextProviderOptions and type InternalTextProviderOptions) to throw clear
errors for obvious misconfigurations: ensure numeric counts like n are integers
>= 1, temperature is within the allowed range (e.g. 0–1 or your provider's
accepted bounds), messages (or prompt) is present and messages.length > 0, and
any other obvious bounds (e.g. max_tokens/top_p if present) are within valid
ranges; throw descriptive Error instances so callers fail fast instead of
waiting for the Mistral API round-trip.
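A minimal sketch of the fail-fast validation described above; the option names and bounds here are assumptions, not Mistral's documented limits:

```typescript
interface TextProviderOptionsSketch {
  n?: number
  temperature?: number
  maxTokens?: number
}

// Throw descriptive errors for obvious misconfigurations so callers fail
// before the API round-trip.
function validateTextProviderOptions(options: TextProviderOptionsSketch): void {
  if (options.n !== undefined && (!Number.isInteger(options.n) || options.n < 1)) {
    throw new Error(`n must be an integer >= 1, got ${options.n}`)
  }
  if (
    options.temperature !== undefined &&
    (options.temperature < 0 || options.temperature > 1)
  ) {
    throw new Error(`temperature must be between 0 and 1, got ${options.temperature}`)
  }
  if (options.maxTokens !== undefined && options.maxTokens < 1) {
    throw new Error(`maxTokens must be >= 1, got ${options.maxTokens}`)
  }
}
```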

In `@packages/typescript/ai-mistral/src/tools/function-tool.ts`:
- Around line 27-42: Remove the redundant assignment to
jsonSchema.additionalProperties = false in the function-tool return path: the
helper makeMistralStructuredOutputCompatible already sets additionalProperties =
false for object schemas (see makeMistralStructuredOutputCompatible), and
setting it here can incorrectly apply to non-object types; simply delete the
line that assigns additionalProperties and leave the returned object (type:
'function', function: { name: tool.name, description: tool.description,
parameters: jsonSchema, strict: true }) as-is (it remains compatible with
FunctionTool/FunctionParameters).

In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 17-24: The createMistralClient function currently uses truthy
checks which drop intentional values like timeoutMs: 0 and possibly empty
serverURL; update createMistralClient to explicitly check for undefined (e.g.,
timeoutMs !== undefined and serverURL !== undefined) when spreading options so
zero or empty-string values are preserved, and also ensure any other SDK options
present on MistralClientConfig are forwarded into the new Mistral({...}) call
(refer to createMistralClient and the Mistral constructor to add those
properties).
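A sketch of the explicit-undefined spreading suggested above, so falsy but intentional values (`timeoutMs: 0`, `serverURL: ''`) are not dropped; the config fields are taken from the comment, not the SDK's full option surface:

```typescript
interface MistralClientConfigSketch {
  apiKey: string
  timeoutMs?: number
  serverURL?: string
}

// Only omit a key when it is actually undefined; 0 and '' are forwarded.
function buildClientOptions(config: MistralClientConfigSketch) {
  return {
    apiKey: config.apiKey,
    ...(config.timeoutMs !== undefined ? { timeoutMs: config.timeoutMs } : {}),
    ...(config.serverURL !== undefined ? { serverURL: config.serverURL } : {}),
  }
}
```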

In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts`:
- Around line 8-29: transformNullsToUndefined currently treats any typeof ===
'object' as a plain object and will rebuild Dates, Maps, Sets and class
instances via Object.entries; add a plain-object guard (e.g. check
Object.getPrototypeOf(obj) === Object.prototype or an isPlainObject helper)
before iterating entries and only recurse/construct a Record for true plain
objects, otherwise return the original object unchanged, and add a short comment
explaining the guard to prevent mangling Date/Map/Set/class instances; reference
the transformNullsToUndefined function and the block that uses
Object.entries(obj as Record<string, unknown>) for where to apply the change.
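A standalone sketch of the plain-object guard described above (function names match the comment; the surrounding logic is simplified):

```typescript
// Only treat values whose prototype is Object.prototype (or null) as plain
// objects; Dates, Maps, Sets and class instances pass through unchanged.
function isPlainObject(value: unknown): value is Record<string, unknown> {
  if (typeof value !== 'object' || value === null) return false
  const proto = Object.getPrototypeOf(value)
  return proto === Object.prototype || proto === null
}

function transformNullsToUndefined(obj: unknown): unknown {
  if (obj === null) return undefined
  if (Array.isArray(obj)) return obj.map(transformNullsToUndefined)
  if (!isPlainObject(obj)) return obj // guard: don't rebuild non-plain objects
  return Object.fromEntries(
    Object.entries(obj).map(([k, v]) => [k, transformNullsToUndefined(v)]),
  )
}
```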

In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts`:
- Around line 378-406: The test manually constructs an async iterable named
errorIterable with pre-wrapped streamChunks, which is inconsistent with other
tests; replace the manual async iterable with a call to
setupMockStream(streamChunks) so the helper performs the required .data wrapping
and error behavior, updating references to errorIterable and reusing the
existing setupMockStream helper used elsewhere in this test suite.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4902c3d3-c076-442e-b548-0ec58eee37bf

📥 Commits

Reviewing files that changed from the base of the PR and between cb07e01 and a87d242.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (18)
  • .changeset/add-ai-mistral.md
  • packages/typescript/ai-mistral/README.md
  • packages/typescript/ai-mistral/package.json
  • packages/typescript/ai-mistral/src/adapters/text.ts
  • packages/typescript/ai-mistral/src/index.ts
  • packages/typescript/ai-mistral/src/message-types.ts
  • packages/typescript/ai-mistral/src/model-meta.ts
  • packages/typescript/ai-mistral/src/text/text-provider-options.ts
  • packages/typescript/ai-mistral/src/tools/function-tool.ts
  • packages/typescript/ai-mistral/src/tools/index.ts
  • packages/typescript/ai-mistral/src/tools/tool-converter.ts
  • packages/typescript/ai-mistral/src/utils/client.ts
  • packages/typescript/ai-mistral/src/utils/index.ts
  • packages/typescript/ai-mistral/src/utils/schema-converter.ts
  • packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
  • packages/typescript/ai-mistral/tsconfig.json
  • packages/typescript/ai-mistral/vite.config.ts
  • packages/typescript/ai-mistral/vitest.config.ts

Comment thread packages/typescript/ai-mistral/package.json
Comment thread packages/typescript/ai-mistral/src/adapters/text.ts
Comment thread packages/typescript/ai-mistral/src/adapters/text.ts Outdated
Comment thread packages/typescript/ai-mistral/src/adapters/text.ts Outdated
Comment thread packages/typescript/ai-mistral/src/adapters/text.ts Outdated
Comment thread packages/typescript/ai-mistral/src/tools/function-tool.ts Outdated
Comment thread packages/typescript/ai-mistral/src/utils/schema-converter.ts
Comment thread packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
Comment on lines +5 to +27
const config = defineConfig({
test: {
name: packageJson.name,
dir: './',
watch: false,
globals: true,
environment: 'node',
include: ['tests/**/*.test.ts'],
coverage: {
provider: 'v8',
reporter: ['text', 'json', 'html', 'lcov'],
exclude: [
'node_modules/',
'dist/',
'tests/',
'**/*.test.ts',
'**/*.config.ts',
'**/types.ts',
],
include: ['src/**/*.ts'],
},
},
})

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
fd -t f 'vitest.config.ts|vite.config.ts' packages/typescript/ai-mistral
cat packages/typescript/ai-mistral/vitest.config.ts 2>/dev/null || true

Repository: TanStack/ai

Length of output: 609


Remove duplicate Vitest configuration and align test location with project guidelines.

Both vite.config.ts and vitest.config.ts contain overlapping test configuration. The dedicated vitest.config.ts is the proper entry point for Vitest; the test block in vite.config.ts is redundant and should be removed. Additionally, the test include pattern uses tests/**/*.test.ts, but project guidelines require unit tests to be placed alongside source code at src/**/*.test.ts. Update the include pattern and move existing test files to match this structure.


Comment on lines +1 to +22
import { defineConfig } from 'vitest/config'

export default defineConfig({
test: {
globals: true,
environment: 'node',
include: ['tests/**/*.test.ts'],
coverage: {
provider: 'v8',
reporter: ['text', 'json', 'html', 'lcov'],
exclude: [
'node_modules/',
'dist/',
'tests/',
'**/*.test.ts',
'**/*.config.ts',
'**/types.ts',
],
include: ['src/**/*.ts'],
},
},
})

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
fd -t f 'vite(st)?\.config\.ts' packages/typescript | sort
for f in $(fd -t f 'vite(st)?\.config\.ts' packages/typescript); do
  echo "=== $f ==="
  cat "$f"
done

Repository: TanStack/ai

Length of output: 32275


Remove the redundant vitest.config.ts file.

The vitest.config.ts duplicates the test configuration already present in vite.config.ts. The test block in vite.config.ts (globals, environment, include, coverage) is identical to the standalone vitest.config.ts. Since Vitest prefers vitest.config.* when present, the simpler approach is to consolidate test configuration into vite.config.ts only and remove the redundant file, matching the pattern used by other adapter packages (ai-code-mode, ai-gemini, ai-groq, ai-openai, etc.).


@AlemTuzlak
Contributor

Would you mind adding this to the e2e test suite as an additional provider? I don't feel comfortable merging new adapters if they don't pass our e2e suite!

@cstrnt
Author

cstrnt commented Apr 17, 2026

Would you mind adding this to the e2e test suite as an additional provider? I don't feel comfortable merging new adapters if they don't pass our e2e suite!

Good shout! Will do :)


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
packages/typescript/ai-mistral/src/adapters/text.ts (1)

226-255: ⚠️ Potential issue | 🟡 Minor

Error-handling asymmetry between chatStream's outer catch and processMistralStreamChunks.

Pre-stream failures (e.g., fetchRawMistralStream throwing before yielding on bad request/401) flow through this outer catch, emit RUN_ERROR, and then throw err at line 253 — consumers must handle rejection. Mid-stream failures, however, are swallowed by the inner try/catch in processMistralStreamChunks (line 502-515), which yields RUN_ERROR without rethrowing, so the iterator completes normally. Consumers therefore cannot rely on for await … of chatStream(...) rejecting or not rejecting consistently on failure.

Pick one policy and apply uniformly: either remove the inner catch and let all errors bubble to this outer handler, or have the inner catch also rethrow after yielding RUN_ERROR.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 226 - 255,
The error handling is inconsistent: pre-stream errors are caught by the outer
catch in chatStream and rethrown, while processMistralStreamChunks currently
catches mid-stream errors and only yields RUN_ERROR without rethrowing; update
processMistralStreamChunks so that its inner catch (the try/catch inside
processMistralStreamChunks) still yields the RUN_ERROR event like it does now
but then rethrows the error (throw err) after yielding, so all errors uniformly
propagate to the outer handler (and consumers of chatStream) — locate
processMistralStreamChunks and add a rethrow in its catch block after yielding
the RUN_ERROR.
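A minimal sketch of the "yield RUN_ERROR then rethrow" policy described above, under the assumption of simplified event shapes (the real adapter's chunk types are richer):

```typescript
// Mid-stream failures both emit an error event and reject the iterator, so
// consumers see consistent behavior with pre-stream failures.
async function* processChunks(source: AsyncIterable<string>) {
  try {
    for await (const chunk of source) {
      yield { type: 'TEXT' as const, chunk }
    }
  } catch (err) {
    yield {
      type: 'RUN_ERROR' as const,
      error: err instanceof Error ? err.message : String(err),
    }
    // Rethrow so the error also propagates to the outer handler / consumer.
    throw err
  }
}
```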
🧹 Nitpick comments (6)
packages/typescript/ai-mistral/tests/mistral-adapter.test.ts (1)

90-136: Minor: no mockComplete/fetch reset between tests in this block.

Tests in describe('Text adapter') don't invoke streaming, so the missing beforeEach reset is harmless today — but if someone later adds a streaming assertion here without noticing the outer block has no reset, prior vi.stubGlobal('fetch', …) state could leak. Adding a beforeEach(() => { vi.clearAllMocks() }) here mirrors the Mistral AG-UI event emission block and is defensive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/tests/mistral-adapter.test.ts` around lines 90
- 136, Add a beforeEach to the "Text adapter" describe block to clear any mocked
globals or fetch state to prevent test leakage: inside the describe('Text
adapter') block add beforeEach(() => { vi.clearAllMocks() }) (or
vi.resetAllMocks()/vi.restoreAllMocks() if you prefer) to mirror the Mistral
AG-UI event emission block and ensure prior vi.stubGlobal('fetch', ...) or
mockComplete state cannot leak into these tests.
packages/typescript/ai-mistral/src/adapters/text.ts (2)

36-36: Array-type style violations flagged by ESLint.

ChatCompletionMessageParam[] / unknown[] should use the Array<T> form used throughout this file. Static analysis flags both on lines 36 and 126.

🧹 Proposed change
-function messagesToSnakeCase(messages: ChatCompletionMessageParam[]): unknown[] {
+function messagesToSnakeCase(
+  messages: Array<ChatCompletionMessageParam>,
+): Array<unknown> {
@@
-  messages: ChatCompletionMessageParam[]
+  messages: Array<ChatCompletionMessageParam>

Also applies to: 126-126

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` at line 36, ESLint flags
array-type style violations: replace the bracket-style types with the Array<T>
form in this file; specifically change the function signature of
messagesToSnakeCase from using ChatCompletionMessageParam[] and unknown[] to use
Array<ChatCompletionMessageParam> and Array<unknown>, and likewise update the
other occurrence around the code referenced at line 126 to use Array<...>
instead of the bracket notation so the file consistently uses Array<T>.

148-148: Empty interface that only extends.

export interface MistralTextConfig extends MistralClientConfig {} adds no members and will trip @typescript-eslint/no-empty-object-type. Prefer a type alias so the intent (public re-export under a more specific name) is explicit.

🧹 Proposed change
-/**
- * Configuration for Mistral text adapter.
- */
-export interface MistralTextConfig extends MistralClientConfig {}
+/**
+ * Configuration for Mistral text adapter.
+ */
+export type MistralTextConfig = MistralClientConfig
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` at line 148, The
exported empty interface MistralTextConfig that only extends MistralClientConfig
should be replaced with a type alias to avoid the
`@typescript-eslint/no-empty-object-type` lint rule; change the declaration for
MistralTextConfig to a type alias (e.g., export type MistralTextConfig =
MistralClientConfig) so the file still publicly re-exports the same shape
without an empty interface.
packages/typescript/ai-mistral/src/utils/client.ts (1)

46-62: Browser window.env fallback uses as any and reads a non-standard global.

(globalThis as any).window?.env is not a standard browser surface — it only works if a build step injects window.env (e.g., some custom Vite/CRA setups). In a plain browser, window.env is undefined and you fall through to process.env, which doesn't exist, so the error message (“Please set it in your environment variables…”) is thrown in a context where “environment variables” isn't meaningful. Consider either documenting this convention or narrowing the fallback to Node-only, and typing the env lookup without as any (e.g., via (globalThis as { window?: { env?: Record<string, string> } }).window?.env).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/utils/client.ts` around lines 46 - 62, The
getMistralApiKeyFromEnv function currently uses an untyped (globalThis as
any).window?.env fallback and then tries process.env, which misbehaves in plain
browsers; change the lookup to be Node-only by checking typeof process !==
'undefined' && typeof process.env !== 'undefined' first and read
process.env.MISTRAL_API_KEY there, or if you must support injected window.env
explicitly cast globalThis to a safe type like (globalThis as { window?: { env?:
Record<string,string> } }) and read window.env.MISTRAL_API_KEY; also update the
thrown message around MISTRAL_API_KEY to not instruct browser users to set
"environment variables" (clarify Node vs injected window.env) so the error is
accurate for the execution environment.
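A sketch of a typed, Node-first lookup per the suggestion above; the injected `window.env` shape is an assumption, and `globalThis` is used so the code compiles in both Node and browser contexts:

```typescript
// Check process.env (Node) first, then an optionally injected window.env
// (custom browser builds), with an error message that names both paths.
function getMistralApiKeyFromEnv(): string {
  const g = globalThis as {
    process?: { env?: Record<string, string | undefined> }
    window?: { env?: Record<string, string> }
  }
  const fromNode = g.process?.env?.MISTRAL_API_KEY
  if (fromNode !== undefined) return fromNode
  const fromWindow = g.window?.env?.MISTRAL_API_KEY
  if (fromWindow !== undefined) return fromWindow
  throw new Error(
    'MISTRAL_API_KEY not found. In Node, set the MISTRAL_API_KEY environment ' +
      'variable; in the browser, pass apiKey explicitly or inject window.env at build time.',
  )
}
```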
testing/e2e/tests/test-matrix.ts (1)

14-140: Drift risk: this support matrix fully duplicates testing/e2e/src/lib/feature-support.ts.

Not introduced by this PR, but now that two files need 'mistral' added in lockstep (and 11 individual features each), the duplication is a concrete maintenance hazard — a future provider or feature change will silently skew between them. Consider importing/reusing the matrix from feature-support.ts in a follow-up so there is a single source of truth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@testing/e2e/tests/test-matrix.ts` around lines 14 - 140, The file defines a
local supportMatrix that duplicates the canonical matrix in feature-support.ts;
remove the local supportMatrix and instead import and use the single exported
support matrix from feature-support.ts (the module that currently holds the
source-of-truth matrix), keeping the same type Record<Feature, Set<Provider>>
and replacing all references to the local supportMatrix with the imported symbol
(e.g., supportMatrix or getFeatureSupport) so there is one source of truth for
provider/feature support.
packages/typescript/ai-mistral/src/model-meta.ts (1)

248-269: Per-model provider options map is effectively uniform — intentional?

MistralChatModelProviderOptionsByName maps every model to the same MistralTextProviderOptions, so ResolveProviderOptions<TModel> always yields the same type. That still satisfies the tree-shakeable per-model typing mechanism, but if you plan to surface capability differences later (e.g., vision-only fields on pixtral-*, reasoning-specific options on magistral-*, or prohibiting tools on models that lack it), consider branching this map now so it actually narrows per model.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 248 - 269,
MistralChatModelProviderOptionsByName currently maps every model key to the same
MistralTextProviderOptions so ResolveProviderOptions<TModel> never narrows
per-model; update the map to assign model-specific option types (e.g., map
pixtral-* to a MistralVisionProviderOptions, magistral-* to a
MistralReasoningProviderOptions, or specific entries for each (typeof
MISTRAL_CHAT_MODELS)[number]) so ResolveProviderOptions will resolve to the
correct per-model type; adjust or add the new provider option types and ensure
ResolveProviderOptions and any uses still fall back to
MistralTextProviderOptions for unknown models.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 527-531: The current assembly of serverURL in the adapter
unconditionally appends "/v1/chat/completions" and will produce double "/v1/v1"
if a user passes a base that already includes a version segment; update the
logic around config.serverURL (used to build serverURL and url) to normalize and
strip any trailing "/v1" or "/v1/" (in addition to trailing slash) before
appending "/v1/chat/completions", so both bare bases and bases that already
include the version produce a correct single "/v1" in the final url.
- Around line 523-616: Add a brief inline comment next to the Mistral SDK client
initialization (the client created around line 209) stating that the SDK client
is intentionally retained because it's required by the structuredOutput logic
(see structuredOutput function around line 272) and is not used for streaming
paths; mention that e2e tests route Mistral through llmock via providers.ts
(serverURL: base) so the custom SSE streaming path remains covered, to prevent
future contributors from removing the client when refactoring streaming code.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3653946e-37c6-4081-89cf-360a99c9e041

📥 Commits

Reviewing files that changed from the base of the PR and between a87d242 and ba8815b.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (13)
  • packages/typescript/ai-mistral/package.json
  • packages/typescript/ai-mistral/src/adapters/text.ts
  • packages/typescript/ai-mistral/src/message-types.ts
  • packages/typescript/ai-mistral/src/model-meta.ts
  • packages/typescript/ai-mistral/src/tools/function-tool.ts
  • packages/typescript/ai-mistral/src/utils/client.ts
  • packages/typescript/ai-mistral/src/utils/schema-converter.ts
  • packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
  • testing/e2e/package.json
  • testing/e2e/src/lib/feature-support.ts
  • testing/e2e/src/lib/providers.ts
  • testing/e2e/src/lib/types.ts
  • testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (3)
  • testing/e2e/package.json
  • testing/e2e/src/lib/types.ts
  • packages/typescript/ai-mistral/package.json
🚧 Files skipped from review as they are similar to previous changes (3)
  • packages/typescript/ai-mistral/src/tools/function-tool.ts
  • packages/typescript/ai-mistral/src/utils/schema-converter.ts
  • packages/typescript/ai-mistral/src/message-types.ts

Comment thread packages/typescript/ai-mistral/src/adapters/text.ts Outdated

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (5)
packages/typescript/ai-mistral/src/adapters/text.ts (4)

566-578: Unknown extra modelOptions keys bypass the snake_case conversion.

mapTextOptionsToMistral (Line 687‑698) only copies a fixed allowlist from modelOptions into the request (stop, random_seed, response_format, …, safe_prompt, n, prediction). Here in fetchRawMistralStream, the destructure pulls out the camelCase names produced by the mapper, and ...rest spreads the remainder. If the mapper ever forwards a new camelCase field (or a future edit to mapTextOptionsToMistral spreads raw modelOptions), those keys will land in the JSON body unchanged (e.g. randomSeed instead of random_seed) and silently be ignored by the Mistral API. Consider applying a generic camelCase → snake_case transform on the final body (or centralize the request builder) so new fields don't silently regress.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 566 - 578,
The request body built in fetchRawMistralStream is allowing unknown camelCase
keys from mapTextOptionsToMistral to slip through via the rest spread, causing
Mistral to ignore them; update fetchRawMistralStream (or centralize the request
builder) to normalize all keys to snake_case before sending—e.g., after
destructuring the known camelCase props (responseFormat, toolChoice,
parallelToolCalls, frequencyPenalty, presencePenalty, safePrompt, etc.) convert
the remaining rest object's keys from camelCase to snake_case and merge that
normalized object into the final body so any future or unknown modelOptions are
correctly named for the Mistral API.
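A minimal TypeScript sketch of the suggested normalization — the helper names here (camelToSnake, normalizeBodyKeys) are illustrative, not part of the adapter:

```typescript
// Illustrative helper: normalize any leftover camelCase keys to snake_case
// before the body is sent, so unknown modelOptions don't silently regress.
const camelToSnake = (key: string): string =>
  key.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase()

function normalizeBodyKeys(
  rest: Record<string, unknown>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {}
  for (const [key, value] of Object.entries(rest)) {
    out[camelToSnake(key)] = value
  }
  return out
}

// e.g. a future mapper field that was never destructured explicitly:
console.log(normalizeBodyKeys({ randomSeed: 42, safePrompt: true }))
// → { random_seed: 42, safe_prompt: true }
```

Merging the normalized rest into the final body keeps the explicit allowlist intact while making unknown fields safe by default.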

280-306: structuredOutput forwards responseFormat from modelOptions and then overrides it — confirm precedence is intentional.

mapTextOptionsToMistral copies modelOptions.response_format into requestParams.responseFormat (Line 690). In structuredOutput, you then spread ...nonStreamParams into the request and immediately re-set responseFormat: { type: 'json_schema', ... }, so the user-provided response_format is silently overridden. That's probably the right semantics for structuredOutput (callers shouldn't be able to fight the adapter), but it means users who pass a custom response_format via modelOptions get different behavior between chatStream and structuredOutput without any warning. Consider either dropping response_format from nonStreamParams explicitly or documenting the override.

Additionally, the raw JSON error path at Line 303‑305 slices the first 200 chars of the returned content for the error message — if Mistral returns an error envelope with leading metadata, that truncation may hide the useful part. Including response.id / finishReason in the error would aid debugging.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 280 - 306,
structuredOutput currently spreads nonStreamParams (populated by
mapTextOptionsToMistral which copies modelOptions.response_format) and then
unconditionally overrides responseFormat, silently discarding user-supplied
response_format; remove responseFormat from nonStreamParams before spreading (or
explicitly delete requestParams.responseFormat in structuredOutput) so callers
can't unintentionally override, and add response metadata to the JSON-parse
error by including response.id and response.finishReason (and avoid hiding
useful info by not truncating or by trimming intelligently) in the thrown Error
to aid debugging; locate symbols: mapTextOptionsToMistral, structuredOutput,
nonStreamParams, responseFormat, and response (response.id/finishReason).

606-622: SSE parsing splits on \n rather than event boundaries — multi‑line data: payloads will be misread.

The SSE spec delimits events by blank lines and allows a single event to span multiple data: lines (each concatenated with \n). The current parser treats every newline-separated line independently and only keeps lines starting with data:. For Mistral today this happens to be fine (single-line data: per chunk), but any future chunk that spans multiple lines (e.g. pretty-printed JSON, or a data: line followed by a data: continuation) will be split mid-JSON and silently dropped by the catch {} in Line 619–621, which also hides real parse errors.

Consider buffering on event boundaries (\n\n) and joining all consecutive data: lines within an event before JSON.parse. At minimum, log (or surface via RUN_ERROR) when JSON.parse fails so silent data loss is observable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 606 - 622,
The SSE loop currently splits on '\n' and parses each 'data:' line independently
causing multi-line SSE events to be broken and parse errors to be swallowed;
change the logic around buffer/decoder and the loop that uses buffer.split('\n')
so you accumulate until an event boundary (blank line '\n\n'), then for each
complete event collect and join all lines that start with 'data:' (preserving
'\n' between them) into one payload string, JSON.parse that joined payload and
pass the result into rawChunkToCamelCase before yielding; also replace the empty
catch in the JSON.parse block with explicit error handling that surfaces the
parsing error (e.g., log via RUN_ERROR or processLogger) so malformed JSON is
visible rather than silently dropped.
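A sketch of spec-compliant event splitting under the assumptions above — buffer on blank-line event boundaries and join consecutive `data:` lines with `\n` before parsing (parseSSEBuffer is an illustrative name, not the adapter's API):

```typescript
// Split a text buffer into complete SSE events and keep the incomplete
// trailing event buffered for the next read.
function parseSSEBuffer(buffer: string): {
  payloads: Array<string>
  rest: string
} {
  const events = buffer.split('\n\n')
  const rest = events.pop() ?? '' // incomplete trailing event stays buffered
  const payloads: Array<string> = []
  for (const event of events) {
    const dataLines = event
      .split('\n')
      .filter((line) => line.startsWith('data:'))
      .map((line) => line.slice(5).trimStart())
    // Per the SSE spec, multiple data: lines in one event concatenate with \n.
    if (dataLines.length > 0) payloads.push(dataLines.join('\n'))
  }
  return { payloads, rest }
}

// A single event whose JSON spans two `data:` lines parses as one payload:
const { payloads, rest } = parseSSEBuffer(
  'data: {"a":\ndata: 1}\n\ndata: {"b":2}\n\ndata: {"c"',
)
console.log(payloads, rest)
```

With this shape, JSON.parse runs once per complete event, and a parse failure can be surfaced instead of swallowed per-line.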

647-700: mapTextOptionsToMistral drops the TModel generic on options.

The signature is mapTextOptionsToMistral(options: TextOptions) — unparameterized — so inside the method options.modelOptions widens to the base TextOptions['modelOptions'] type, which is why the as Omit<InternalTextProviderOptions, ...> cast is needed at Line 648‑653. Threading the generic through (mapTextOptionsToMistral(options: TextOptions<ResolveProviderOptions<TModel>>)) would let the type system enforce the shape and let you drop the cast, keeping the per‑model provider‑options guarantee advertised by the class.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 647 - 700,
The mapTextOptionsToMistral function drops the TModel generic by accepting an
unparameterized TextOptions, causing model-specific provider options to widen
and forcing the unsafe cast; change the signature to accept the genericed type
(e.g., mapTextOptionsToMistral<TModel>(options:
TextOptions<ResolveProviderOptions<TModel>>)) so options.modelOptions retains
the per-model shape and remove the Omit/unsafe cast, then update any callers to
pass through the same TModel generic from the surrounding class/method where
appropriate (referencing mapTextOptionsToMistral, TextOptions,
ResolveProviderOptions, TModel, and InternalTextProviderOptions to locate the
implementation).
packages/typescript/ai-mistral/src/model-meta.ts (1)

4-7: MistralVisionProviderOptions and MistralReasoningProviderOptions are pure aliases and add no type safety.

Both are declared as = MistralTextProviderOptions with no additional or narrowed fields, so MistralChatModelProviderOptionsByName (Lines 257‑269) ends up mapping every model to the same provider‑options shape. This defeats the purpose of per‑model provider options. If Mistral truly exposes vision/reasoning‑specific knobs (e.g. image detail, reasoning toggles, safePrompt), model these as distinct types (e.g. MistralTextProviderOptions & { reasoning?: ... } / & { vision?: ... }) so consumers get type‑level differentiation. Otherwise, consider dropping the aliases and mapping everything to MistralTextProviderOptions directly.

As per coding guidelines: "Provide type safety per model by using model-specific provider options in adapter packages".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/model-meta.ts` around lines 4 - 7,
MistralVisionProviderOptions and MistralReasoningProviderOptions are just
aliases of MistralTextProviderOptions so they provide no model-specific type
safety; update them to either (A) declare distinct option shapes that extend
MistralTextProviderOptions with vision- or reasoning-specific fields (e.g.,
MistralVisionProviderOptions = MistralTextProviderOptions & { vision?: {
detail?: string; safePrompt?: boolean } } and MistralReasoningProviderOptions =
MistralTextProviderOptions & { reasoning?: { enableChainOfThought?: boolean;
maxSteps?: number } }), or (B) remove the aliases and map models directly to
MistralTextProviderOptions in MistralChatModelProviderOptionsByName if no extra
fields exist; adjust any usages of MistralVisionProviderOptions,
MistralReasoningProviderOptions, and MistralChatModelProviderOptionsByName
accordingly so each model has the intended distinct provider option type.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 447-510: The finishReason block currently emits TOOL_CALL_END for
every started toolCall in toolCallsInProgress each time a chunk carries
choice.finishReason, causing duplicates; modify the code to mark individual tool
calls as ended (e.g., set toolCall.ended = true or remove the entry from
toolCallsInProgress) when you yield TOOL_CALL_END so subsequent chunks skip
already-ended calls, and ensure hasEmittedToolCall is only set once per tool
call; update references in the loop over toolCallsInProgress and the emission of
TOOL_CALL_END and RUN_FINISHED (symbols: toolCallsInProgress, toolCall,
hasEmittedToolCall, TOOL_CALL_END, RUN_FINISHED, aguiState) accordingly.
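A minimal sketch of the de-duplication guard described above, with simplified stand-in types — mark each in-progress tool call as ended when TOOL_CALL_END is emitted so later finishReason chunks skip it:

```typescript
interface ToolCallState {
  id: string
  ended: boolean
}

// Yields TOOL_CALL_END at most once per tool call, however many
// finishReason-bearing chunks arrive.
function* emitToolCallEnds(
  toolCallsInProgress: Map<string, ToolCallState>,
): Generator<{ type: 'TOOL_CALL_END'; toolCallId: string }> {
  for (const toolCall of toolCallsInProgress.values()) {
    if (toolCall.ended) continue // already closed on a previous chunk
    toolCall.ended = true
    yield { type: 'TOOL_CALL_END', toolCallId: toolCall.id }
  }
}

const state = new Map<string, ToolCallState>([
  ['call_1', { id: 'call_1', ended: false }],
])
// Two chunks carry finishReason; only the first emits TOOL_CALL_END:
const first = [...emitToolCallEnds(state)]
const second = [...emitToolCallEnds(state)]
console.log(first.length, second.length) // → 1 0
```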
- Around line 708-717: The code currently coerces a missing toolCallId to an
empty string when handling a tool message (the branch where message.role ===
'tool'), which can corrupt tool-response roundtrips; update that branch (the
object returned for tool messages) to validate message.toolCallId and throw a
clear Error (e.g., "Missing toolCallId for tool message") when it's falsy,
instead of emitting toolCallId: '', and retain the existing content
serialization logic for message.content.
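A sketch of the fail-fast validation, with simplified stand-in types — throw on a tool message without a toolCallId instead of silently sending an empty string:

```typescript
interface ToolMessage {
  role: 'tool'
  toolCallId?: string
  content: unknown
}

function toMistralToolMessage(message: ToolMessage) {
  if (!message.toolCallId) {
    // Surfacing this early beats a corrupted tool-response roundtrip later.
    throw new Error('Missing toolCallId for tool message')
  }
  return {
    role: 'tool' as const,
    toolCallId: message.toolCallId,
    // Existing serialization behavior: pass strings through, stringify objects.
    content:
      typeof message.content === 'string'
        ? message.content
        : JSON.stringify(message.content),
  }
}

console.log(
  toMistralToolMessage({ role: 'tool', toolCallId: 'call_1', content: { ok: true } }),
)
```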
- Around line 535-627: fetchRawMistralStream leaks HTTP connections because it
doesn't accept or forward an AbortSignal and only calls reader.releaseLock() in
finally; update the function and its callers to accept an AbortSignal (add it to
MistralClientConfig or TextOptions), pass that signal into fetch({ signal }),
and in the finally block call await reader.cancel().catch(() => {}) before
reader.releaseLock() so aborting the async generator or throwing from a consumer
tears down the server stream; update any call sites (e.g., chatStream) that
create the generator to forward the provided signal.

In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Around line 68-70: The generateId function uses crypto.randomUUID() which may
be undefined on older Node versions; update the package to either require
Node>=19 by adding an "engines": { "node": ">=19" } entry to
packages/typescript/ai-mistral/package.json, or change generateId to use a safe
import/fallback: import { randomUUID } from 'node:crypto' and call randomUUID(),
or call globalThis.crypto?.randomUUID() and fall back to a polyfill (e.g., UUID
v4) when undefined; modify the generateId function accordingly to use the chosen
approach and ensure no ReferenceError occurs on Node 18 and below.
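A sketch of the safe-fallback variant: prefer `globalThis.crypto.randomUUID` when present (Node ≥ 19, modern browsers), otherwise fall back to a UUID-v4-shaped string. The Math.random fallback is illustrative and not cryptographically strong:

```typescript
function generateId(): string {
  const cryptoObj = globalThis.crypto as
    | { randomUUID?: () => string }
    | undefined
  // Guarded access avoids a ReferenceError on runtimes without Web Crypto.
  if (cryptoObj?.randomUUID) return cryptoObj.randomUUID()
  // Non-crypto fallback producing a v4-shaped UUID:
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, (c) => {
    const r = (Math.random() * 16) | 0
    const v = c === 'x' ? r : (r & 0x3) | 0x8
    return v.toString(16)
  })
}

console.log(generateId())
```

Alternatively, declaring `"engines": { "node": ">=19" }` in package.json sidesteps the fallback entirely at the cost of dropping older runtimes.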

In `@packages/typescript/ai-mistral/src/utils/schema-converter.ts`:
- Around line 94-106: The optional-property branch only handles cases where
prop.type exists, so when wasOptional is true but prop lacks a top-level type
(e.g., uses anyOf/oneOf/$ref/enum-only/const) the property is not made nullable;
update the else branch for wasOptional to detect these cases and wrap the schema
in a nullable composition, e.g. replace the prop with a schema that expresses
"prop OR null" (using anyOf or oneOf) so properties[propName] becomes something
like { anyOf: [prop, { type: 'null' }] } when prop.type is absent or not usable;
ensure you preserve existing keys on prop and reuse the same
properties[propName] assignment pattern so downstream required logic (properties
and required handling) treats the field as nullable.
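A sketch of the missing branch (makeNullable is an illustrative name): when a formerly-optional property has no usable top-level `type` (anyOf/oneOf/$ref/const-only schemas), wrap it in an `anyOf` with `{ type: 'null' }`:

```typescript
type JsonSchema = Record<string, unknown>

function makeNullable(prop: JsonSchema): JsonSchema {
  if (typeof prop.type === 'string') {
    // Simple case: widen the type to a union with 'null'.
    return { ...prop, type: [prop.type, 'null'] }
  }
  // Composition case the original branch missed — express "prop OR null"
  // while preserving all existing keys on prop.
  return { anyOf: [prop, { type: 'null' }] }
}

console.log(makeNullable({ type: 'string' }))
console.log(makeNullable({ anyOf: [{ type: 'string' }, { type: 'number' }] }))
```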
- Around line 13-15: The array branch in transformNullsToUndefined maps nulls to
undefined but does not remove them, leaving holes; update the Array.isArray
branch in transformNullsToUndefined to both map and then filter out undefined
entries (e.g., map each item with transformNullsToUndefined(item) and then
filter(item => item !== undefined)) so arrays do not contain undefined holes and
maintain the same "drop nullable" behavior as the object branch; ensure the
returned value is still cast to T or the appropriate array type after filtering.
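The fixed array branch can be sketched as follows — map nulls to undefined recursively, then filter the undefined entries out so no holes remain:

```typescript
function transformNullsToUndefined<T>(value: T): T | undefined {
  if (value === null) return undefined
  if (Array.isArray(value)) {
    // Map, then drop the resulting undefined holes.
    return value
      .map((item) => transformNullsToUndefined(item))
      .filter((item) => item !== undefined) as unknown as T
  }
  if (typeof value === 'object') {
    const out: Record<string, unknown> = {}
    for (const [k, v] of Object.entries(value as object)) {
      const t = transformNullsToUndefined(v)
      if (t !== undefined) out[k] = t // mirror the object branch: drop nullables
    }
    return out as T
  }
  return value
}

console.log(transformNullsToUndefined([1, null, { a: null, b: 2 }, null]))
// → [ 1, { b: 2 } ]
```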


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c74b9bcc-dcea-4926-b4c2-66ab39cbc9ea

📥 Commits

Reviewing files that changed from the base of the PR and between ba8815b and 7ba235c.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (13)
  • packages/typescript/ai-mistral/package.json
  • packages/typescript/ai-mistral/src/adapters/text.ts
  • packages/typescript/ai-mistral/src/message-types.ts
  • packages/typescript/ai-mistral/src/model-meta.ts
  • packages/typescript/ai-mistral/src/tools/function-tool.ts
  • packages/typescript/ai-mistral/src/utils/client.ts
  • packages/typescript/ai-mistral/src/utils/schema-converter.ts
  • packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
  • testing/e2e/package.json
  • testing/e2e/src/lib/feature-support.ts
  • testing/e2e/src/lib/providers.ts
  • testing/e2e/src/lib/types.ts
  • testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (4)
  • testing/e2e/src/lib/types.ts
  • testing/e2e/package.json
  • packages/typescript/ai-mistral/package.json
  • packages/typescript/ai-mistral/src/message-types.ts
🚧 Files skipped from review as they are similar to previous changes (4)
  • testing/e2e/src/lib/providers.ts
  • packages/typescript/ai-mistral/src/tools/function-tool.ts
  • testing/e2e/tests/test-matrix.ts
  • packages/typescript/ai-mistral/tests/mistral-adapter.test.ts

Comment on lines +535 to +627
private async *fetchRawMistralStream(
params: RawStreamParams,
config: MistralClientConfig,
): AsyncGenerator<MistralStreamEvent> {
const serverURL = (config.serverURL ?? 'https://api.mistral.ai')
.replace(/\/+$/, '')
.replace(/\/v1$/, '')
const url = `${serverURL}/v1/chat/completions`

const {
stream: _stream,
messages,
maxTokens,
topP,
randomSeed,
responseFormat,
toolChoice,
parallelToolCalls,
frequencyPenalty,
presencePenalty,
safePrompt,
...rest
} = params

const body: Record<string, unknown> = {
...rest,
messages: messagesToSnakeCase(messages),
stream: true,
...(maxTokens != null && { max_tokens: maxTokens }),
...(topP != null && { top_p: topP }),
...(randomSeed != null && { random_seed: randomSeed }),
...(responseFormat != null && { response_format: responseFormat }),
...(toolChoice != null && { tool_choice: toolChoice }),
...(parallelToolCalls != null && {
parallel_tool_calls: parallelToolCalls,
}),
...(frequencyPenalty != null && {
frequency_penalty: frequencyPenalty,
}),
...(presencePenalty != null && {
presence_penalty: presencePenalty,
}),
...(safePrompt != null && { safe_prompt: safePrompt }),
}

const headers: Record<string, string> = {
'Content-Type': 'application/json',
Authorization: `Bearer ${config.apiKey}`,
...config.defaultHeaders,
}

const response = await fetch(url, {
method: 'POST',
headers,
body: JSON.stringify(body),
})

if (!response.ok || !response.body) {
const errorText = await response.text()
throw new Error(`Mistral API error ${response.status}: ${errorText}`)
}

const reader = response.body.getReader()
const decoder = new TextDecoder()
let buffer = ''

try {
while (true) {
const { done, value } = await reader.read()
if (done) break

buffer += decoder.decode(value, { stream: true })
const lines = buffer.split('\n')
buffer = lines.pop()!

for (const line of lines) {
const trimmed = line.trim()
if (!trimmed.startsWith('data:')) continue
const data = trimmed.slice(5).trimStart()
if (data === '[DONE]') return

try {
const raw = JSON.parse(data) as Record<string, unknown>
yield { data: rawChunkToCamelCase(raw) }
} catch {
// skip malformed chunks
}
}
}
} finally {
reader.releaseLock()
}
}

⚠️ Potential issue | 🟠 Major

fetchRawMistralStream has no cancellation path — abandoned consumers leak HTTP connections.

The manual fetch omits AbortSignal support and the finally block only calls reader.releaseLock(). If the consumer of chatStream stops iterating (e.g. user navigates away, downstream for await throws, outer run is aborted), the underlying fetch request and server-side stream keep running until Mistral closes them. This is a reliability/resource leak that compounds under load.

Suggested fix:

  • Accept an AbortSignal through TextOptions / MistralClientConfig and pass it to fetch({ signal }).
  • In the finally branch, call reader.cancel().catch(() => {}) before releaseLock() so aborting the generator (or the consumer throwing mid-iteration) tears down the HTTP connection.
🛠 Sketch
     const response = await fetch(url, {
       method: 'POST',
       headers,
       body: JSON.stringify(body),
+      signal: config.signal,
     })
@@
     try {
       while (true) {
         const { done, value } = await reader.read()
         if (done) break
         ...
       }
     } finally {
-      reader.releaseLock()
+      try {
+        await reader.cancel()
+      } catch {
+        /* ignore */
+      }
+      reader.releaseLock()
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 535 - 627,
fetchRawMistralStream leaks HTTP connections because it doesn't accept or
forward an AbortSignal and only calls reader.releaseLock() in finally; update
the function and its callers to accept an AbortSignal (add it to
MistralClientConfig or TextOptions), pass that signal into fetch({ signal }),
and in the finally block call await reader.cancel().catch(() => {}) before
reader.releaseLock() so aborting the async generator or throwing from a consumer
tears down the server stream; update any call sites (e.g., chatStream) that
create the generator to forward the provided signal.

Comment thread packages/typescript/ai-mistral/src/adapters/text.ts
Comment thread packages/typescript/ai-mistral/src/utils/client.ts
Comment thread packages/typescript/ai-mistral/src/utils/schema-converter.ts
Comment thread packages/typescript/ai-mistral/src/utils/schema-converter.ts

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
packages/typescript/ai-mistral/src/adapters/text.ts (1)

538-631: ⚠️ Potential issue | 🟠 Major

fetchRawMistralStream still has no AbortSignal wiring — connection leak on abandoned consumers.

The finally now correctly calls reader.cancel().catch(...) (good), but the fetch(url, { method, headers, body }) call still omits signal. If the adapter consumer aborts the outer run (AbortController.abort()), the HTTP request keeps running to completion server-side because no abort signal reaches fetch. Plumb an AbortSignal through TextOptions / MistralClientConfig (or read from options.signal) and pass it to both fetch({ signal }) and the async iterator loop so producers can actually be cancelled.

Sketch
     const response = await fetch(url, {
       method: 'POST',
       headers,
       body: JSON.stringify(body),
+      signal: config.signal,
     })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 538 - 631,
fetchRawMistralStream is not wiring an AbortSignal into the fetch call or
iterator, causing HTTP requests to keep running when consumers abort; update
RawStreamParams and/or MistralClientConfig to accept an AbortSignal (e.g.,
signal) and read it in fetchRawMistralStream, then pass that signal into
fetch({... signal }) and check signal.aborted inside the read loop (or attach
signal.addEventListener('abort') to cancel/throw and ensure reader.cancel()
runs). Ensure references in your change point to fetchRawMistralStream,
RawStreamParams, MistralClientConfig (or the options.signal field) so the HTTP
request and async iterator properly cancel when aborted.
🧹 Nitpick comments (2)
packages/typescript/ai-mistral/src/utils/client.ts (1)

65-70: Minor: fallback uses Math.random() which is not cryptographically secure.

For correlation IDs this is fine, but if generateId is ever used for anything security-sensitive (e.g. run IDs that end up in auth contexts), prefer a crypto-backed fallback. As a defensive improvement, you could fall back to node:crypto.randomUUID before the Math.random() version, since the primary path already covers modern environments.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/utils/client.ts` around lines 65 - 70, The
uuidv4Fallback currently generates UUIDs using Math.random (in uuidv4Fallback),
which is not crypto-secure; update uuidv4Fallback to try using a crypto-backed
generator first (e.g., call node:crypto.randomUUID or
globalThis.crypto.randomUUID if available) and only fall back to the existing
Math.random implementation if no crypto API exists, ensuring generateId and any
other callers use the stronger source when possible.
packages/typescript/ai-mistral/src/adapters/text.ts (1)

619-625: Silent JSON-parse skip can mask real protocol errors.

Malformed SSE payloads are silently dropped (catch {}). That's fine for truly partial frames, but in practice it also hides provider-side protocol changes or proxy corruption that would otherwise surface as noisy user errors. Consider at minimum logging at debug level (gated) or surfacing after N consecutive failures, so a full-stream of unparseable data doesn't present as a silent empty response.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 619 - 625,
The SSE chunk parsing currently swallows JSON parse errors (catch { }) which can
hide protocol issues; update the parsing loop that uses rawChunkToCamelCase(raw)
to (1) log parse failures at debug level including the raw chunk and error
(gated by logger.debug), (2) track consecutive parse failures with a counter
(reset on success), and (3) after a small threshold (e.g. N consecutive
failures) surface an error/emit a failure event instead of silently skipping so
a full stream of unparseable data is not treated as an empty response. Ensure
the log and counter live alongside the existing parsing loop and that successful
yields call rawChunkToCamelCase(raw) as before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/typescript/ai-mistral/src/adapters/text.ts`:
- Around line 449-513: The code currently yields TEXT_MESSAGE_END and
RUN_FINISHED every time a chunk has choice.finishReason, causing duplicates; add
guards to prevent repeated emits by introducing hasEmittedTextMessageEnd and
hasEmittedRunFinished (similar to
hasEmittedToolCall/hasEmittedTextMessageStart), check hasEmittedTextMessageEnd
before yielding the TEXT_MESSAGE_END (and set it true after yielding), and check
hasEmittedRunFinished before yielding RUN_FINISHED (and set it true after
yielding); alternatively you can break out of the outer for-await loop after the
first terminal chunk is processed—update references around the
choice.finishReason handling, the TEXT_MESSAGE_END yield, and the RUN_FINISHED
yield to use these guards.

In `@packages/typescript/ai-mistral/src/utils/client.ts`:
- Line 1: Change the incorrect combined import so HTTPClient is imported from
'@mistralai/mistralai/lib/http' while Mistral remains from
'@mistralai/mistralai'; update the import statements to use separate imports and
order them alphabetically (HTTPClient before Mistral) to satisfy ESLint. Locate
the existing import line that references Mistral and HTTPClient and replace it
with two imports that reference the correct modules for the symbols HTTPClient
and Mistral.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ea2be8ef-108b-43e5-a0ab-bfd74da759f2

📥 Commits

Reviewing files that changed from the base of the PR and between 7ba235c and ad914cb.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (13)
  • packages/typescript/ai-mistral/package.json
  • packages/typescript/ai-mistral/src/adapters/text.ts
  • packages/typescript/ai-mistral/src/message-types.ts
  • packages/typescript/ai-mistral/src/model-meta.ts
  • packages/typescript/ai-mistral/src/tools/function-tool.ts
  • packages/typescript/ai-mistral/src/utils/client.ts
  • packages/typescript/ai-mistral/src/utils/schema-converter.ts
  • packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
  • testing/e2e/package.json
  • testing/e2e/src/lib/feature-support.ts
  • testing/e2e/src/lib/providers.ts
  • testing/e2e/src/lib/types.ts
  • testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (3)
  • testing/e2e/package.json
  • testing/e2e/src/lib/types.ts
  • packages/typescript/ai-mistral/package.json
🚧 Files skipped from review as they are similar to previous changes (6)
  • testing/e2e/src/lib/providers.ts
  • packages/typescript/ai-mistral/src/tools/function-tool.ts
  • testing/e2e/src/lib/feature-support.ts
  • packages/typescript/ai-mistral/src/utils/schema-converter.ts
  • packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
  • packages/typescript/ai-mistral/src/model-meta.ts

Comment thread packages/typescript/ai-mistral/src/adapters/text.ts
@@ -0,0 +1,78 @@
import { Mistral, HTTPClient } from '@mistralai/mistralai'
Contributor


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🌐 Web query:

Does the @mistralai/mistralai npm package export HTTPClient from the package root, and does HTTPClient support addHook('beforeRequest', ...)?

💡 Result:

No, the @mistralai/mistralai npm package does not export HTTPClient from the package root. It is exported from a subpath "@mistralai/mistralai/lib/http". However, HTTPClient does support addHook('beforeRequest', ...), as demonstrated in the official documentation and README examples where httpClient.addHook("beforeRequest", (request) => { ... }) is used to modify requests, such as adding headers or timeouts.

Fix HTTPClient import path — currently importing from the wrong module.

HTTPClient is not exported from the @mistralai/mistralai package root; it must be imported from @mistralai/mistralai/lib/http. The current import will fail at runtime. Additionally, sort imports alphabetically per ESLint.

Fix
-import { Mistral, HTTPClient } from '@mistralai/mistralai'
+import { Mistral } from '@mistralai/mistralai'
+import { HTTPClient } from '@mistralai/mistralai/lib/http'
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
import { Mistral, HTTPClient } from '@mistralai/mistralai'
import { Mistral } from '@mistralai/mistralai'
import { HTTPClient } from '@mistralai/mistralai/lib/http'
🧰 Tools
🪛 ESLint

[error] 1-1: Member 'HTTPClient' of the import declaration should be sorted alphabetically.

(sort-imports)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/utils/client.ts` at line 1, Change the
incorrect combined import so HTTPClient is imported from
'@mistralai/mistralai/lib/http' while Mistral remains from
'@mistralai/mistralai'; update the import statements to use separate imports and
order them alphabetically (HTTPClient before Mistral) to satisfy ESLint. Locate
the existing import line that references Mistral and HTTPClient and replace it
with two imports that reference the correct modules for the symbols HTTPClient
and Mistral.

Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (3)
packages/typescript/ai-mistral/src/adapters/text.ts (2)

538-631: ⚠️ Potential issue | 🟠 Major

fetchRawMistralStream still lacks AbortSignal support.

The fetch call on Line 589 does not accept or forward a signal, so if the consumer of chatStream stops iterating (user navigation, outer run abort, downstream throw), the underlying HTTP request keeps running until the server closes it. The reader.cancel() in finally helps once the generator is actually disposed, but there is no way for callers to externally abort the request in flight.

Consider threading an AbortSignal through TextOptions / MistralClientConfig into fetch({ signal }) so cancellation propagates to the server.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 538 - 631,
The fetchRawMistralStream generator doesn't accept an AbortSignal; update the
types (e.g., add signal?: AbortSignal to RawStreamParams or MistralClientConfig
/ TextOptions) and thread that signal into the fetch call (call fetch(url, {
method: 'POST', headers, body: JSON.stringify(body), signal })) so the HTTP
request can be cancelled externally; keep the existing finally block that calls
reader.cancel() and releaseLock(), and ensure any callers of
fetchRawMistralStream / chatStream propagate their AbortSignal into the new
parameter.
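The threading described above can be sketched as follows. This is a simplified illustration under stated assumptions: `RawStreamParams` and the exact parameter shape are taken from the review comment, not from the actual adapter source, and the real implementation's event parsing is elided.

```typescript
// Assumed parameter shape; the real adapter may differ.
interface RawStreamParams {
  url: string
  headers: Record<string, string>
  body: unknown
  signal?: AbortSignal // new: lets callers cancel the in-flight request
}

async function* fetchRawMistralStream({
  url,
  headers,
  body,
  signal,
}: RawStreamParams): AsyncGenerator<Uint8Array> {
  const response = await fetch(url, {
    method: 'POST',
    headers,
    body: JSON.stringify(body),
    signal, // fetch rejects with an AbortError once the signal fires
  })
  const reader = response.body!.getReader()
  try {
    while (true) {
      const { done, value } = await reader.read()
      if (done) break
      yield value
    }
  } finally {
    // Keep the existing cleanup so the stream is released either way.
    await reader.cancel()
    reader.releaseLock()
  }
}
```

Callers of `chatStream` would then forward their own `AbortSignal` into this parameter, so user navigation or an outer run abort tears down the HTTP request immediately instead of waiting for the server to close it.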

449-513: ⚠️ Potential issue | 🟡 Minor

RUN_FINISHED / TEXT_MESSAGE_END can still be emitted multiple times.

The ended flag prevents duplicate TOOL_CALL_END, but the TEXT_MESSAGE_END (Line 489) and RUN_FINISHED (Line 499) emissions are still unconditional for every chunk where choice.finishReason is truthy. If Mistral (or a proxy) ever sends more than one terminal frame, downstream consumers will observe duplicate run-completion events.

Proposed guard
     let hasEmittedTextMessageStart = false
     let hasEmittedToolCall = false
+    let hasEmittedTextMessageEnd = false
+    let hasEmittedRunFinished = false
@@
-        if (choice.finishReason) {
+        if (choice.finishReason && !hasEmittedRunFinished) {
@@
-          if (hasEmittedTextMessageStart) {
+          if (hasEmittedTextMessageStart && !hasEmittedTextMessageEnd) {
+            hasEmittedTextMessageEnd = true
             yield {
               type: 'TEXT_MESSAGE_END',
               ...
             }
           }
@@
+          hasEmittedRunFinished = true
           yield {
             type: 'RUN_FINISHED',
             ...
           }
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 449 - 513,
The code can emit TEXT_MESSAGE_END and RUN_FINISHED repeatedly when multiple
terminal frames arrive; add idempotency guards: introduce a boolean like
hasEmittedTextMessageEnd and another like hasEmittedRunFinished (or a single
hasEmittedRunFinished used to gate both) and check them before yielding
TEXT_MESSAGE_END and RUN_FINISHED (use existing aguiState.messageId and
aguiState.runId to identify the run/message), set the flag(s) to true
immediately after emitting so subsequent chunks with choice.finishReason do not
emit duplicates; keep the TOOL_CALL_END behavior unchanged.
packages/typescript/ai-mistral/src/utils/client.ts (1)

1-1: ⚠️ Potential issue | 🔴 Critical

HTTPClient import path still appears incorrect.

HTTPClient is not exported from the @mistralai/mistralai package root — it must be imported from @mistralai/mistralai/lib/http. As written, the destructured HTTPClient will resolve to undefined and new HTTPClient() on Line 25 will throw at runtime whenever defaultHeaders is provided.

Proposed fix
-import { Mistral, HTTPClient } from '@mistralai/mistralai'
+import { Mistral } from '@mistralai/mistralai'
+import { HTTPClient } from '@mistralai/mistralai/lib/http'
Does `@mistralai/mistralai` 2.2.0 export HTTPClient from the package root, or only from `@mistralai/mistralai/lib/http`?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/utils/client.ts` at line 1, The import for
HTTPClient is wrong: change the module import so Mistral still comes from
'@mistralai/mistralai' but HTTPClient is imported from
'@mistralai/mistralai/lib/http' (so that the constructor used by new
HTTPClient(...) in this file resolves correctly); update the import statement
that currently destructures HTTPClient from the package root to instead import
HTTPClient from the lib/http path and keep the Mistral import as-is, then verify
usages like new HTTPClient(...) and any defaultHeaders handling still work with
the corrected class.
🧹 Nitpick comments (1)
packages/typescript/ai-mistral/src/adapters/text.ts (1)

295-297: Redundant string coercion.

rawText is already narrowed to string by the || '' fallback on Line 295, so the typeof rawText === 'string' ? rawText : String(rawText) ternary on Lines 296-297 is dead code. Just use rawText directly (or drop the ||'' and keep the String(...) path — but not both).

Proposed simplification
-    const rawText = response.choices?.[0]?.message?.content || ''
-    const textContent =
-      typeof rawText === 'string' ? rawText : String(rawText)
+    const textContent = response.choices?.[0]?.message?.content ?? ''
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/typescript/ai-mistral/src/adapters/text.ts` around lines 295 - 297,
The code performs redundant coercion: `rawText` is already a string due to the
`|| ''` fallback, so the ternary that produces `textContent` is dead code;
replace the ternary with a direct assignment `const textContent = rawText` (or
alternatively remove the `|| ''` fallback and keep `const textContent =
String(response.choices?.[0]?.message?.content)`), updating the variables
`rawText`, `textContent`, and the `response.choices?.[0]?.message?.content`
expression accordingly to remove the unnecessary conversion.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 4446ad0a-e71c-4b1d-bf33-4890ae9ec538

📥 Commits

Reviewing files that changed from the base of the PR and between ad914cb and 56da4e6.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (13)
  • packages/typescript/ai-mistral/package.json
  • packages/typescript/ai-mistral/src/adapters/text.ts
  • packages/typescript/ai-mistral/src/message-types.ts
  • packages/typescript/ai-mistral/src/model-meta.ts
  • packages/typescript/ai-mistral/src/tools/function-tool.ts
  • packages/typescript/ai-mistral/src/utils/client.ts
  • packages/typescript/ai-mistral/src/utils/schema-converter.ts
  • packages/typescript/ai-mistral/tests/mistral-adapter.test.ts
  • testing/e2e/package.json
  • testing/e2e/src/lib/feature-support.ts
  • testing/e2e/src/lib/providers.ts
  • testing/e2e/src/lib/types.ts
  • testing/e2e/tests/test-matrix.ts
✅ Files skipped from review due to trivial changes (6)
  • testing/e2e/package.json
  • packages/typescript/ai-mistral/src/utils/schema-converter.ts
  • testing/e2e/src/lib/types.ts
  • packages/typescript/ai-mistral/package.json
  • packages/typescript/ai-mistral/src/model-meta.ts
  • packages/typescript/ai-mistral/src/message-types.ts
🚧 Files skipped from review as they are similar to previous changes (3)
  • testing/e2e/tests/test-matrix.ts
  • testing/e2e/src/lib/providers.ts
  • packages/typescript/ai-mistral/tests/mistral-adapter.test.ts

@cstrnt
Author

cstrnt commented Apr 20, 2026

@AlemTuzlak what do you think?

@nx-cloud

nx-cloud Bot commented Apr 24, 2026

View your CI Pipeline Execution ↗ for commit 23ad8e0

| Command | Status | Duration | Result |
| --- | --- | --- | --- |
| nx run-many --targets=test:sherif,test:knip,tes... | ✅ Succeeded | 9s | View ↗ |

☁️ Nx Cloud last updated this comment at 2026-04-24 13:23:05 UTC

@nx-cloud

nx-cloud Bot commented Apr 24, 2026

View your CI Pipeline Execution ↗ for commit 018ed5e


☁️ Nx Cloud last updated this comment at 2026-04-24 09:05:25 UTC

@cstrnt
Author

cstrnt commented Apr 24, 2026

@AlemTuzlak would love to get started using the mistral provider :)

@AlemTuzlak
Contributor

@cstrnt mind making sure the ci checks are green first?

Co-authored-by: Copilot <copilot@github.com>
