feat(ai-gemini): add geminiTextInteractions() adapter for stateful Interactions API #502
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough: This PR adds an experimental `geminiTextInteractions()` adapter for Gemini's stateful Interactions API.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Adapter as GeminiTextInteractionsAdapter
    participant GeminiAPI as Gemini Interactions API
    Client->>Adapter: chatStream(firstMessage)
    Adapter->>Adapter: Build interaction request (no previous_interaction_id)
    Adapter->>GeminiAPI: interactions.create(stream: true, turns: [...])
    GeminiAPI-->>Adapter: SSE stream (content & tool deltas)
    Adapter->>Client: Emit RUN_STARTED, TEXT/TOOL chunks...
    Adapter->>Client: Emit CUSTOM (name: "gemini.interactionId", id)
    Adapter->>Client: Emit RUN_FINISHED (finishReason, usage)
    Client->>Client: Capture interactionId from CUSTOM event
    Client->>Adapter: chatStream(secondMessage, previous_interaction_id)
    Adapter->>Adapter: Build request (previous_interaction_id, trimmed turns)
    Adapter->>GeminiAPI: interactions.create(stream: true, turns: [latest user turn])
    GeminiAPI-->>Adapter: SSE stream (content & tool deltas)
    Adapter->>Client: Emit RUN_STARTED, TEXT/TOOL chunks...
    Adapter->>Client: Emit CUSTOM (new interactionId)
    Adapter->>Client: Emit RUN_FINISHED
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

View your CI Pipeline Execution ↗ for commit b30243d (Nx Cloud)

View your CI Pipeline Execution ↗ for commit f502c52 (Nx Cloud)
…teractions API

Routes through `client.interactions.create` so callers can pass `previous_interaction_id` via `modelOptions` and let the server retain conversation history. Surfaces the server-assigned interaction id on `RUN_FINISHED.providerMetadata.interactionId` (a new field on the `RunFinishedEvent`) to feed back on the next turn. Scope: text output with function tools only; built-in Gemini tools and non-text output via Interactions remain on `geminiText()`. Marked `@experimental` — the underlying API is Beta per Google.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Force-pushed from f502c52 to 814e65b
Actionable comments posted: 6
🧹 Nitpick comments (4)
testing/e2e/tests/stateful-interactions.spec.ts (1)
21-28: Placeholder spec is acceptable, but track the TODO explicitly.

The skipped test is well-documented and defers to the Gemini unit suite for adapter correctness. Since this is effectively a stub pending aimock fixture support, it would be worth:

- Filing a tracking issue (referenced from the TODO) so this doesn't silently remain `.skip` long-term.
- Considering whether an assertion inside the body (even when skipped) would make the intent more concrete once the test is enabled — right now the empty body means a future maintainer starts from scratch.
As per coding guidelines: "Add E2E test coverage for every feature, bug fix, or behavior change in the testing/e2e directory" — the placeholder-plus-strong-unit-coverage approach is a reasonable compromise given the mock limitation, but should not be left indefinitely.
Want me to draft the intended assertions (two-turn flow extracting `RUN_FINISHED.providerMetadata.interactionId` and re-sending via `previous_interaction_id`) as commented-out scaffolding inside the `test.skip` body so the follow-up PR only needs to remove `.skip` and wire fixtures?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testing/e2e/tests/stateful-interactions.spec.ts` around lines 21 - 28, Add an explicit tracking TODO and commented scaffolding inside the skipped test to avoid the stub being forgotten: update the test.skip block (in the file containing providersFor / test.describe / test.skip for "stateful-interactions") to reference a newly filed tracking issue in the TODO comment, and include commented-out assertion scaffolding that shows the intended two-turn flow — capture RUN_FINISHED.providerMetadata.interactionId from the first run and show the re-send using previous_interaction_id for the second run (so maintainer only needs to remove .skip and wire fixtures/aimock). Ensure the TODO includes the issue number and the commented assertions reference RUN_FINISHED.providerMetadata.interactionId and previous_interaction_id so intent is explicit.

packages/typescript/ai-gemini/src/index.ts (1)
14-22: Exported API surface looks good; consider whether both option type aliases need to be public.
`GeminiTextInteractionsProviderOptions` is a direct alias of `ExternalTextInteractionsProviderOptions` per the adapter file. Exporting both names means downstream consumers have two ways to refer to the same type, which can cause confusion over time (e.g., docs drifting, types diverging if one is later modified).

Unless `ExternalTextInteractionsProviderOptions` is intended as a distinct extension point in the future, you might consider keeping only `GeminiTextInteractionsProviderOptions` in the public surface and marking the external alias `@internal`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/index.ts` around lines 14 - 22, The public API currently exposes two identical type aliases (GeminiTextInteractionsProviderOptions and ExternalTextInteractionsProviderOptions); remove or hide ExternalTextInteractionsProviderOptions from the public index surface to avoid duplicate type names. Update packages/typescript/ai-gemini/src/index.ts to stop exporting ExternalTextInteractionsProviderOptions and only export GeminiTextInteractionsProviderOptions, and, if the original alias must remain in the adapter file, mark the symbol ExternalTextInteractionsProviderOptions as internal (e.g., JSDoc `@internal` or remove its export) in ./adapters/text-interactions so downstream consumers only see GeminiTextInteractionsProviderOptions.

packages/typescript/ai-gemini/src/adapters/text-interactions.ts (1)
270-282: `JSON.parse` on tool-call arguments can throw synchronously inside request build.

`toolCall.function.arguments` is a string of whatever the prior model emitted; a malformed payload will throw here. That propagates up through `buildInteractionsRequest` into `chatStream`'s outer `try/catch`, so the user sees a generic `RUN_ERROR` rather than a clear "invalid tool arguments" signal. Worth a targeted try/catch with a descriptive error (including tool name + id) or a safe-parse fallback to `{}`.

♻️ Suggested fix

```diff
-          arguments: toolCall.function.arguments
-            ? JSON.parse(toolCall.function.arguments)
-            : {},
+          arguments: safeParseToolArgs(toolCall.function.arguments, toolCall.function.name, toolCall.id),
```

With a helper that wraps `JSON.parse` and throws `new Error(`Invalid JSON arguments for tool ${name} (${id}): ...`)` on failure.

🤖 Prompt for AI Agents
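A runnable sketch of such a helper (`safeParseToolArgs` is the name suggested in the diff above, not an existing export; signature and message format are assumptions):

```typescript
// Hypothetical helper: parse tool-call arguments defensively so a malformed
// payload surfaces as a descriptive adapter error rather than a bare
// SyntaxError bubbling up from buildInteractionsRequest.
function safeParseToolArgs(
  raw: string | undefined,
  toolName: string,
  toolCallId: string,
): Record<string, unknown> {
  if (!raw) return {} // absent arguments default to an empty object
  try {
    return JSON.parse(raw) as Record<string, unknown>
  } catch (err) {
    const detail = err instanceof Error ? err.message : String(err)
    throw new Error(
      `Invalid JSON arguments for tool ${toolName} (${toolCallId}): ${detail}`,
    )
  }
}
```

A tolerant variant could instead log the failure and return `{}`, matching the streaming parse behavior the review mentions.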
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts` around lines 270 - 282, The JSON.parse call inside the loop that converts toolCall.function.arguments can throw and bubble up from buildInteractionsRequest into chatStream; wrap the parse in a small try/catch (or use a safe-parse helper) when building the parts for msg.toolCalls so malformed JSON yields a descriptive Error like "Invalid JSON arguments for tool <name> (<id>): <parse error message>" (including toolCall.id and toolCall.function.name) or, if you prefer a tolerant approach, log the parse failure and fall back to an empty object {} for that toolCall's arguments; update the code paths that push the function_call part so they use the parsed/safe value.

packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts (1)
51-435: Comprehensive test coverage — LGTM.

The suite exercises the adapter's key surfaces: streaming translation, `previous_interaction_id` short-circuit (latest user turn only) vs. full-history `Turn[]` fallback, tool call/result round-trip, built-in-tool and mime-type rejections, upstream SSE error propagation, and non-streaming structured output with `response_mime_type`/`response_format` and `stream: undefined`.

Two small optional nits, both non-blocking:

- The many `as any` casts for narrowing `StreamChunk` union members are fine, but a tiny helper like `findChunk<T extends StreamChunk['type']>(chunks, type)` returning the properly narrowed variant would tighten the assertions.
- Consider adding a test where an `error` SSE event is followed by `interaction.complete` in the same stream, to lock in whether a single turn is expected to emit both `RUN_ERROR` and `RUN_FINISHED` (see related comment on `translateInteractionEvents`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts` around lines 51 - 435, Add a small typed helper to avoid repetitive "as any" casts and add the suggested SSE sequence test: implement a generic helper findChunk<T extends StreamChunk['type']>(chunks, type) that returns the narrowed chunk type for use in tests (replace usages like chunks.find((c) => c.type === 'RUN_FINISHED') as any with findChunk(chunks, 'RUN_FINISHED')), and add a new test case (e.g., "emits RUN_ERROR and RUN_FINISHED when error then interaction.complete in same stream") that mocks interactionsCreateSpy to stream an error event followed by an interaction.complete and asserts both RUN_ERROR and RUN_FINISHED are emitted; update imports/types in the test file as needed so StreamChunk is available for the helper.
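A sketch of that helper, using a minimal stand-in for the real `StreamChunk` union (only the `type` discriminant matters; the real union lives in `@tanstack/ai`):

```typescript
// Minimal stand-in for the real StreamChunk union, for illustration only.
type StreamChunk =
  | { type: 'RUN_FINISHED'; finishReason: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }
  | { type: 'RUN_ERROR'; error: string }

// Returns the first chunk of the requested type, narrowed via a type
// predicate so test assertions need no `as any` casts.
function findChunk<T extends StreamChunk['type']>(
  chunks: ReadonlyArray<StreamChunk>,
  type: T,
): Extract<StreamChunk, { type: T }> | undefined {
  return chunks.find(
    (c): c is Extract<StreamChunk, { type: T }> => c.type === type,
  )
}
```

With this, `findChunk(chunks, 'RUN_FINISHED')?.finishReason` type-checks directly, replacing `chunks.find((c) => c.type === 'RUN_FINISHED') as any`.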
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/adapters/gemini.md`:
- Around line 201-202: Update the retention statement that currently reads
"Retention: **55 days on the Paid Tier, 1 day on the Free Tier.**" to either
append a citation link to Google's Interactions API docs
(https://ai.google.dev/gemini-api/docs/interactions) as the authoritative source
or soften the wording to "as of the time of writing, 55 days on the Paid Tier, 1
day on the Free Tier" so the note is not presented as permanent; make the change
next to the existing `store: false` / `background: true` note so readers see the
link/softened language in context.
- Around line 437-451: The docs for createGeminiTextInteractions incorrectly
list config.httpOptions? whereas the actual config type
(GeminiTextInteractionsConfig extends GeminiClientConfig) uses the same fields
as other Gemini factories; update the documentation for
createGeminiTextInteractions to reference config.baseURL? (matching
createGeminiText and createGeminiImage) and ensure the example and any inline
mentions (including the geminiTextInteractions section) use baseURL consistently
so the config surface matches GeminiClientConfig/GeminiTextInteractionsConfig.
In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts`:
- Around line 216-223: The function convertMessagesToInteractionsInput currently
returns an empty array when hasPreviousInteraction is true but
findLatestUserTurn(messages) returns undefined, which leads to requests with
empty input; update convertMessagesToInteractionsInput to throw a descriptive
error instead of returning [] in that case (e.g., indicate that
modelOptions.previous_interaction_id was provided but no user turn could be
found in messages), so callers get a clear adapter-level error; reference
convertMessagesToInteractionsInput and findLatestUserTurn to locate the logic to
change and include mention of previous_interaction_id in the error text.
- Around line 213-251: The current shortcut for previous_interaction_id uses
findLatestUserTurn which only returns a bare user turn and drops trailing tool
results; instead, modify convertMessagesToInteractionsInput and replace
findLatestUserTurn with logic that returns all messages that occur after the
last assistant text completion so the server receives any trailing
tool/assistant-toolCalls messages (i.e., include the latest user turn plus any
subsequent tool/result or assistant(toolCalls) messages). Concretely: in
convertMessagesToInteractionsInput, build toolCallIdToName from earlier
assistant.toolCalls as today, then find the index of the last assistant text
completion (msg.role === 'assistant' && !msg.toolCalls); slice messages from
that index+1 to end (or if none, fallback to the last user turn plus subsequent
tool messages), map that slice via messageToTurn using toolCallIdToName, and
return the resulting TurnInput[]; remove/replace the old findLatestUserTurn
behavior to avoid returning stale user-only turns.
- Around line 656-672: The 'case "error"' branch yields a RUN_ERROR but uses
break which returns control to the outer for-await loop, allowing later events
(e.g., interaction.complete) to produce terminal events like RUN_FINISHED or
TOOL_CALL_END after an error; change the behavior in the case 'error' block
inside the generator (the switch handling event types) to stop the generator
immediately by returning after yielding the RUN_ERROR (replace the break with a
return) so the run ends deterministically on first error and no further terminal
events are emitted.
- Around line 563-587: The thought_summary branch currently emits STEP_FINISHED
per delta; change it to create a single reasoningMessageId (similar to
thinkingStepId using generateId(adapterName)), emit REASONING_MESSAGE_START when
first seeing thinking, then emit REASONING_MESSAGE_CONTENT for each delta chunk
(using delta.content.text), and finally emit REASONING_MESSAGE_END and
REASONING_END when the thinking block completes (on interaction.complete or when
content type changes away from thought_summary), while still keeping the
STEP_STARTED/STEP_FINISHED semantics for the overall thinking step
(thinkingStepId) as appropriate.
---
Nitpick comments:
In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts`:
- Around line 270-282: The JSON.parse call inside the loop that converts
toolCall.function.arguments can throw and bubble up from
buildInteractionsRequest into chatStream; wrap the parse in a small try/catch
(or use a safe-parse helper) when building the parts for msg.toolCalls so
malformed JSON yields a descriptive Error like "Invalid JSON arguments for tool
<name> (<id>): <parse error message>" (including toolCall.id and
toolCall.function.name) or, if you prefer a tolerant approach, log the parse
failure and fall back to an empty object {} for that toolCall's arguments;
update the code paths that push the function_call part so they use the
parsed/safe value.
In `@packages/typescript/ai-gemini/src/index.ts`:
- Around line 14-22: The public API currently exposes two identical type aliases
(GeminiTextInteractionsProviderOptions and
ExternalTextInteractionsProviderOptions); remove or hide
ExternalTextInteractionsProviderOptions from the public index surface to avoid
duplicate type names. Update packages/typescript/ai-gemini/src/index.ts to stop
exporting ExternalTextInteractionsProviderOptions and only export
GeminiTextInteractionsProviderOptions, and, if the original alias must remain in
the adapter file, mark the symbol ExternalTextInteractionsProviderOptions as
internal (e.g., JSDoc `@internal` or remove its export) in
./adapters/text-interactions so downstream consumers only see
GeminiTextInteractionsProviderOptions.
In `@packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts`:
- Around line 51-435: Add a small typed helper to avoid repetitive "as any"
casts and add the suggested SSE sequence test: implement a generic helper
findChunk<T extends StreamChunk['type']>(chunks, type) that returns the narrowed
chunk type for use in tests (replace usages like chunks.find((c) => c.type ===
'RUN_FINISHED') as any with findChunk(chunks, 'RUN_FINISHED')), and add a new
test case (e.g., "emits RUN_ERROR and RUN_FINISHED when error then
interaction.complete in same stream") that mocks interactionsCreateSpy to stream
an error event followed by an interaction.complete and asserts both RUN_ERROR
and RUN_FINISHED are emitted; update imports/types in the test file as needed so
StreamChunk is available for the helper.
In `@testing/e2e/tests/stateful-interactions.spec.ts`:
- Around line 21-28: Add an explicit tracking TODO and commented scaffolding
inside the skipped test to avoid the stub being forgotten: update the test.skip
block (in the file containing providersFor / test.describe / test.skip for
"stateful-interactions") to reference a newly filed tracking issue in the TODO
comment, and include commented-out assertion scaffolding that shows the intended
two-turn flow — capture RUN_FINISHED.providerMetadata.interactionId from the
first run and show the re-send using previous_interaction_id for the second run
(so maintainer only needs to remove .skip and wire fixtures/aimock). Ensure the
TODO includes the issue number and the commented assertions reference
RUN_FINISHED.providerMetadata.interactionId and previous_interaction_id so
intent is explicit.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: aaadd776-b336-41b5-97fb-0ce5e30d852f
📒 Files selected for processing (12)
- .changeset/gemini-text-interactions.md
- docs/adapters/gemini.md
- packages/typescript/ai-gemini/src/adapters/text-interactions.ts
- packages/typescript/ai-gemini/src/index.ts
- packages/typescript/ai-gemini/src/text-interactions/text-interactions-provider-options.ts
- packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts
- packages/typescript/ai/src/types.ts
- testing/e2e/src/lib/feature-support.ts
- testing/e2e/src/lib/features.ts
- testing/e2e/src/lib/types.ts
- testing/e2e/tests/stateful-interactions.spec.ts
- testing/e2e/tests/test-matrix.ts
…e providerMetadata field

Replace the `RunFinishedEvent.providerMetadata.interactionId` approach with an AG-UI `CUSTOM` event (`name: 'gemini.interactionId'`) emitted just before `RUN_FINISHED`. Keeps the provider-specific identifier outside the core `@tanstack/ai` type surface — no change to shared types required. The `CUSTOM` event is the AG-UI protocol's designated escape hatch for provider-specific data. Also adds an `asChunk` cast helper (mirroring sibling `text.ts`) to fix type errors where string-literal event types weren't narrowing to the `EventType` enum.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
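Client-side, the capture described in this commit can be sketched as follows. Only the event name `'gemini.interactionId'` comes from the PR; the chunk shape (`name`/`value` fields) and the helper are illustrative assumptions:

```typescript
// Simplified chunk shape: the real AG-UI CUSTOM event type may differ.
type Chunk =
  | { type: 'CUSTOM'; name: string; value: string }
  | { type: 'TEXT_MESSAGE_CONTENT'; delta: string }
  | { type: 'RUN_FINISHED' }

// Scan a turn's chunks for the gemini.interactionId CUSTOM event; the
// returned id would be fed back as modelOptions.previous_interaction_id
// on the next turn so the server retains conversation history.
function captureInteractionId(chunks: Iterable<Chunk>): string | undefined {
  let interactionId: string | undefined
  for (const chunk of chunks) {
    if (chunk.type === 'CUSTOM' && chunk.name === 'gemini.interactionId') {
      interactionId = chunk.value // last one wins; emitted just before RUN_FINISHED
    }
  }
  return interactionId
}
```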
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-code-mode
@tanstack/ai-code-mode-skills
@tanstack/ai-devtools-core
@tanstack/ai-elevenlabs
@tanstack/ai-event-client
@tanstack/ai-fal
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-groq
@tanstack/ai-isolate-cloudflare
@tanstack/ai-isolate-node
@tanstack/ai-isolate-quickjs
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-openrouter
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/preact-ai-devtools
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
Actionable comments posted: 2
♻️ Duplicate comments (3)
packages/typescript/ai-gemini/src/adapters/text-interactions.ts (3)
671-683: ⚠️ Potential issue | 🟡 Minor

`error` branch still uses `break`; run can emit events after `RUN_ERROR`.

Falling through to the outer `for await` means a subsequent `interaction.complete` (or any trailing event) will still drive `RUN_FINISHED`/`TOOL_CALL_END`/`TEXT_MESSAGE_END` after the terminal `RUN_ERROR`, violating the single-terminal-event contract. Replace `break` with `return` here.

🐛 Proposed fix

```diff
           },
         })
-        break
+        return
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts` around lines 671 - 683, The error branch yields a RUN_ERROR chunk but uses break, which allows the outer for-await loop to continue and emit terminal events after RUN_ERROR; modify the handler in the case 'error' block (the asChunk call that emits RUN_ERROR with runId, model, timestamp and event.error) to return immediately instead of break so the generator stops and no further RUN_FINISHED/TOOL_CALL_END/TEXT_MESSAGE_END events are emitted after the terminal RUN_ERROR.
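The control-flow point is easy to demonstrate in isolation. A simplified synchronous model (not the adapter's actual code; the real loop is `for await` over SSE events, but `break` vs. `return` behaves identically):

```typescript
// Toy event/chunk model: a `break` in the error case only exits the switch,
// so a trailing `complete` event would still emit RUN_FINISHED; `return`
// ends the generator and guarantees RUN_ERROR is the final chunk.
type InteractionEvent =
  | { type: 'delta'; text: string }
  | { type: 'error'; message: string }
  | { type: 'complete' }

function* translate(events: Iterable<InteractionEvent>): Generator<string> {
  for (const event of events) {
    switch (event.type) {
      case 'delta':
        yield `TEXT_MESSAGE_CONTENT:${event.text}`
        break
      case 'error':
        yield `RUN_ERROR:${event.message}`
        // `break` here would hand control back to the loop, letting a
        // trailing `complete` event emit RUN_FINISHED after RUN_ERROR.
        return
      case 'complete':
        yield 'RUN_FINISHED'
        break
    }
  }
}
```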
226-229: ⚠️ Potential issue | 🟠 Major

Tool-result continuation is still lost when chaining with `previous_interaction_id`.

`findLatestUserTurn` returns only `role === 'user'` messages. For a function-tool round-trip `[user, assistant(toolCall), tool(result)]` with `previous_interaction_id` set, the server expects a trailing `function_result` turn — today this returns either an older user turn or `[]`, silently dropping the tool output. Consider sending all messages after the last assistant text completion (or at minimum including trailing `tool`/assistant-toolCalls messages alongside the latest user turn).

Also: when `hasPreviousInteraction` is true and no user/tool turn can be derived, returning `[]` hands an empty `input` to the server which surfaces as an opaque server error. Throwing a descriptive adapter error up front would be clearer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts` around lines 226 - 229, The current branch that runs when hasPreviousInteraction is true uses findLatestUserTurn (which only returns role === 'user') and thus drops trailing tool results; update the logic in text-interactions.ts (the block that calls findLatestUserTurn) to collect not just the last user turn but also any messages after the last assistant text completion — at minimum include trailing assistant toolCall and tool result messages alongside the latest user turn so the tool-result is preserved when previous_interaction_id is set; additionally, when no suitable user/tool messages can be derived do not return [] (which yields an empty input to the server) — instead throw a clear adapter error (e.g., AdapterError or a new descriptive error) explaining that no previous interaction messages could be resolved.
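Under the suggested approach, the trim could be sketched like this. The `Message` shape and the `turnsToSend` name are illustrative, not the adapter's actual types:

```typescript
// Illustrative message shape; the real adapter maps these to TurnInput[].
interface Message {
  role: 'user' | 'assistant' | 'tool'
  content?: string
  toolCalls?: Array<{ id: string }>
}

// When previous_interaction_id is set, send everything after the last
// assistant *text* completion, so trailing tool results and
// assistant(toolCalls) messages survive the trim.
function turnsToSend(messages: Message[]): Message[] {
  let lastAssistantText = -1
  messages.forEach((m, i) => {
    if (m.role === 'assistant' && !m.toolCalls?.length) lastAssistantText = i
  })
  const tail = messages.slice(lastAssistantText + 1)
  if (tail.length === 0) {
    // Fail loudly instead of sending an empty `input` to the server.
    throw new Error(
      'previous_interaction_id was provided but no sendable turns were found in messages',
    )
  }
  return tail
}
```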
569-593: ⚠️ Potential issue | 🟠 Major

`thought_summary` still emits per-delta `STEP_FINISHED`.

Each delta yields `STEP_FINISHED` on the same `thinkingStepId`, which tells consumers the step terminated on the first chunk. Use the reasoning protocol instead: emit `REASONING_MESSAGE_START` once, stream deltas via `REASONING_MESSAGE_CONTENT`, and close with `REASONING_MESSAGE_END`/`REASONING_END` when thinking completes (matches the pattern in `text.ts`). Reserve a single `STEP_FINISHED` for the end of the thinking step.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts` around lines 569 - 593, The current case 'thought_summary' handler emits STEP_FINISHED for every delta which prematurely signals step completion; instead, when first receiving thought_text, create thinkingStepId and emit STEP_STARTED, then emit a single REASONING_MESSAGE_START (once) and for each delta emit REASONING_MESSAGE_CONTENT appending to thinkingAccumulated (do not emit STEP_FINISHED per-delta), and when the thought_summary stream finishes emit REASONING_MESSAGE_END (and REASONING_END if applicable) followed by one STEP_FINISHED for thinkingStepId; use the existing symbols thinkingStepId, generateId(adapterName), asChunk, thinkingAccumulated and switch out STEP_FINISHED emissions for REASONING_MESSAGE_START/CONTENT/END and a final STEP_FINISHED at completion.
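As a simplified model of the requested event sequence (event names mirror the AG-UI reasoning protocol; the chunk shape and translator function are illustrative, and the real adapter would use `generateId(adapterName)` for the id):

```typescript
type ReasoningChunk = { type: string; id: string; delta?: string }

// Emit START once, one CONTENT per delta, and END events only when the
// thinking block completes -- never a terminal event per delta.
function translateThoughtDeltas(deltas: string[]): ReasoningChunk[] {
  const chunks: ReasoningChunk[] = []
  const id = 'reasoning_1' // placeholder for generateId(adapterName)
  let started = false
  for (const delta of deltas) {
    if (!started) {
      chunks.push({ type: 'REASONING_MESSAGE_START', id })
      started = true
    }
    chunks.push({ type: 'REASONING_MESSAGE_CONTENT', id, delta })
  }
  if (started) {
    chunks.push({ type: 'REASONING_MESSAGE_END', id })
    chunks.push({ type: 'REASONING_END', id })
  }
  return chunks
}
```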
🧹 Nitpick comments (2)
packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts (1)
129-171: Consider adding a test for `previous_interaction_id` + trailing tool result.

The current `previous_interaction_id` test only exercises the user-only trim path. A follow-up case with `[user, assistant(toolCalls), tool(result)]` + `previous_interaction_id` would lock in behavior for the tool-continuation scenario (which is exactly the code path that intersects with `findLatestUserTurn` in `text-interactions.ts` L247-257). This would either pin the fix once the trim logic is broadened, or demonstrate the current gap where the `function_result` is dropped on the outgoing request.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts` around lines 129 - 171, Add a new test that mirrors the existing previous_interaction_id case but uses a conversation sequence [user, assistant (with toolCalls), tool (result), user] to exercise the tool-continuation trim path that touches findLatestUserTurn in text-interactions.ts; create providerOptions with previous_interaction_id set (like 'int_1'), call chat(...) with messages containing an assistant tool call and a subsequent tool result, then assert mocks.interactionsCreateSpy was called once and that the captured payload preserves previous_interaction_id, sets model/stream as expected, and that payload.input contains only the final user turn (and does not drop a function_result/tool result) to lock in correct trimming behavior.

packages/typescript/ai-gemini/src/adapters/text-interactions.ts (1)
148-155: Error wrapping drops the original cause and stack.

`throw new Error(error.message)` preserves the message but loses the original stack and any error fields (e.g., HTTP status, request id). Forwarding the original error — or at minimum attaching `cause` — makes production debugging considerably easier.

♻️ Proposed refactor

```diff
-      } catch (error) {
-        throw new Error(
-          error instanceof Error
-            ? error.message
-            : 'An unknown error occurred during structured output generation.',
-        )
-      }
+      } catch (error) {
+        if (error instanceof Error) throw error
+        throw new Error(
+          'An unknown error occurred during structured output generation.',
+          { cause: error },
+        )
+      }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts` around lines 148 - 155, In the catch block that currently does "throw new Error(...)" preserve the original error and its stack/fields instead of replacing it: either rethrow the original error object (throw error) or create a new Error with the original as the cause (throw new Error(error instanceof Error ? error.message : 'An unknown error...', { cause: error })), updating the catch in the function handling structured output generation so downstream logging/monitoring can access the original stack and metadata.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts`:
- Around line 280-288: The JSON.parse of toolCall.function.arguments can throw
on malformed input and should be guarded: in the code that builds the
function_call object (where toolCall.id, toolCall.function.name and
toolCall.function.arguments are used inside buildInteractionsRequest), wrap the
JSON.parse in a try/catch and on failure either fallback to an empty object
({}), matching the streaming parse behavior around the streaming parse at
L614-L619, or throw a descriptive error that includes the toolCall id/name so
callers (e.g., chatStream) can surface a clear message; update the function_call
creation to use the safe-parsed value.
- Around line 530-567: The handler is overwriting incremental function_call
fragments (state.args) instead of accumulating them, so change the update in the
toolCalls flow to append the incoming args fragment to the existing state.args
(use the same accumulation pattern used elsewhere) and keep the initial
assignment when creating state; when yielding TOOL_CALL_ARGS, set delta to the
new fragment (argsString) and set args to the full accumulated state.args so
downstream consumers get only the new bytes in delta and the complete arguments
in args; update references around toolCalls, state.args, TOOL_CALL_ARGS, and
TOOL_CALL_END to use this accumulation approach.
---
Duplicate comments:
In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts`:
- Around line 671-683: The error branch yields a RUN_ERROR chunk but uses break,
which allows the outer for-await loop to continue and emit terminal events after
RUN_ERROR; modify the handler in the case 'error' block (the asChunk call that
emits RUN_ERROR with runId, model, timestamp and event.error) to return
immediately instead of break so the generator stops and no further
RUN_FINISHED/TOOL_CALL_END/TEXT_MESSAGE_END events are emitted after the
terminal RUN_ERROR.
- Around line 226-229: The current branch that runs when hasPreviousInteraction
is true uses findLatestUserTurn (which only returns role === 'user') and thus
drops trailing tool results; update the logic in text-interactions.ts (the block
that calls findLatestUserTurn) to collect not just the last user turn but also
any messages after the last assistant text completion — at minimum include
trailing assistant toolCall and tool result messages alongside the latest user
turn so the tool-result is preserved when previous_interaction_id is set;
additionally, when no suitable user/tool messages can be derived do not return
[] (which yields an empty input to the server) — instead throw a clear adapter
error (e.g., AdapterError or a new descriptive error) explaining that no
previous interaction messages could be resolved.
- Around line 569-593: The current case 'thought_summary' handler emits
STEP_FINISHED for every delta which prematurely signals step completion;
instead, when first receiving thought_text, create thinkingStepId and emit
STEP_STARTED, then emit a single REASONING_MESSAGE_START (once) and for each
delta emit REASONING_MESSAGE_CONTENT appending to thinkingAccumulated (do not
emit STEP_FINISHED per-delta), and when the thought_summary stream finishes emit
REASONING_MESSAGE_END (and REASONING_END if applicable) followed by one
STEP_FINISHED for thinkingStepId; use the existing symbols thinkingStepId,
generateId(adapterName), asChunk, thinkingAccumulated and switch out
STEP_FINISHED emissions for REASONING_MESSAGE_START/CONTENT/END and a final
STEP_FINISHED at completion.
---
Nitpick comments:
In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts`:
- Around line 148-155: In the catch block that currently does "throw new
Error(...)" preserve the original error and its stack/fields instead of
replacing it: either rethrow the original error object (throw error) or create a
new Error with the original as the cause (throw new Error(error instanceof Error
? error.message : 'An unknown error...', { cause: error })), updating the catch
in the function handling structured output generation so downstream
logging/monitoring can access the original stack and metadata.
In `@packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts`:
- Around line 129-171: Add a new test that mirrors the existing
previous_interaction_id case but uses a conversation sequence [user, assistant
(with toolCalls), tool (result), user] to exercise the tool-continuation trim
path that touches findLatestUserTurn in text-interactions.ts; create
providerOptions with previous_interaction_id set (like 'int_1'), call chat(...)
with messages containing an assistant tool call and a subsequent tool result,
then assert mocks.interactionsCreateSpy was called once and that the captured
payload preserves previous_interaction_id, sets model/stream as expected, and
that payload.input contains only the final user turn (and does not drop a
function_result/tool result) to lock in correct trimming behavior.
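The trim behavior that test should lock in can be expressed independently of the adapter. A minimal sketch under stated assumptions — the `Turn` shape and `trimForContinuation` name are hypothetical (the real code resolves this via `findLatestUserTurn`): when chaining on `previous_interaction_id`, only the turns after the last assistant turn — trailing tool results plus the latest user turn — are resent.

```typescript
// Hypothetical continuation trim: keep only the turns after the last
// assistant turn, so tool results are not dropped when chaining a
// function-call round-trip onto previous_interaction_id.
type Turn = { role: 'user' | 'assistant' | 'tool'; content: string }

function trimForContinuation(turns: Array<Turn>): Array<Turn> {
  const lastAssistant = turns.map((t) => t.role).lastIndexOf('assistant')
  // With no assistant turn, lastIndexOf is -1 and all turns are kept.
  return turns.slice(lastAssistant + 1)
}
```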
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e3f3e8b4-43f7-4759-914d-ebd5b4a21ad8
📒 Files selected for processing (4)
- .changeset/gemini-text-interactions.md
- docs/adapters/gemini.md
- packages/typescript/ai-gemini/src/adapters/text-interactions.ts
- packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts
✅ Files skipped from review due to trivial changes (1)
- .changeset/gemini-text-interactions.md
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/adapters/gemini.md
…apter
- Send trailing tool results (not just the latest user turn) when chaining via previous_interaction_id, so function-call round-trips survive statefully.
- Throw a descriptive error if previous_interaction_id is set with no sendable content instead of silently sending empty input.
- Accumulate function_call arguments across incremental content.delta events via object-level merge (SDK types args as an object per delta).
- Emit REASONING_START / REASONING_MESSAGE_START / REASONING_MESSAGE_CONTENT / REASONING_MESSAGE_END / REASONING_END for thought_summary deltas, matching the pattern in text.ts; keep legacy STEP_STARTED/STEP_FINISHED for transition.
- Return (not break) after RUN_ERROR so no further terminal events leak out.
- Guard JSON.parse on assistant toolCall.arguments with a safe fallback.
- Docs: fix createGeminiTextInteractions config field (baseURL, not httpOptions) and link Google's Interactions docs for retention policy.
Actionable comments posted: 1
🧹 Nitpick comments (3)
packages/typescript/ai-gemini/src/adapters/text-interactions.ts (2)
148-154: Re-wrapping errors loses the stack trace and original error metadata.

When `error` is already an `Error` instance (e.g., the JSON-parse failure thrown on L142-144), `throw new Error(error.message)` drops the original stack, `cause`, and any custom subclass. Consider `throw error instanceof Error ? error : new Error('An unknown error occurred during structured output generation.')` to preserve context for debugging.

♻️ Proposed refactor

```diff
- } catch (error) {
-   throw new Error(
-     error instanceof Error
-       ? error.message
-       : 'An unknown error occurred during structured output generation.',
-   )
- }
+ } catch (error) {
+   throw error instanceof Error
+     ? error
+     : new Error(
+         'An unknown error occurred during structured output generation.',
+       )
+ }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts` around lines 148 - 154, The catch block in the structured output generation currently wraps any Error into a new Error, losing stack and metadata; update the catch in the function handling structured output (the catch that inspects the variable error around the JSON-parse failure) to rethrow the original error when error instanceof Error, otherwise throw a new Error('An unknown error occurred during structured output generation.'); this preserves the original stack, cause, and custom subclasses while still handling non-Error values.
565-618: Tool name can become `undefined` if the first delta omits it.

On line 576 the initial `ToolCallState` is created with `name: delta.name`, and L596 only updates the name on subsequent deltas when truthy. If the first delta for a `toolCallId` lacks a `name` (e.g., a pathological ordering), `TOOL_CALL_START` is emitted with `toolName: undefined`, and `TOOL_CALL_END` also uses this undefined name. In practice the SDK reliably sends the name on the first delta, so this is speculative — but a defensive fallback (e.g., defer `TOOL_CALL_START` until a name is known, or use `state.name ?? ''`) would make the translator more resilient.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts` around lines 565 - 618, The TOOL_CALL_START can emit an undefined toolName when the first delta omits delta.name; update the ToolCallState creation and emissions to default the name to a safe fallback and ensure later deltas can overwrite it: when creating the state in the function_call case (the local variable state stored in toolCalls), set its name to delta.name ?? '' (or similar non-undefined fallback), and when emitting TOOL_CALL_START and subsequent events (the asChunk call producing type 'TOOL_CALL_START' and any 'TOOL_CALL_END' usage), use state.name ?? '' so the emitted toolName is never undefined while still allowing later deltas (the delta.name merge logic) to replace the placeholder.

packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts (1)
270-336: Consider adding a multi-delta function_call test.

This test covers a single `content.delta` for `function_call`, but the adapter's merge logic in `text-interactions.ts` (L583–597, around `JSON.parse(state.args)` + object spread) only exercises its fast path here. Given the past review thread specifically called out incremental argument accumulation as a bug, a regression test with two or more `function_call` deltas (e.g., `{location:'Madrid'}` then `{unit:'c'}`) asserting the final `TOOL_CALL_END.input` is the merged object would lock in that behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts` around lines 270 - 336, Add a regression test that sends multiple content.delta events of type 'function_call' to exercise the incremental-args merge path in the adapter (the logic around JSON.parse(state.args) / object spread in text-interactions.ts). Update the existing test case in text-interactions-adapter.test.ts (the one using mkStream, chat, collectChunks and asserting TOOL_CALL_START/TOOL_CALL_ARGS/TOOL_CALL_END) to include at least two successive content.delta deltas for the same id/name (e.g., first {arguments:{location:'Madrid'}} then {arguments:{unit:'c'}}) and assert the final TOOL_CALL_END.input equals the merged object ({location:'Madrid', unit:'c'}) and that TOOL_CALL_ARGS reflects the accumulated JSON string.
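The merge semantics the regression test should assert reduce to a few lines. This is a sketch of the expected accumulation behavior, not the adapter's internal `ToolCallState` handling:

```typescript
// Object-level accumulation of incremental function_call argument deltas:
// each delta carries a partial args object, and the final tool input is
// the shallow merge of all deltas in arrival order (later keys win).
function mergeToolArgs(
  deltas: Array<Record<string, unknown>>,
): Record<string, unknown> {
  return deltas.reduce((acc, delta) => ({ ...acc, ...delta }), {})
}
```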
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts`:
- Around line 619-664: The reasoning state (thinkingStepId, reasoningMessageId,
hasClosedReasoning, thinkingAccumulated) must be reset when closing a reasoning
block so subsequent interleaved thought_summary blocks start fresh; update the
closeReasoningIfNeeded function to, after it sets hasClosedReasoning = true and
emits the closing chunks, set thinkingStepId = null, reasoningMessageId = null,
hasClosedReasoning = false (or ensure it reflects the reset state per your
logic), and thinkingAccumulated = '' so future thought_summary handling in the
switch (which checks thinkingStepId and emits
REASONING_START/REASONING_MESSAGE_START/STEP_STARTED) will run correctly.
---
Nitpick comments:
In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts`:
- Around line 148-154: The catch block in the structured output generation
currently wraps any Error into a new Error, losing stack and metadata; update
the catch in the function handling structured output (the catch that inspects
the variable error around the JSON-parse failure) to rethrow the original error
when error instanceof Error, otherwise throw a new Error('An unknown error
occurred during structured output generation.'); this preserves the original
stack, cause, and custom subclasses while still handling non-Error values.
- Around line 565-618: The TOOL_CALL_START can emit an undefined toolName when
the first delta omits delta.name; update the ToolCallState creation and
emissions to default the name to a safe fallback and ensure later deltas can
overwrite it: when creating the state in the function_call case (the local
variable state stored in toolCalls), set its name to delta.name ?? '' (or
similar non-undefined fallback), and when emitting TOOL_CALL_START and
subsequent events (the asChunk call producing type 'TOOL_CALL_START' and any
'TOOL_CALL_END' usage), use state.name ?? '' so the emitted toolName is never
undefined while still allowing later deltas (the delta.name merge logic) to
replace the placeholder.
In `@packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts`:
- Around line 270-336: Add a regression test that sends multiple content.delta
events of type 'function_call' to exercise the incremental-args merge path in
the adapter (the logic around JSON.parse(state.args) / object spread in
text-interactions.ts). Update the existing test case in
text-interactions-adapter.test.ts (the one using mkStream, chat, collectChunks
and asserting TOOL_CALL_START/TOOL_CALL_ARGS/TOOL_CALL_END) to include at least
two successive content.delta deltas for the same id/name (e.g., first
{arguments:{location:'Madrid'}} then {arguments:{unit:'c'}}) and assert the
final TOOL_CALL_END.input equals the merged object ({location:'Madrid',
unit:'c'}) and that TOOL_CALL_ARGS reflects the accumulated JSON string.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 2ed67796-7efd-4df8-b004-6f424e41c38e
📒 Files selected for processing (4)
- docs/adapters/gemini.md
- packages/typescript/ai-gemini/src/adapters/text-interactions.ts
- packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts
- testing/e2e/tests/stateful-interactions.spec.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- docs/adapters/gemini.md
```ts
case 'thought_summary': {
  const thoughtText =
    delta.content && 'text' in delta.content ? delta.content.text : ''
  if (!thoughtText) break
  if (thinkingStepId === null) {
    thinkingStepId = generateId(adapterName)
    reasoningMessageId = generateId(adapterName)
    yield asChunk({
      type: 'REASONING_START',
      messageId: reasoningMessageId,
      model,
      timestamp,
    })
    yield asChunk({
      type: 'REASONING_MESSAGE_START',
      messageId: reasoningMessageId,
      role: 'reasoning',
      model,
      timestamp,
    })
    yield asChunk({
      type: 'STEP_STARTED',
      stepId: thinkingStepId,
      model,
      timestamp,
      stepType: 'thinking',
    })
  }
  thinkingAccumulated += thoughtText
  yield asChunk({
    type: 'REASONING_MESSAGE_CONTENT',
    messageId: reasoningMessageId!,
    delta: thoughtText,
    model,
    timestamp,
  })
  yield asChunk({
    type: 'STEP_FINISHED',
    stepId: thinkingStepId,
    model,
    timestamp,
    delta: thoughtText,
    content: thinkingAccumulated,
  })
  break
}
```
🧩 Analysis chain
🌐 Web query:
Does the @google/genai Interactions streaming API interleave thought_summary content.delta events with text content.delta events in a single interaction, or is thinking always delivered as one contiguous block before text?
💡 Result:
The @google/genai Interactions streaming API interleaves thought_summary content.delta events with text content.delta events in a single interaction. Thought summaries are delivered as separate thought_summary delta types within the stream of ContentDelta events, which can mix with text deltas and other types like tool calls. They are not restricted to a single contiguous block before text. This is evidenced by the API structure where streaming emits a sequence of ContentStart / multiple ContentDelta (including thought_summary, text, etc.) / ContentStop events, with deltas appearing in processing order. Multiple content blocks can be active simultaneously, and tool-related deltas (analogous to thinking) explicitly interleave between start/stop. SDK type definitions list thought_summary alongside text in the GeminiEventMap and ContentDelta union. Code examples handling streams check delta.type separately for text and thought_summary/thought, printing them distinctly but in the mixed stream order. Rust SDK docs note thoughts stream "before the final response" but via Delta events that accumulate, consistent with interleaving during generation.
Citations:
- 1: https://dev.to/asross311/taming-the-interactions-api-in-the-googlegenai-sdk-561b
- 2: https://deepwiki.com/googleapis/js-genai/8.4-streaming-interaction-events
- 3: https://github.com/evansenter/genai-rs/blob/main/docs/THINKING_MODE.md
- 4: https://github.com/evansenter/genai-rs/blob/main/docs/STREAMING_API.md
- 5: https://github.com/googleapis/js-genai/blob/89400098/src/interactions/resources/interactions.ts
- 6: https://deepwiki.com/googleapis/js-genai/8.1-creating-and-managing-interactions
- 7: https://github.com/google-gemini/gemini-skills/blob/main/skills/gemini-interactions-api/SKILL.md
- 8: https://building.theatlantic.com/gemini-interactions-api-one-interface-for-models-and-agents-986ffb16021c
🏁 Script executed:
```shell
cat -n packages/typescript/ai-gemini/src/adapters/text-interactions.ts | head -700 | tail -150
```

Repository: TanStack/ai
Length of output: 5815
🏁 Script executed:
```shell
cat -n packages/typescript/ai-gemini/src/adapters/text-interactions.ts | head -550 | tail -100
```

Repository: TanStack/ai
Length of output: 3370
Reset reasoning state when closing to support interleaved thought_summary blocks.

The API can interleave `thought_summary` and text deltas within a single interaction. Currently, `thinkingStepId` and `reasoningMessageId` are set once (L623–625) and never reset when `closeReasoningIfNeeded` sets `hasClosedReasoning = true`. If the stream alternates content types (e.g., thought_summary → text → thought_summary), the second reasoning block will skip `REASONING_START`/`REASONING_MESSAGE_START`/`STEP_STARTED` (since `thinkingStepId !== null`) yet emit `REASONING_MESSAGE_CONTENT` with a stale `reasoningMessageId`, creating an orphan reasoning message.

Reset `thinkingStepId`, `reasoningMessageId`, `hasClosedReasoning`, and `thinkingAccumulated` inside `closeReasoningIfNeeded` to allow a subsequent thought block to start a fresh reasoning sequence.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/typescript/ai-gemini/src/adapters/text-interactions.ts` around lines
619 - 664, The reasoning state (thinkingStepId, reasoningMessageId,
hasClosedReasoning, thinkingAccumulated) must be reset when closing a reasoning
block so subsequent interleaved thought_summary blocks start fresh; update the
closeReasoningIfNeeded function to, after it sets hasClosedReasoning = true and
emits the closing chunks, set thinkingStepId = null, reasoningMessageId = null,
hasClosedReasoning = false (or ensure it reflects the reset state per your
logic), and thinkingAccumulated = '' so future thought_summary handling in the
switch (which checks thinkingStepId and emits
REASONING_START/REASONING_MESSAGE_START/STEP_STARTED) will run correctly.
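A minimal sketch of the reset described above, under stated assumptions: the state shape mirrors the variable names in the comment, but the function and `Array<string>` return are illustrative (the real `closeReasoningIfNeeded` emits chunks via `asChunk`, and with the reset in place the separate `hasClosedReasoning` flag becomes redundant because the null check doubles as the guard):

```typescript
// Hypothetical reasoning-state holder: closing a block returns the closing
// event names and clears all per-block state, so a later interleaved
// thought_summary block re-emits a fresh REASONING_START / STEP_STARTED run.
interface ReasoningState {
  thinkingStepId: string | null
  reasoningMessageId: string | null
  thinkingAccumulated: string
}

function closeReasoningIfNeeded(state: ReasoningState): Array<string> {
  if (state.thinkingStepId === null) return [] // nothing open to close
  const closing = ['REASONING_MESSAGE_END', 'REASONING_END', 'STEP_FINISHED']
  // Reset so the next thought_summary block starts a new sequence.
  state.thinkingStepId = null
  state.reasoningMessageId = null
  state.thinkingAccumulated = ''
  return closing
}
```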
The Interactions adapter emitted RUN_ERROR with only a nested `error` object, which stripToSpecMiddleware removes by design — leaving downstream consumers with no error detail at all. Match the dual-form pattern from text.ts: emit flat `message`/`code` alongside the nested `error` so both spec-compliant and legacy consumers get the information. Update the three affected unit tests to assert against the flat fields.
…models Adds a `gemini-interactions` provider in the example's model selector backed by `geminiTextInteractions`, threads the server-assigned interactionId from the `gemini.interactionId` CUSTOM event through client state back into `previous_interaction_id` on subsequent turns, and refreshes MODEL_OPTIONS across every provider to current defaults (GPT-5.2, Claude 4.6, Gemini 3.1 preview, Grok 4.20, etc.). Also plumbs the standard `InternalLogger` (request / provider / errors) through `GeminiTextInteractionsAdapter` so `debug: true` produces useful diagnostics on this adapter like every other text adapter. Tools are intentionally omitted for the `gemini-interactions` branch because the Interactions API rejects `anyOf` in function parameter schemas.
🧹 Nitpick comments (2)
packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts (1)
395-446: Consider asserting the SDK was not invoked on short-circuit paths.

Both the image mime-type and built-in-tool rejection tests pre-seed `interactionsCreateSpy.mockResolvedValue(mkStream([]))`. If the adapter ever regressed and actually forwarded the request to the SDK (instead of short-circuiting with a clear error), the empty stream would cause a different failure mode that might be harder to diagnose. Adding `expect(mocks.interactionsCreateSpy).not.toHaveBeenCalled()` to each of these tests would lock in the intended short-circuit behavior and make regressions easier to spot.

🧪 Proposed tweak

```diff
 const err = chunks.find((c) => c.type === 'RUN_ERROR') as any
 expect(err).toBeDefined()
 expect(err.message).toMatch(/image\/bmp/)
 expect(err.message).toMatch(/image\/png/)
+expect(mocks.interactionsCreateSpy).not.toHaveBeenCalled()
```

```diff
 const err = chunks.find((c) => c.type === 'RUN_ERROR') as any
 expect(err).toBeDefined()
 expect(err.message).toMatch(/google_search/)
 expect(err.message).toMatch(/Interactions API/)
+expect(mocks.interactionsCreateSpy).not.toHaveBeenCalled()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts` around lines 395 - 446, Add assertions to ensure the Interactions SDK was not invoked on the short-circuit error paths by checking mocks.interactionsCreateSpy was not called; in the two tests that use createAdapter(), chat(), collectChunks(), and pre-seed mocks.interactionsCreateSpy.mockResolvedValue(mkStream([])), add expect(mocks.interactionsCreateSpy).not.toHaveBeenCalled() after collecting the chunks (after collectChunks(...) returns) and before asserting the RUN_ERROR contents to lock in the intended short-circuit behavior.

examples/ts-react-chat/src/routes/api.tanchat.ts (1)

210-226: Consider wiring a subset of tools for the `gemini-interactions` demo.

Per `packages/typescript/ai-gemini/src/adapters/text-interactions.ts` (lines 461-498), function tools with simple JSON Schema are supported — only `anyOf` schemas (and built-in tools) are rejected. Tools like `getPersonalGuitarPreferenceToolDef` or `addToCartToolServer` likely work and would make the example more representative of real Interactions usage. Not a blocker.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@examples/ts-react-chat/src/routes/api.tanchat.ts` around lines 210 - 226, The current branch sets tools = [] for provider === 'gemini-interactions', which disables all tools; instead include only the tools with simple JSON Schema (no anyOf) so the demo better reflects Interactions. Modify the tools assignment in the tools variable branch that checks provider === 'gemini-interactions' to return a curated subset (e.g. getGuitars, getPersonalGuitarPreferenceToolDef, addToCartToolServer, addToWishListToolDef, and any other server-side tools known to have simple schemas) or implement a filter that excludes tools whose schema contains anyOf; update the provider === 'gemini-interactions' branch accordingly so those supported tool symbols are wired while still omitting tools with anyOf like unioned schemas.
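The filtering approach the prompt suggests could be sketched like this. The helper name and the idea of scanning schemas recursively are assumptions for illustration — the real tool definitions and their schema shapes live in the example app:

```typescript
// Recursively detect `anyOf` anywhere in a JSON Schema, so tools whose
// parameter schemas the Interactions API would reject can be filtered out
// instead of disabling all tools for the gemini-interactions branch.
function schemaUsesAnyOf(schema: unknown): boolean {
  if (Array.isArray(schema)) return schema.some(schemaUsesAnyOf)
  if (schema !== null && typeof schema === 'object') {
    const record = schema as Record<string, unknown>
    if ('anyOf' in record) return true
    return Object.values(record).some(schemaUsesAnyOf)
  }
  return false
}
```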
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@examples/ts-react-chat/src/routes/api.tanchat.ts`:
- Around line 210-226: The current branch sets tools = [] for provider ===
'gemini-interactions', which disables all tools; instead include only the tools
with simple JSON Schema (no anyOf) so the demo better reflects Interactions.
Modify the tools assignment in the tools variable branch that checks provider
=== 'gemini-interactions' to return a curated subset (e.g. getGuitars,
getPersonalGuitarPreferenceToolDef, addToCartToolServer, addToWishListToolDef,
and any other server-side tools known to have simple schemas) or implement a
filter that excludes tools whose schema contains anyOf; update the provider ===
'gemini-interactions' branch accordingly so those supported tool symbols are
wired while still omitting tools with anyOf like unioned schemas.
In `@packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts`:
- Around line 395-446: Add assertions to ensure the Interactions SDK was not
invoked on the short-circuit error paths by checking mocks.interactionsCreateSpy
was not called; in the two tests that use createAdapter(), chat(),
collectChunks(), and pre-seed
mocks.interactionsCreateSpy.mockResolvedValue(mkStream([])), add
expect(mocks.interactionsCreateSpy).not.toHaveBeenCalled() after collecting the
chunks (after collectChunks(...) returns) and before asserting the RUN_ERROR
contents to lock in the intended short-circuit behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bf190840-69aa-4cc9-9057-57c0c71fcd0a
📒 Files selected for processing (5)
- examples/ts-react-chat/src/lib/model-selection.ts
- examples/ts-react-chat/src/routes/api.tanchat.ts
- examples/ts-react-chat/src/routes/index.tsx
- packages/typescript/ai-gemini/src/adapters/text-interactions.ts
- packages/typescript/ai-gemini/tests/text-interactions-adapter.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/typescript/ai-gemini/src/adapters/text-interactions.ts
Refactor ai-gemini tests to call chat() and generateImage() instead of invoking adapter methods. This drops the test-only dependency on the internal InternalLogger and exercises the real activity code paths end-to-end, matching the pattern already used by the audio/tts/chat test suites. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Wires google_search, code_execution, url_context, file_search, and computer_use through the stateful Interactions adapter by translating tool factories into the snake_case Tool_2 union the Interactions SDK expects; surfaces per-tool *_call / *_result deltas as AG-UI CUSTOM events named gemini.googleSearchCall / gemini.googleSearchResult (and matching codeExecution/urlContext/fileSearch variants) so consumers can display provider-tool activity without conflating it with function-tool TOOL_CALL_* events. Rejects google_search_retrieval, google_maps, and mcp_server with targeted errors pointing at the supported alternative. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary
- `geminiTextInteractions()` adapter that routes through `client.interactions.create` for server-side conversation state.
- Exposes `previous_interaction_id` / `store` / `background` / `system_instruction` / `response_*` / `generation_config` on `modelOptions`, with the type derived via `Pick<>` from the `@google/genai` SDK's own `CreateModelInteractionParamsStreaming` so fields stay in sync automatically.
- Adds `providerMetadata?: Record<string, unknown>` to `RunFinishedEvent` and surfaces the server-assigned interaction id as `providerMetadata.interactionId` so callers can chain turns.
- Built-in tools (`google_search`, `code_execution`, `url_context`, `file_search`, `computer_use`) throw a clear error; use `geminiText()` for those. Unsupported media mime types are rejected at runtime against the SDK's allowed sets (pinned via `satisfies`).

Test plan

- `pnpm --filter @tanstack/ai-gemini test:lib` — 74 passing, including stream translation, stateful-chaining short-circuit, tool call / result round-trip, built-in-tool rejection, mime-type rejection, upstream error propagation, and structured output.
- `pnpm --filter @tanstack/ai-gemini test:types`
- `pnpm --filter @tanstack/ai-gemini test:eslint`
- `testing/e2e/tests/stateful-interactions.spec.ts` with `test.skip` — aimock does not yet record/replay `interactions:create`; tracked as follow-up.

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
Documentation
Tests
Examples