Releases: TanStack/ai

@tanstack/solid-ai-devtools@0.2.29

24 Apr 13:15
af19dcc

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.25

@tanstack/react-ai-devtools@0.2.29

24 Apr 13:16
af19dcc

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.25

@tanstack/preact-ai-devtools@0.1.29

24 Apr 13:15
af19dcc

Patch Changes

  • Updated dependencies []:
    • @tanstack/ai-devtools-core@0.3.25

@tanstack/ai@0.14.0

24 Apr 13:15
af19dcc

Minor Changes

  • feat: add generateAudio activity for music and sound-effect generation (#463)

    Adds a new audio activity kind alongside the existing tts and transcription activities:

    • generateAudio() / createAudioOptions() functions
    • AudioAdapter interface and BaseAudioAdapter base class
    • AudioGenerationOptions / AudioGenerationResult / GeneratedAudio types
    • audio:request:started, audio:request:completed, and audio:usage devtools events
  • feat: add useGenerateAudio hook and streaming support for generateAudio() (#463)

    Closes the parity gap between audio generation and the other media
    activities (image, speech, video, transcription, summarize):

    • generateAudio() now accepts stream: true, returning an
      AsyncIterable<StreamChunk> that can be piped through
      toServerSentEventsResponse().
    • AudioGenerateInput type added to @tanstack/ai-client.
    • useGenerateAudio hook added to @tanstack/ai-react,
      @tanstack/ai-solid, and @tanstack/ai-vue; matching
      createGenerateAudio added to @tanstack/ai-svelte. All follow the same
      { generate, result, isLoading, error, status, stop, reset } shape as
      the existing media hooks and support both connection (SSE) and
      fetcher transports.
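The streaming shape described above can be sketched framework-free. This is a hedged illustration, not library code: the StreamChunk field names and event strings are assumptions, and a mock async generator stands in for generateAudio({ stream: true }).

```typescript
// Illustrative chunk shape; the real StreamChunk type may differ.
type StreamChunk = { type: string; data: string };

// Mock stand-in for the AsyncIterable returned by generateAudio({ stream: true }).
async function* mockAudioStream(): AsyncIterable<StreamChunk> {
  yield { type: "audio:chunk", data: "AAAA" };
  yield { type: "audio:chunk", data: "BBBB" };
  yield { type: "audio:done", data: "" };
}

// Consume the stream, accumulating base64 audio chunks until completion.
async function collectAudio(
  stream: AsyncIterable<StreamChunk>,
): Promise<string> {
  let b64 = "";
  for await (const chunk of stream) {
    if (chunk.type === "audio:chunk") b64 += chunk.data;
  }
  return b64;
}

collectAudio(mockAudioStream()).then((b64) => console.log(b64)); // "AAAABBBB"
```

On the server, the same iterable could instead be handed to toServerSentEventsResponse() to expose it over SSE, per the note above.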
  • Tighten GeneratedImage and GeneratedAudio to enforce exactly one of url or b64Json via a mutually-exclusive GeneratedMediaSource union. (#463)

    Both types previously declared url? and b64Json? as independently optional, which allowed meaningless {} values and objects that set both fields. They now require exactly one:

    type GeneratedMediaSource =
      | { url: string; b64Json?: never }
      | { b64Json: string; url?: never }

    Existing read patterns like img.url || `data:image/png;base64,${img.b64Json}` continue to work unchanged. The only runtime-visible change is that the @tanstack/ai-openrouter and @tanstack/ai-fal image adapters no longer populate url with a synthesized data:image/png;base64,... URI when the provider returns base64; they return { b64Json } only. Consumers that want a data URI should build it from b64Json at render time.
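A minimal sketch of the union above, plus a hypothetical toDataUri helper (not part of the library) showing the build-a-data-URI-at-render-time pattern the note recommends:

```typescript
// Exactly one of url or b64Json is set; {} and "both set" are rejected by the compiler.
type GeneratedMediaSource =
  | { url: string; b64Json?: never }
  | { b64Json: string; url?: never };

// Illustrative helper: prefer a provider URL, else synthesize a data URI.
function toDataUri(src: GeneratedMediaSource, mime = "image/png"): string {
  return src.url ?? `data:${mime};base64,${src.b64Json}`;
}

toDataUri({ b64Json: "iVBOR" });         // "data:image/png;base64,iVBOR"
toDataUri({ url: "https://x/img.png" }); // "https://x/img.png"
```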

Patch Changes

  • refactor(ai, ai-openai): narrow error handling before logging (#465)

    catch (error: any) sites in stream-to-response.ts, activities/stream-generation-result.ts, and activities/generateVideo/index.ts are now narrowed to unknown and funnel through a shared toRunErrorPayload(error, fallback) helper that extracts message / code without leaking the original error object (which can carry request state from an SDK).

    Replaced four console.error calls in the OpenAI text adapter's chatStream catch block that dumped the full error object to the console. SDK errors can carry the original request, including auth headers, so the library now logs only the narrowed { message, code } payload via the internal logger; any user-supplied logger receives the sanitized shape, not the raw SDK error.
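A hedged sketch of the narrowing described above; the real toRunErrorPayload may differ in detail, but the shape follows the note: accept unknown, emit only { message, code }, and never forward the raw error object.

```typescript
type RunErrorPayload = { message: string; code?: string };

// Extract a safe payload from an unknown caught value. The original error
// object (which may carry request state from an SDK) is never returned.
function toRunErrorPayload(error: unknown, fallback: string): RunErrorPayload {
  if (error instanceof Error) {
    const code = (error as Error & { code?: unknown }).code;
    return {
      message: error.message,
      ...(typeof code === "string" ? { code } : {}),
    };
  }
  return { message: typeof error === "string" ? error : fallback };
}

toRunErrorPayload(new Error("boom"), "unknown error"); // { message: "boom" }
toRunErrorPayload(42, "unknown error");                // { message: "unknown error" }
```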

  • Updated dependencies []:

    • @tanstack/ai-event-client@0.2.8

@tanstack/ai-vue@0.7.0

24 Apr 13:15
af19dcc

Minor Changes

  • feat: add useGenerateAudio hook and streaming support for generateAudio() (#463)

    Closes the parity gap between audio generation and the other media
    activities (image, speech, video, transcription, summarize):

    • generateAudio() now accepts stream: true, returning an
      AsyncIterable<StreamChunk> that can be piped through
      toServerSentEventsResponse().
    • AudioGenerateInput type added to @tanstack/ai-client.
    • useGenerateAudio hook added to @tanstack/ai-react,
      @tanstack/ai-solid, and @tanstack/ai-vue; matching
      createGenerateAudio added to @tanstack/ai-svelte. All follow the same
      { generate, result, isLoading, error, status, stop, reset } shape as
      the existing media hooks and support both connection (SSE) and
      fetcher transports.

Patch Changes

  • fix(ai-react, ai-preact, ai-vue, ai-solid): propagate useChat callback changes (#465)

    onResponse, onChunk, and onCustomEvent were captured by reference at client creation time. When a parent component re-rendered with fresh closures, the ChatClient kept calling the originals. Every framework now wraps these callbacks so the latest options.xxx is read at call time (via optionsRef.current in React/Preact, and direct option access in Vue/Solid, matching the pattern already used for onFinish / onError). Clearing a callback (setting it to undefined) now correctly no-ops instead of continuing to invoke the stale handler.
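The call-time lookup can be illustrated without any framework. The names here (makeLatest, optionsRef) are illustrative, not library API; the point is that the client holds one stable wrapper whose identity never changes, while the wrapper dereferences the current options on every call.

```typescript
type Options = { onChunk?: (chunk: string) => void };

// Stable wrapper: captured once by the client, reads optionsRef.current
// at call time so later re-assignments (or clearing) take effect.
function makeLatest(optionsRef: { current: Options }) {
  return (chunk: string) => optionsRef.current.onChunk?.(chunk);
}

const ref = { current: {} as Options };
const onChunk = makeLatest(ref); // identity never changes

const seen: string[] = [];
ref.current = { onChunk: (c) => seen.push(c) };
onChunk("a"); // invokes the fresh closure
ref.current = {}; // clearing the callback...
onChunk("b"); // ...now no-ops instead of calling a stale handler
```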

  • Updated dependencies [54523f5, 54523f5, af9eb7b, 008f015, 54523f5]:

    • @tanstack/ai@0.14.0
    • @tanstack/ai-client@0.8.0

@tanstack/ai-vue-ui@0.1.31

24 Apr 13:15
af19dcc

Patch Changes

@tanstack/ai-svelte@0.7.0

24 Apr 13:15
af19dcc

Minor Changes

  • feat: add useGenerateAudio hook and streaming support for generateAudio() (#463)

    Closes the parity gap between audio generation and the other media
    activities (image, speech, video, transcription, summarize):

    • generateAudio() now accepts stream: true, returning an
      AsyncIterable<StreamChunk> that can be piped through
      toServerSentEventsResponse().
    • AudioGenerateInput type added to @tanstack/ai-client.
    • useGenerateAudio hook added to @tanstack/ai-react,
      @tanstack/ai-solid, and @tanstack/ai-vue; matching
      createGenerateAudio added to @tanstack/ai-svelte. All follow the same
      { generate, result, isLoading, error, status, stop, reset } shape as
      the existing media hooks and support both connection (SSE) and
      fetcher transports.

Patch Changes

@tanstack/ai-solid@0.7.0

24 Apr 13:15
af19dcc

Minor Changes

  • feat: add useGenerateAudio hook and streaming support for generateAudio() (#463)

    Closes the parity gap between audio generation and the other media
    activities (image, speech, video, transcription, summarize):

    • generateAudio() now accepts stream: true, returning an
      AsyncIterable<StreamChunk> that can be piped through
      toServerSentEventsResponse().
    • AudioGenerateInput type added to @tanstack/ai-client.
    • useGenerateAudio hook added to @tanstack/ai-react,
      @tanstack/ai-solid, and @tanstack/ai-vue; matching
      createGenerateAudio added to @tanstack/ai-svelte. All follow the same
      { generate, result, isLoading, error, status, stop, reset } shape as
      the existing media hooks and support both connection (SSE) and
      fetcher transports.

Patch Changes

  • fix(ai-react, ai-preact, ai-vue, ai-solid): propagate useChat callback changes (#465)

    onResponse, onChunk, and onCustomEvent were captured by reference at client creation time. When a parent component re-rendered with fresh closures, the ChatClient kept calling the originals. Every framework now wraps these callbacks so the latest options.xxx is read at call time (via optionsRef.current in React/Preact, and direct option access in Vue/Solid, matching the pattern already used for onFinish / onError). Clearing a callback (setting it to undefined) now correctly no-ops instead of continuing to invoke the stale handler.

  • Updated dependencies [54523f5, 54523f5, af9eb7b, 008f015, 54523f5]:

    • @tanstack/ai@0.14.0
    • @tanstack/ai-client@0.8.0

@tanstack/ai-solid-ui@0.6.2

24 Apr 13:15
af19dcc

Patch Changes

@tanstack/ai-react@0.8.0

24 Apr 13:15
af19dcc

Minor Changes

  • feat: add useGenerateAudio hook and streaming support for generateAudio() (#463)

    Closes the parity gap between audio generation and the other media
    activities (image, speech, video, transcription, summarize):

    • generateAudio() now accepts stream: true, returning an
      AsyncIterable<StreamChunk> that can be piped through
      toServerSentEventsResponse().
    • AudioGenerateInput type added to @tanstack/ai-client.
    • useGenerateAudio hook added to @tanstack/ai-react,
      @tanstack/ai-solid, and @tanstack/ai-vue; matching
      createGenerateAudio added to @tanstack/ai-svelte. All follow the same
      { generate, result, isLoading, error, status, stop, reset } shape as
      the existing media hooks and support both connection (SSE) and
      fetcher transports.

Patch Changes

  • fix(ai-react, ai-preact, ai-vue, ai-solid): propagate useChat callback changes (#465)

    onResponse, onChunk, and onCustomEvent were captured by reference at client creation time. When a parent component re-rendered with fresh closures, the ChatClient kept calling the originals. Every framework now wraps these callbacks so the latest options.xxx is read at call time (via optionsRef.current in React/Preact, and direct option access in Vue/Solid, matching the pattern already used for onFinish / onError). Clearing a callback (setting it to undefined) now correctly no-ops instead of continuing to invoke the stale handler.

  • Updated dependencies [54523f5, 54523f5, af9eb7b, 008f015, 54523f5]:

    • @tanstack/ai@0.14.0
    • @tanstack/ai-client@0.8.0