
docs(blog): TanStack AI Just Learned to Compose Music#854

Merged
LadyBluenotes merged 2 commits into TanStack:main from
AlemTuzlak:blog/tanstack-ai-audio-generation
Apr 24, 2026

Conversation

@AlemTuzlak (Contributor) commented Apr 24, 2026

Summary

Announces TanStack AI's new audio support: a single generateAudio activity with streaming, plus fal and Gemini Lyria adapters for music, speech, and transcription. Framework hooks land for React, Solid, Vue, and Svelte (useGenerateAudio / createGenerateAudio).

  • Post: src/blog/tanstack-ai-audio-generation.md
  • Header image: public/blog-assets/tanstack-ai-audio-generation/header.png
  • Publish date: 2026-04-24

The post leads with the real developer pain (every audio provider ships a different SDK shape — Lyria returns raw PCM; ElevenLabs wants music_length_ms; Stable Audio wants seconds_total) and then shows the fix as a one-line adapter swap behind a shared generateAudio() call. Code samples are grounded against the actual exports (geminiAudio, falAudio, falSpeech, falTranscription, useGenerateAudio) and verified against the library source.
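The adapter contract the post describes can be illustrated with a small self-contained sketch. Everything below is a hypothetical stand-in for the real library (the actual exports are generateAudio, geminiAudio, and falAudio); only `music_length_ms` and `seconds_total` are the provider parameter names quoted above:

```typescript
// Illustrative sketch only: one shared request shape, with each adapter
// translating it into a provider-specific payload. These names are
// hypothetical stand-ins, not the verified TanStack AI API.

interface AudioRequest {
  prompt: string
  durationSeconds: number
}

// An adapter maps the shared request onto one provider's wire format.
type Adapter = (req: AudioRequest) => Record<string, unknown>

// ElevenLabs-style providers expect milliseconds (music_length_ms)...
const elevenLabsStyle: Adapter = (req) => ({
  prompt: req.prompt,
  music_length_ms: req.durationSeconds * 1000,
})

// ...while Stable Audio-style providers expect whole seconds (seconds_total).
const stableAudioStyle: Adapter = (req) => ({
  prompt: req.prompt,
  seconds_total: req.durationSeconds,
})

// The caller sees one shape; switching providers is a one-line adapter swap.
function buildProviderPayload(adapter: Adapter, req: AudioRequest) {
  return adapter(req)
}

const req: AudioRequest = { prompt: 'a mellow lo-fi beat', durationSeconds: 30 }
console.log(buildProviderPayload(elevenLabsStyle, req)) // music_length_ms: 30000
console.log(buildProviderPayload(stableAudioStyle, req)) // seconds_total: 30
```

This is the pattern the post's "one-line adapter swap" refers to, reduced to its essence; the real adapters of course also handle auth, response decoding, and streaming.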

Test plan

  • Preview the blog post locally (pnpm dev) and confirm the header image loads
  • Verify internal links resolve (/ai/docs/media/audio-generation)
  • Check frontmatter validates against content-collections schema
  • Confirm responsive layout for the header image on mobile

Summary by CodeRabbit

  • Documentation
    • Published blog post documenting audio generation feature in TanStack AI with unified typed interface and streaming support via Server-Sent Events.
    • Covers framework integrations for React, Solid, Vue, and Svelte.
    • Documents newly available Gemini and Fal adapters with provider-specific configuration and code examples.

Announces the new generateAudio activity, streaming support, and
fal + Gemini Lyria adapters for music, speech, and transcription.

Covers:
- One generateAudio() call, any adapter
- stream: true -> AsyncIterable<StreamChunk> via toServerSentEventsResponse()
- useGenerateAudio (React/Solid/Vue) + createGenerateAudio (Svelte)
- geminiAudio(): Lyria 3 Pro/Clip + Gemini 3.1 Flash TTS multi-speaker
- falAudio() / falSpeech() / falTranscription() tree-shakeable adapters
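The stream: true shape listed above boils down to consuming an AsyncIterable with for await before handing it to something like toServerSentEventsResponse(). A minimal sketch, with an invented StreamChunk payload field standing in for the real chunk type:

```typescript
// Minimal sketch of the streaming shape: a result exposed as an
// AsyncIterable of chunks. StreamChunk's real fields are assumptions;
// only the iteration pattern matters here.

interface StreamChunk {
  delta: string // hypothetical payload field
}

// Stand-in for a streaming generateAudio({ stream: true }) result.
async function* fakeAudioStream(): AsyncIterable<StreamChunk> {
  for (const delta of ['chunk-1', 'chunk-2', 'chunk-3']) {
    yield { delta }
  }
}

// Consuming the stream the way a route handler would before forwarding
// it as Server-Sent Events.
async function collect(stream: AsyncIterable<StreamChunk>): Promise<string[]> {
  const parts: string[] = []
  for await (const chunk of stream) {
    parts.push(chunk.delta)
  }
  return parts
}

collect(fakeAudioStream()).then((parts) => console.log(parts.join(',')))
// prints "chunk-1,chunk-2,chunk-3"
```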
netlify Bot commented Apr 24, 2026

👷 Deploy request for tanstack pending review.

Visit the deploys page to approve it

🔨 Latest commit: 3ee5dc1

coderabbitai Bot commented Apr 24, 2026

📝 Walkthrough

A new blog post documenting TanStack AI's audio generation feature release, including the generateAudio activity, typed interface, adapter system for provider translation, streaming support via AsyncIterable, and available framework integrations with newly shipped Gemini and Fal adapters.

Changes

Cohort / File(s): Blog Post Addition (src/blog/tanstack-ai-audio-generation.md)
Summary: New blog post announcing TanStack AI audio generation capabilities, with technical details on typed interfaces, streaming, adapters, and framework integrations.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~3 minutes

Poem

🎤 A whisker twitch, a hop of cheer,
Audio waves now crystal clear!
TanStack sings with Gemini's grace,
Streaming sound at rabbiting pace! 🐰✨

🚥 Pre-merge checks | ✅ 4 passed | ❌ 1 inconclusive

❌ Inconclusive checks (1)

Title check: ❓ Inconclusive. The title 'TanStack AI Just Learned to Compose Music' is metaphorical and not precisely accurate. The PR adds a blog post about audio generation capabilities (music, speech, and transcription via adapters), but the poetic phrase 'compose music' oversimplifies and could mislead about the scope: the feature covers audio generation broadly, not just music composition. Resolution: consider a more descriptive title such as 'docs(blog): Announce TanStack AI audio generation support' or 'docs(blog): Add blog post on TanStack AI audio capabilities'.

✅ Passed checks (4)

  • Description Check: Skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: No functions found in the changed files; docstring coverage check skipped.
  • Linked Issues check: Skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check: Skipped because no linked issues were found for this pull request.


coderabbitai Bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
src/blog/tanstack-ai-audio-generation.md (1)

37-37: Tighten wording for readability

Line 37: “the exact same call” reads a bit wordy; consider “the same call”.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/blog/tanstack-ai-audio-generation.md` at line 37, Update the sentence
that reads "the exact same call generates music..." to use tighter wording by
replacing "the exact same call" with "the same call" in the paragraph mentioning
geminiAudio and falAudio; ensure the rest of the sentence still lists MiniMax,
DiffRhythm, Stable Audio 2.5 and the adapter examples (music_length_ms,
seconds_total, duration) unchanged so the meaning remains clear.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c1ebcd2f-e0a6-4bcf-8426-14742390d72e

📥 Commits

Reviewing files that changed from the base of the PR and between b3f2fb8 and 3ee5dc1.

⛔ Files ignored due to path filters (1)
  • public/blog-assets/tanstack-ai-audio-generation/header.png is excluded by !**/*.png
📒 Files selected for processing (1)
  • src/blog/tanstack-ai-audio-generation.md

Snippet under review (from the post's React example):

```tsx
        Generate
      </button>
      {isLoading && <button onClick={stop}>Stop</button>}
      {result?.audio.url && <audio src={result.audio.url} controls />}
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In TypeScript, for a union type like { url: string } | { b64Json: string }, is result.audio.url valid without narrowing? Please provide canonical docs/examples.

💡 Result:

No, result.audio.url is not valid without narrowing when result.audio is a union type like { url: string } | { b64Json: string }. TypeScript only allows access to properties common to ALL members of a union, and here url exists only on the first member while b64Json exists only on the second; there are no common properties. Attempting audio.url produces the error "Property 'url' does not exist on type '{ url: string; } | { b64Json: string; }'". This prevents runtime errors when audio is the { b64Json: string } variant.

To access url safely, narrow the type first with a type guard, such as the in operator:

```ts
if ('url' in result.audio) {
  // TypeScript narrows to { url: string }
  console.log(result.audio.url) // valid
} else {
  // TypeScript narrows to { b64Json: string }
  console.log(result.audio.b64Json)
}
```

For better ergonomics with multiple variants, use a discriminated union by adding a literal discriminant property (e.g., kind: 'url' | 'b64'):

```ts
type Audio = { kind: 'url'; url: string } | { kind: 'b64'; b64Json: string }
```

Then narrow with switch (result.audio.kind) { ... }. This behavior is unchanged as of TypeScript 5.x (current in 2026).


🏁 Script executed:

```shell
# Read the file around the relevant lines
cat -n src/blog/tanstack-ai-audio-generation.md | sed -n '30,90p'
```

Repository: TanStack/tanstack.com

Length of output: 2685


Fix TypeScript type error in the React snippet

Line 86 accesses result.audio.url directly on the union type declared on line 34 ({ url: string } | { b64Json: string }). TypeScript does not allow this without narrowing—the code will fail type-checking with "Property 'url' does not exist on type '{ url: string; } | { b64Json: string; }'" because url is not a common property across all union members.

Narrow the type using the in operator before accessing the property:

Fix:

```diff
-      {result?.audio.url && <audio src={result.audio.url} controls />}
+      {result?.audio && 'url' in result.audio && (
+        <audio src={result.audio.url} controls />
+      )}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/blog/tanstack-ai-audio-generation.md` at line 86, The snippet accesses
result.audio.url on a union type ({ url: string } | { b64Json: string }) without
narrowing; update the conditional to narrow the union before accessing url
(e.g., check that result and result.audio exist AND that "url" is in
result.audio using the in operator or a type guard) so you only render <audio>
when result.audio has a url property; refer to result, audio, url and b64Json to
locate the code and apply the in-based or guard-based type narrowing.
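The narrowing the reviewer asks for also works outside React. A minimal self-contained sketch using the union type quoted in the review (the helper name is made up for the demo):

```typescript
// Demo of in-based narrowing on the review's union type.
// The union mirrors the post's result.audio; audioSrc is a hypothetical helper.

type AudioResult = { url: string } | { b64Json: string }

// 'url' in audio acts as a type guard: inside the true branch, TypeScript
// narrows AudioResult to { url: string }, making .url safe to read.
function audioSrc(audio: AudioResult): string | undefined {
  return 'url' in audio ? audio.url : undefined
}

console.log(audioSrc({ url: 'https://example.com/a.mp3' })) // the url
console.log(audioSrc({ b64Json: 'AAAA' })) // undefined
```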

@LadyBluenotes LadyBluenotes merged commit 3cd2b24 into TanStack:main Apr 24, 2026
4 checks passed
2 participants