
DOC-3498: tinymceai on-prem documentation.#4137

Draft
kemister85 wants to merge 4 commits into feature/8.6.0/DOC-3401 from feature/8.6.0/DOC-3401_DOC-3498

Conversation

@kemister85 kemister85 commented May 13, 2026

Ticket

DOC-3498

Summary

Complete on-premises deployment documentation for the TinyMCE AI service. This introduces 10 new pages covering every aspect of self-hosted AI deployment, from initial setup through production hardening and advanced scenarios.

New pages (10)

| Page | Scope |
| --- | --- |
| Overview | Architecture, capabilities, prerequisites, setup paths, and support |
| Getting started | Five-minute Docker Compose quick start with end-to-end smoke test |
| Database, Redis, and storage | MySQL/PostgreSQL setup, Redis configuration, container runtimes, and managed cloud options |
| LLM providers | OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, Google Vertex AI, and self-hosted endpoints (Ollama, vLLM, LM Studio) |
| JWT authentication | HS256 signing model, claims, permissions reference, and token endpoint examples in 8 languages |
| Framework integration | Editor-side configuration, token provider, CORS, CSP, and SSR patterns |
| Production deployment | Podman, Kubernetes, AWS ECS, scaling, security hardening, observability, rate limiting, backup/recovery, and sizing |
| Advanced scenarios | MCP integration, web scraping/search, multi-tenant patterns, custom models with guardrails, and AI-powered document pipelines |
| Troubleshooting | Quick triage, container startup, JWT, LLM provider, editor, and performance diagnostics |
| Reference | Environment variables, API endpoints, SSE events, error codes, and known limits |

Assets

  • 20 Mermaid .mmd source files with pre-rendered .svg diagrams
  • Render script at -scripts/render-mermaid.sh
  • All diagrams set to width=100% for responsive scaling
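The render script itself isn't visible in this view; a minimal sketch of what such a script might look like, assuming the standard `mmdc` binary from `@mermaid-js/mermaid-cli` and an illustrative images directory:

```shell
#!/usr/bin/env sh
# Render every .mmd source to an .svg alongside it.
# Assumes @mermaid-js/mermaid-cli is installed (provides `mmdc`);
# the directory path below is illustrative, not the repo's actual layout.
for f in modules/ROOT/images/tinymceai-on-premises/*.mmd; do
  out="${f%.mmd}.svg"                  # diagram.mmd -> diagram.svg
  if command -v mmdc >/dev/null 2>&1; then
    mmdc -i "$f" -o "$out" || echo "render failed: $f"
  else
    echo "would render: $f -> $out"    # dry run when mmdc is absent
  fi
done
```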

Navigation

nav.adoc updated with a new "On-premises deployment" section containing all 10 pages in logical reading order.

Pre-checks

  • Branch prefixed with feature/<version>/
  • modules/ROOT/nav.adoc has been updated
  • Included a release note entry for any new product features
  • If this is a minor release, updated productminorversion in antora.yml and added new supported versions entry in modules/ROOT/partials/misc/supported-versions.adoc

Review

  • Documentation Team Lead has reviewed

Add missing customer-facing content identified by comparing the
original internal documentation against the current on-premises
AsciiDoc pages: capabilities matrix on the overview page, Podman
production runbook, performance characteristics table, expanded
known limits reference, MySQL 8.4 caveat, Ollama systemd and
Modelfile examples, and getting-started teardown and config update
guidance.
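For reviewers unfamiliar with the format, an Ollama Modelfile like the one the commit mentions looks roughly like this; a hedged sketch, with the model tag, parameter values, and system prompt all illustrative rather than taken from the docs:

```text
# Modelfile — build with: ollama create my-assistant -f Modelfile
FROM llama3.1:8b
PARAMETER temperature 0.2
PARAMETER num_ctx 8192
SYSTEM "You are a concise writing assistant embedded in a rich text editor."
```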
Expand 18 acronyms (OCI, JWT, LLM, SSE, TLS, CORS, MCP, NTP, HPA,
OTLP, IRSA, ADC, SSR, CSP, SIEM, PII, HA, mTLS) on first prose
occurrence per page for readers unfamiliar with the terms.
Comment on lines +3 to +4
AI -->|MCP tools/call| MCP[MCP Server<br>knowledge-hub]
MCP -->|read| KB[Confluence ·<br>Notion ·<br>GitBook ·<br>internal wiki]
Suggested change
AI -->|MCP tools/call| MCP[MCP Server<br>knowledge-hub]
MCP -->|read| KB[Confluence ·<br>Notion ·<br>GitBook ·<br>internal wiki]
AI <-->|MCP tools/call| MCP[MCP Server<br>knowledge-hub]
MCP <-->|read| KB[Confluence ·<br>Notion ·<br>GitBook ·<br>internal wiki]

?

Comment thread modules/ROOT/images/tinymceai-on-premises/complete-guide-fig-1.mmd Outdated
@@ -0,0 +1,20 @@
flowchart TD
I like the idea, but this one has so much negative space because it's largely a single flow. Honestly wondering if it's better as just a list - "work through this list to check for common causes of errors" or something.

@@ -0,0 +1,20 @@
flowchart TD

Is this the same as the one I commented on just before??

== Architecture

[.text-center]
image::tinymceai-on-premises/complete-guide-fig-1.svg[alt="Service architecture showing browser with TinyMCE token endpoint AI service database Redis and LLM providers",width=100%]
I like this diagram, but I worry that putting it on the Overview page, which might be people's first introduction to the product, could be overwhelming and/or give a false impression of the required complexity. Could we switch this for a simpler model that just gives the highlights, and use this somewhere we're talking about enterprise deployment?

+
[source,console]
----
docker inspect ai-service | jq '.[0].Config.Image'
nitpick, but putting this after the other docker one is just a little neater? less jumping around if someone is running through the steps

|Five-minute Docker Compose quick start. Stand up the AI service, database, Redis, token server, and a browser editor.

|xref:tinymceai-on-premises-database.adoc[Database, Redis, and storage]
|MySQL and PostgreSQL setup, Redis configuration, container runtimes (Docker, Podman, Kubernetes, ECS), and reverse proxy with TLS.
we use the term "data layer" elsewhere - at least in the diagrams - so might as well use it here?

Suggested change
|MySQL and PostgreSQL setup, Redis configuration, container runtimes (Docker, Podman, Kubernetes, ECS), and reverse proxy with TLS.
|Data layer setup: MySQL and PostgreSQL setup, Redis configuration, container runtimes (Docker, Podman, Kubernetes, ECS), and reverse proxy with TLS.

although the proxy is application layer... doesn't fit with the "database, redis, and storage" title either. thoughts?

|MySQL and PostgreSQL setup, Redis configuration, container runtimes (Docker, Podman, Kubernetes, ECS), and reverse proxy with TLS.

|xref:tinymceai-on-premises-providers.adoc[LLM providers]
|OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, Google Vertex AI, and self-hosted endpoints (Ollama, vLLM, LM Studio). Custom model catalog and API key rotation.
Suggested change
|OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, Google Vertex AI, and self-hosted endpoints (Ollama, vLLM, LM Studio). Custom model catalog and API key rotation.
|Connect to any OpenAI-compatible model, such as OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, Google Vertex AI, and self-hosted endpoints (Ollama, vLLM, LM Studio). Custom model catalog and API key rotation.

trying to avoid implying a limitation
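To illustrate why "OpenAI-compatible" is the right framing: the self-hosted options all speak the same chat-completions shape. A hedged sketch against Ollama's OpenAI-compatible endpoint (default port 11434; the model name is illustrative):

```shell
# Any OpenAI-compatible endpoint (OpenAI itself, vLLM, LM Studio,
# Ollama) accepts this same chat/completions request shape.
body='{"model":"llama3.1","messages":[{"role":"user","content":"Say hello."}]}'

# Only attempt the call if a local Ollama is actually reachable.
if curl -s -o /dev/null --max-time 2 http://localhost:11434/v1/models; then
  curl -s http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d "$body"
else
  echo "no local endpoint; payload would be: $body"
fi
```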

|xref:tinymceai-on-premises-jwt.adoc[JWT authentication]
|HS256 signing model, required and optional claims, permissions reference, and token endpoint examples in 8 languages.

|xref:tinymceai-on-premises-frameworks.adoc[Framework integration]
"framework integration"? that feels weird. do we usually use that kind of term? I'd keep it straightforward with "TinyMCE integration" unless we need it to be a wider term?


* The AI service is already running. See xref:tinymceai-on-premises-getting-started.adoc[Getting started] for setup instructions.
* A token endpoint exists that signs JSON Web Tokens (JWTs) for the AI service. See xref:tinymceai-on-premises-jwt.adoc[JWT authentication] for back-end implementations.
* The TinyMCE API key has the AI feature enabled. Retrieve or upgrade a key at https://www.tiny.cloud/my-account/integrate/.
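On the token-endpoint prerequisite: the HS256 model described here can be exercised straight from the shell. A minimal sketch for verifying a shared-secret setup; the claim names and secret value are illustrative, not the service's actual required claims:

```shell
# Sign a short-lived HS256 JWT with openssl. base64url is base64 with
# '+/' mapped to '-_' and the '=' padding stripped.
SECRET="replace-with-your-shared-secret"   # illustrative value
b64url() { base64 | tr '+/' '-_' | tr -d '=\n'; }

header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
claims=$(printf '{"sub":"demo-user","exp":%s}' "$(( $(date +%s) + 300 ))" | b64url)
sig=$(printf '%s.%s' "$header" "$claims" \
  | openssl dgst -sha256 -hmac "$SECRET" -binary | b64url)

jwt="$header.$claims.$sig"
echo "$jwt"
```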
API key = cloud, need to mention API key or license key I think? I assume license key is more likely here because on prem

Reduce edge clutter by connecting a single representative replica
to downstream services and grouping the data layer into a subgraph.
Fix SVG width to use a fixed pixel value consistent with other
diagrams in the set.
@tiny-ben-tran tiny-ben-tran self-requested a review May 14, 2026 23:18
