feat(llm): introduce unified LlmProvider abstraction for AI chat backends#1

Draft
Copilot wants to merge 4 commits into develop from copilot/add-unified-ai-provider-abstraction

Conversation


Copilot AI commented Apr 30, 2026

Both AWSBedrockController and ChatGptController contained identical inline HTTP call logic, tightly coupling the AI chat path to specific backends with no seam for future provider extension.

Changes

  • LlmProvider interface (org.wise.portal.service.llm) — uniform chat(String requestBody) + getName() contract for all AI backends
  • OpenAiCompatibleLlmProvider — single concrete impl covering any OpenAI-compatible endpoint: AWS Bedrock's runtime, the real OpenAI API, and local gateways (Ollama, vLLM, LM Studio)
  • LlmProviderConfig — Spring @Configuration that wires two named beans from existing app properties; no property renames
@Bean("bedrockLlmProvider")
public LlmProvider bedrockLlmProvider(
    @Value("${aws.bedrock.api.key:}") String apiKey,
    @Value("${aws.bedrock.runtime.endpoint:}") String runtimeEndpoint) { … }

@Bean("openAiLlmProvider")
public LlmProvider openAiLlmProvider(
    @Value("${openai.api.key:}") String apiKey,
    @Value("${openai.chat.api.url:https://api.openai.com/v1/chat/completions}") String chatApiUrl) { … }
  • Controllers refactored: AWSBedrockController and ChatGptController each drop ~50 lines of duplicated HTTP code and now delegate to a @Qualifier-injected LlmProvider. produces = APPLICATION_JSON_VALUE was added to both to enforce the correct Content-Type and prevent XSS
  • 6 unit tests for OpenAiCompatibleLlmProvider covering name retrieval and misconfiguration error paths (missing/null key and URL)
  • application_sample.properties — expanded AI provider docs; notes the local-gateway override path for openai.chat.api.url
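For orientation, the contract described above might look roughly like this. Only chat(String) and getName() are stated in the PR text; the comments and the echo stub are illustrative assumptions, and the real interface lives in org.wise.portal.service.llm:

```java
// Sketch of the LlmProvider contract described above. The two method names
// come from the PR text; everything else here is an illustrative assumption.
interface LlmProvider {
  // Sends a raw JSON request body to the backend and returns the raw JSON response.
  String chat(String requestBody);

  // Short provider name, e.g. "bedrock" or "openai".
  String getName();
}

// Hypothetical stub showing how a concrete provider satisfies the contract.
class EchoLlmProvider implements LlmProvider {
  public String chat(String requestBody) {
    return "{\"echo\":" + requestBody + "}";
  }

  public String getName() {
    return "echo";
  }
}
```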

Adding a future provider (Gemini, Claude, local model)

  1. Implement LlmProvider (or reuse OpenAiCompatibleLlmProvider for any OpenAI-compat endpoint)
  2. Add a @Bean("geminiLlmProvider") in LlmProviderConfig wired from new properties
  3. Inject with @Qualifier("geminiLlmProvider") wherever needed
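Steps 2 and 3 can be illustrated without Spring on the classpath: a named registry stands in for the application context, registration stands in for @Bean("geminiLlmProvider"), and lookup by name stands in for @Qualifier injection. GeminiLlmProvider and the registry class are hypothetical, not part of this PR:

```java
import java.util.HashMap;
import java.util.Map;

// Plain-Java stand-in for the Spring wiring in steps 2-3. All names below
// are hypothetical; the real code uses @Bean and @Qualifier instead.
interface LlmProvider {
  String chat(String requestBody);
  String getName();
}

// A future provider from step 1 (real implementation would call the Gemini API).
class GeminiLlmProvider implements LlmProvider {
  public String chat(String requestBody) {
    return "{\"model\":\"gemini\"}";
  }

  public String getName() {
    return "gemini";
  }
}

class LlmProviderRegistry {
  private final Map<String, LlmProvider> providers = new HashMap<>();

  // Step 2: plays the role of @Bean("geminiLlmProvider").
  void register(String beanName, LlmProvider provider) {
    providers.put(beanName, provider);
  }

  // Step 3: plays the role of @Qualifier("geminiLlmProvider") injection.
  LlmProvider get(String beanName) {
    LlmProvider p = providers.get(beanName);
    if (p == null) {
      throw new IllegalArgumentException("No provider named " + beanName);
    }
    return p;
  }
}
```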

Existing endpoints (/api/aws-bedrock/chat, /api/chat-gpt) and all config property names are unchanged.
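The controller refactor described above amounts to constructor-injecting a provider and delegating. A Spring-free sketch of that shape (the real controllers additionally carry the mapping and produces annotations noted earlier; ChatController here is a hypothetical name):

```java
// Spring-free sketch of the delegation pattern: the controller no longer
// builds HTTP calls itself, it forwards the request body to the injected
// LlmProvider. Details beyond the two interface methods are assumptions.
interface LlmProvider {
  String chat(String requestBody);
  String getName();
}

class ChatController {
  private final LlmProvider provider; // supplied via @Qualifier in the real code

  ChatController(LlmProvider provider) {
    this.provider = provider;
  }

  // Handler body for an endpoint such as /api/chat-gpt: pure delegation.
  String chat(String requestBody) {
    return provider.chat(requestBody);
  }
}
```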

Original prompt

Implement the first safe iteration of a unified AI provider abstraction for WISE, targeting the forked repositories discussed in this conversation. The user will test locally after the PR is created.

Repository: VM8gkAs/WISE-API
Base branch: develop

Goal:

  • Introduce a minimal provider abstraction layer that reduces direct coupling to the current AI backend implementation.
  • Preserve current functionality and outward behavior as much as possible.
  • This is the first incremental step toward a future multi-provider architecture.

Scope requirements:

  1. Add a unified internal abstraction for chat-capable AI providers, e.g. an interface such as LlmProvider (name can vary if a better repo-consistent naming is preferred).
  2. Implement the current Bedrock-backed chat path behind that abstraction as the first concrete provider.
  3. Refactor the existing AI Assistant / chatbot backend flow to use the abstraction instead of calling the Bedrock-specific implementation directly.
  4. Keep existing API endpoints and existing request/response behavior stable unless a small compatibility change is absolutely required.
  5. Do NOT add a full Admin GUI in this PR.
  6. Do NOT add Gemini/Claude/OpenAI provider implementations in this PR unless necessary scaffolding is tiny and clearly beneficial.
  7. Add configuration structure or code organization that makes future capability-based routing possible, but keep this PR intentionally small.
  8. Update or add tests where practical for the abstraction/refactor.
  9. Update developer-facing documentation/comments only where needed to explain the new abstraction and how future providers could fit.

Non-goals:

  • No major frontend changes.
  • No database schema changes unless strictly necessary.
  • No user-facing model-selection UI.
  • No large behavior redesign.

Success criteria:

  • Existing chatbot/AI assistant backend flow still works.
  • The current provider implementation is wrapped behind the new abstraction.
  • The code is easier to extend with future providers such as OpenAI, Gemini, Claude, or OpenAI-compatible local gateways.
  • The PR remains reviewable and incremental.

Suggested implementation approach:

  • Explore current AI/chatbot-related controllers/services first.
  • Introduce a provider-neutral request/response abstraction only if it meaningfully simplifies the Bedrock refactor.
  • Prefer minimal disruption and repo-consistent naming over idealized architecture.

Please create a pull request in the user's fork with a concise title describing the provider abstraction refactor.

The following is the prior conversation context from the user's chat exploration (may be truncated):

User: @copilot Accepted Confirmation: Are you sure?

Assistant: Summary: the proposals below deliberately avoid the AI features that the WISE project has already marked as "in development / planned", to prevent conflicts.

Bottom line first

The following known directions should currently be avoided:

Explicitly marked "planned / future roadmap"

  1. Context awareness for the AI Assistant

    • The documentation explicitly states this is Planned for future release
    • Source:
      ### 5. Context Awareness (Planned for future release)
      
      In the future, the assistant will know which part of the curriculum you are currently working on.
    • Therefore, "automatically reading the current step's context" should not be the centerpiece of a new proposal
  2. OpenAI provider support

Completed and currently being extended

  1. Teacher-side AI translation

  2. Teacher-side AI summarization


AI feature enhancement proposals

Below are the directions I consider most valuable after steering clear of the existing roadmap. (The original was a table with columns: Proposal / Improvement / Value / Difficulty / Effort / Conflict with existing or planned work / Notes.)

1. AI answers cite curriculum evidence: answers indicate which prompt or activity description in the current unit they draw on, improving credibility. Not the same as context awareness; this is about response explainability.
2. Teacher dashboard of AI conversation summaries: class-level summaries of student AI chats, topic clustering, synthesis of common misconceptions. Rated medium-high / medium-high in the original table. Close to the existing summarization feature, but focused on chat conversation analysis rather than answer summaries.
3. AI safety guardrails and response rule templates: for different courses/grade levels, provide...

This pull request was created from Copilot chat.

Copilot AI and others added 2 commits April 30, 2026 16:16
@VM8gkAs VM8gkAs marked this pull request as ready for review April 30, 2026 16:22
Copilot AI changed the title [WIP] Introduce unified AI provider abstraction for WISE feat(llm): introduce unified LlmProvider abstraction for AI chat backends Apr 30, 2026
Copilot AI requested a review from VM8gkAs April 30, 2026 16:23
@VM8gkAs VM8gkAs marked this pull request as draft April 30, 2026 16:50
