
feat(llm): introduce LlmProvider abstraction, rename impl to HttpChatCompletionLlmProvider #2

Draft
Copilot wants to merge 4 commits into copilot/add-unified-ai-provider-abstraction from copilot/update-lhm-provider-class-name

Conversation


Copilot AI commented Apr 30, 2026

Draft PR #1 introduced a LlmProvider abstraction, but it used OpenAiCompatibleLlmProvider as the implementation name (misleading when the class backs a Bedrock bean), and its comments and docs overstated multi-provider readiness.

Changes

  • Renamed OpenAiCompatibleLlmProvider to HttpChatCompletionLlmProvider; the new name describes the HTTP Bearer-token adapter pattern, not a vendor
  • Controllers: AWSBedrockController and ChatGptController drop ~90 lines of duplicated inline HTTP code and delegate via a @Qualifier-injected LlmProvider; produces = APPLICATION_JSON_VALUE added to both
  • LlmProviderConfig: a Spring @Configuration that wires two named beans from existing properties, with no property renames:
    @Bean("bedrockLlmProvider")
    public LlmProvider bedrockLlmProvider(@Value("${aws.bedrock.api.key:}") String apiKey,
        @Value("${aws.bedrock.runtime.endpoint:}") String runtimeEndpoint) { … }
    
    @Bean("openAiLlmProvider")
    public LlmProvider openAiLlmProvider(@Value("${openai.api.key:}") String apiKey,
        @Value("${openai.chat.api.url:https://api.openai.com/v1/chat/completions}") String url) { … }
  • Comments/docs toned down — removed speculative Gemini/Claude/registry language from LlmProvider, LlmProviderConfig, and application_sample.properties; wording now reflects this is the first abstraction step only
  • Resource safety: OutputStreamWriter and BufferedReader in HttpChatCompletionLlmProvider now use try-with-resources
  • Tests: HttpChatCompletionLlmProviderTest (6 misconfiguration error-path tests) plus new LlmProviderConfigTest (5 tests verifying the bean factory methods produce correctly named/typed providers, including an empty-endpoint edge case)
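The controller refactor above can be sketched without Spring. This is an illustrative stand-in only: the real LlmProvider interface in WISE-API may have a different method signature, FakeChatProvider stands in for HttpChatCompletionLlmProvider, and ChatService for the refactored controllers. It shows why one neutral interface can back either named bean:

```java
// Illustrative sketch, not the repo's actual API. In the real code, Spring fills
// the ChatService field via @Qualifier("bedrockLlmProvider") or
// @Qualifier("openAiLlmProvider"); here the provider is passed by hand.
interface LlmProvider {
    String chat(String prompt);
}

class FakeChatProvider implements LlmProvider {
    private final String name;   // e.g. "bedrock" or "openai"
    FakeChatProvider(String name) { this.name = name; }
    @Override public String chat(String prompt) {
        return name + " answered: " + prompt;
    }
}

class ChatService {
    private final LlmProvider provider;  // the only dependency the caller sees
    ChatService(LlmProvider provider) { this.provider = provider; }
    String answer(String prompt) { return provider.chat(prompt); }
}

public class LlmProviderSketch {
    public static void main(String[] args) {
        // Same caller code works against either backing provider.
        ChatService bedrock = new ChatService(new FakeChatProvider("bedrock"));
        ChatService openAi  = new ChatService(new FakeChatProvider("openai"));
        System.out.println(bedrock.answer("hello"));  // bedrock answered: hello
        System.out.println(openAi.answer("hello"));   // openai answered: hello
    }
}
```

Because the controllers depend only on the interface, swapping vendors is a wiring change in LlmProviderConfig rather than a controller edit.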

Warning

Firewall rules blocked me from connecting to one or more addresses.

I tried to connect to the following addresses, but was blocked by firewall rules:

  • test.org
    • Triggering command: /usr/lib/jvm/temurin-17-jdk-amd64/bin/java /usr/lib/jvm/temurin-17-jdk-amd64/bin/java -javaagent:/home/REDACTED/.m2/repository/org/jacoco/org.jacoco.agent/0.8.12/org.jacoco.agent-0.8.12-runtime.jar=destfile=/home/REDACTED/work/WISE-API/WISE-API/target/jacoco.exec -jar /home/REDACTED/work/WISE-API/WISE-API/target/surefire/surefirebooter-20260430170412250_3.jar /home/REDACTED/work/WISE-API/WISE-API/target/surefire 2026-04-30T17-04-12_174-jvmRun1 surefire-20260430170412250_1tmp surefire_0-20260430170412250_2tmp (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

Original prompt

Follow up on the existing draft pull request #1 and revise the implementation so it better matches the intended scope of PR #1 (Provider Abstraction Core) for WISE-API.

Repository: VM8gkAs/WISE-API
Base branch: develop

Context:
PR #1 currently introduces a unified LlmProvider abstraction for AI chat backends, but the current draft needs correction before it should be merged.

Please update the existing work (or create a replacement PR if needed) with these required fixes:

Required fixes

  1. Rename the provider implementation class to a more neutral name

    • The current class name OpenAiCompatibleLlmProvider is too specific and makes the Bedrock-backed bean conceptually confusing.
    • Replace it with a more neutral/repo-appropriate name such as something in the spirit of HttpChatCompletionLlmProvider (choose the best repo-consistent name).
    • Keep behavior minimal and preserve the current HTTP-based adapter approach.
  2. Keep PR #1 (feat(llm): introduce unified LlmProvider abstraction for AI chat backends) intentionally small and foundational

    • This PR should remain the provider abstraction core only.
    • Do not expand into a full provider registry, Admin GUI, Gemini/Claude implementations, or capability-specific routing UI.
    • Preserve existing endpoints and outward behavior.
  3. Reduce overstatement in comments/docs

    • Review LlmProvider, configuration classes, and sample properties comments.
    • Remove or tone down language that implies multi-provider support is already fully implemented.
    • Keep wording accurate: this PR is only the first abstraction step, with future provider support possible later.
  4. Clean up the pull request presentation

    • Make the changes reviewer-friendly and incremental.
    • If any developer-facing documentation/comments are updated, keep them concise and specific to what is actually implemented.
    • Avoid embedding large speculative future architecture claims in code comments.
  5. Improve tests meaningfully

Non-goals

  • No Admin GUI
  • No frontend changes
  • No database changes
  • No provider CRUD APIs
  • No Gemini/Claude/Ollama feature completion in this PR

Success criteria

Please update the branch/PR so it is ready for local testing and eventual merge once reviewed.

The following is the prior conversation context from the user's chat exploration (may be truncated):

User: 1. I've forked the repos. How do I set things up next, and how do I test? After Copilot modifies my own repositories, how do I test locally?
2. I plan to set it up and then test the features using a model with a free tier.

forked:
https://github.com/VM8gkAs/WISE-Docker-Dev
https://github.com/VM8gkAs/WISE-Client
https://github.com/VM8gkAs/WISE-API

Assistant: The approach: get the Docker dev environment running first, then connect your WISE-Client / WISE-API forks to it, and only then test features with a cheap or free-tier provider.

You've already forked the repos; the next step is a runnable environment.
I'll answer your two questions first, then lay out the most reliable implementation path.


1. After forking, how do I set things up and test, and how do I test Copilot's changes locally?

Conclusion

The most reliable approach is:

  1. Use VM8gkAs/WISE-Docker-Dev to set up a local dev environment
  2. Point Docker Dev at your:
    • VM8gkAs/WISE-Client
    • VM8gkAs/WISE-API
  3. First confirm the original system runs correctly
  4. Then let Copilot modify your forked repos
  5. Pull that branch / PR locally and test it
  6. Once verified, decide whether to send it back upstream

In other words:
the local environment is your test bed; Copilot modifies your forked repos; you verify the results locally.


How to understand the overall flow

Your forked repo
   ↓
Copilot makes changes / opens a PR
   ↓
You pull the modified branch into your local WISE-Docker-Dev
   ↓
Rebuild or restart the containers
   ↓
Test via the local UI / API

A. Recommended setup order

I suggest this order; don't try to wire up all three repos at once from the start.

Step 1: Get WISE-Docker-Dev running first

Goals:

  • No code changes
  • No AI integration
  • Just get WISE to start normally

First verify that:

  • Docker / Docker Compose work
  • The DB comes up
  • The API comes up
  • The Client comes up
  • You can log in / open pages

Step 2: Point Docker Dev at your forks

Goal:

  • Your local environment no longer pulls from upstream but uses your forked repos

A Docker dev repo like this usually:

  • clones or mounts WISE-Client
  • clones or mounts WISE-API

Change it to point at:

  • VM8gkAs/WISE-Client
  • VM8gkAs/WISE-API

Step 3: First verify the unmodified fork runs

This is very important.
Before Copilot changes anything, confirm that:

  • Your fork matches the current upstream version
  • Docker Dev compiles
  • The WISE frontend and backend come up
  • Everything works normally without any AI changes

That way, when something breaks later, you'll know the cause is the new change, not the environment.


Step 4: Then let Copilot modify the fork

Once the environment is stable, ask me to:

  • Modify VM8gkAs/WISE-API first
  • Or modify VM8gkAs/WISE-Client at the same time

Copilot modifies your forked repos, not upstream.


Step 5: Pull Copilot's branch/PR locally and test

After Copilot opens a PR, you can locally:

  • check out the PR branch
  • rebuild the container / restart the service
  • test in a local browser

This is the standard workflow.


B. How do I test locally after Copilot's changes?

This part is really the key.


Option 1: Test via the PR branch

If Copilot opens a PR on your forked repo, you can usually do the following:

In your local repo

git fetch origin
git checkout <copilot-branch>

Then restart the corresponding service.

For API changes

You typically need to:

  • rebuild the API container
  • or rerun the Maven / Spring Boot container

For Client changes

You typically need to:

  • rebuild the frontend
  • or restart the dev server/container
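The fetch-and-checkout step above can be exercised end to end with throwaway local repositories. Everything here is a placeholder, not a real WISE repo or branch: the paths under mktemp stand in for your GitHub fork and your local clone, and copilot/example-fix stands in for the branch Copilot pushes.

```shell
# Simulate "fetch the Copilot branch from your fork, then check it out".
tmp=$(mktemp -d)
git init -q "$tmp/fork"                          # stands in for your GitHub fork
git -C "$tmp/fork" -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "init"
git -C "$tmp/fork" branch copilot/example-fix    # the branch Copilot would push
git clone -q "$tmp/fork" "$tmp/local"            # stands in for your local clone
cd "$tmp/local"
git fetch -q origin                              # step 1: fetch from the fork
git checkout -q copilot/example-fix              # step 2: check out the PR branch
git branch --show-current                        # prints: copilot/example-fix
```

With the real repos, substitute your fork's remote and the branch name shown on the Copilot PR, then rebuild the affected container as described above.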

Option 2: Pull the branch Copilot pushed directly

If Copilot didn't open a PR and only pushed ...

This pull request was created from Copilot chat.

Copilot AI and others added 2 commits April 30, 2026 17:05
Copilot AI changed the title [WIP] Revise implementation of provider abstraction for better alignment feat(llm): introduce LlmProvider abstraction, rename impl to HttpChatCompletionLlmProvider Apr 30, 2026
Copilot AI requested a review from VM8gkAs April 30, 2026 17:09
@VM8gkAs VM8gkAs changed the base branch from develop to copilot/add-unified-ai-provider-abstraction April 30, 2026 17:37
@VM8gkAs
Owner

VM8gkAs commented Apr 30, 2026

@copilot resolve the merge conflicts in this pull request

…straction base branch

Co-authored-by: VM8gkAs <61822684+VM8gkAs@users.noreply.github.com>
Author

Copilot AI commented Apr 30, 2026

@copilot resolve the merge conflicts in this pull request

Resolved in commit 04e3826. I merged the copilot/add-unified-ai-provider-abstraction base branch into this branch, resolved all conflicts by keeping the neutral HttpChatCompletionLlmProvider naming and toned-down comments from this PR, and removed the OpenAiCompatibleLlmProvider files that were introduced by the base branch.

