
fix(doctor): correct ℹ️ icon and generalize hook example from PR #56 review (#59)

Open
TechNickAI wants to merge 1 commit into main from fix/pr-56-review-sweep

Conversation

@TechNickAI
Owner

Summary

Follow-up fixes from bot review feedback on #56 (ai-coding-config doctor subcommand).

Changes

Bug Fix: Icon inconsistency (⚠️ → ℹ️ for experimental stability)

Both cursor[bot] and claude[bot] caught that the example output used ⚠️ for
stability: experimental while the spec explicitly says ℹ️ (informational, not a
warning). This was the only issue that could cause incorrect LLM output at runtime.

Before: ⚠️ mcp-debug — triggers field present, stability: experimental
After: ℹ️ mcp-debug — triggers field present, stability: experimental

Improvement: Generalize hardcoded hook filename in auto-fix example

claude[bot] noted that hardcoding todo-persist.sh in the auto-fix example could
cause LLMs to pattern-match and substitute the literal filename instead of the actual
failing hook. Generalized to <hook-name>.sh placeholder.

Declined feedback

  • Design smell: duplicated logic with <marketplace-doctor> — valid observation, tracked in a separate issue (see below), but refactoring is out of scope for this sweep.
  • Plugin enablement warning underspecific — the correct subcommand is context-dependent; adding a potentially-wrong specific command could be worse than the current generic guidance.
  • No agent frontmatter check — valid gap, explicitly calling out as out of scope via GitHub issue.

Closes feedback from: cursor[bot] comment on PR #56, claude[bot] review on PR #56

…hook auto-fix

- Use ℹ️ (not ⚠️) for stability:experimental in example output to match spec
- Generalize hardcoded todo-persist.sh to <hook-name>.sh in auto-fix example
  to prevent LLMs from pattern-matching the literal filename

Fixes review feedback from PR #56 (cursor[bot] + claude[bot])

claude Bot commented May 6, 2026

Code Review

This is a clean, minimal follow-up that correctly addresses the two most impactful items from the PR #56 review. Both fixes are correct.

✅ Icon fix (⚠️ → ℹ️)

Confirmed correct against the spec at line 651:

- If `stability: experimental` declared: ℹ️ note as informational (expected for new or

The fix aligns the example output with the declared spec. This was the only change that could cause incorrect LLM behavior at runtime.

✅ Hook filename generalization

The behavioral spec lines (the auto-fix prompt and chmod command) are the right place to use <hook-name>.sh — that's where the LLM derives its behavior pattern.

Minor observation: incomplete generalization in example output block

The PR updated the behavioral spec but the example terminal output block still has hardcoded todo-persist.sh in two more places:

  • Line ~691: ⚠️ todo-persist.sh — exists but not executable
  • Lines ~714–715:
    ⚠️ todo-persist.sh not executable:
       chmod +x plugins/core/hooks/todo-persist.sh
    

These are in the illustrative output sample, not in the spec instructions, so the LLM behavioral risk is lower. But for consistency — and to prevent a reader from wondering why some instances use <hook-name>.sh and others use the literal name — it may be worth generalizing these too in a follow-up. Not blocking.

PR description quality

The declined-feedback section is excellent. Documenting why feedback was declined (with pointers to follow-up issues) is exactly the right practice for a sweep PR. Makes the change history much easier to audit.

Overall: LGTM with the minor observation above as a non-blocking suggestion.


cursor Bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.



Reviewed by Cursor Bugbot for commit 0bc0588.

✅ brainstorming — name, description, triggers, next-skill: brainstorm-synthesis (found)
✅ systematic-debugging — name, description, triggers, next-skill: verify-fix (found)
- ⚠️ mcp-debug — triggers field present, stability: experimental
+ ℹ️ mcp-debug — triggers field present, stability: experimental

Summary warning count inconsistent after icon change

Medium Severity

The example summary line still says 2 warnings ⚠️, but changing the mcp-debug line from ⚠️ to ℹ️ means there's now only 1 warning (todo-persist.sh) in the example output. The count wasn't updated to match, so the example is internally inconsistent. Since this is a spec/prompt that guides LLM behavior, the mismatched count could teach models to produce incorrect summary tallies at runtime.
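One way to keep the tally honest is to derive the summary counts from the per-line icons instead of hand-writing them. A minimal sketch, using the sample lines from this PR's example output:

```shell
# Sample doctor output after the icon fix (lines taken from this PR's examples)
cat > doctor-output.txt <<'EOF'
✅ brainstorming — name, description, triggers, next-skill: brainstorm-synthesis (found)
✅ systematic-debugging — name, description, triggers, next-skill: verify-fix (found)
ℹ️ mcp-debug — triggers field present, stability: experimental
⚠️ todo-persist.sh — exists but not executable
EOF

# Count lines per icon so the summary can never drift from the body
warnings=$(grep -c '⚠️' doctor-output.txt)
infos=$(grep -c 'ℹ️' doctor-output.txt)
echo "$warnings warning(s), $infos info note(s)"
```

With the mcp-debug line demoted to ℹ️, the derived tally is 1 warning and 1 info note, matching the reviewers' point that "2 warnings ⚠️" is now wrong.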




chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0bc0588b74


✅ brainstorming — name, description, triggers, next-skill: brainstorm-synthesis (found)
✅ systematic-debugging — name, description, triggers, next-skill: verify-fix (found)
- ⚠️ mcp-debug — triggers field present, stability: experimental
+ ℹ️ mcp-debug — triggers field present, stability: experimental

P2: Align summary counts with info-level mcp-debug example

Changing mcp-debug from ⚠️ to ℹ️ makes the sample output internally inconsistent because the summary later still reports 2 warnings. In this command file, the example output is behavioral guidance for the doctor response format, so this mismatch can lead the generated report to present incorrect warning totals (e.g., claiming two warnings when only todo-persist.sh is a warning).



Labels

  • Hex — Generated by Hex
  • review-sweep — Automated review sweep fixes
