fix(doctor): correct ℹ️ icon and generalize hook example from PR #56 review #59
TechNickAI wants to merge 1 commit into main
Conversation
…hook auto-fix

- Use ℹ️ (not ⚠️) for `stability: experimental` in example output to match spec
- Generalize hardcoded `todo-persist.sh` to `<hook-name>.sh` in auto-fix example to prevent LLMs from pattern-matching the literal filename

Fixes review feedback from PR #56 (cursor[bot] + claude[bot])
Code Review

This is a clean, minimal follow-up that correctly addresses the two most impactful items from the PR #56 review. Both fixes are correct. ✅ Icon fix (…
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit 0bc0588.
  ✅ brainstorming — name, description, triggers, next-skill: brainstorm-synthesis (found)
  ✅ systematic-debugging — name, description, triggers, next-skill: verify-fix (found)
- ⚠️ mcp-debug — triggers field present, stability: experimental
+ ℹ️ mcp-debug — triggers field present, stability: experimental
Summary warning count inconsistent after icon change
Medium Severity
The example summary line still says 2 warnings ⚠️, but changing the mcp-debug line from ⚠️ to ℹ️ means there's now only 1 warning (todo-persist.sh) in the example output. The count wasn't updated to match, so the example is internally inconsistent. Since this is a spec/prompt that guides LLM behavior, the mismatched count could teach models to produce incorrect summary tallies at runtime.
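The bots' point can be made mechanical: if the summary tally is derived from the per-line icons instead of written by hand, it cannot drift out of sync with the body of the report. A minimal sketch, assuming illustrative report lines (the ⚠️ line's text and the summary wording are placeholders, not the spec's exact format):

```shell
# Derive the summary tally from the per-line status icons so the
# counts always match the report body. Lines are illustrative.
report='✅ brainstorming — name, description, triggers, next-skill: brainstorm-synthesis (found)
✅ systematic-debugging — name, description, triggers, next-skill: verify-fix (found)
ℹ️ mcp-debug — triggers field present, stability: experimental
⚠️ todo-persist.sh — hook issue (placeholder text)'

warnings=$(printf '%s\n' "$report" | grep -c '^⚠️')
infos=$(printf '%s\n' "$report" | grep -c '^ℹ️')
echo "${warnings} warning(s), ${infos} informational"   # prints "1 warning(s), 1 informational"
```

With mcp-debug demoted to ℹ️, a derived tally reports one warning and one informational item, which is exactly the consistency the review asks for.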
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 0bc0588b74
Align summary counts with info-level mcp-debug example
Changing mcp-debug from ⚠️ to ℹ️ makes the sample output internally inconsistent because the summary later still reports 2 warnings. In this command file, the example output is behavioral guidance for the doctor response format, so this mismatch can lead the generated report to present incorrect warning totals (e.g., claiming two warnings when only todo-persist.sh is a warning).


Summary
Follow-up fixes from bot review feedback on #56 (ai-coding-config doctor subcommand).
Changes
Bug Fix: Icon inconsistency (⚠️ → ℹ️ for experimental stability)
Both `cursor[bot]` and `claude[bot]` caught that the example output used ⚠️ for `stability: experimental` while the spec explicitly says ℹ️ (informational, not a warning). This was the only issue that could cause incorrect LLM output at runtime.
Before:

⚠️ mcp-debug — triggers field present, stability: experimental

After:

ℹ️ mcp-debug — triggers field present, stability: experimental

Improvement: Generalize hardcoded hook filename in auto-fix example
`claude[bot]` noted that hardcoding `todo-persist.sh` in the auto-fix example could cause LLMs to pattern-match and substitute the literal filename instead of the actual failing hook. Generalized to a `<hook-name>.sh` placeholder.

Declined feedback

`<marketplace-doctor>` — valid observation, tracked in a separate issue (see below), but refactoring is out of scope for this sweep.

Closes feedback from: cursor[bot] comment on PR #56, claude[bot] review on PR #56
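The generalization argued for above can be sketched concretely. This is a hypothetical illustration, not code from the PR: the `hooks/` directory layout, the `fix_hook` helper, and the chosen remedy (making the hook executable) are all assumptions. The point is only that the fix acts on whichever `<hook-name>.sh` it is handed, never a literal `todo-persist.sh`:

```shell
#!/bin/sh
# Hypothetical sketch of a generalized auto-fix step. The directory
# layout and the specific remedy (chmod +x) are assumptions.
fix_hook() {
    hook_path="hooks/$1.sh"          # <hook-name>.sh, not a hard-coded file
    if [ -f "$hook_path" ] && [ ! -x "$hook_path" ]; then
        chmod +x "$hook_path"
        echo "fixed: made $hook_path executable"
    fi
}

# Demo: create a non-executable hook, then fix it by name.
mkdir -p hooks
: > hooks/demo-hook.sh
chmod -x hooks/demo-hook.sh
fix_hook demo-hook                   # prints "fixed: made hooks/demo-hook.sh executable"
```

Because the hook name is a parameter, an LLM following the example has nothing literal to pattern-match on.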