🐛 Address review feedback from PR #55 (#57)
…: LLM offers handoff, user confirms (safety)

- Add `stability: experimental` to brainstorm-synthesis (demonstrates the field)
- Align stability wording between dev guide and skill-creation guide
## Code Review — PR #57

This is a clean, focused documentation improvement that addresses real feedback from PR #55. The changes are well-scoped and the core clarification is important.

### What's Working Well
This aligns with the heart-centered AI philosophy in the project (presence before solutions, user agency). Good call.
Table alignment improvement is a nice polish win — the padded columns are easier to scan. Version bump consistency — …

### Minor Observations

1. **LLM-enforced convention is fragile by design — worth being explicit about that.** The docs note: "The Claude Code harness ignores them; the LLM reads and acts on them as part of the skill content." The "user confirms before next skill runs" guarantee relies entirely on the LLM reading and honoring the convention — there's no harness enforcement. This is a known architectural choice, but since this PR is specifically clarifying what users can expect, it might be worth one sentence acknowledging that this is a convention (not a technical guarantee) so power users aren't surprised if a model skips the confirmation step. Something like: "This is a convention enforced by the LLM reading the skill, not by the harness — model behavior may vary." Optional, not blocking.
2. The annotated example shows …
3. Trailing spaces in the aligned table (…)

### Verdict

Approve. The next-skill semantic clarification is the substance of this PR and it's correct. The rest is polish. No functional changes, no regressions, consistent cross-file update. Ready to merge.
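For concreteness, the convention discussed in observation 1 might look something like the sketch below. The field names (`stability`, a next-skill handoff) come from this PR; the exact frontmatter shape and the skill names are assumptions for illustration, not quoted from the repo:

```yaml
# Hypothetical SKILL.md frontmatter — structure assumed for illustration.
name: brainstorm-synthesis
stability: experimental   # signals the skill's contract may still change
next-skill: verify-fix    # a suggested handoff, not an auto-invocation
```

The enforcement lives in the skill body as prose the LLM reads, e.g. "Offer the handoff to the next skill and wait for the user to confirm before it runs. Never auto-invoke silently." The harness itself does not act on these fields, which is exactly why the review flags the convention as fragile by design.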
Downstream commands (verify-fix) can't declare `requires` back, since they lack SKILL.md frontmatter, and terminal skills (ship) are invokable standalone without inbound dependency declarations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
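The dependency asymmetry described in this commit can be sketched as follows. The `requires` field name comes from the commit message; the skill names and frontmatter structure are hypothetical:

```yaml
# Hypothetical SKILL.md for an upstream skill. Only files with this
# frontmatter can declare dependencies at all.
name: fix-bug
requires:
  - diagnose        # upstream dependency this skill CAN declare
next-skill: verify-fix
# verify-fix is a plain command file with no SKILL.md frontmatter,
# so there is nowhere for it to declare `requires: fix-bug` back.
```

This is why the dependency graph is one-directional: declarations flow from skills with frontmatter toward downstream commands, never the reverse.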
## PR #57 Review

This is a focused documentation tightening addressing valid feedback from the PR #55 review. Changes are scoped to three files and the intent is clear.

### Substance
The original wording ("invokes the next skill when its task is complete") implied automatic chaining. The replacement ("offers the handoff... the user confirms before the next skill runs. Never auto-invoke silently.") correctly establishes this as a user-gateable suggestion. This matters for the heart-centered AI philosophy in this project — user agency should be explicit. The change is applied consistently across both doc locations (
Adding the field to an actual in-repo skill (not just the example block) makes the docs self-demonstrating. The version bump (…)

The annotated chains footnote is a useful disambiguation.
This answers the "why doesn't …

### Minor Observations
### Verdict

Approve. The next-skill clarification is the substance of this PR and it's correct. Consistent cross-file updates, clean diff, no regressions. Ready to merge.
Addresses valid feedback from Claude bot reviews on PR #55. Clarifies the next-skill convention (LLM offers the handoff, user confirms), adds `stability: experimental` to brainstorm-synthesis as a concrete example, and aligns stability wording across docs.