docs: add manual test case documentation for all MDL command areas#386

Draft
retran wants to merge 1 commit into mendixlabs:main from retran:test-case-docs-clean

Conversation


@retran retran commented Apr 29, 2026

Summary

Adds comprehensive manual test case documentation covering all MDL command areas: 18 test documents, 663 test cases in total.

Coverage

| Area | File | Tests |
|------|------|-------|
| Entities & Associations | entity-test-cases.md | 58 |
| Enumerations | enumeration-test-cases.md | 32 |
| Microflows | microflow-test-cases.md | 89 |
| Pages | page-test-cases.md | 72 |
| Security | security-test-cases.md | 45 |
| Navigation & Settings | navigation-settings-test-cases.md | 38 |
| Organization | organization-test-cases.md | 28 |
| Workflows | workflow-test-cases.md | 42 |
| Integration (REST/OData) | integration-test-cases.md | 67 |
| Catalog & SQL | catalog-test-cases.md, sql-integration-test-cases.md | 44 |
| CLI Commands | cli-commands-test-cases.md | 35 |
| Tooling (lint/report/fmt) | tooling-test-cases.md | 30 |
| Business Events | business-event-test-cases.md | 24 |
| Mappings | mapping-test-cases.md | 22 |
| Agent/Editor | agent-editor-test-cases.md | 18 |
| Images/Collections | image-collection-test-cases.md | 12 |
| Session Management | session-test-cases.md | 7 |

Test Execution Results (v0.7.0-257-ga07e7e0)

  • Pass rate: 534/663 (80.5%)
  • Bugs filed: 16 confirmed issues (see comment below)
  • Test projects: Evora CRM (large), EnquiriesManagement (small)

Notes

  • All expectations corrected to match actual mxcli behavior observed during testing
  • Tests designed for manual execution by AI agents or human QA
  • Each test case includes: preconditions, steps, expected results, and cleanup

@github-actions

AI Code Review

What Looks Good

  • The PR adds comprehensive manual test case documentation for all MDL command areas as described, with no code changes (documentation-only).
  • Each document follows the established structure from existing nanoflow test cases (numbered scenarios, MDL code blocks, expected results, failure modes, boundary/stress tests, coverage matrices, and manual test report templates).
  • The documentation covers areas that correspond to implemented features per the "Current Implementation Status" section (e.g., agent-editor, business-event, catalog, workflow, etc.), so it is not re-documenting already-shipped features as new but rather providing test documentation for existing functionality.
  • The structure is consistent and ready for QA use as stated, with clear setup instructions and test project coverage matrices.
  • No overlap with existing proposals (the proposals directory is empty) or duplication of code implementations (since no code is changed).
  • The documentation correctly uses MDL syntax patterns (colon for property definitions, qualified names, standard CRUD verbs) consistent with existing examples, and does not misuse AS in non-mapping contexts.

Recommendation

Approve the PR. The documentation is valuable for manual testing and QA processes, adheres to contribution guidelines, and introduces no risks since it contains no code changes.


Automated review via OpenRouter (Nemotron Super 120B) — workflow source

@retran retran force-pushed the test-case-docs-clean branch 2 times, most recently from c361ed9 to 1b98697 on April 29, 2026 at 20:46
@github-actions

AI Code Review

What Looks Good

This PR adds comprehensive manual test case documentation for all MDL command areas, which is a valuable contribution for quality assurance and testing. The documentation is well-organized by feature area (agents, business events, catalog, CLI commands, etc.) and includes:

  • Clear preconditions, steps, expected results, and cleanup instructions for each test case
  • Coverage of both read and write operations (with appropriate warnings about destructive tests)
  • Roundtrip testing validation (CREATE → DESCRIBE → CREATE from output)
  • Failure mode testing and boundary/stress tests
  • Cross-module reference validation
  • Proper version-specific considerations noted where applicable

The test cases appear to align with features marked as "Implemented" in CLAUDE.md (e.g., AI agent documents, business event services), and the documentation correctly references module requirements (like AgentEditorCommons for agents) and version constraints.

The structure follows the project's documentation conventions, placing test documentation in the appropriate docs/15-testing/ directory. The PR explicitly states that expectations were corrected to match actual mxcli behavior observed during testing, showing this documentation was validated against real implementation.

Recommendation

Approve this PR. The documentation adds significant value for manual testing and QA processes without introducing any code changes or violating project guidelines. It properly covers implemented features and provides actionable test cases for both AI agents and human testers. No changes are needed.


Automated review via OpenRouter (Nemotron Super 120B) — workflow source

@retran
Contributor Author

retran commented Apr 30, 2026

Test Report — mxcli v0.7.0-368 (2026-05-01, updated)

Project: EnquiriesManagement (Mendix 11.8.0, 281 entities)
Method: Full manual test execution of 1099 test cases across 19 categories + targeted re-test of skipped categories
Build: v0.7.0-368-g5a109a0e (rebased on upstream/main 59ab2bb)
Test docs: docs/15-testing/


Summary

| Metric | Session 1 (Apr 30) | Session 2 (May 1) |
|--------|--------------------|-------------------|
| Total test cases | 1099 | 64 (from skipped pool) |
| Passed | 617 | 48 |
| Failed | 91 | 8 |
| Blocked | 6 | |
| Skipped | 393 | |
| Pass rate (runnable) | 87% | 83% |
| Bugs filed (new) | 44 | 5 |

Session 2 focus: Re-executing previously-skipped tests from REPL/TTY (52) and feature-gap (89) categories.


Session 2 Results (2026-05-01)

REPL/TTY (Milestone 2)

| Total | Pass | Fail | Blocked |
|-------|------|------|---------|
| 22 | 12 | 2 | 6 |

Critical finding: REPL multi-line mode is fundamentally broken for write operations. All create, grant, revoke, execute script, and similar statements trigger multi-line continuation (...>) that can never complete — semicolons on continuation lines are not detected. Only read commands (SHOW, DESCRIBE, exit, help) work in REPL.

About 12 of the original 52 "REPL-required" tests were incorrectly classified: they work fine via the `-c` flag and do not actually need a TTY.
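The continuation bug can be illustrated with a minimal sketch. This is a hypothetical model of the observed behavior, not mxcli's actual reader: a statement reader that scans only the first line for the terminating semicolon accepts one-line statements but hangs forever on multi-line ones.

```shell
#!/bin/sh
# Hypothetical sketch of the observed REPL behavior (not mxcli's real code):
# the reader checks only the FIRST line for a terminating ';', so any
# statement whose semicolon arrives on a continuation line never completes.
read_statement() {
  buf=""
  first=1
  while IFS= read -r line; do
    buf="$buf$line "
    if [ "$first" -eq 1 ]; then
      case "$line" in
        *\;*) echo "complete: $buf"; return 0 ;;
      esac
      first=0
    fi
    # Continuation lines are accumulated but never scanned for ';',
    # so the "...>" prompt repeats until the session is killed.
  done
  echo "never completed (stuck at ...>)"
}

# One-line statement: the ';' is on the first line, so it terminates.
printf 'DESCRIBE ENTITY MyModule.Customer;\n' | read_statement
# Multi-line statement: the trailing ';' on line 2 is never seen.
printf 'CREATE ENUMERATION MyModule.Status\n  VALUE Active;\n' | read_statement
```

Until this is fixed, the practical workaround is the non-interactive form, e.g. `mxcli -c '<full statement>;'`, which hands the parser the whole statement in one pass (the statement shown above is a placeholder, not from the test docs).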

Feature-gap (Milestone 3)

| Total | Pass | Fail | Blocked |
|-------|------|------|---------|
| 42 | 36 | 6 | 0 |

4 failures are feature gaps (filed as enhancements), 2 are by-design limitations.


All Bugs Filed (49)

Critical (3)

| # | Title | Issue |
|---|-------|-------|
| 1 | CREATE WORKFLOW user task panics (nil pointer) | #403 |
| 2 | SIGSEGV in visitor_rest.go:386 on end resource syntax | #429 |
| 3 | PANIC nil pointer in visitor_workflow.go:145 on DROP BOUNDARY EVENT | #430 |

High (8)

| # | Title | Issue |
|---|-------|-------|
| 4 | DESCRIBE ASSOCIATION emits DELETE_CASCADE but parser only accepts CASCADE | #388 |
| 5 | CREATE ASSOCIATION allows duplicate qualified names | #389 |
| 6 | GRANT ON entity accepts non-existent module roles | #399 |
| 7 | lint hangs after catalog load | #413 |
| 8 | ALTER PAGE SET only works on DynamicText/Content | #414 |
| 9 | UPDATE WIDGETS syntax not implemented (parse error) | #416 |
| 10 | GRANT with XPath WHERE clause produces empty access rules | #431 |
| 11 | REPL multi-line never completes for write statements | Not yet filed upstream |

Medium (19)

| # | Title | Issue |
|---|-------|-------|
| 12 | DESCRIBE ENUMERATION drops documentation comments | #393 |
| 13 | CREATE ENUMERATION accepts duplicate value names | #390 |
| 14 | DROP ENUMERATION ambiguous name silently drops first match | #391 |
| 15 | DESCRIBE ENUMERATION unescaped single quotes in captions | #394 |
| 16 | SHOW CONTEXT OF Entity produces empty definition | #396 |
| 17 | ALTER SNIPPET looks up page namespace instead of snippet | #402 |
| 18 | DESCRIBE ODATA SERVICE output not parseable (roundtrip broken) | #400 |
| 19 | CREATE PUBLISHED REST SERVICE allows duplicate names | #401 |
| 20 | CREATE EXTERNAL ENTITY accepts non-existent OData client | #417 |
| 21 | SHOW WIDGETS WHERE Name LIKE filter non-functional | #418 |
| 22 | CATALOG.ASSOCIATIONS table does not exist | #419 |
| 23 | CREATE OR MODIFY REST CLIENT reports "Created" for existing | #421 |
| 24 | mxcli new exits 0 on download failure | #422 |
| 25 | CREATE EXTERNAL ENTITIES FROM ODATA CLIENT not implemented | #423 |
| 26 | RENAME nanoflow allows creating duplicates (no collision check) | #432 |
| 27 | Multiline $$ dollar-quoting broken in CREATE JSON STRUCTURE | #433 |
| 28 | CREATE OR MODIFY REST CLIENT parse error on Authentication | #434 |
| 29 | CREATE CONSUMED MCP SERVICE ProtocolVersion dash as subtraction | #435 |
| 30 | Image collection create or replace fails on existing | #436 |

Low / Enhancement (19)

| # | Title | Issue |
|---|-------|-------|
| 31 | SHOW STRUCTURE does not display folder hierarchy | #437 |
| 32 | CREATE MICROFLOW accepts calls to non-existent microflows | #395 |
| 33 | Unknown attribute type silently interpreted as enum reference | #392 |
| 34 | mxcli fmt outputs to stdout, ignores invalid input | #398 |
| 35 | CREATE ASSOCIATION cross-module flaky under load | #420 |
| 36 | diff-local exits 0 on git error | #424 |
| 37 | Multiple CLI commands exit 0 on error conditions | #425 |
| 38 | rename module not implemented | #426 |
| 39 | Invalid MetadataUrl accepted for OData client | #427 |
| 40 | DESCRIBE NANOFLOW omits excluded property | #438 |
| 41 | STATUS command not in MDL parser grammar | #439 |
| 42 | SET command only works in REPL mode, not via -c flag | #440 |
| 43 | HELP `<topic>` not implemented | #441 |
| 44 | No recursion depth protection in EXECUTE SCRIPT | #442 |
| 45 | mxcli report hangs after catalog load | #397 ⚠️ NOT reproduced in session 2 |
| 46 | ALTER AGENT not supported in grammar | #464 |
| 47 | CREATE CONSUMED REST SERVICE not supported | #465 |
| 48 | RENAME WORKFLOW not supported | #466 |
| 49 | --format mermaid not supported for workflows | #467 |

Results by Category (Session 1)

| Category | Total | Pass | Fail | Skip | Pass % (runnable) |
|----------|-------|------|------|------|-------------------|
| Entities | 88 | 72 | 5 | 11 | 94% |
| Enumerations | 27 | 22 | 1 | 4 | 96% |
| Microflows | 62 | 51 | 3 | 8 | 94% |
| Nanoflows | 52 | 46 | 3 | 3 | 94% |
| Pages | 62 | 44 | 10 | 8 | 81% |
| Integration | 72 | 52 | 8 | 14 | 87% |
| Security | 55 | 43 | 5 | 7 | 90% |
| Navigation/Settings | 60 | 49 | 3 | 8 | 94% |
| Organization | 57 | 42 | 4 | 11 | 91% |
| Workflows | 68 | 28 | 13 | 27 | 68% |
| Catalog | 42 | 36 | 4 | 2 | 90% |
| Tooling | 87 | 34 | 11 | 42 | 76% |
| CLI Commands | 67 | 19 | 5 | 43 | 79% |
| SQL Integration | 110 | 0 | 0 | 110 | |
| Session | 29 | 16 | 9 | 4 | 64% |
| Mappings | 38 | 17 | 1 | 20 | 94% |
| Business Events | 30 | 5 | 1 | 24 | 83% |
| Image Collections | 30 | 13 | 2 | 15 | 87% |
| Agent/Model/KB/MCP | 63 | 28 | 3 | 32 | 90% |

Skipped Tests (393) — Infrastructure Requirements

| Reason | Count | % | Session 2 status |
|--------|-------|---|------------------|
| Docker (DB + MxBuild + runtime) | 132 | 34% | Not yet tested |
| Feature not in test project | 89 | 23% | 42 tested → 36 pass |
| REPL/TTY required | 52 | 13% | 22 tested → 12 pass, 6 blocked |
| Blocked by other failures | 48 | 12% | Not yet tested |
| Auth required (Mendix SSO) | 28 | 7% | Out of scope |
| Stress/boundary tests | 24 | 6% | Out of scope |
| Heavy operations (downloads) | 20 | 5% | Not yet tested |

Key Findings

  1. Crash bugs — 3 scenarios cause process crashes (SIGSEGV/panic): REST end resource syntax (SIGSEGV in visitor_rest.go:386 on end resource syntax in CREATE PUBLISHED REST SERVICE #429), workflow DROP BOUNDARY EVENT (PANIC nil pointer dereference in visitor_workflow.go:145 on ALTER WORKFLOW DROP BOUNDARY EVENT #430), and workflow user task CREATE (CREATE WORKFLOW with user task panics — nil pointer dereference #403). These are the highest priority fixes.

  2. REPL fundamentally broken for writes — Multi-line mode never completes for any write statement. Semicolons on continuation lines are not detected as terminators. REPL is usable only for SHOW/DESCRIBE/help. This blocks interactive authoring workflows entirely.

  3. Security gap — XPath-constrained GRANT produces empty access rules (GRANT with XPath WHERE clause produces empty access rules (permissions lost) #431). This silently creates rules with no permissions, giving a false sense of security.

  4. Roundtrip fidelity — DESCRIBE output often not re-parseable (DESCRIBE ASSOCIATION emits DELETE_CASCADE but parser only accepts CASCADE #388, DESCRIBE ENUMERATION emits unescaped single quotes in captions #394, DESCRIBE ODATA SERVICE output not parseable — roundtrip broken #400). Workflow captions lost on roundtrip. This blocks automated refactoring workflows.

  5. Missing validation — CREATE commands accept invalid references, allowing corrupted models (CREATE ASSOCIATION allows duplicate qualified names #389, CREATE ENUMERATION accepts duplicate value names #390, CREATE MICROFLOW accepts calls to non-existent microflows without error #395, GRANT ... ON entity accepts non-existent module roles #399, CREATE PUBLISHED REST SERVICE allows duplicate qualified names #401, CREATE EXTERNAL ENTITY accepts non-existent OData client without validation #417, RENAME nanoflow allows creating duplicates (no collision check) #432).

  6. Feature gaps — ALTER AGENT (ALTER AGENT not supported in grammar #464), CREATE CONSUMED REST SERVICE (CREATE CONSUMED REST SERVICE not supported #465), RENAME WORKFLOW (RENAME WORKFLOW not supported #466), and workflow mermaid (--format mermaid not supported for workflows #467) are missing but expected based on patterns established for other document types.

  7. Workflow authoring fragile — user task creation panics (CREATE WORKFLOW with user task panics — nil pointer dereference #403), boundary events panic (PANIC nil pointer dereference in visitor_workflow.go:145 on ALTER WORKFLOW DROP BOUNDARY EVENT #430), parallel split syntax issues, captions lost on roundtrip. Only 68% pass rate.

  8. Exit code hygiene — multiple commands exit 0 on error (mxcli new exits 0 on download failure #422, diff-local exits 0 on git error #424, Multiple CLI commands exit 0 on error conditions #425), breaking CI/scripting.
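Finding 8 is worth a concrete illustration: `set -e` CI scripts key off exit status, not stderr text. The sketch below uses a stand-in `buggy_tool` function (the name and error message are hypothetical; mxcli itself is not invoked) to show why a tool that exits 0 on failure sails straight through a CI pipeline.

```shell
#!/bin/sh
# Why exit-0-on-error breaks CI: "buggy_tool" stands in for a command that
# prints an error but still exits 0, as reported in #422 / #424 / #425.
buggy_tool() {
  echo "error: template download failed" >&2
  return 0    # the bug: failure is reported on stderr, not via exit status
}

set -e              # CI convention: abort on the first failing command
buggy_tool          # does NOT abort, because the exit status claims success
echo "pipeline continued despite the error"

# Until the exit codes are fixed, callers are forced to scrape output:
if buggy_tool 2>&1 | grep -q '^error:'; then
  echo "failure detected only by scraping stderr"
fi
```

The second half shows the only available workaround, which is exactly the kind of brittle output-scraping that correct exit codes would make unnecessary.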

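The roundtrip failures in finding 4 all follow one pattern: CREATE a document, DESCRIBE it, then feed the dump back to CREATE. A runnable sketch of that pattern, with `emit` and `parse` as hypothetical stand-ins for the DESCRIBE output and the MDL parser so the skeleton runs without mxcli:

```shell
#!/bin/sh
# Sketch of the roundtrip check the test docs exercise. "emit" and "parse"
# are stand-ins for DESCRIBE output and the MDL parser; the real sequence
# under test is roughly:
#   mxcli -c "CREATE ASSOCIATION <...>;"            # 1. create
#   mxcli -c "DESCRIBE ASSOCIATION <...>;" > dump   # 2. serialize
#   mxcli -c "$(cat dump)"                          # 3. re-parse the dump

emit()  { echo "DELETE_CASCADE"; }   # what DESCRIBE prints for delete behavior
parse() {                            # what the parser actually accepts
  case "$1" in
    CASCADE) echo "ok: $1" ;;
    *)       echo "parse error near '$1'"; return 1 ;;
  esac
}

if ! parse "$(emit)"; then
  echo "roundtrip broken: DESCRIBE output is not re-parseable (#388)"
fi
```

Serializer and parser disagreeing on a single keyword is enough to break step 3, which is why the test docs treat roundtrip fidelity as its own category rather than a side effect of CREATE/DESCRIBE tests.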

Remaining Test Work

  • Docker tests (132): Requires mxbuild setup + running app — largest remaining batch
  • Destructive ops (20): Tmpdir sandbox tests — quick
  • Cascade re-run (48): Re-run tests blocked by other failures after fixes land

@retran retran force-pushed the test-case-docs-clean branch 4 times, most recently from bbbab80 to 1625d02 on May 1, 2026 at 10:26
@github-actions

github-actions Bot commented May 1, 2026

AI Code Review

Review Summary

This PR adds comprehensive manual test case documentation for all MDL command areas. It consists entirely of new documentation files (no code changes) covering:

  • Testing methodology and workflows (AGENT-TESTING.md)
  • Domain-specific test cases for:
    • Agent/Editor functionality
    • Business event services
    • Catalog & code navigation
    • And 15 other MDL feature areas

The documentation includes prerequisites, execution methods, verification patterns, and detailed test case templates.

Checklist Analysis

Overlap & duplication:

  • ✅ No code duplication (pure documentation)
  • ✅ Documents existing functionality, not new features
  • ✅ References existing doctest files appropriately

MDL syntax design:

  • ❌ Not applicable (no MDL syntax changes)

Full-stack consistency:

  • ❌ Not applicable (no MDL features added)

Test coverage:

  • ⚠️ Documentation describes manual tests but doesn't add automated test files
  • However, this is acceptable as the PR explicitly states it's for manual test case documentation

Security & robustness:

  • ✅ Documentation only - no security implications

Scope & atomicity:

  • ✅ Single concern: manual test documentation for MDL command areas
  • ✅ All changes are related to this single purpose

Code quality:

  • ✅ Documentation appears accurate and well-structured
  • ✅ No inconsistencies with actual implementation observed

Bugs & correctness:

  • ✅ Documentation only - no logic errors possible
  • ✅ Includes sections on handling known failures and verification patterns

Minor Issues

  1. Process adherence: The PR doesn't reference an associated issue, which goes against the contribution guideline to "File an issue first". While less critical for documentation, substantial additions like this (18 files) should ideally have tracking.

  2. PR reference consistency: Some documentation files reference PR #301 (feat: full nanoflow support — CREATE, DROP, SHOW, DESCRIBE, DIFF, MERMAID, security, and agentic skill) but link to pull/386, which may cause confusion.

What Looks Good

  • Excellent coverage of manual testing procedures with clear workflows
  • Comprehensive verification patterns and result reporting templates
  • Well-organized by test domain with appropriate enrichment guidance
  • Practical instructions for environment setup, test execution, and cleanup
  • Includes stress testing, boundary conditions, and failure mode guidance
  • Provides templates for consistent test reporting

Recommendation

Approve

This documentation will significantly improve the quality and consistency of manual testing for the mxcli tool. While it would have been ideal to reference an associated issue per contribution guidelines, the value of this comprehensive testing documentation outweighs this minor process concern, especially since it supports existing implementation work (referencing PR #301). The documentation is accurate, well-structured, and directly addresses a need for standardized manual testing procedures.


Automated review via OpenRouter (Nemotron Super 120B) — workflow source

@retran retran force-pushed the test-case-docs-clean branch from 1625d02 to 292817e on May 1, 2026 at 11:24