[6034518] Fix Conv->Relu->Concat Q/DQ insertion gap#1398

Open
willg-nv wants to merge 6 commits into NVIDIA:main from willg-nv:fix/autoqdq-concat-qdq-insertion

Conversation

@willg-nv
Contributor

@willg-nv willg-nv commented May 6, 2026

Models with Conv->Relu->Concat patterns were not receiving Q/DQ nodes on Concat inputs, preventing TRT INT8 Concat fusion. Three fixes:

  • Remove Concat from autotuner skip-ops list (Concat is byte-level copy in TRT and can pass INT8 data through)
  • Add Conv->[BN->]Add skip rule for TRT Conv+Add+Relu kernel fusion
  • Promote tensor-level Q/DQ when uncovered consumers are all Concat

What does this PR do?

Summary

  • Remove Concat from autotuner skip-ops list — it was incorrectly blocking Q/DQ insertion at all Relu->Concat boundaries (root cause)
  • Add Concat-aware merge promotion: when uncovered consumers are all Concat, promotes to tensor-level Q/DQ (enables INT8 Concat fusion)
  • Add Conv->[BN->]Add skip rule to prevent breaking TRT Conv+Add+Relu INT8 kernel fusion
  • Add atomic Concat-group mutation to scheme sampling so the search explores all-or-nothing Concat quantization
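The atomic Concat-group mutation in the last bullet can be sketched roughly as below. The names (`mutate_concat_group`, `concat_groups`, the key tuples) are illustrative, not the actual module API; the all-or-nothing constraint is the point:

```python
import random

def mutate_concat_group(selected, concat_groups, rng=random):
    """Toggle an entire Concat input group at once (all-or-nothing).

    selected: set of (node_index, input_index) Q/DQ insertion keys.
    concat_groups: {concat_node_index: [(node_index, input_index), ...]}.
    Returns a new selection set, or None when no group can be toggled.
    """
    # A group can be added only if none of its points are selected,
    # and removed only if all of them are, keeping the mutation atomic.
    absent = [g for g in concat_groups.values() if not any(p in selected for p in g)]
    full = [g for g in concat_groups.values() if all(p in selected for p in g)]
    actions = [("add", g) for g in absent] + [("remove", g) for g in full]
    if not actions:
        return None
    action, group = rng.choice(actions)
    return selected | set(group) if action == "add" else selected - set(group)
```

Because every input of a Concat is either quantized or not, the search never produces mixed-precision Concat inputs that TRT cannot fuse.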

Usage

# Add a code snippet demonstrating how to use this

Testing

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information

Summary by CodeRabbit

  • Enhancements
    • Autotuner now performs Concat-group-aware mutations that add or remove entire Concat input groups, improving exploration of quantization schemes in Concat-heavy regions.
    • Insertion-point logic refined to avoid disrupting main Conv/BN/activation paths, skip additional structural/utility ops, and better promote safe INT8 Concat fusion.
    • New configuration option to control minimum Concat-group mutation sampling per region.

Review Change Stack

@willg-nv willg-nv requested a review from a team as a code owner May 6, 2026 07:46
@willg-nv willg-nv requested a review from ajrasane May 6, 2026 07:46
@copy-pr-bot

copy-pr-bot Bot commented May 6, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@coderabbitai
Contributor

coderabbitai Bot commented May 6, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds Concat-group-aware atomic add/remove mutations to scheme sampling with a new config flag; integrates the mutation into insertion-point sampling probability; adds a Conv→(BN)→Add→Relu skip guard; promotes node-level insertions to tensor-level when uncovered consumers are Concat; and updates the autotuner skip-ops set.

Changes

Concat-Group Quantization Mutations

  • Configuration (modelopt/onnx/quantization/autotune/common.py): Added concat_group_min_samples: int = 5 to Config.
  • New Mutation Helper (modelopt/onnx/quantization/autotune/autotuner_base.py): Added _sample_concat_group_mutation(...) to detect Concat nodes, group candidate node-input insertion points by Concat, filter groups matching Concat arity, and atomically add or remove all inputs for a chosen Concat group.
  • Sampling Integration (modelopt/onnx/quantization/autotune/autotuner_base.py): _generate_next_insertion_sample now probabilistically invokes the Concat-group mutation, using config.concat_group_min_samples and the current pattern scheme count to compute and clamp the probability.
  • Skip Rules & Heuristics (modelopt/onnx/quantization/autotune/insertion_points.py): skip_invalid_insertion_points gains a guard for the Conv→(optional BN)→Add→Relu main-path pattern (skips Q/DQ when the Conv activation has a single consumer).
  • Promotion Logic (modelopt/onnx/quantization/autotune/insertion_points.py): merge_resolved_insertion_points now promotes to tensor-level insertion when all consumers are covered, or when the remaining uncovered consumers are exclusively Concat nodes.
  • Skip-ops Set Update (modelopt/onnx/quantization/autotune/insertion_points.py): get_autotuner_skip_ops recomputed to remove Concat from copy-ops and union in a larger set of non-quantizable/utility ops (indexing, reshape, etc.).
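The promotion logic above can be illustrated with a minimal sketch. The helper name and the (name, op_type) consumer representation are assumptions for illustration, not the actual signature in insertion_points.py:

```python
def should_promote_to_tensor_level(consumers, covered):
    """Decide whether per-consumer (node-level) Q/DQ can be replaced by a
    single tensor-level Q/DQ on the producer's output.

    consumers: list of (consumer_name, op_type) pairs reading the tensor.
    covered: set of consumer names already receiving node-level Q/DQ.
    """
    uncovered = [op for name, op in consumers if name not in covered]
    # Safe when every consumer is covered, or when the only uncovered
    # readers are Concat nodes, which are byte-level copies in TRT and
    # can pass INT8 data through.
    return all(op == "Concat" for op in uncovered)
```

A mixed consumer set with an uncovered Relu, for instance, keeps node-level insertion, while a Relu-plus-Concat set with the Relu covered promotes.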

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: 6 passed

  • Description Check (Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (Passed): the title directly and specifically describes the main change, fixing Q/DQ insertion gaps in Conv->Relu->Concat patterns, which is the core objective across all three modified files.
  • Docstring Coverage (Passed): docstring coverage is 100.00%, above the required 80.00% threshold.
  • Linked Issues Check (Passed): check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check (Passed): check skipped because no linked issues were found for this pull request.
  • Security Anti-Patterns (Passed): no security violations found; all SECURITY.md practices met (safe yaml.safe_load(), no unsafe deserialization patterns, no eval/exec on external input, no nosec bypasses, no unsafe dependencies added).



Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2


Inline comments:
In `@modelopt/onnx/quantization/autotune/autotuner_base.py`:
- Around lines 1150-1152: The computed concat_prob can exceed 1.0 (making the branch deterministic) because concat_group_min_samples / num_schemes is unbounded. Clamp the result to a valid probability range, e.g. concat_prob = min(max(self.config.concat_group_min_samples / num_schemes, 0.05), 1.0), so the probability stays between 0.05 and 1.0 before the random.random() check.
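The suggested clamp can be written as a small helper; the function name and the guard against num_schemes == 0 are assumptions for illustration:

```python
def concat_mutation_probability(min_samples, num_schemes, floor=0.05):
    """Probability of attempting a Concat-group mutation, clamped to
    [floor, 1.0] so that min_samples / num_schemes can never push the
    branch into deterministic (> 1.0) territory."""
    return min(max(min_samples / max(num_schemes, 1), floor), 1.0)
```

With the default config value of 5, the mutation is attempted on every sample until roughly 5 schemes exist, then decays toward the 5% floor.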
- Around lines 993-1016: The Concat mutation can be a no-op because the action is chosen unconditionally with random.choice(["add", "remove"]). First compute available_actions based on whether absent_groups and/or full_groups are non-empty, pick the action from available_actions (or return early if none exist), and only then perform the add/remove branch. This ensures an impossible action is never picked and a generation attempt is never consumed by a no-op.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 31b38904-2fc0-4237-b4cc-588e9eb16b79

📥 Commits

Reviewing files that changed from the base of the PR and between f34f488 and dcd28ee.

📒 Files selected for processing (3)
  • modelopt/onnx/quantization/autotune/autotuner_base.py
  • modelopt/onnx/quantization/autotune/common.py
  • modelopt/onnx/quantization/autotune/insertion_points.py

@willg-nv willg-nv force-pushed the fix/autoqdq-concat-qdq-insertion branch from dcd28ee to 563757d Compare May 6, 2026 08:30
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2


Inline comments:
In `@modelopt/onnx/quantization/autotune/autotuner_base.py`:
- Around lines 968-975: The concat_groups mapping is built from all_points and may miss inputs that were filtered earlier, so treat a Concat as a valid group only if a selectable point exists for every Concat input. For each group keyed by p.node_index, fetch the Concat node's arity (e.g. from its input count) and skip the group unless len(concat_groups[node_index]) equals that arity. Apply the same check in the similar block around lines 982-991 so partial Concat groups are never selected or sampled.
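The arity guard described above can be sketched as follows; the helper name and the dict-based arity lookup are illustrative assumptions, not the actual code:

```python
def complete_concat_groups(concat_groups, concat_arity):
    """Filter out partial Concat groups: keep a group only when there is
    a selectable insertion point for every input of that Concat node.

    concat_groups: {concat_node_index: [(node_index, input_index), ...]}
    concat_arity:  {concat_node_index: number of inputs on that Concat}
    """
    return {
        idx: points
        for idx, points in concat_groups.items()
        if len(points) == concat_arity.get(idx, -1)
    }
```

Filtering before sampling guarantees the atomic add/remove mutation always acts on a complete input set.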

In `@modelopt/onnx/quantization/autotune/insertion_points.py`:
- Around lines 372-395: The skip for "Conv -> [BN ->] Add" should only apply when that Add actually feeds a Relu fusion target. In the block that handles node.op == "Add", additionally check the Add node's consumers and return True only if the Add's output is consumed by a Relu; otherwise fall through so the quantization point is not incorrectly removed.
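The suggested consumer check can be sketched with a toy node type; both the Node dataclass and the helper name are hypothetical stand-ins for the graph API used in insertion_points.py:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    op: str
    consumers: list = field(default_factory=list)

def add_skip_applies(add_node):
    """Skip Q/DQ on the Add only when its output actually feeds a Relu,
    i.e. when the TRT Conv+Add+Relu fusion can fire; otherwise fall
    through and keep the quantization point."""
    return any(c.op == "Relu" for c in add_node.consumers)
```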

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 0cb4da4a-b15d-4d02-9fd4-33368e34202b

📥 Commits

Reviewing files that changed from the base of the PR and between dcd28ee and 563757d.

📒 Files selected for processing (3)
  • modelopt/onnx/quantization/autotune/autotuner_base.py
  • modelopt/onnx/quantization/autotune/common.py
  • modelopt/onnx/quantization/autotune/insertion_points.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • modelopt/onnx/quantization/autotune/common.py

@gcunhase gcunhase changed the title Fix Conv->Relu->Concat Q/DQ insertion gap [6034518] Fix Conv->Relu->Concat Q/DQ insertion gap May 6, 2026
@gcunhase
Contributor

gcunhase commented May 8, 2026

/ok to test d7f2df4

@codecov

codecov Bot commented May 8, 2026

Codecov Report

❌ Patch coverage is 28.08989% with 64 lines in your changes missing coverage. Please review.
✅ Project coverage is 76.83%. Comparing base (88e1543) to head (3073a3e).
⚠️ Report is 3 commits behind head on main.

Files with missing lines Patch % Lines
...elopt/onnx/quantization/autotune/autotuner_base.py 20.00% 48 Missing ⚠️
...opt/onnx/quantization/autotune/insertion_points.py 42.85% 16 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1398      +/-   ##
==========================================
- Coverage   76.92%   76.83%   -0.09%     
==========================================
  Files         478      478              
  Lines       51440    51527      +87     
==========================================
+ Hits        39568    39590      +22     
- Misses      11872    11937      +65     
Flag Coverage Δ
unit 52.56% <28.08%> (-0.05%) ⬇️


willg-nv and others added 3 commits May 11, 2026 06:48
Models with Conv->Relu->Concat patterns were not receiving Q/DQ nodes
on Concat inputs, preventing TRT INT8 Concat fusion. Fixes:

- Remove Concat from autotuner skip-ops list (Concat is byte-level copy
  in TRT and can pass INT8 data through)
- Add Conv->[BN->]Add skip rule for TRT Conv+Add+Relu kernel fusion
- Promote tensor-level Q/DQ when uncovered consumers are all Concat
- Add atomic Concat-group mutation to scheme sampling (adaptive probability)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Will Guo <willg@nvidia.com>
Signed-off-by: Will Guo <willg@nvidia.com>
Signed-off-by: Will Guo <willg@nvidia.com>
@willg-nv willg-nv force-pushed the fix/autoqdq-concat-qdq-insertion branch from d7f2df4 to 3073a3e Compare May 11, 2026 06:49
@willg-nv
Contributor Author

@gcunhase I have resolved a code-quality check issue; could you please start a new CI job? Thanks!

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1


Inline comments:
In `@modelopt/onnx/quantization/autotune/autotuner_base.py`:
- Around lines 1015-1024: The Concat "add" branch in _mutate_insertion_points appends points_to_add to the end, which breaks the canonical all_points ordering. After computing points_to_add (based on selected_keys and complete_concat_groups[target]), merge selected_points and points_to_add into a selection set, then rebuild the final list by iterating all_points in order and including any point whose (node_index, input_index) is in the merged set, so the returned list respects the canonical ordering.
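The reordering fix amounts to re-deriving the selection from the canonical list; this sketch uses a hypothetical helper name and plain key tuples rather than the real insertion-point objects:

```python
def normalize_selection(all_points, selected_keys):
    """Rebuild the selection in canonical all_points order instead of
    appending newly added points to the end.

    all_points: list of (node_index, input_index) in canonical order.
    selected_keys: set of keys already chosen plus any points_to_add.
    """
    return [p for p in all_points if p in selected_keys]
```

Because membership is checked against a set, the rebuild is linear in len(all_points) and the output order no longer depends on mutation history.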

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: d275fee3-023f-4a0c-a569-38861bc0fc5f

📥 Commits

Reviewing files that changed from the base of the PR and between 6f458e2 and 3073a3e.

📒 Files selected for processing (3)
  • modelopt/onnx/quantization/autotune/autotuner_base.py
  • modelopt/onnx/quantization/autotune/common.py
  • modelopt/onnx/quantization/autotune/insertion_points.py
✅ Files skipped from review due to trivial changes (1)
  • modelopt/onnx/quantization/autotune/common.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • modelopt/onnx/quantization/autotune/insertion_points.py

willg-nv added 3 commits May 11, 2026 15:57
Signed-off-by: Will Guo <willg@nvidia.com>
Signed-off-by: Will Guo <willg@nvidia.com>
Signed-off-by: Will Guo <willg@nvidia.com>