
Reduce redundant CUDA Jacobian uploads during a linear solve#2806

Draft
LwhJesse wants to merge 2 commits into su2code:develop from LwhJesse:perf/gpu-single-upload-pr

Conversation


@LwhJesse LwhJesse commented Apr 30, 2026

Proposed Changes

This draft PR reduces redundant CUDA Jacobian uploads in the CUDA matrix-vector product
path.

Previously, the CUDA matvec path uploaded the Jacobian from host to device inside each
GPUMatrixVectorProduct() call. This could repeatedly transfer the same matrix during a
single linear solve.

This revision keeps the per-matvec upload removed from GPUMatrixVectorProduct(), but
now handles the CUDA matrix upload in CSysMatrixVectorProduct so that the upload is
performed when the matvec wrapper is constructed, rather than at scattered caller sites.

The current implementation is:

  • remove the per-matvec HtDTransfer() call from CSysMatrixGPU.cu;
  • perform the CUDA matrix upload in the CSysMatrixVectorProduct constructor when CUDA
    is enabled;
  • remove the now-redundant explicit upload handling from the linear solve callers,
    including the Newton-Krylov preconditioner path.

This keeps the original optimization goal while aligning the upload lifetime with the
abstraction boundary suggested in review. The change assumes that the Jacobian remains
unchanged while the same matvec wrapper is reused during a linear solve.

Validation

Updated local CUDA benchmarks against the latest develop on the original self-contained cases show:

| Case | Develop | Patched | Speedup |
|---|---|---|---|
| periodic2d_sector | 2.089169 s | 2.038985 s | 1.025x |
| udf_lam_flatplate_s | 4.798030 s | 3.596918 s | 1.334x |
| udf_lam_flatplate_m | 22.619971 s | 17.323714 s | 1.306x |
| udf_lam_flatplate_l | 39.253938 s | 30.314421 s | 1.295x |
| udf_test_11_probes_s | 3.171073 s | 2.661697 s | 1.191x |
| udf_test_11_probes_m | 15.916509 s | 12.393672 s | 1.284x |

Geometric mean speedup: approximately 1.234x.

nsys indicates that the speedup mainly comes from reduced Host-to-Device memcpy
traffic (time / count / bytes), while the GPUMatrixVectorProductAdd kernel itself
remains essentially unchanged.

I also ran supplemental targeted Newton-Krylov coverage to exercise the
CNewtonIntegration path affected by this change.

Related Work

None.

PR Checklist

  • I am submitting my contribution to the develop branch.
• My contribution generates no new compiler warnings (try with --warnlevel=3 when
    using meson).
  • My contribution is commented and consistent with SU2 style
    (https://su2code.github.io/docs_v7/Style-Guide/).
• I used the pre-commit hook to prevent dirty commits and used pre-commit run --all to format old commits.
• I have added a test case that demonstrates my contribution, if necessary.
  • I have updated appropriate documentation (Tutorials, Docs Page,
    config_template.cpp), if necessary.

Comment thread on Common/src/linear_algebra/CSysSolve.cpp (outdated)
Comment on lines 1467 to 1470:
```cpp
#ifdef HAVE_CUDA
if (config->GetCUDA()) Jacobian.HtDTransfer();
#endif
auto mat_vec = CSysMatrixVectorProduct<ScalarType>(Jacobian, geometry, config);
```
Member

It seems we could make this part of CSysMatrixVectorProduct to handle all cases.

Author

Good point, I agree.

I will revise this so the CUDA matrix upload is handled inside CSysMatrixVectorProduct, rather than requiring each caller to do it explicitly before constructing the matvec wrapper. Then GPUMatrixVectorProduct() can stay free of the per-matvec matrix upload, while the device-side matrix is reused across repeated operator() calls.

I will also remove the explicit HtDTransfer() calls from CSysSolve.cpp and CNewtonIntegration.hpp, check the other CSysMatrixVectorProduct construction paths, and re-run CUDA/non-CUDA tests before marking this ready for review.

Author

Thanks, I updated the PR accordingly.

The CUDA Jacobian upload is now handled in CSysMatrixVectorProduct, so the upload
logic is centralized there instead of being repeated at individual caller sites. This
keeps GPUMatrixVectorProduct() free of the per-matvec matrix upload while covering the
linear solve and Newton-Krylov paths consistently.

I also re-ran the CUDA benchmarks against the latest develop. The performance benefit
remains, with the original large self-contained CUDA cases still showing about 1.28x to
1.31x speedup. nsys shows that this comes from reduced HtoD transfer traffic rather
than changes in the GPU matvec kernel itself.

I additionally ran supplemental targeted NK coverage to exercise the
CNewtonIntegration path affected by this change.

