Reduce redundant CUDA Jacobian uploads during a linear solve #2806
LwhJesse wants to merge 2 commits into su2code:develop from
Conversation
```cpp
#ifdef HAVE_CUDA
  if (config->GetCUDA()) Jacobian.HtDTransfer();
#endif
  auto mat_vec = CSysMatrixVectorProduct<ScalarType>(Jacobian, geometry, config);
```
It seems we could make this part of CSysMatrixVectorProduct to handle all cases.
Good point, I agree.
I will revise this so the CUDA matrix upload is handled inside CSysMatrixVectorProduct, rather than requiring each caller to do it explicitly before constructing the matvec wrapper. Then GPUMatrixVectorProduct() can stay free of the per-matvec matrix upload, while the device-side matrix is reused across repeated operator() calls.
I will also remove the explicit HtDTransfer() calls from CSysSolve.cpp and CNewtonIntegration.hpp, check the other CSysMatrixVectorProduct construction paths, and re-run CUDA/non-CUDA tests before marking this ready for review.
Thanks, I updated the PR accordingly.
The CUDA Jacobian upload is now handled in CSysMatrixVectorProduct, so the upload
logic is centralized there instead of being repeated at individual caller sites. This
keeps GPUMatrixVectorProduct() free of the per-matvec matrix upload while covering the
linear solve and Newton-Krylov paths consistently.
I also re-ran the CUDA benchmarks against the latest develop. The performance benefit
remains, with the original large self-contained CUDA cases still showing about 1.28x to
1.31x speedup. nsys shows that this comes from reduced HtoD transfer traffic rather
than changes in the GPU matvec kernel itself.
I additionally ran supplemental targeted NK coverage to exercise the
CNewtonIntegration path affected by this change.
Proposed Changes
This draft PR reduces redundant CUDA Jacobian uploads in the CUDA matrix-vector product
path.
Previously, the CUDA matvec path uploaded the Jacobian from host to device inside each
GPUMatrixVectorProduct() call. This could repeatedly transfer the same matrix during a
single linear solve.
This revision keeps the per-matvec upload removed from GPUMatrixVectorProduct(), but
now handles the CUDA matrix upload in CSysMatrixVectorProduct so that the upload is
performed when the matvec wrapper is constructed, rather than at scattered caller sites.
The current implementation:
- removes the per-matvec HtDTransfer() call from CSysMatrixGPU.cu;
- performs the upload in the CSysMatrixVectorProduct constructor when CUDA is enabled;
- covers all wrapper construction sites, including the Newton-Krylov preconditioner path.
This keeps the original optimization goal while aligning the upload lifetime with the
abstraction boundary suggested in review. The change assumes that the Jacobian remains
unchanged while the same matvec wrapper is reused during a linear solve.
Validation
Updated local CUDA benchmarks against the latest develop on the original
self-contained cases show:
Geometric mean speedup: approximately 1.234x.
nsys indicates that the speedup mainly comes from reduced host-to-device memcpy
traffic (time / count / bytes), while the GPUMatrixVectorProductAdd kernel itself
remains essentially unchanged.
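For reference, the reported geometric mean is the n-th root of the product of the per-case speedups. A minimal sketch with illustrative numbers (the actual per-case data is not listed in this PR):

```python
import math

# Illustrative per-case speedups only; the PR reports ~1.28x-1.31x for the
# large self-contained cases and a ~1.234x geometric mean overall.
speedups = [1.31, 1.28, 1.15, 1.20]

geomean = math.prod(speedups) ** (1.0 / len(speedups))
print(f"geometric mean speedup: {geomean:.3f}x")
```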
I also ran supplemental targeted Newton-Krylov coverage to exercise the
CNewtonIntegration path affected by this change.
Related Work
None.
PR Checklist
- My contribution generates no new compiler warnings (try with --warnlevel=3 when using meson).
- My contribution is commented and consistent with SU2 style (https://su2code.github.io/docs_v7/Style-Guide/).
- I used the pre-commit hook to prevent dirty commits and used pre-commit run --all to format old commits, if necessary.
- I have updated the record of changes (config_template.cpp), if necessary.