// Calibrate loop count to hit a minimum wall time per value.
template <typename Fn>
std::uint64_t calibrate_loops(const Options& options, Fn&& fn) {
calibrate_loops() looks like a custom reimplementation of the same minimum-time calibration idea that tools like pyperf and Google Benchmark already provide. I understand that there are advantages to keeping the existing custom C++ harness / pyperf-compatible JSON output, but if we go that route, I think the custom calibration should match the usual/expected --min-time semantics more closely.
Right now, I do not think --min-time actually guarantees the requested minimum sample duration for a large chunk of these microbenchmarks. The calibration stops once it hits max_loops = 1000000, and BenchmarkSuite::run() then uses that capped loop count as-is. For very fast benchmarks, that cap is still well short of 0.1 s per value. For example:
- ~6 ns/op tops out around 6 ms
- ~33 ns/op tops out around 33 ms
- even ~79 ns/op only reaches around 79 ms
So the new "stable collection" mode still under-times many of exactly the fast C++ benchmarks it is meant to stabilize. The same issue also seems to apply to the explicit-loop overload, which means the --min-time semantics are not consistently honored there either.
I think that is important to fix before we rely on these numbers as "min-time collected" results, because readers will reasonably assume that --min-time 0.1 means each measured value is actually being calibrated to about 100 ms.
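To make the failure mode concrete, here is a minimal sketch of this kind of min-time calibration; the `Options` field names (`min_time_sec`, `max_loops`) are my assumptions from reading this thread, not the PR's exact code:

```cpp
#include <chrono>
#include <cstdint>

// Sketch only: field names are assumed from this thread, not the PR.
struct Options {
  double min_time_sec = 0.1;          // --min-time target per value
  std::uint64_t max_loops = 1000000;  // the 1M cap under discussion
};

template <typename Fn>
std::uint64_t calibrate_loops_sketch(const Options& options, Fn&& fn) {
  std::uint64_t loops = 1;
  while (true) {
    const auto t0 = std::chrono::steady_clock::now();
    for (std::uint64_t i = 0; i < loops; ++i) fn();
    const auto t1 = std::chrono::steady_clock::now();
    const double elapsed = std::chrono::duration<double>(t1 - t0).count();
    if (elapsed >= options.min_time_sec) return loops;  // hit the target
    if (loops >= options.max_loops) return loops;       // capped: under-timed
    loops *= 2;
  }
}
```

At ~6 ns/op, the doubling stops at the 1M cap after only ~6 ms per value, which is exactly the under-timing described above.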
Yeah, that's a good point. I raised the default --max-loops from 1M to 100M so typical microbenchmarks (down to ~6 ns/op) can reach a 100 ms target; at ~6 ns/op, hitting 0.1 s takes roughly 17M loops, which the old 1M cap could never reach. I kept the cap for safety, but I would also be fine removing the default.
I am mostly matching the pyperf JSON so we can compare results, but I am happy to change to something else if we think that's better.
std::uint64_t loops = options_.loops;
Options custom = options_;
if (options_.min_time_sec > 0.0) {
  loops = calibrate_loops(options_, fn);
Generated by Cursor GPT-5.4 Extra High Fast after a few rounds of prompting, with a few small edits.
I think the --min-time direction is great, but for async / stream-backed benchmarks the calibration pass may change the state being measured before the first timed sample. For synchronous operations that is probably harmless, but for enqueue-style operations it looks like calibration can leave a substantial backlog on the persistent stream, so the harness may end up measuring "enqueue on an already busy stream" rather than the quieter starting condition these benchmarks had before.
More concretely, calibrate_loops() repeatedly calls fn() until the target wall time is reached, and BenchmarkSuite::run() then immediately goes into run_benchmark() with no post-calibration drain/reset. That means calibration is not just estimating a loop count; it is also mutating benchmark state right before measurement begins.
Why I think this matters:
- For async benchmarks, fn() often enqueues work onto a persistent stream rather than fully completing it.
- A 0.1 s target can mean a lot of enqueued work before the first warmup/value. For example, a ~1.6 us kernel launch benchmark needs on the order of ~60k launches to get there.
- So the first measured sample may start from a deeply backlogged stream rather than an effectively empty stream.
That seems especially relevant for benchmarks like:
- kernel launches in bench_launch.cpp
- cuMemAllocAsync / cuMemFreeAsync in bench_memory.cpp
- cuEventRecord in bench_event.cpp
The effect may be small for some cases, but since this PR is specifically about improving measurement stability, I think it is worth making sure the calibration step is not also changing what is being measured.
Helpful follow-up directions could be:
- add a post-calibration drain/reset step before the first measured warmup/value for async benchmarks
- or do a quick A/B comparison with and without a cuStreamSynchronize() after calibration
- or calibrate on a separate warmup path/stream if that is practical
If those variants all produce essentially the same numbers, that would be reassuring. If they move the launch/async-memory/event benchmarks materially, then this concern is real.
Added a set_post_calibrate() hook on BenchmarkSuite that runs after calibration and before the first measured warmup. Wired cuStreamSynchronize(stream) as the hook in the four binaries that enqueue onto the persistent stream: bench_launch.cpp, bench_memory.cpp, bench_event.cpp, bench_stream.cpp. So calibration no longer leaves a backlog for the measurement phase to start from.
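For reference, a sketch of how that wiring could look in one of those binaries; BenchmarkSuite and set_post_calibrate() are the names from this thread, while the context/stream setup shown here is a simplified placeholder with error checking omitted:

```cpp
#include <cuda.h>
// BenchmarkSuite / Options are assumed to come from the PR's bench_support.hpp.

int main() {
  cuInit(0);
  CUdevice dev;
  cuDeviceGet(&dev, 0);
  CUcontext ctx;
  cuCtxCreate(&ctx, 0, dev);

  // Persistent stream that the enqueue-style benchmark bodies target.
  CUstream stream = nullptr;
  cuStreamCreate(&stream, CU_STREAM_DEFAULT);

  Options options;  // populated from CLI flags in the real binaries
  BenchmarkSuite suite(options);

  // Drain whatever calibration enqueued so the first measured
  // warmup/value starts from an idle stream, not a backlogged one.
  suite.set_post_calibrate([stream] { cuStreamSynchronize(stream); });

  // ... suite.run(...) calls for the individual benchmarks ...
}
```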
Added those changes from the review. I ran it again on my dev server and didn't see a big change in the results, but I agree that they should make things more stable.

I just deleted the last comment. My agent posted this even though I didn't really ask it to. I'm still reviewing.
This suggested change tightens the bookkeeping in commit 19444493405249c49bbf0381db96b694c1dba549
Author: Ralf W. Grosse-Kunstleve <rgrossekunst@nvidia.com>
Date: Mon Apr 27 12:22:17 2026 -0700
best_capped, best_elapsed tracking
diff --git a/benchmarks/cuda_bindings/benchmarks/cpp/bench_support.hpp b/benchmarks/cuda_bindings/benchmarks/cpp/bench_support.hpp
index 131f69de54..3c76b93caf 100644
--- a/benchmarks/cuda_bindings/benchmarks/cpp/bench_support.hpp
+++ b/benchmarks/cuda_bindings/benchmarks/cpp/bench_support.hpp
@@ -136,15 +136,16 @@ std::uint64_t calibrate_loops(
// Allow callers (e.g. the explicit-loop overload) to request a minimum
// starting loop count via options.loops.
const std::uint64_t start_loops = std::max<std::uint64_t>(1, options.loops);
- std::uint64_t best = start_loops;
const std::uint64_t max_loops = std::max<std::uint64_t>(start_loops, options.max_loops);
const std::uint64_t rounds = std::max<std::uint64_t>(1, options.calibrate_rounds);
- bool capped = false;
- double last_elapsed = 0.0;
+ std::uint64_t best_loops = 0;
+ bool best_capped = false;
+ double best_elapsed = 0.0;
for (std::uint64_t round = 0; round < rounds; ++round) {
std::uint64_t loops = start_loops;
+ bool round_capped = false;
double elapsed = 0.0;
while (true) {
@@ -159,7 +160,7 @@ std::uint64_t calibrate_loops(
break;
}
if (loops >= max_loops) {
- capped = true;
+ round_capped = true;
break;
}
if (loops > max_loops / 2) {
@@ -169,15 +170,16 @@ std::uint64_t calibrate_loops(
}
}
- if (loops > best) {
- best = loops;
+ if (loops >= best_loops) {
+ best_loops = loops;
+ best_capped = round_capped;
+ best_elapsed = elapsed;
}
- last_elapsed = elapsed;
}
- if (capped_out) *capped_out = capped;
- if (last_elapsed_out) *last_elapsed_out = last_elapsed;
- return best;
+ if (capped_out) *capped_out = best_capped;
+ if (last_elapsed_out) *last_elapsed_out = best_elapsed;
+ return best_loops;
}
// Run a benchmark function. The function signature is: void fn() — one call = one operation.
This suggested change moves the existing post-calibration hook into commit e531d9a3ca1c2817ff80edce2f4029a23b9665ce
Author: Ralf W. Grosse-Kunstleve <rgrossekunst@nvidia.com>
Date: Mon Apr 27 12:29:36 2026 -0700
Integrate post-calibration hook into calibrate_loops to run between calibration probes.
diff --git a/benchmarks/cuda_bindings/benchmarks/cpp/bench_support.hpp b/benchmarks/cuda_bindings/benchmarks/cpp/bench_support.hpp
index 3c76b93caf..d47a63797c 100644
--- a/benchmarks/cuda_bindings/benchmarks/cpp/bench_support.hpp
+++ b/benchmarks/cuda_bindings/benchmarks/cpp/bench_support.hpp
@@ -124,6 +124,7 @@ template <typename Fn>
std::uint64_t calibrate_loops(
const Options& options,
Fn&& fn,
+ const std::function<void()>& post_calibrate = {},
bool* capped_out = nullptr,
double* last_elapsed_out = nullptr
) {
@@ -156,6 +157,9 @@ std::uint64_t calibrate_loops(
const auto t1 = std::chrono::steady_clock::now();
elapsed = std::chrono::duration<double>(t1 - t0).count();
+ if (post_calibrate) {
+ post_calibrate();
+ }
if (elapsed >= options.min_time_sec) {
break;
}
@@ -324,18 +328,18 @@ class BenchmarkSuite {
public:
explicit BenchmarkSuite(Options options) : options_(std::move(options)) {}
- // Post-calibration hook. If set, invoked after calibration and before the
- // first measured warmup/value, for every benchmark in this suite. Intended
- // for async benchmarks that need to drain state left behind by calibration
- // (e.g. cuStreamSynchronize on a persistent stream). Can be overridden
- // per-call via the `post_calibrate` parameter on `run()`.
+ // Post-calibration hook. If set, invoked between calibration probes so
+ // async benchmarks can drain state left behind by each probe before the
+ // next one runs. The final probe leaves the benchmark in a drained state
+ // before the first measured warmup/value. Can be overridden per-call via
+ // the `post_calibrate` parameter on `run()`.
void set_post_calibrate(std::function<void()> hook) {
post_calibrate_ = std::move(hook);
}
// Run a benchmark and record it. The name is used as the benchmark ID.
// If --min-time is set, loop count is auto-calibrated. `post_calibrate`,
- // if provided, runs after calibration and before measurement.
+ // if provided, runs between calibration probes to reset async state.
template <typename Fn>
void run(
const std::string& name,
@@ -345,9 +349,8 @@ public:
std::uint64_t loops = options_.loops;
Options custom = options_;
if (options_.min_time_sec > 0.0) {
- loops = calibrate_and_warn(name, options_, fn);
+ loops = calibrate_and_warn(name, options_, fn, select_post_calibrate(post_calibrate));
custom.loops = loops;
- invoke_post_calibrate(post_calibrate);
}
auto results = run_benchmark(custom, std::forward<Fn>(fn));
print_summary(name, results);
@@ -370,9 +373,8 @@ public:
if (options_.min_time_sec > 0.0) {
Options calib_opts = options_;
calib_opts.loops = loops_override; // floor
- loops = calibrate_and_warn(name, calib_opts, fn);
+ loops = calibrate_and_warn(name, calib_opts, fn, select_post_calibrate(post_calibrate));
custom.loops = loops;
- invoke_post_calibrate(post_calibrate);
}
auto results = run_benchmark(custom, std::forward<Fn>(fn));
print_summary(name, results);
@@ -391,24 +393,24 @@ private:
std::vector<BenchmarkEntry> entries_;
std::function<void()> post_calibrate_;
- void invoke_post_calibrate(const std::function<void()>& per_call) const {
+ std::function<void()> select_post_calibrate(const std::function<void()>& per_call) const {
if (per_call) {
- per_call();
- } else if (post_calibrate_) {
- post_calibrate_();
+ return per_call;
}
+ return post_calibrate_;
}
template <typename Fn>
std::uint64_t calibrate_and_warn(
const std::string& name,
const Options& calib_opts,
- Fn&& fn
+ Fn&& fn,
+ const std::function<void()>& post_calibrate
) const {
bool capped = false;
double last_elapsed = 0.0;
std::uint64_t loops = calibrate_loops(
- calib_opts, std::forward<Fn>(fn), &capped, &last_elapsed
+ calib_opts, std::forward<Fn>(fn), post_calibrate, &capped, &last_elapsed
);
if (capped) {
std::cerr << "WARNING: " << name
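One usage note on this version: select_post_calibrate() prefers a hook passed to run() and falls back to the suite-level default, so a benchmark with its own stream could override the drain per call. A rough sketch, where the run() argument order and the benchmark names are assumptions:

```cpp
// Suite-wide default: drain the shared persistent stream between probes.
suite.set_post_calibrate([stream] { cuStreamSynchronize(stream); });

// Per-call override wins via select_post_calibrate(); useful if one
// benchmark enqueues onto its own stream. (Names are placeholders.)
suite.run("memory.alloc_free_async", alloc_free_body,
          [other_stream] { cuStreamSynchronize(other_stream); });
```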
rwgk left a comment
This is my last suggestion in this review round.
py_mean = py_stats[0]
cpp_mean = cpp_stats[0]
overhead_ns = (py_mean - cpp_mean) * 1e9
overhead_str = f"+{overhead_ns:.0f} ns"
| overhead_str = f"+{overhead_ns:.0f} ns" | |
| overhead_str = f"{overhead_ns:+.0f} ns" |
Super easy, and this will also be correct if a benchmark ever comes out with Python faster than C++: the hard-coded "+" would render a negative overhead as "+-12 ns", while the ":+" format spec prints the sign itself and yields "-12 ns".
Description
Follow-up to #1580.
This one is to discuss and improve the C++ harness for more stable data collection.
I added a --min-time flag and now report the min/std in the compare output.
This is the latest run I got, where most of the deltas are <2%. stream.stream_create_destroy did go back from about 8% to 6% when I collected for longer, which makes me feel a bit better about those numbers, but the Python ones do seem more stable, except event.event_query. If we think it's worth doing process isolation for the C++ one, let me know and I will take a look.
Checklist