FIX: Stable random sampling in DatasetConfiguration#1697
adrian-gavrila wants to merge 1 commit into microsoft:main
Conversation
Memoize `get_seed_groups()` and `get_all_seeds()` so the random subset selected when `max_dataset_size` is set is stable for the lifetime of the configuration. Reassigning `max_dataset_size` invalidates the cache. Without this, baseline and strategy atomic attacks each call `get_all_seed_attack_groups()` independently and receive different random subsets of objectives, making baseline-vs-strategy comparison meaningless.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
```python
self._scenario_strategies = scenario_strategies
self._resolved_groups_cache: Optional[dict[str, list[SeedGroup]]] = None
self._resolved_seeds_cache: Optional[list[Seed]] = None
self._max_dataset_size: Optional[int] = None
```
Could we simplify this?
Instead of a cache, what if we added a baseline scenario technique that is just `PromptSending`? We'd get rid of this in `initialize`:

```python
if self._include_baseline:
    baseline_attack = self._get_baseline()
    self._atomic_attacks.insert(0, baseline_attack)
```

and

```python
def _get_baseline(self) -> AtomicAttack:
```

and instead add a tag in `_get_attack_technique_factories` that adds a `PromptSending` technique as the baseline?
`_build_display_group` would also likely need to be updated to support the baseline?
There might be some hiccups, but including it as an additional technique feels like a more natural place than trying to cache the datasets.
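A rough sketch of this suggestion, not PyRIT's actual API: `AtomicAttack` is a stand-in dataclass, and `get_attack_technique_factories` with its `strategy_names` parameter is a hypothetical shape for the factory list. The point is that the baseline becomes one more entry in the same factory list the strategies use.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class AtomicAttack:  # stand-in for the real PyRIT class
    name: str
    tags: list[str] = field(default_factory=list)


def get_attack_technique_factories(
    strategy_names: list[str], include_baseline: bool
) -> list[Callable[[], AtomicAttack]]:
    factories: list[Callable[[], AtomicAttack]] = []
    if include_baseline:
        # The baseline is just a PromptSending technique tagged "baseline";
        # _build_display_group would key off the tag when rendering results.
        factories.append(lambda: AtomicAttack(name="prompt_sending", tags=["baseline"]))
    for strategy in strategy_names:
        factories.append(lambda s=strategy: AtomicAttack(name=s))
    return factories
```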
I like this design change, and I think it is the right direction. My only concern is with doing this instead of the caching / memoization. Many of our scenarios never call `_get_attack_technique_factories`, which means migrating those to the factory pattern. I can certainly add those changes here, but going forward `EncodingDatasetConfiguration.get_all_seed_attack_groups()` would still make its own call to `random.sample`, which bypasses the factory loop and reintroduces the issue (see the sketch below). I think making both changes here makes sense; I just don't want to increase scope and leave the underlying cause of the bug latent.
I could certainly be misreading the underlying architecture, so feel free to push back on my framing of the issue if the baseline change alone would be sufficient to resolve this bug.
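A hypothetical illustration of that concern, with simplified names rather than the real class: an override that calls `random.sample` itself sidesteps any factory-level fix, so every caller can still see a different subset.

```python
import random


class EncodingDatasetConfigurationSketch:  # not the real class
    def __init__(self, seeds: list, max_dataset_size: int):
        self._seeds = seeds
        self._max_dataset_size = max_dataset_size

    def get_all_seed_attack_groups(self) -> list:
        # Fresh sample on every call: even with a baseline technique factory,
        # two callers of this override still see two different populations
        # unless this method (or the base resolution it should delegate to)
        # is memoized.
        return random.sample(self._seeds, self._max_dataset_size)
```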
Description
When a `Scenario` runs with `include_default_baseline=True` and a `DatasetConfiguration` whose `max_dataset_size` is set, the baseline atomic attack ended up evaluating a different random subset of objectives than the strategy-based atomic attacks. Baseline-vs-strategy success-rate comparisons measured two different populations and were meaningless.
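A minimal illustration of the failure mode, with toy data standing in for the real call sites:

```python
import random

objectives = [f"objective_{i}" for i in range(10)]
max_dataset_size = 4

# Scenario._get_baseline_data and Scenario._get_atomic_attacks_async each
# triggered an independent resolution like this:
baseline_subset = random.sample(objectives, max_dataset_size)
strategy_subset = random.sample(objectives, max_dataset_size)

# Almost always prints False: the two attacks scored different populations.
print(set(baseline_subset) == set(strategy_subset))
```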
Root cause: `random.sample` ran fresh on every call to `DatasetConfiguration.get_seed_groups()` (Path 1, used by most scenarios) and `get_all_seeds()` (Path 2, used by `EncodingDatasetConfiguration`). `Scenario._get_atomic_attacks_async` and `Scenario._get_baseline_data` each called these methods independently and got different samples.

Fix: memoize both methods. The resolved sample is cached for the lifetime of the configuration object, and reassigning `max_dataset_size` invalidates the cache. Returns are defensive container copies so callers can mutate without poisoning the cache. `max_dataset_size` is now a property whose setter re-validates the value (mirroring `__init__`). Subclasses inherit the fix automatically when they use the base resolution methods. A short subclassing note in the class docstring flags the two methods that any future override must memoize itself.
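A minimal sketch of the memoization pattern described above; the real `DatasetConfiguration` carries more state, and plain lists stand in for the PyRIT seed-group types:

```python
import random
from typing import Optional


class DatasetConfiguration:
    """Sketch: caches the random subset so every caller sees the same sample."""

    def __init__(self, seed_groups: list, max_dataset_size: Optional[int] = None):
        self._seed_groups = seed_groups
        self._max_dataset_size = self._validate_size(max_dataset_size)
        self._resolved_groups_cache: Optional[list] = None

    @staticmethod
    def _validate_size(value: Optional[int]) -> Optional[int]:
        if value is not None and value <= 0:
            raise ValueError("max_dataset_size must be a positive integer")
        return value

    @property
    def max_dataset_size(self) -> Optional[int]:
        return self._max_dataset_size

    @max_dataset_size.setter
    def max_dataset_size(self, value: Optional[int]) -> None:
        # Setter re-validates (mirroring __init__) and invalidates the cache.
        self._max_dataset_size = self._validate_size(value)
        self._resolved_groups_cache = None

    def get_seed_groups(self) -> list:
        # Sample once and memoize: baseline and strategy attacks resolve
        # the same subset for the lifetime of this configuration.
        if self._resolved_groups_cache is None:
            groups = self._seed_groups
            if self._max_dataset_size is not None and len(groups) > self._max_dataset_size:
                groups = random.sample(groups, self._max_dataset_size)
            self._resolved_groups_cache = list(groups)
        # Defensive copy so callers can mutate without poisoning the cache.
        return list(self._resolved_groups_cache)
```

Returning a copy keeps the cache authoritative even if a caller sorts or filters the result in place.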
Tests and Documentation
- `TestDatasetConfigurationMemoization` and `TestDatasetConfigurationMaxDatasetSizeSetter` classes in `test_dataset_configuration.py` covering both call paths, multi-dataset stability, cache invalidation, setter validation, and defensive-copy semantics. All randomness-sensitive tests patch `random.sample` for determinism.
- A test in `test_encoding.py` (the override routes through `get_all_seeds`, which is why both paths needed memoization).
- A test in `test_scenario.py` asserting `set(baseline.objectives) == set(strategy.objectives)` after `initialize_async` with `max_dataset_size` set.

Verified by stashing the production change and watching the new tests fail (7 failures), then restoring and watching them pass.
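A hedged sketch of the test pattern described above, reusing the simplified `DatasetConfiguration` from the earlier sketch; the names are illustrative, not the exact tests in the PR:

```python
import random
from unittest.mock import patch


def test_get_seed_groups_is_stable_across_calls():
    config = DatasetConfiguration(seed_groups=list(range(10)), max_dataset_size=3)
    # Patch random.sample (delegating to the real function) to count calls.
    with patch("random.sample", side_effect=random.sample) as mock_sample:
        first = config.get_seed_groups()
        second = config.get_seed_groups()
    assert first == second               # same subset on every call
    assert mock_sample.call_count == 1   # sampled exactly once, then cached


def test_max_dataset_size_setter_invalidates_cache():
    config = DatasetConfiguration(seed_groups=list(range(10)), max_dataset_size=3)
    assert len(config.get_seed_groups()) == 3
    config.max_dataset_size = 5          # reassignment clears the cache
    assert len(config.get_seed_groups()) == 5
```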