Slay: discard duplicates with censor_period_ms in computing RP violations
#4530
Draft: alejoe91 wants to merge 29 commits into SpikeInterface:main from alejoe91:slay-discard-duplicates
+155 −21
Commits (29, all changes shown)
- 3dc5729 Test IBL extractors tests failing for PI update (alejoe91)
- d1a0532 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 33c6769 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 2c94bac Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- a412bd8 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 504e19d Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- cd09c19 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- a40d073 Merge branch 'main' of github.com:alejoe91/spikeinterface (alejoe91)
- a1da327 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- ef19a8e Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- a098b51 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 61c317a Fix OpenEphys tests (alejoe91)
- c9ff247 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 3520138 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- f61329d Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- d64ae6a Merge branch 'main' of github.com:alejoe91/spikeinterface (alejoe91)
- aef197d Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- e82331b Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 710cb6f Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- c2f8db1 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 161d25b Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 1d09ec6 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- afb7d33 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- fa556ba Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 8e68f16 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 1c80910 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 5eff246 Merge branch 'main' of github.com:SpikeInterface/spikeinterface (alejoe91)
- 2c7bceb wip: add censor_period_ms to slay (alejoe91)
- e0ea859 subtract coincident spikes and use merged ACG for slay RP violation (alejoe91)
```diff
@@ -88,7 +88,7 @@
         "censored_period_ms": 0.3,
     },
     "quality_score": {"firing_contamination_balance": 1.5, "refractory_period_ms": 1.0, "censored_period_ms": 0.3},
-    "slay_score": {"k1": 0.25, "k2": 1, "slay_threshold": 0.5},
+    "slay_score": {"k1": 0.25, "k2": 1, "slay_threshold": 0.5, "censored_period_ms": 0.2},
 }
```
```diff
@@ -309,7 +309,6 @@ def compute_merge_unit_groups(
                 win_sizes,
                 pair_mask=pair_mask,
             )
-            # print(correlogram_diff)
             pair_mask = pair_mask & (correlogram_diff < params["corr_diff_thresh"])
             outs["correlograms"] = correlograms
             outs["bins"] = bins
```
```diff
@@ -373,12 +372,20 @@ def compute_merge_unit_groups(
             outs["pairs_decreased_score"] = pairs_decreased_score

         elif step == "slay_score":

-            M_ij = compute_slay_matrix(
-                sorting_analyzer, params["k1"], params["k2"], templates_diff=outs["templates_diff"], pair_mask=pair_mask
+            M_ij, sigma_ij, rho_ij, eta_ij = compute_slay_matrix(
+                sorting_analyzer,
+                params["k1"],
+                params["k2"],
+                params["censored_period_ms"],
+                templates_diff=outs["templates_diff"],
+                pair_mask=pair_mask,
             )

             pair_mask = pair_mask & (M_ij > params["slay_threshold"])
             outs["slay_M_ij"] = M_ij
+            outs["slay_sigma_ij"] = sigma_ij
+            outs["slay_rho_ij"] = rho_ij
+            outs["slay_eta_ij"] = eta_ij

     # FINAL STEP : create the final list from pair_mask boolean matrix
     ind1, ind2 = np.nonzero(pair_mask)
```
```diff
@@ -1552,6 +1559,7 @@ def compute_slay_matrix(
     sorting_analyzer: SortingAnalyzer,
     k1: float,
     k2: float,
+    censor_period_ms: float,
     templates_diff: np.ndarray | None,
     pair_mask: np.ndarray | None = None,
 ):
```
```diff
@@ -1569,6 +1577,9 @@
         Coefficient determining the importance of the cross-correlation significance
     k2 : float
         Coefficient determining the importance of the sliding rp violation
+    censor_period_ms : float
+        The censored period to exclude from the refractory period computation to discard
+        duplicated spikes.
     templates_diff : np.ndarray | None
         Pre-computed template similarity difference matrix. If None, it will be retrieved from the sorting_analyzer.
     pair_mask : None | np.ndarray, default: None
```
```diff
@@ -1592,14 +1603,35 @@ def compute_slay_matrix(
         sigma_ij = 1 - templates_diff
     else:
         sigma_ij = sorting_analyzer.get_extension("template_similarity").get_data()
-    rho_ij, eta_ij = compute_xcorr_and_rp(sorting_analyzer, pair_mask)
+    rho_ij, eta_ij = compute_xcorr_and_rp(sorting_analyzer, pair_mask, censor_period_ms)

     M_ij = sigma_ij + k1 * rho_ij - k2 * eta_ij

-    return M_ij
+    return M_ij, sigma_ij, rho_ij, eta_ij


-def compute_xcorr_and_rp(sorting_analyzer: SortingAnalyzer, pair_mask: np.ndarray):
+def _count_coincident_spikes(t1, t2, max_samples):
+    """
+    Count spikes in t1 that have a matching spike in t2 within max_samples,
+    split by lag direction.
+
+    Returns (n_nonneg, n_neg) where n_nonneg counts pairs where t2 >= t1
+    (non-negative lag, mapped to the right center CCG bin) and n_neg counts
+    pairs where t2 < t1 (negative lag, mapped to the left center CCG bin).
+    """
+    if len(t1) == 0 or len(t2) == 0:
+        return 0, 0
+    indices = np.searchsorted(t2, t1, side="left")
+    right_valid = indices < len(t2)
+    right_diffs = np.where(right_valid, t2[np.minimum(indices, len(t2) - 1)] - t1, max_samples + 1)
+    left_valid = indices > 0
+    left_diffs = np.where(left_valid, t1 - t2[np.maximum(indices - 1, 0)], max_samples + 1)
+    n_nonneg = int(np.sum(right_diffs <= max_samples))
+    n_neg = int(np.sum((left_diffs <= max_samples) & (left_diffs > 0)))
+    return n_nonneg, n_neg
+
+
+def compute_xcorr_and_rp(sorting_analyzer: SortingAnalyzer, pair_mask: np.ndarray, censor_period_ms: float):
     """
     Computes a cross-correlation significance measure and a sliding refractory period violation
     measure for all units in the `sorting_analyzer`.
```
```diff
@@ -1610,14 +1642,36 @@ def compute_xcorr_and_rp(sorting_analyzer: SortingAnalyzer, pair_mask: np.ndarra
         The sorting analyzer object containing the spike sorting data
     pair_mask : np.ndarray
         A bool matrix describing which pairs are possible merges based on previous steps
+    censor_period_ms : float
+        The censored period to exclude from the refractory period computation to discard
+        duplicated spikes.

     Returns
     -------
     rho_ij : np.ndarray
         The cross-correlation significance measure for each pair of units.
     eta_ij : np.ndarray
         The sliding refractory period violation measure for each pair of units.
     """
     correlograms_extension = sorting_analyzer.get_extension("correlograms")
-    ccgs, _ = correlograms_extension.get_data()
+    ccgs, bin_edges = correlograms_extension.get_data()

     # convert to seconds for SLAy functions
     bin_size_ms = correlograms_extension.params["bin_ms"]

+    # pre-fetch spike trains for duplicate counting (sub-bin resolution)
+    if censor_period_ms > 0:
+        sorting = sorting_analyzer.sorting
+        censor_period_samples = int(censor_period_ms / 1000 * sorting_analyzer.sampling_frequency)
+        n_segments = sorting_analyzer.get_num_segments()
+        spike_trains = [
+            [sorting.get_unit_spike_train(unit_id=uid, segment_index=seg) for seg in range(n_segments)]
+            for uid in sorting_analyzer.unit_ids
+        ]
+        # lag=0 spike pairs land in the bin starting at 0: xgram[num_half_bins]
+        center_bin = ccgs.shape[2] // 2

     rho_ij = np.zeros([len(sorting_analyzer.unit_ids), len(sorting_analyzer.unit_ids)])
     eta_ij = np.zeros([len(sorting_analyzer.unit_ids), len(sorting_analyzer.unit_ids)])
```
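The censor-period bookkeeping above converts the window from milliseconds to samples and locates the CCG center bin. A quick sketch of that arithmetic in isolation (the sampling rate and bin count are hypothetical stand-ins, not values read from an analyzer):

```python
# ms -> samples conversion for the censor window, assuming a 30 kHz recording
sampling_frequency = 30_000.0
censor_period_ms = 0.2
censor_period_samples = int(censor_period_ms / 1000 * sampling_frequency)
# 0.2 ms at 30 kHz corresponds to 6 samples

# with an even number of CCG bins, shape // 2 indexes the first bin with lag >= 0
num_bins = 100  # hypothetical CCG length
center_bin = num_bins // 2
```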
```diff
@@ -1630,10 +1684,39 @@ def compute_xcorr_and_rp(sorting_analyzer: SortingAnalyzer, pair_mask: np.ndarra

             xgram = ccgs[unit_index_1, unit_index_2, :]

+            # Merged ACG approximation: sum of individual ACGs and both CCG directions.
+            # _sliding_RP_viol_pair expects the ACG of the merged unit; the merged ACG
+            # has a large center bin when duplicates are present, which the LP filter
+            # attenuates so bin_rate_max reflects the flank rate — making RP violations
+            # detectable (unlike using the CCG alone where bin_rate_max is dominated by
+            # the duplicate peak and masks violations).
+            merged_acg = (
+                ccgs[unit_index_1, unit_index_1, :]
+                + ccgs[unit_index_2, unit_index_2, :]
+                + ccgs[unit_index_1, unit_index_2, :]
+                + ccgs[unit_index_2, unit_index_1, :]
+            )
```
Review comment on lines +1693 to +1698 (Member): I don't think this is right. In SLAy, This changes the input to be the auto-correlogram of the putative merge.
```diff
+            if censor_period_ms > 0:
+                # count number of spikes within the censor period from the two units
+                n_right, n_left = 0, 0
+                for seg in range(n_segments):
+                    r, l = _count_coincident_spikes(
+                        spike_trains[unit_index_1][seg], spike_trains[unit_index_2][seg], censor_period_samples
+                    )
+                    n_right += r
+                    n_left += l
+                # subtract number of duplicates from central bin(s) of the merged ACG:
+                # n_right pairs land in center_bin (lag ≥ 0), n_left in center_bin-1 (lag < 0);
+                # each direction is counted in both ccgs[i,j] and ccgs[j,i], hence the factor 2
+                merged_acg = merged_acg.copy()
+                merged_acg[center_bin] = max(0, merged_acg[center_bin] - 2 * n_right)
+                merged_acg[center_bin - 1] = max(0, merged_acg[center_bin - 1] - 2 * n_left)

             rho_ij[unit_index_1, unit_index_2] = _compute_xcorr_pair(
                 xgram, bin_size_s=bin_size_ms / 1000, min_xcorr_rate=0
             )
-            eta_ij[unit_index_1, unit_index_2] = _sliding_RP_viol_pair(xgram, bin_size_ms=bin_size_ms)
+            eta_ij[unit_index_1, unit_index_2] = _sliding_RP_viol_pair(merged_acg, bin_size_ms=bin_size_ms)

     return rho_ij, eta_ij
```
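The merged-ACG construction and center-bin subtraction in the hunk above can be illustrated on a toy CCG tensor. All counts below are made up, and `n_right`/`n_left` stand in for the output of `_count_coincident_spikes`:

```python
import numpy as np

# Toy CCG tensor for 2 units and 10 bins (hypothetical counts)
ccgs = np.zeros((2, 2, 10), dtype=int)
ccgs[0, 0] = [0, 1, 2, 3, 0, 0, 3, 2, 1, 0]   # ACG of unit 0 (empty center = no RP violations)
ccgs[1, 1] = [0, 1, 1, 2, 0, 0, 2, 1, 1, 0]   # ACG of unit 1
ccgs[0, 1] = [0, 0, 1, 2, 9, 9, 2, 1, 0, 0]   # CCG with a duplicate peak at the center
ccgs[1, 0] = ccgs[0, 1][::-1]                  # mirrored direction

i, j = 0, 1
# merged ACG approximation: both ACGs plus both CCG directions
merged_acg = ccgs[i, i] + ccgs[j, j] + ccgs[i, j] + ccgs[j, i]

center_bin = ccgs.shape[2] // 2   # bin covering lags >= 0
n_right, n_left = 4, 4            # hypothetical coincident-pair counts per lag direction

# each direction is counted in both CCG orientations, hence the factor 2
merged_acg = merged_acg.copy()
merged_acg[center_bin] = max(0, merged_acg[center_bin] - 2 * n_right)
merged_acg[center_bin - 1] = max(0, merged_acg[center_bin - 1] - 2 * n_left)
# the duplicate peak (18 per center bin) is reduced to 10; flank bins are untouched
```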
Review comment: I'm against changing the default behavior without consulting the SLAy team.