Dev/prvep experiment with docs #8
Draft
pellet wants to merge 90 commits into
Future-proof VEP naming now that more VEP types will be added.
…tions for peak latency analysis
…uploaded
- Add visual-PRVEP to datasets.py with None placeholder gdrive ID
- fetch_dataset raises a clear error if the gdrive ID is missing
- Rewrite example to use fetch_dataset instead of dotenv/DATA_DIR
- Add visual_vep to sphinx-gallery examples_dirs/gallery_dirs
- Exclude visual_vep examples from execution until the dataset is on Drive (TODO comment)
…il dataset available
- Move intro + examples include to top (matching other experiment pages)
- Move Running the Experiment up as a quick-start
- Add Participant Preparation section on the glasses/contacts requirement
- Detail sections (stimulus, VR, electrodes, timing) follow as reference
Add parabolic interpolation to get_peak() for ~0.5ms latency resolution at 250 Hz sampling rate. Increase default blocks from 4 to 8 (400 reversals per eye). Add Latency Resolution section to PR-VEP docs.
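Three-point parabolic interpolation refines the discrete argmax by fitting a parabola through the sampled peak and its two neighbours; the vertex gives a fractional-sample offset. A standalone sketch (the function name is illustrative, not the actual get_peak() implementation):

```python
import numpy as np

def parabolic_peak(y, fs):
    """Estimate peak latency (in seconds) with sub-sample precision.

    Fits a parabola through the discrete maximum and its two
    neighbours; the vertex gives a fractional-sample offset.
    """
    i = int(np.argmax(y))
    if i == 0 or i == len(y) - 1:
        return i / fs  # edge peak: no neighbours to fit
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    # Vertex of the parabola through the three points, in samples
    # relative to i; exact when the local waveform is quadratic.
    delta = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    return (i + delta) / fs
```

At 250 Hz, samples are 4 ms apart; the fractional vertex offset is what recovers latencies between sample points, hence the sub-millisecond resolution quoted above.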
Add 02r__pattern_reversal_longitudinal.py example that loads multiple sessions for a subject, extracts per-eye P100 latency with parabolic interpolation, and plots trends over time. Add Longitudinal Tracking section to PR-VEP docs with baseline guidance.
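The trend part of such an example reduces to a linear fit over per-session latencies. A minimal sketch with illustrative names (the real 02r example loads sessions and plots the trend; this only shows the fit):

```python
import numpy as np

def latency_trend(days, latencies_ms):
    """Fit a linear trend to per-session P100 latencies.

    days: session times in days since baseline
    latencies_ms: P100 latency per session, in milliseconds
    Returns (slope in ms/year, intercept in ms).
    """
    slope_per_day, intercept = np.polyfit(days, latencies_ms, 1)
    return slope_per_day * 365.25, intercept
```

Against a ~100 ms P100 baseline, even a drift of a couple of ms/year is the kind of signal this per-eye longitudinal tracking is meant to surface.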
- Updated 01r__pattern_reversal_viz to automatically fetch the example dataset from Google Drive via gdown if missing (enables CI/CD doc builds).
- Modified the PR-VEP block schedule to group trials by eye, allowing the use of a physical patch (ISCEV standard) without interrupting the VR session.
- Upgraded VR instruction screens to render stereoscopically with color-coded backgrounds (black for the patched eye, grey for the open eye).
- Fixed the diagnostics.py signal check to correctly identify and warn about shared-reference (M1/A2) failures when all channels inflate simultaneously.
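The shared-reference failure mode described above (a bad reference electrode injects the same large artifact into every channel at once) can be approximated with an all-channels-simultaneously amplitude test. A hedged sketch; the threshold, shapes, and function name are assumptions, not the actual diagnostics.py code:

```python
import numpy as np

def shared_reference_suspect(data, uv_threshold=100.0):
    """Flag a likely shared-reference (M1/A2) failure.

    A failing shared reference inflates every channel at the same
    time, so we look for samples where *all* channels exceed the
    amplitude threshold simultaneously; a single noisy channel
    does not trigger the warning.

    data: array of shape (n_channels, n_samples), in microvolts
    """
    over = np.abs(data) > uv_threshold
    return bool(over.all(axis=0).any())
```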
- Removed the complicated state-machine logic used to decode v1 markers from block-start codes.
- The pipeline now expects v2 markers (where both eye and size are fully encoded into integers 1-4) by default.
- Removed bitwise math comments to clarify the condition-to-integer mapping.
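With v2 markers, decoding becomes a plain lookup rather than a state machine. For illustration, a v2-style mapping might look like the following; the eye/size ordering here is an assumption and the pipeline's actual condition-to-integer mapping may differ:

```python
# Hypothetical v2 condition-to-integer mapping: each code 1-4
# fully identifies both the stimulated eye and the check size.
V2_MARKERS = {
    1: ("left", "large"),
    2: ("left", "small"),
    3: ("right", "large"),
    4: ("right", "small"),
}

def decode_marker(code):
    """Return (eye, check_size) for a v2 marker code in 1-4."""
    try:
        return V2_MARKERS[code]
    except KeyError:
        raise ValueError(f"not a v2 marker: {code}") from None
```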
- Split Oz evoked plots into two distinct cells (Large vs Small checks) for better negative space and intuitive side-by-side L/R eye comparison.
- Switched the default REF_SCHEME to 'Mastoid M2' and added a comment explaining why M1 was too noisy for linked mastoids in session 016, allowing BM12 (Halliday inversion) to compute.
- Kept the overall pipeline referenced to Fz (ISCEV) for KISS compliance.
- Modified BM12 to temporarily re-reference to M2 locally, enabling the Halliday Fz polarity-inversion check even when M1 is too noisy for a linked-mastoid reference.
- Rewrote get_peak to find the absolute local maximum within the search window, removing MNE's strict requirement for positive values.
- Supports waveforms with large downward baseline shifts (e.g. from Quest 2 pixel response time) without needing distortive high-pass filtering.
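A windowed peak picker of that kind returns the largest value inside the search window even when the whole waveform sits below zero. A minimal sketch; names and window handling are illustrative, not the actual get_peak code:

```python
import numpy as np

def get_peak_windowed(times, y, tmin, tmax):
    """Find the maximum inside a latency search window.

    Unlike a strictly-positive peak picker, this returns the
    largest value in [tmin, tmax] even if the waveform never
    crosses zero (e.g. after a downward baseline shift).
    """
    idx = np.flatnonzero((times >= tmin) & (times <= tmax))
    i = idx[np.argmax(y[idx])]
    return times[i], y[i]
```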
update notes:
* Added support for VR/PsychXR up to Python 3.10.
* Fixed recording with Thinkpulse electrodes.

build / CI:
* CI matrix expanded: default Python 3.10 (was 3.8); experimental jobs on py3.11 (full env), py3.12 / py3.13 (streaming env).
* Dropped support for Python 3.8 and 3.9 across all conda envs and requirements.
* Matrix now parameterized by env_file / env_name so streaming-only builds can run without psychopy/psychxr.
* Typecheck job moved from py3.9 to py3.10.

dependencies:
* psychopy bumped to the fork pellet/psychopy@v2026.2.0-rift-fix, which fixes a Rift stereo projection-matrix crash under strict-ndim psychxr.
* psychopy-sounddevice switched from a local editable install to the official upstream Git tip (handles macOS arm64 sound).
* psychxr: prebuilt Windows wheels from the pellet/psychxr fork for cp310–cp313 (PyPI only ships 0.2.4 for ≤ py3.9). Adds experimental Quest-link VR support on py3.10–3.13.
* pyobjc bumped 7.3 → ≥8.0 for newer psychopy.
* pyxid2 added to streaming requirements.
* pywinhook removed: obsolete once py3.9 was dropped (modern pynput on py3.10+ doesn't need it).
* pyo removed from the Analysis and Streaming sections (audio was never needed in those envs).
* numpy py3.8 pin removed (numpy>=1.26 across the board).
* setuptools<81 pinned in docsbuild reqs: brainflow imports pkg_resources at runtime, which setuptools 81 deprecated and 82 removed.
* Test deps (pytest, pytest-cov, nbval) moved into streaming requirements so streaming-only CI can run them.

streaming env decoupling:
* New eegnb/devices/vr.py with class VR, encapsulating psychxr/Rift integration (clock sync, per-trial telemetry buffering, telemetry CSV save, optical-axis offset computation) behind a generic VR-device name. Imported lazily so the streaming-only conda env (no psychxr) can still use the package.
* Lazy/optional imports added in eegnb/cli/utils.py, eegnb/cli/introprompt.py, and eegnb/experiments/__init__.py so the package can be imported under a streaming-only env (no VR / no sound libs).
* New tests/test_acquisition.py (acquisition smoke test): what the streaming-env CI job actually runs.
* New conftest.py for shared pytest setup.
* eegnb/devices/eeg.py: `import pyxid2` wrapped in try/except so users without a Cedrus FTDI driver don't crash at import time.

experiment runtime:
* Per-eye stimulus alignment is now queried from the HMD runtime instead of being hard-coded to ±0.2. Quest 2/3 lenses are angled inward and offset within their own image, so a single fixed value left the checkerboard slightly off-centre and looking tilted outward. The new compute_optical_axis_offsets() asks the runtime for the actual per-lens position so the stimulus sits where each eye is naturally looking.
* The trial loop now runs at higher OS scheduling priority (psychopy core.rush) with Python's garbage collector paused, so the GC can't pause stimulus rendering mid-trial.
* End-of-run timing summary added (frame timing stats) for spotting dropped or delayed frames after the fact.
* HMD-clock to system-clock sync moved into a single VR.sync_vr_clock() call at the start of each run, so per-trial telemetry timestamps line up with EEG markers.
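The optional-import pattern mentioned above (used for pyxid2, and in spirit for the lazy VR import) can be sketched as follows. This is a generic illustration, not the verbatim eegnb/devices/eeg.py code, and get_cedrus_devices() is a hypothetical helper:

```python
# Optional dependency: pyxid2 needs a Cedrus FTDI driver, which can
# make even `import pyxid2` fail. Catching Exception (not just
# ImportError) keeps module import safe; the feature fails loudly
# only when actually used.
try:
    import pyxid2
    HAVE_PYXID2 = True
except Exception:
    pyxid2 = None
    HAVE_PYXID2 = False

def get_cedrus_devices():
    """Return attached Cedrus XID devices, or raise a clear error."""
    if not HAVE_PYXID2:
        raise RuntimeError(
            "pyxid2 (and its FTDI driver) is required for Cedrus "
            "response boxes; install it to use this feature."
        )
    return pyxid2.get_xid_devices()
```

Deferring the failure from import time to call time is what lets the streaming-only env import the package without VR or sound libraries installed.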
- Integrated VR compositor and display comments from incoming
- Preserved local telemetry, frame tracking, and display_check features
- Added a MissingExperiment fallback in __init__.py
- Updated test_acquisition with explicit timestamps
- Added eegnb/utils/display.py with standard refresh-rate utilities
- Deleted examples/visual_vep/01r__pattern_reversal_viz.py, as visualization is moving to notebooks
- Updated core experiment classes and vep_utils.py to refine the pattern-reversal VEP