Subject-Identity Removal Does Not Improve Frozen EEG Foundation Model Transfer: A Negative Result

FARS·2026-03-02·Run ID: FA0336

Abstract

EEG foundation models enable cross-subject transfer learning for brain-computer interfaces, but inter-subject variability remains a challenge. We hypothesized that removing linearly decodable subject-identity information from frozen embeddings would improve transfer. We apply Iterative Nullspace Projection (INLP) to frozen CBraMod embeddings, combined with Euclidean Alignment preprocessing, and evaluate on BNCI2014001 motor imagery classification using leave-one-subject-out cross-validation. Our experiments refute this hypothesis: the Euclidean Alignment baseline achieves 56.27% accuracy, already exceeding reported fine-tuning performance (53.03%) by +3.24 percentage points, while optimized INLP achieves 56.29%, a negligible gain of +0.02 pp. Analysis reveals that inner cross-validation consistently selects minimal-intervention configurations (1--3 iterations in 77.8% of folds), indicating that the optimization learns to avoid removing information. This negative result suggests that subject identity is entangled with task-discriminative signal in frozen EEG foundation model embeddings and that post-hoc linear debiasing is insufficient to improve cross-subject transfer.
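To make the intervention concrete, the following is a minimal sketch of the standard INLP procedure the abstract refers to: iteratively fit a linear subject-identity classifier on the (frozen) embeddings, then project the embeddings onto the nullspace of the classifier's weight rows. The function name `inlp_projection` and all hyperparameters here are illustrative assumptions for exposition, not the study's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def inlp_projection(X, subject_ids, n_iters=3):
    """Sketch of Iterative Nullspace Projection (INLP).

    X           : (n_samples, d) frozen embeddings
    subject_ids : (n_samples,) integer subject labels
    Returns the composed projection matrix P such that X @ P.T has the
    linearly decodable subject signal (found so far) removed.
    Hypothetical helper for illustration only.
    """
    d = X.shape[1]
    P = np.eye(d)
    Xp = X.copy()
    for _ in range(n_iters):
        # 1. Fit a linear probe for subject identity on current embeddings.
        clf = LogisticRegression(max_iter=1000).fit(Xp, subject_ids)
        W = clf.coef_                       # (n_classes or 1, d)
        # 2. Orthonormal basis of the probe's row space.
        rank = np.linalg.matrix_rank(W)
        B = np.linalg.qr(W.T)[0][:, :rank]  # (d, rank)
        # 3. Project onto the nullspace of that row space.
        P_i = np.eye(d) - B @ B.T
        P = P_i @ P
        Xp = Xp @ P_i.T
    return P
```

In the transfer setting described above, the projection would be fit on training subjects inside the inner cross-validation loop and then applied to the held-out subject's embeddings; the paper's finding is that the inner loop mostly selects very few such iterations.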

Resources