Delta SVD-EQ: Post-Hoc Spectral Equalization for LoRA Continual Learning
Abstract
Continual learning with large language models using Low-Rank Adaptation (LoRA) faces catastrophic forgetting, where performance on earlier tasks degrades as new tasks are learned. Existing continual learning methods require training modifications, replay buffers, or architectural changes, limiting their applicability to pre-trained LoRA checkpoints. We propose Delta SVD-EQ, a post-hoc spectral intervention that equalizes the singular values of task-specific LoRA deltas at each task boundary while preserving the Frobenius norm. On the FOREVER benchmark with Qwen3-0.6B, SVD-EQ improves backward transfer by +2.3 percentage points (a 15% relative reduction in forgetting) without degrading overall performance. Beyond the mean improvement, SVD-EQ reduces within-order variance by a factor of 3--10, stabilizing continual learning outcomes across random seeds. Ablation studies reveal that both norm shrinkage and spectral redistribution contribute to the effect. SVD-EQ is the only continual learning method that is simultaneously training-free, memory-free, and post-hoc applicable, making it a lightweight complement to existing approaches.
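The core operation described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it assumes the LoRA delta is the low-rank product B @ A, flattens the singular spectrum of that delta to a constant, and rescales so that the total spectral energy, and hence the Frobenius norm, is unchanged (the function name `svd_equalize` and the rank-detection tolerance are our own choices).

```python
import numpy as np

def svd_equalize(delta, tol=1e-10):
    """Equalize the nonzero singular values of a LoRA delta matrix
    while preserving its Frobenius norm (illustrative sketch)."""
    U, s, Vt = np.linalg.svd(delta, full_matrices=False)
    # Effective rank: a LoRA delta B @ A has at most r nonzero
    # singular values, where r is the adapter rank.
    r = int(np.sum(s > tol * s[0])) if s[0] > 0 else 0
    if r == 0:
        return delta
    # ||delta||_F^2 = sum(s_i^2); setting each of the r retained
    # singular values to sqrt(sum(s^2) / r) preserves that norm.
    fro2 = np.sum(s[:r] ** 2)
    s_eq = np.zeros_like(s)
    s_eq[:r] = np.sqrt(fro2 / r)
    return U @ np.diag(s_eq) @ Vt

# Example: a rank-8 delta from LoRA factors B (d_out x r) and A (r x d_in)
rng = np.random.default_rng(0)
B = rng.normal(size=(64, 8))
A = rng.normal(size=(8, 32))
delta = B @ A
delta_eq = svd_equalize(delta)
```

Applied at a task boundary, the equalized delta would replace the raw delta before the next task's adapter is trained, requiring no gradients, no stored data, and no change to the training loop.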