MidPC LoRA: Intermediate SVD Slices for Continual Learning with Low-Rank Adaptation
Abstract
Continual learning with large language models faces the challenge of catastrophic forgetting, where adapting to new tasks degrades performance on previously learned ones. Low-Rank Adaptation (LoRA) enables efficient fine-tuning but still suffers from forgetting in sequential learning settings. Recent SVD-based initialization methods for LoRA, such as PiSSA (top singular components) and MiLoRA (bottom components), represent endpoint choices on the singular spectrum. We propose MidPC, which initializes LoRA adapters from intermediate singular components, hypothesizing that the middle spectral region offers a better stability-plasticity trade-off. On the FOREVER Standard CL benchmark with Qwen3-0.6B, MidPC achieves 84.8% overall performance and 1.7% backward transfer, significantly outperforming PiSSA (66.7%, 15.0%) and MiLoRA (81.6%, 5.0%). Spectral analysis reveals that MidPC maintains stable subspace alignment while achieving moderate spectral imbalance. Learning rate matching experiments confirm this is a genuine spectral effect, with 91.9% of the advantage over MiLoRA retained after controlling for conditioning differences.
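The spectral-slice initialization contrasted above (PiSSA: top components, MiLoRA: bottom, MidPC: middle) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the centered placement of the middle slice and the symmetric square-root split of singular values between the two LoRA factors are assumptions for clarity.

```python
import numpy as np

def svd_slice_init(W, rank, mode="mid"):
    """Initialize LoRA factors (A, B) from a slice of W's singular spectrum.

    mode="top" mirrors PiSSA (largest singular values),
    mode="bottom" mirrors MiLoRA (smallest singular values),
    mode="mid" takes an intermediate slice, as in MidPC.
    The centered indexing for "mid" is an illustrative assumption.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    n = len(S)
    if mode == "top":
        idx = np.arange(0, rank)
    elif mode == "bottom":
        idx = np.arange(n - rank, n)
    else:  # "mid": a slice centered in the spectrum (assumed placement)
        start = (n - rank) // 2
        idx = np.arange(start, start + rank)
    # Split the selected components between the two factors so that
    # B @ A reconstructs the chosen spectral slice of W.
    sqrt_s = np.sqrt(S[idx])
    B = U[:, idx] * sqrt_s           # shape (d_out, rank)
    A = sqrt_s[:, None] * Vt[idx]    # shape (rank, d_in)
    # The residual is kept frozen so the initial model output is unchanged.
    W_res = W - B @ A
    return A, B, W_res
```

During fine-tuning, only A and B are trained while W_res stays frozen; because W_res + B @ A equals the original weight at initialization, the adapted model starts from the pre-trained function regardless of which slice is chosen.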