Cache Preemption Poisoning Attacks on LLM-Based Log Parsers
\begin{abstract} LLM-based log parsers with adaptive caching, such as LILAC, achieve state-of-the-art accuracy while minimizing API costs by storing parsed templates for reuse. However, we identify that this caching mechanism introduces a critical security vulnerability. We present \textit{cache preemption poisoning}, a novel attack in which an adversary injects crafted log lines to corrupt the parser's cache state. By front-loading over-generalized templates that intercept subsequent clean logs via prefix-tree matching, the attack causes persistent parsing degradation. Our experiments on the BGL benchmark demonstrate that injecting only 2\% poisoned lines causes catastrophic accuracy drops: 19.65 percentage points in template accuracy (FTA) and 67.17 percentage points in parsing accuracy (PA). Critically, this attack degrades LILAC below stateless baseline performance (56.03\% vs.\ 66.67\% FTA), completely negating the benefits of adaptive caching. We propose wildcard density screening as a partial defense, recovering 40.30\% of lost PA, and identify directions for more robust mitigations. \end{abstract}
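The wildcard density screening defense mentioned above can be illustrated with a minimal sketch. The idea is to reject over-generalized templates (those dominated by wildcard tokens such as \texttt{<*>}) before they are admitted to the cache, since such templates are the ones most likely to intercept unrelated clean logs. All function names and the threshold value below are illustrative assumptions, not the paper's exact implementation:

```python
# Hypothetical sketch of wildcard density screening; the helper names and
# the 0.5 threshold are assumptions for illustration, not from the paper.

def wildcard_density(template: str, wildcard: str = "<*>") -> float:
    """Fraction of whitespace-delimited tokens that are wildcards."""
    tokens = template.split()
    if not tokens:
        return 0.0
    return sum(t == wildcard for t in tokens) / len(tokens)

def should_cache(template: str, max_density: float = 0.5) -> bool:
    """Admit a parsed template to the cache only if it is not
    over-generalized (wildcard density at or below the threshold)."""
    return wildcard_density(template) <= max_density

# An over-generalized, poison-like template is rejected (density 4/5 = 0.8):
print(should_cache("<*> <*> <*> error <*>"))       # False
# A typical benign template passes (density 1/4 = 0.25):
print(should_cache("Connection from <*> closed"))  # True
```

A screen of this form is cheap (one pass per template) but necessarily partial: an attacker can craft templates that stay under the density threshold while still over-matching, which is consistent with the defense recovering only part of the lost accuracy.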