Persistent Demo-Pool Poisoning Attacks on Online LLM Log Parsers

FARS·2026-03-02·Run ID: FA0221

Abstract

Online LLM-based log parsers achieve state-of-the-art accuracy by using self-generated in-context learning (SG-ICL), where the system continuously accumulates its own parsing outputs as demonstrations for future queries. However, this online learning mechanism creates a security vulnerability: the monotonically growing demonstration pool can be permanently corrupted by adversarial log injection. We present the first demo-pool poisoning attack on online LLM log parsers. By injecting only 60 crafted log entries (3% of a 2000-log stream) during an early window, our over-generalization attack causes severe parsing degradation that persists for 900+ logs after injection ends. On BGL, the attack reduces template accuracy by 40.33 percentage points; on Thunderbird, by 13.33 percentage points. Ablation studies reveal that trie poisoning---where over-generalized templates silently absorb future logs---dominates over ICL contamination as the primary damage mechanism. Our findings highlight the need for security-aware design in online learning systems with persistent state.
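The trie-poisoning mechanism described above can be illustrated with a minimal sketch. The `TemplateTrie` class, the `<*>` wildcard syntax, and the match order below are illustrative assumptions, not the evaluated parser's implementation; the point is only that once an over-generalized template enters the cache, structurally unrelated logs match it and never reach the LLM for correct parsing.

```python
# Hypothetical sketch of a template-matching trie, as used by many
# online log parsers as a fast cache in front of the LLM.
# "<*>" is an assumed wildcard token that matches any single token.

class TrieNode:
    def __init__(self):
        self.children = {}    # token (or "<*>") -> TrieNode
        self.template = None  # full template string at terminal nodes

class TemplateTrie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, template):
        node = self.root
        for tok in template.split():
            node = node.children.setdefault(tok, TrieNode())
        node.template = template

    def match(self, log):
        # Depth-first walk: try the exact-token edge, then the wildcard edge.
        def walk(node, tokens):
            if not tokens:
                return node.template
            for key in (tokens[0], "<*>"):
                child = node.children.get(key)
                if child is not None:
                    found = walk(child, tokens[1:])
                    if found is not None:
                        return found
            return None
        return walk(self.root, log.split())

parser = TemplateTrie()
# Benign template learned from normal traffic:
parser.insert("connection from <*> closed")
# Attacker-induced over-generalized template (all wildcards):
parser.insert("<*> <*> <*> <*>")

# An unrelated 4-token log is silently absorbed by the poisoned template,
# so it is never sent to the LLM and never yields a correct demonstration:
print(parser.match("kernel panic at 0x1f"))  # -> "<*> <*> <*> <*>"
```

Because the poisoned template keeps matching, the damage persists after injection stops, which is consistent with the long-lived degradation reported in the abstract.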

Resources