Patch, Don't Rewrite: Post-Drift Rule Updates for LogRules-Style LLM Log Parsers
Abstract
LLM-based log parsers like LogRules use natural-language rule repositories to achieve strong parsing accuracy while reducing inference costs. However, log templates inevitably drift over time, causing rules to become stale. The intuitive response---regenerating all rules from post-drift examples---surprisingly underperforms, because limited post-drift evidence causes the LLM to inadvertently modify rules that remain effective for stable templates. We propose Patch, a conservative update policy that generates a small set of delta rules targeting drifted patterns and prepends them to the unchanged original repository. On a synthetic drift benchmark, Patch achieves 0.2742 overall FGA (F1-score of grouping accuracy), outperforming Rewrite by +8.1 percentage points. The largest gains appear on drifted templates (+14.1 points), while stable-template performance also improves (+5.5 points), confirming that Patch avoids regressions by preserving pre-drift knowledge while adding targeted capacity for new patterns.
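The core of the Patch policy can be sketched in a few lines. The following is a minimal, illustrative sketch, not the paper's implementation: the rule representation (substring patterns mapped to templates) and the helper names `patch_rules` and `first_matching_rule` are assumptions chosen for clarity. The key property it demonstrates is that delta rules are prepended, so they take precedence for drifted logs while the original repository is left byte-for-byte unchanged for stable templates.

```python
def patch_rules(original_rules, delta_rules):
    """Conservative update (sketch): prepend LLM-generated delta rules
    for drifted patterns; the original repository is not modified."""
    return list(delta_rules) + list(original_rules)


def first_matching_rule(rules, log_line):
    """Return the template of the first rule whose pattern matches.

    Because delta rules come first, a drifted log is handled by its
    new rule, while logs from stable templates still fall through to
    the untouched pre-drift rules.
    """
    for rule in rules:
        if rule["pattern"] in log_line:  # toy matcher for illustration
            return rule["template"]
    return None


# Hypothetical example: one pre-drift rule, one delta rule for a
# drifted variant of the same event.
original = [{"pattern": "Connected to", "template": "Connected to <*>"}]
delta = [{"pattern": "Connected via", "template": "Connected via <*> to <*>"}]

patched = patch_rules(original, delta)

# Drifted log hits the delta rule; stable log still hits the old rule.
print(first_matching_rule(patched, "Connected via tls to host1"))
print(first_matching_rule(patched, "Connected to host2"))
```

A full Rewrite, by contrast, would replace `original` wholesale, risking regressions on the stable rule even though only the drifted pattern needed new capacity.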