Entity-Anonymized Context Prompts for Improving Context Faithfulness in Knowledge-Conflict QA
Abstract
Large language models in retrieval-augmented generation (RAG) systems often fail to follow the provided context when it conflicts with their parametric knowledge, instead generating answers based on memorized facts. We hypothesize that entity surface forms in the context trigger parametric recall, causing this unfaithful behavior. To test this, we propose Entity-Anonymized Context Prompts (EACP), a training-free method that replaces entity names with anonymous placeholders before prompting. On the ConFiQA-MC knowledge-conflict benchmark, EACP improves the context-faithful answer rate from 32.47% to 74.75% (+42.28 points) compared to a control condition with an identical output format but no anonymization, demonstrating that entity anonymization is the active ingredient. EACP generalizes across model families (Llama-3.1-8B, Qwen2.5-7B), complements activation steering methods, and outperforms training-based approaches such as Context-DPO without requiring any fine-tuning.
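The core preprocessing step (replacing entity surface forms with anonymous placeholders, then restoring them in the model's answer) can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact implementation: the placeholder scheme (`Entity-A`, `Entity-B`, ...) and the assumption that entity mentions are known in advance are ours.

```python
import re

def anonymize_context(context: str, entities: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each entity surface form with an anonymous placeholder.

    Returns the anonymized context and a placeholder -> entity map
    that can later restore names in the model's answer.
    Placeholder naming (Entity-A, Entity-B, ...) is an illustrative choice.
    """
    mapping: dict[str, str] = {}
    anonymized = context
    for i, entity in enumerate(entities):
        placeholder = f"Entity-{chr(ord('A') + i)}"
        mapping[placeholder] = entity
        # Whole-word, case-sensitive replacement of the surface form.
        anonymized = re.sub(rf"\b{re.escape(entity)}\b", placeholder, anonymized)
    return anonymized, mapping

def deanonymize_answer(answer: str, mapping: dict[str, str]) -> str:
    """Map placeholders in the model's answer back to original entity names."""
    for placeholder, entity in mapping.items():
        answer = answer.replace(placeholder, entity)
    return answer

# Example on a counterfactual (knowledge-conflict) context:
ctx = "Tim Cook is the chief executive officer of Microsoft."
anon, mapping = anonymize_context(ctx, ["Tim Cook", "Microsoft"])
# anon == "Entity-A is the chief executive officer of Entity-B."
# The anonymized context is then placed in the prompt; the model's
# placeholder answer (e.g. "Entity-A") is mapped back via deanonymize_answer.
```

The intent is that, with entity names hidden, the model cannot match the context against memorized facts and must rely on the stated relations alone.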