Self-Anchored Temporal Filtering for LLM-Free Temporal-Aware Memory Retrieval
Abstract
Long-term conversational memory systems require temporal awareness to retrieve contextually relevant information from past interactions. Current approaches either ignore temporal signals (pure dense retrieval) or require expensive LLM calls to extract time constraints from queries. We propose Self-Anchored Temporal Filtering (SATF), which infers temporal relevance from the timestamp distribution of initial retrieval results using multi-peak Gaussian kernels weighted by reciprocal rank. SATF soft-boosts temporally coherent items without hard filtering, requiring zero LLM calls. On LongMemEval, SATF achieves a +16.9% relative NDCG@10 improvement on temporal reasoning queries (0.584 → 0.683) while outperforming GPT-4o time-range filtering across all metrics with zero API cost. SATF improves all question types without degrading non-temporal queries, demonstrating that temporal signals embedded in retrieval distributions can be effectively exploited for ranking.
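The core mechanism described above (a rank-weighted Gaussian kernel density over result timestamps, used as a soft boost) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `satf_boost`, the kernel width `sigma`, and the boost strength `lam` are illustrative assumptions.

```python
import math

def satf_boost(results, sigma=86400.0, lam=0.3):
    """Soft-boost retrieval results whose timestamps fall near density
    peaks of the initial result set (a sketch of the SATF idea).

    results: list of (doc_id, score, timestamp) sorted by score desc.
    sigma:   Gaussian kernel width in seconds (illustrative value).
    lam:     boost strength (illustrative value).
    """
    # Reciprocal-rank weights: top-ranked items anchor the kernel more.
    weights = [1.0 / (rank + 1) for rank in range(len(results))]
    timestamps = [t for _, _, t in results]

    def density(t):
        # Multi-peak Gaussian kernel: one component per initial result,
        # so clusters of top-ranked timestamps form high-density peaks.
        return sum(w * math.exp(-((t - ti) ** 2) / (2 * sigma ** 2))
                   for w, ti in zip(weights, timestamps))

    peak = max(density(t) for t in timestamps) or 1.0
    # Soft boost (no hard filtering): scale each score by its
    # normalized temporal density, then re-rank.
    boosted = [(doc, s * (1.0 + lam * density(t) / peak), t)
               for doc, s, t in results]
    return sorted(boosted, key=lambda x: x[1], reverse=True)
```

Because the boost is multiplicative and bounded, items far from any temporal peak keep their original relative order rather than being discarded, which is why non-temporal queries are not degraded.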