
Diagnosing LLM Reranker Behavior Under Fixed Evidence Pools

Baris Arat, Emre Sefer · Feb 20, 2026 · Citations: 0

Abstract

Standard reranking evaluations study how a reranker orders candidates returned by an upstream retriever. This setup couples ranking behavior with retrieval quality, so differences in output cannot be attributed to the ranking policy alone. We introduce a controlled diagnostic that isolates reranking by using Multi-News clusters as fixed evidence pools: each pool is limited to exactly eight documents, and identical inputs are passed to all rankers. Within this setup, BM25 and MMR serve as interpretable reference points for lexical matching and diversity optimization, respectively. Across 345 clusters, we find that redundancy patterns vary by model: one LLM implicitly diversifies at larger selection budgets, while another grows more redundant. Both LLMs, however, underperform on lexical coverage at small selection budgets. As a result, LLM rankings diverge substantially from both baselines rather than consistently approximating either strategy. Because retrieval variance is eliminated, these differences can be attributed directly to the ranking policy. The diagnostic is model-agnostic and applies to any ranker, including open-source systems and proprietary APIs.
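The MMR reference point named in the abstract can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (not the paper's code): it assumes plain bag-of-words cosine similarity and a trade-off weight λ = 0.7, and greedily orders a fixed pool so that each pick balances query relevance against redundancy with documents already selected.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    num = sum(cnt * b[t] for t, cnt in a.items())
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def mmr_rank(query: str, pool: list[str], lam: float = 0.7) -> list[int]:
    """Greedy MMR ordering of a fixed evidence pool.

    Each step picks the document maximizing
    lam * sim(query, d) - (1 - lam) * max_{s in selected} sim(d, s).
    Returns pool indices in selection order.
    """
    q = Counter(query.lower().split())
    docs = [Counter(d.lower().split()) for d in pool]
    selected: list[int] = []
    remaining = list(range(len(pool)))
    while remaining:
        best = max(
            remaining,
            key=lambda i: lam * cosine(q, docs[i])
            - (1 - lam) * max((cosine(docs[i], docs[j]) for j in selected), default=0.0),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

A ranker's output over the same fixed pool can then be compared against this ordering (and a BM25 ordering) to diagnose whether it leans toward lexical matching or diversity, which is the contrast the paper draws.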

Human Data Lens

  • Uses human feedback: No
  • Feedback types: None
  • Rater population: Unknown
  • Unit of annotation: Ranking
  • Expertise required: General

Evaluation Lens

  • Evaluation modes: Automatic Metrics
  • Agentic eval: None
  • Quality controls: Not reported
  • Confidence: 0.40
  • Flags: low_signal, possible_false_positive

Research Summary

Contribution Summary

  • Standard reranking evaluations study how a reranker orders candidates returned by an upstream retriever.
  • This setup couples ranking behavior with retrieval quality, so differences in output cannot be attributed to the ranking policy alone.
  • We introduce a controlled diagnostic that isolates reranking by using Multi-News clusters as fixed evidence pools.

Why It Matters For Eval

  • Standard reranking evaluations study how a reranker orders candidates returned by an upstream retriever.

Related Papers