
HFEPX Hub

CS.IR + Automatic Metrics Papers

Updated from the current HFEPX corpus (Feb 27, 2026). This hub page groups 59 papers. Common evaluation modes: Automatic Metrics, Human Eval. Most common rater population: Domain Experts. Common annotation unit: Ranking. Frequent quality control: Calibration. Frequently cited benchmark: Retrieval. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Feb 26, 2026.

Papers: 59 · Last published: Feb 26, 2026
Tags: cs.IR · Automatic Metrics

Research Narrative

Grounded narrative · Model: deterministic-grounded · Source: persisted

Updated from the current HFEPX corpus (Feb 27, 2026). This page tracks 59 papers for CS.IR + Automatic Metrics Papers. Dominant protocol signals include automatic metrics and human evaluation, with frequent benchmark focus on Retrieval and BrowseComp and metric focus on accuracy and latency. Use the grounded sections below to prioritize reproducible protocol choices, benchmark-matched comparisons, and judge-vs-human evaluation checks.

Why This Matters For Eval Research

Protocol Takeaways

Benchmark Interpretation

  • Retrieval appears in 50.8% of hub papers (30/59); use this cohort for benchmark-matched comparisons.
  • BrowseComp appears in 1.7% of hub papers (1/59); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • accuracy is reported in 18.6% of hub papers (11/59); compare with a secondary metric before ranking methods.
  • latency is reported in 11.9% of hub papers (7/59); compare with a secondary metric before ranking methods (a cohort-filtering sketch follows this list).
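
Both lists above can be applied mechanically when assembling a comparison set. The sketch below is a minimal illustration; the `papers` records and field names are assumptions for illustration, not the hub's actual export schema.

```python
# Minimal sketch: benchmark-matched cohort selection plus a secondary-metric check.
# The records and field names here are illustrative assumptions, not the hub's schema.
papers = [
    {"title": "Paper A", "benchmarks": {"Retrieval"}, "metrics": {"accuracy", "latency"}},
    {"title": "Paper B", "benchmarks": {"BrowseComp"}, "metrics": {"accuracy"}},
    {"title": "Paper C", "benchmarks": {"Retrieval"}, "metrics": {"latency"}},
]

def benchmark_cohort(papers, benchmark):
    """Keep only papers that report the given benchmark (benchmark-matched comparison)."""
    return [p for p in papers if benchmark in p["benchmarks"]]

cohort = benchmark_cohort(papers, "Retrieval")
print(f"Retrieval cohort: {len(cohort)}/{len(papers)} papers ({len(cohort) / len(papers):.1%})")

# Per the metric guidance above, flag cohort papers that rank methods on a single metric.
for paper in cohort:
    if len(paper["metrics"]) < 2:
        print(f"{paper['title']}: only {sorted(paper['metrics'])} reported; add a secondary metric")
```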

Researcher Checklist

  • Close gap on Papers with explicit human feedback. Coverage is a replication risk (10.2% vs 45% target).
  • Close gap on Papers reporting quality controls. Coverage is a replication risk (1.7% vs 30% target).
  • Maintain strength on Papers naming benchmarks/datasets. Coverage is strong (57.6% vs 35% target).
  • Maintain strength on Papers naming evaluation metrics. Coverage is strong (54.2% vs 35% target).
  • Close gap on Papers with known rater population. Coverage is a replication risk (6.8% vs 35% target).
  • Close gap on Papers with known annotation unit. Coverage is a replication risk (18.6% vs 35% target). The flagging rule behind these items is sketched below.

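Each checklist flag follows the same rule: coverage below its target is labeled a replication risk, coverage at or above it a strength. A minimal sketch of that rule, using the percentages reported above (the dimension names and dictionary layout are illustrative):

```python
# Coverage-vs-target flagging, mirroring the checklist above.
# Percentages and targets are the ones this hub reports; names are illustrative.
TARGETS = {
    "explicit human feedback": (10.2, 45.0),
    "quality controls": (1.7, 30.0),
    "named benchmarks/datasets": (57.6, 35.0),
    "named evaluation metrics": (54.2, 35.0),
    "known rater population": (6.8, 35.0),
    "known annotation unit": (18.6, 35.0),
}

for dimension, (coverage, target) in TARGETS.items():
    status = "strength" if coverage >= target else "replication risk"
    print(f"{dimension}: {coverage:.1f}% vs {target:.0f}% target -> {status}")
```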

Suggested Reading Order

  1. SPARTA: Scalable and Principled Benchmark of Tree-Structured Multi-hop QA over Text and Tables

    Start here for detailed protocol reporting, including rater and quality-control evidence.

  2. MoDora: Tree-Based Semi-Structured Document Analysis System

    Continues the detailed protocol reporting, with rater and quality-control evidence.

  3. Vectorizing the Trie: Efficient Constrained Decoding for LLM-based Generative Retrieval on Accelerators

    Continues the detailed protocol reporting, with rater and quality-control evidence.

  4. A Benchmark for Deep Information Synthesis

    Include a human-eval paper to anchor calibration against automated judge settings.

  5. Search-P1: Path-Centric Reward Shaping for Stable and Efficient Agentic RAG Training

    Adds automatic metrics for broader coverage within this hub.

  6. LiCQA: A Lightweight Complex Question Answering System

    Adds automatic metrics for broader coverage within this hub.

  7. Revisiting RAG Retrievers: An Information Theoretic Benchmark

    Adds automatic metrics for broader coverage within this hub.

  8. Enhancing Multilingual Embeddings via Multi-Way Parallel Text Alignment

    Adds automatic metrics for broader coverage within this hub.

Known Limitations

  • Only 1.7% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (6.8% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Links

human_eval vs automatic_metrics

both=2, left_only=0, right_only=57

2 papers use both Human Eval and Automatic Metrics, 57 report only Automatic Metrics, and none report only Human Eval.
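
The both/left_only/right_only split is a plain set comparison between the two evaluation-mode cohorts. A minimal sketch, assuming each cohort is a set of paper IDs (the IDs below are placeholders; only the reported counts come from this hub):

```python
# Set comparison between the Human Eval (left) and Automatic Metrics (right) cohorts.
# Paper IDs are placeholders; this hub reports both=2, left_only=0, right_only=57.
human_eval = {"paper-01", "paper-02"}                     # papers with human evaluation
automatic_metrics = {"paper-01", "paper-02", "paper-03"}  # papers with automatic metrics

both = human_eval & automatic_metrics
left_only = human_eval - automatic_metrics
right_only = automatic_metrics - human_eval
print(f"both={len(both)}, left_only={len(left_only)}, right_only={len(right_only)}")
```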

Benchmark Brief

BrowseComp

Coverage: 1 paper (1.7%) mentions BrowseComp.

Examples: Revisiting Text Ranking in Deep Research

Benchmark Brief

DROP

Coverage: 1 paper (1.7%) mentions DROP.

Examples: SPARTA: Scalable and Principled Benchmark of Tree-Structured Multi-hop QA over Text and Tables
