HFEPX Metric Hub

Coherence Metric Papers

Updated from the current HFEPX corpus (Apr 9, 2026). 66 papers are grouped on this metric page. Common evaluation modes: Automatic Metrics, Simulation Env. Most common rater population: Domain Experts. Common annotation unit: Pairwise. Frequent quality control: Inter Annotator Agreement Reported. Frequently cited benchmark: ALFWorld. Common metric signal: coherence. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Mar 9, 2026.

Papers: 66 | Last published: Mar 9, 2026

Researcher Quick Triage

Use this page to compare metric behavior across protocols and benchmarks before selecting your reporting stack. Quality band: High.

Analysis blocks are computed from the loaded sample (60 of 66 papers).

Metric Coverage

38.3%

23 sampled papers include metric names.

Benchmark Anchoring

10.0%

Papers with explicit dataset/benchmark anchors for fair comparison.

Quality Controls

1.7%

1 paper reports calibration/adjudication/IAA controls.

  • 60 papers in this sample are not flagged as low-signal.
  • Use the protocol matrix below to avoid comparing metrics across incompatible eval setups; a cohort-filtering sketch follows the matrix.

Primary action: Use the top metric-reliable papers first, then compare benchmark context in the matrix before drawing conclusions.
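
The three coverage numbers above are simple presence ratios over the loaded 60-paper sample. A minimal sketch of that computation, assuming each paper is a small metadata record with hypothetical field names (metrics, benchmarks, quality_controls), not the hub's actual schema:

```python
# Hedged sketch: recomputing the triage coverage figures from paper metadata.
# The record schema below is an assumption, not the hub's actual data model.
from typing import Iterable, Mapping


def coverage(papers: Iterable[Mapping], field: str) -> float:
    """Percent of papers whose metadata lists at least one value for `field`."""
    papers = list(papers)
    if not papers:
        return 0.0
    hits = sum(1 for p in papers if p.get(field))
    return 100.0 * hits / len(papers)


sample = [
    {"metrics": ["Coherence"], "benchmarks": ["ALFWorld"], "quality_controls": []},
    {"metrics": [], "benchmarks": [], "quality_controls": []},
]
print(f"metric coverage: {coverage(sample, 'metrics'):.1f}%")  # 50.0% on this toy sample
# On the page's loaded sample: 23/60 papers name metrics -> 38.3%;
# 1/60 reports calibration/adjudication/IAA controls -> 1.7%.
```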

Why This Matters (Expanded)

Why This Matters For Eval Research

  • 50% of papers report explicit human-feedback signals, led by pairwise preferences.
  • Automatic-metrics evaluation appears in 25.8% of papers in this hub.
  • ALFWorld is a recurring benchmark anchor for cross-paper comparisons in this page.

Metric Notes (Expanded)

Metric-Driven Protocol Takeaways

  • The most common quality-control signal is inter-annotator agreement reporting (1.5% of papers).
  • The rater population is mostly domain experts, and the annotation unit is commonly pairwise; use this to scope replication staffing.
  • Compare papers that report both human_eval and llm_as_judge to quantify judge-human agreement drift.

Metric Interpretation

  • Coherence is reported in 36.4% of hub papers (24/66); compare with a secondary metric before ranking methods.
  • Accuracy is reported in 10.6% of hub papers (7/66); compare with a secondary metric before ranking methods (a rank-agreement sketch follows this list).
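
One way to act on that caveat is to check whether a second metric would reorder the methods under comparison. A minimal sketch using Kendall's tau, with placeholder method names and scores (not values from any hub paper):

```python
# Hedged sketch: rank agreement between a primary and a secondary metric.
# All method names and scores below are placeholders for illustration only.
from scipy.stats import kendalltau

coherence = {"method_a": 0.81, "method_b": 0.74, "method_c": 0.69}
accuracy = {"method_a": 0.62, "method_b": 0.71, "method_c": 0.58}

methods = sorted(coherence)
tau, p_value = kendalltau(
    [coherence[m] for m in methods],
    [accuracy[m] for m in methods],
)
# tau near 1.0 means both metrics rank the methods the same way;
# low or negative tau means a coherence-only ranking is metric-sensitive.
print(f"Kendall tau (coherence vs accuracy ranking): {tau:.2f}, p={p_value:.2f}")
```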

Benchmark Context

  • ALFWorld appears in 1.5% of hub papers (1/66); use this cohort for benchmark-matched comparisons.
  • LongBench appears in 1.5% of hub papers (1/66); use this cohort for benchmark-matched comparisons.

Start Here (Metric-Reliable First 6)

Ranked for metric reporting completeness and comparability.

Metric Protocol Matrix (Top 10)

Compare metric, benchmark, and evaluation context side by side.

Paper | Date | Metrics | Benchmarks | Eval Modes | Quality Controls
SleepVLM: Explainable and Rule-Grounded Sleep Staging via a Vision-Language Model | Mar 22, 2026 | Accuracy, Kappa | Not reported | Automatic Metrics | Inter Annotator Agreement Reported
$OneMillion-Bench: How Far are Language Agents from Human Experts? | Mar 9, 2026 | Accuracy, Coherence | Onemillion Bench | Automatic Metrics | Not reported
Document Reconstruction Unlocks Scalable Long-Context RLVR | Feb 9, 2026 | Coherence | LongBench | Automatic Metrics | Not reported
YC-Bench: Benchmarking AI Agents for Long-Term Planning and Consistent Execution | Apr 1, 2026 | Cost, Inference cost | Yc Bench | Automatic Metrics | Not reported
QChunker: Learning Question-Aware Text Chunking for Domain RAG via Multi-Agent Debate | Mar 12, 2026 | Coherence | Understanding Retrieval | Automatic Metrics | Not reported
Embodied Task Planning via Graph-Informed Action Generation with Large Language Model | Jan 29, 2026 | Pass@1, Cost | ALFWorld | Simulation Env | Not reported
Towards Reward Modeling for AI Tutors in Math Mistake Remediation | Mar 25, 2026 | Accuracy, Coherence | Not reported | Automatic Metrics | Not reported
PLOT: Enhancing Preference Learning via Optimal Transport | Apr 2, 2026 | Coherence | Not reported | Automatic Metrics | Not reported
BeliefShift: Benchmarking Temporal Belief Consistency and Opinion Drift in LLM Agents | Mar 25, 2026 | Accuracy, Coherence | Not reported | Automatic Metrics | Not reported
VRM: Teaching Reward Models to Understand Authentic Human Preferences | Mar 5, 2026 | Coherence | Not reported | Human Eval | Not reported
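
One way to use this matrix is to compare metric values only within cohorts that share a benchmark and an eval mode. A minimal sketch, assuming the rows are loaded as small records (the field names are illustrative, and only two rows are shown):

```python
# Hedged sketch: group protocol-matrix rows into (benchmark, eval mode) cohorts
# so metrics are never compared across incompatible eval setups.
from collections import defaultdict

rows = [
    {"paper": "Embodied Task Planning via Graph-Informed Action Generation with Large Language Model",
     "benchmark": "ALFWorld", "eval_mode": "Simulation Env", "metrics": ["Pass@1", "Cost"]},
    {"paper": "Document Reconstruction Unlocks Scalable Long-Context RLVR",
     "benchmark": "LongBench", "eval_mode": "Automatic Metrics", "metrics": ["Coherence"]},
]

cohorts = defaultdict(list)
for row in rows:
    cohorts[(row["benchmark"], row["eval_mode"])].append(row["paper"])

for (benchmark, mode), papers in cohorts.items():
    # Only papers listed together here are candidates for a direct comparison.
    print(f"{benchmark} / {mode}: {papers}")
```

Given the benchmark counts in this hub, most (benchmark, eval mode) cohorts currently contain a single paper, so any cross-paper comparison will likely have to relax one of the two keys, and that relaxation should be stated explicitly.
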
Researcher Workflow (Detailed)

Checklist

  • Strong: Papers with explicit human feedback

    Coverage is strong (50% vs 45% target).

  • Gap: Papers reporting quality controls

    Coverage is a replication risk (4.2% vs 30% target).

  • Moderate: Papers naming benchmarks/datasets

    Coverage is usable but incomplete (25% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (100% vs 35% target).

  • Gap: Papers with known rater population

    Coverage is a replication risk (12.5% vs 35% target).

  • Strong: Papers with known annotation unit

    Coverage is strong (41.7% vs 35% target).
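
The Strong / Moderate / Gap labels above read as coverage-versus-target bands. A minimal sketch of one rule that reproduces them, assuming a simple threshold scheme (the 0.7 cut-off for "Moderate" is a guess, not documented hub behavior):

```python
# Hedged sketch: banding checklist coverage against per-item targets.
# The 0.7 * target boundary for "Moderate" is an assumption.
def band(coverage_pct: float, target_pct: float) -> str:
    if coverage_pct >= target_pct:
        return "Strong"
    if coverage_pct >= 0.7 * target_pct:
        return "Moderate"
    return "Gap"

print(band(50.0, 45.0))   # Strong   (explicit human feedback)
print(band(25.0, 35.0))   # Moderate (benchmarks/datasets named)
print(band(12.5, 35.0))   # Gap      (rater population known)
```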

Strengths

  • Strong human-feedback signal (50% of papers).
  • Contains both human-eval and LLM-as-judge protocols for head-to-head methodology comparison.
  • Agentic evaluation appears in 54.2% of papers.

Known Gaps

  • Only 4.2% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (12.5% coverage).

Suggested Next Analyses

  • Compare papers that report both human_eval and llm_as_judge to quantify judge-human agreement drift (see the agreement sketch after this list).
  • Stratify by benchmark (ALFWorld vs LongBench) before comparing methods.
  • Track metric sensitivity by reporting both coherence and accuracy.
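
For the first item above, a minimal agreement check is sketched below, assuming matched pairwise labels can be extracted from a paper's human_eval and llm_as_judge protocols (the labels shown are placeholders):

```python
# Hedged sketch: judge-human agreement on the same pairwise items.
# "A"/"B" mark which response in a pair was preferred; values are placeholders.
from sklearn.metrics import cohen_kappa_score

human_labels = ["A", "A", "B", "B", "A", "B"]
judge_labels = ["A", "B", "B", "B", "A", "A"]

raw_agreement = sum(h == j for h, j in zip(human_labels, judge_labels)) / len(human_labels)
kappa = cohen_kappa_score(human_labels, judge_labels)  # chance-corrected agreement
print(f"raw agreement: {raw_agreement:.2f}, Cohen's kappa: {kappa:.2f}")
# Tracking kappa across papers (or over time within one paper) is one way to
# quantify the judge-human drift mentioned above.
```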

Known Limitations

  • Only 4.2% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (12.5% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Snapshot (Detailed)

Top Metrics

  • Coherence (24)
  • Accuracy (7)
  • Cost (2)
  • Conciseness (1)

Evaluation Modes

  • Automatic Metrics (17)
  • Simulation Env (4)
  • Llm As Judge (2)
  • Human Eval (1)

Top Benchmarks

  • ALFWorld (1)
  • LongBench (1)
  • MLE Bench (1)
  • Onemillion Bench (1)

Agentic Mix

  • Long Horizon (10)
  • Multi Agent (3)
  • Tool Use (1)
