
HFEPX Hub

Expert Verification or Pairwise Preference Papers

Updated from the current HFEPX corpus (Feb 27, 2026). This hub page groups 92 papers. Common evaluation modes: Automatic Metrics and Human Eval. Most common rater population: Domain Experts. Most common annotation unit: Pairwise. Most frequent quality control: Calibration. Most frequently cited benchmark: Retrieval. Most common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Feb 26, 2026.

Papers: 92 · Last published: Feb 26, 2026
Tags: Expert Verification · Pairwise Preference

Research Narrative

Grounded narrative · Model: deterministic-grounded · Source: persisted

Updated from the current HFEPX corpus (Feb 27, 2026). This page tracks 92 papers on expert verification or pairwise preference. Dominant protocol signals include automatic metrics, human evaluation, and simulation environments, with frequent benchmark focus on Retrieval and LiveCodeBench and metric focus on accuracy and cost. Use the grounded sections below to prioritize reproducible protocol choices, benchmark-matched comparisons, and judge-vs-human evaluation checks.
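The judge-vs-human checks mentioned above usually start with raw agreement and Cohen's kappa on pairwise-preference labels. A minimal sketch, assuming hypothetical lists of per-item preference labels ("A" or "B") from a human rater and an LLM judge; the data and function are illustrative, not taken from any paper in this hub.

```python
from collections import Counter

def pairwise_agreement(human, judge):
    """Raw agreement and Cohen's kappa between two pairwise-preference raters.

    human, judge: equal-length sequences of preference labels (e.g. "A"/"B").
    Hypothetical data shape; adapt to your own annotation export.
    """
    assert len(human) == len(judge) and len(human) > 0
    n = len(human)
    observed = sum(h == j for h, j in zip(human, judge)) / n
    # Chance agreement from each rater's marginal label distribution.
    h_counts, j_counts = Counter(human), Counter(judge)
    labels = set(h_counts) | set(j_counts)
    expected = sum((h_counts[l] / n) * (j_counts[l] / n) for l in labels)
    kappa = (observed - expected) / (1 - expected) if expected < 1 else 1.0
    return observed, kappa

# Toy labels, for illustration only:
human = ["A", "A", "B", "A", "B", "B", "A", "B"]
judge = ["A", "B", "B", "A", "B", "A", "A", "B"]
obs, kappa = pairwise_agreement(human, judge)
print(f"raw agreement={obs:.2f}, kappa={kappa:.2f}")  # 0.75, 0.50
```

Kappa discounts the agreement two raters would reach by chance from their marginal label rates, which matters for pairwise labels, where a 50/50 coin already agrees half the time.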

Why This Matters For Eval Research

Protocol Takeaways

Benchmark Interpretation

  • Retrieval appears in 10.9% of hub papers (10/92); use this cohort for benchmark-matched comparisons.
  • LiveCodeBench appears in 3.3% of hub papers (3/92); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • accuracy is reported in 17.4% of hub papers (16/92); compare with a secondary metric before ranking methods.
  • cost is reported in 8.7% of hub papers (8/92); compare with a secondary metric before ranking methods. (The sketch after this list shows how these coverage shares are computed.)
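These benchmark and metric shares are plain corpus fractions. A minimal sketch, assuming a hypothetical record layout with `benchmarks` and `metrics` list fields; the records and field names are illustrative, not the HFEPX schema.

```python
# Hypothetical paper records; not the HFEPX schema.
papers = [
    {"id": "p1", "benchmarks": ["Retrieval"], "metrics": ["accuracy"]},
    {"id": "p2", "benchmarks": ["LiveCodeBench"], "metrics": ["accuracy", "cost"]},
    {"id": "p3", "benchmarks": [], "metrics": ["cost"]},
]

def coverage(papers, field, value):
    """Count and fraction of papers whose `field` list names `value`."""
    hits = sum(value in p.get(field, []) for p in papers)
    return hits, hits / len(papers)

hits, share = coverage(papers, "benchmarks", "Retrieval")
print(f"Retrieval appears in {share:.1%} of papers ({hits}/{len(papers)})")
```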

Researcher Checklist

  • Maintain strength on Papers with explicit human feedback. Coverage is strong (100% vs 45% target).
  • Close gap on Papers reporting quality controls. Coverage is a replication risk (9.8% vs 30% target).
  • Tighten coverage on Papers naming benchmarks/datasets. Coverage is usable but incomplete (27.2% vs 35% target).
  • Maintain strength on Papers naming evaluation metrics. Coverage is strong (43.5% vs 35% target).
  • Tighten coverage on Papers with known rater population. Coverage is usable but incomplete (33.7% vs 35% target).
  • Tighten coverage on Papers with known annotation unit. Coverage is usable but incomplete (32.6% vs 35% target). (A sketch of the labeling rule behind these items follows this list.)
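The Maintain/Tighten/Close-gap labels above appear to follow a coverage-vs-target rule. A minimal sketch that reproduces the labels on this page, assuming a 50%-of-target cutoff between "usable but incomplete" and "replication risk"; the cutoff is a guess consistent with the numbers shown, not a documented HFEPX rule.

```python
# Checklist rows as (name, coverage %, target %), copied from this page.
CHECKS = [
    ("Papers with explicit human feedback", 100.0, 45.0),
    ("Papers reporting quality controls", 9.8, 30.0),
    ("Papers naming benchmarks/datasets", 27.2, 35.0),
    ("Papers naming evaluation metrics", 43.5, 35.0),
    ("Papers with known rater population", 33.7, 35.0),
    ("Papers with known annotation unit", 32.6, 35.0),
]

def label(coverage, target):
    """Assumed reproduction of the hub's checklist labeling."""
    if coverage >= target:
        return "Maintain strength (coverage is strong)"
    if coverage >= 0.5 * target:  # assumed cutoff, not documented
        return "Tighten coverage (usable but incomplete)"
    return "Close gap (replication risk)"

for name, cov, tgt in CHECKS:
    print(f"{name}: {label(cov, tgt)} ({cov}% vs {tgt}% target)")
```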


Suggested Reading Order

  1. An artificial intelligence framework for end-to-end rare disease phenotyping from clinical notes using large language models

    Start here for detailed protocol reporting, including rater and quality-control evidence.

  2. Moral Preferences of LLMs Under Directed Contextual Influence

    High citation traction makes this a useful baseline for method and protocol context.

  3. TherapyProbe: Generating Design Knowledge for Relational Safety in Mental Health Chatbots Through Adversarial Simulation

    High citation traction makes this a useful baseline for method and protocol context.

  4. MEDSYN: Benchmarking Multi-EviDence SYNthesis in Complex Clinical Cases for Multimodal Large Language Models

    High citation traction makes this a useful baseline for method and protocol context.

  5. Balancing Multiple Objectives in Urban Traffic Control with Reinforcement Learning from AI Feedback

    Include a human-eval paper to anchor calibration against automated judge settings.

  6. ExpLang: Improved Exploration and Exploitation in LLM Reasoning with On-Policy Thinking Language Selection

    Adds automatic metrics with pairwise preferences for broader coverage within this hub.

  7. DynamicGTR: Leveraging Graph Topology Representation Preferences to Boost VLM Capabilities on Graph QAs

    Adds automatic metrics with pairwise preferences for broader coverage within this hub.

  8. The ASIR Courage Model: A Phase-Dynamic Framework for Truth Transitions in Human and AI Systems

    Adds automatic metrics with pairwise preferences for broader coverage within this hub.

Known Limitations

  • Only 9.8% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.
  • Cross-page comparisons should be benchmark- and metric-matched to avoid protocol confounding; a matching sketch follows this list.
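On the matching point above, a minimal sketch of benchmark- and metric-matched cohort selection, reusing the same hypothetical record layout as earlier (not the HFEPX schema).

```python
# Hypothetical paper records; not the HFEPX schema.
papers = [
    {"id": "p1", "benchmarks": ["Retrieval"], "metrics": ["accuracy"]},
    {"id": "p2", "benchmarks": ["Retrieval"], "metrics": ["cost"]},
    {"id": "p3", "benchmarks": ["LiveCodeBench"], "metrics": ["accuracy"]},
]

def matched_cohort(papers, benchmark, metric):
    """Papers reporting both the given benchmark and the given metric."""
    return [p for p in papers
            if benchmark in p.get("benchmarks", [])
            and metric in p.get("metrics", [])]

# Compare methods only within a (Retrieval, accuracy)-matched cohort:
print([p["id"] for p in matched_cohort(papers, "Retrieval", "accuracy")])
# -> ['p1']
```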

Research Utility Links

human_eval vs llm_as_judge

both=1, left_only=8, right_only=2

1 paper uses both Human Eval and LLM-as-Judge.

human_eval vs automatic_metrics

both=0, left_only=9, right_only=74

0 papers use both Human Eval and Automatic Metrics.

llm_as_judge vs automatic_metrics

both=0, left_only=3, right_only=74

0 papers use both LLM-as-Judge and Automatic Metrics.
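The both/left_only/right_only counts above are plain set overlaps over the paper IDs tagged with each evaluation mode. A minimal sketch, with toy IDs standing in for the corpus.

```python
def mode_overlap(left_ids, right_ids):
    """Set overlap between two evaluation-mode cohorts of paper IDs."""
    left_ids, right_ids = set(left_ids), set(right_ids)
    return {
        "both": len(left_ids & right_ids),
        "left_only": len(left_ids - right_ids),
        "right_only": len(right_ids - left_ids),
    }

# Toy IDs, not the HFEPX corpus:
human_eval = {"p01", "p02", "p03"}
llm_as_judge = {"p03", "p04"}
print(mode_overlap(human_eval, llm_as_judge))
# -> {'both': 1, 'left_only': 2, 'right_only': 1}
```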
