
HFEPX Hub

Automatic Metrics + Expert Verification + General Papers


Updated from the current HFEPX corpus (Apr 9, 2026). 10 papers are grouped on this hub page. Common evaluation modes: Automatic Metrics. Most common rater population: Domain Experts. Common annotation unit: Trajectory. Frequently cited benchmark: Re-Bench. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Mar 19, 2026.

Papers: 10 · Last published: Mar 19, 2026
Automatic Metrics · Expert Verification · General

Researcher Quick Triage

This hub is best used for protocol triage and replication planning from abstract-level evidence. Quality band: Developing.

High-Signal Coverage

100.0%

10 / 10 sampled papers are not flagged as low-signal.

Replication-Ready Set

2

Benchmark + metric + eval mode explicitly present.

Judge/Human Comparability

0

Papers containing both `human_eval` and `llm_as_judge`.

  • 2 papers are replication-ready (benchmark + metric + explicit evaluation mode).
  • 0 papers support judge-vs-human agreement analysis.
  • 0 papers report explicit quality controls (calibration/adjudication/IAA).

Primary action: Use this page for scouting only; collect additional papers before attempting replication-critical comparisons.

Currently showing only replication-ready papers in ranking and matrix sections (2 papers).
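
As a concrete illustration, here is a minimal sketch of the replication-ready filter and the judge/human comparability check, assuming each paper is a plain metadata record. The `human_eval` and `llm_as_judge` tags are the ones named above; the other field names (`benchmarks`, `metrics`, `eval_modes`, `tags`) are hypothetical stand-ins for however the corpus is actually stored.

```python
# Hypothetical sketch: a paper is replication-ready only when its
# benchmark, metric, and evaluation mode are all explicitly present.
def is_replication_ready(paper: dict) -> bool:
    return all(paper.get(field) for field in ("benchmarks", "metrics", "eval_modes"))

# Judge/human comparability requires both signals on the same paper.
def supports_judge_human_comparison(paper: dict) -> bool:
    return {"human_eval", "llm_as_judge"} <= set(paper.get("tags", []))

papers = [
    {"title": "SODIUM: From Open Web Data to Queryable Databases",
     "benchmarks": ["Sodium-Bench"], "metrics": ["accuracy"],
     "eval_modes": ["automatic_metrics"], "tags": ["expert_verification"]},
    {"title": "LM-Lexicon: Improving Definition Modeling via Harmonizing Semantic Experts",
     "benchmarks": [], "metrics": ["bleu"],
     "eval_modes": ["automatic_metrics"], "tags": ["expert_verification"]},
]

print(sum(is_replication_ready(p) for p in papers))             # 1 of this toy pair
print(sum(supports_judge_human_comparison(p) for p in papers))  # 0, matching this hub
```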


Why This Matters For Eval Research

  • 100% of papers report explicit human-feedback signals, led by expert verification.
  • Automatic metrics appear in 100% of papers in this hub.
  • Re-Bench serves as a benchmark anchor for cross-paper comparisons on this page.

Protocol Takeaways

  • Quality-control reporting is sparse in this slice; prioritize papers with explicit calibration or adjudication steps.
  • Rater context is mostly domain experts, and annotation is commonly trajectory-level; use this to scope replication staffing.
  • Stratify by benchmark (Re-Bench vs Sodium-Bench) before comparing methods.

Benchmark Interpretation

  • Re-Bench appears in 10% of hub papers (1/10); use this cohort for benchmark-matched comparisons.
  • Sodium-Bench appears in 10% of hub papers (1/10); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • accuracy is reported in 40% of hub papers (4/10); compare with a secondary metric before ranking methods.
  • cost is reported in 20% of hub papers (2/10); compare with a secondary metric before ranking methods.

Researcher Checklist

  • Strong: Papers with explicit human feedback

    Coverage is strong (100% vs 45% target).

  • Gap: Papers reporting quality controls

    Coverage is a replication risk (0% vs 30% target).

  • Gap: Papers naming benchmarks/datasets

    Coverage is a replication risk (20% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (90% vs 35% target).

  • Strong: Papers with known rater population

    Coverage is strong (100% vs 35% target).

  • Gap: Papers with known annotation unit

    Coverage is a replication risk (10% vs 35% target). A sketch of this Strong/Gap banding rule follows the list.
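
The banding above reduces to a simple threshold rule: a dimension is Strong when observed coverage meets its target, and a Gap otherwise. A minimal sketch, using the coverage figures and targets quoted in this checklist:

```python
# Strong/Gap banding over the checklist dimensions; the (coverage,
# target) pairs are the figures quoted above.
CHECKLIST = [
    ("Papers with explicit human feedback", 1.00, 0.45),
    ("Papers reporting quality controls",   0.00, 0.30),
    ("Papers naming benchmarks/datasets",   0.20, 0.35),
    ("Papers naming evaluation metrics",    0.90, 0.35),
    ("Papers with known rater population",  1.00, 0.35),
    ("Papers with known annotation unit",   0.10, 0.35),
]

for dimension, coverage, target in CHECKLIST:
    band = "Strong" if coverage >= target else "Gap"
    print(f"{band}: {dimension} ({coverage:.0%} vs {target:.0%} target)")
```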

Strengths

  • Strong human-feedback signal (100% of papers).

Known Gaps

  • No papers (0%) report quality controls; prioritize calibration/adjudication evidence.
  • Annotation unit is under-specified (10% coverage).

Suggested Next Analyses

  • Stratify by benchmark (Re-Bench vs Sodium-Bench) before comparing methods.
  • Track metric sensitivity by reporting both accuracy and cost; a combined sketch follows this list.
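
Both suggestions can be combined in one pass: group results by benchmark, then compare methods only within a stratum while reporting accuracy and cost side by side. A minimal sketch; the 46.5% accuracy figure is quoted from the SODIUM entry below, and every other value (including the second method name) is a placeholder:

```python
from collections import defaultdict

# Illustrative records; None means "not reported in the abstract".
results = [
    {"method": "strongest SODIUM baseline", "benchmark": "Sodium-Bench",
     "accuracy": 0.465, "cost": None},
    {"method": "placeholder agent", "benchmark": "Re-Bench",
     "accuracy": None, "cost": None},
]

# Stratify by benchmark first so methods are never compared across
# benchmarks with different task distributions.
strata = defaultdict(list)
for row in results:
    strata[row["benchmark"]].append(row)

for benchmark, rows in strata.items():
    print(benchmark)
    for row in rows:
        # Report both metrics so a ranking is not driven by the
        # sensitivity of a single metric.
        print(f"  {row['method']}: accuracy={row['accuracy']}, cost={row['cost']}")
```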

Recommended Queries

Start with These 3

Use these when you need one protocol anchor, one benchmark anchor, and one recent comparison point before reading the wider hub.

Start Here (Best First 6)

Ranked for protocol completeness (human signal, benchmark + metric anchors, quality controls, and judge/human overlap).
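
As a rough model of that ranking, here is a hypothetical completeness score awarding one point per explicit protocol ingredient; the weights and field names are illustrative, not the hub's actual scoring:

```python
# Hypothetical protocol-completeness score mirroring the criteria
# named above: human signal, benchmark + metric anchors, quality
# controls, and judge/human overlap.
def completeness_score(paper: dict) -> int:
    tags = set(paper.get("tags", []))
    return sum([
        bool(paper.get("human_feedback")),        # human signal
        bool(paper.get("benchmarks")),            # benchmark anchor
        bool(paper.get("metrics")),               # metric anchor
        bool(paper.get("quality_controls")),      # calibration/adjudication/IAA
        {"human_eval", "llm_as_judge"} <= tags,   # judge/human overlap
    ])

example = {"human_feedback": ["expert_verification"],
           "benchmarks": ["Re-Bench"], "metrics": ["success_rate"],
           "quality_controls": [], "tags": []}
print(completeness_score(example))  # 3: human signal plus both anchors
```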

Protocol Matrix (Top 12)

Use this to quickly compare protocol ingredients instead of scanning long prose.

Protocol Diff (Top Papers)

Fast side-by-side comparison for the highest-ranked papers in this hub.

| Signal | SODIUM: From Open Web Data to Queryable Databases | Measuring AI Ability to Complete Long Software Tasks |
| --- | --- | --- |
| Human Feedback | Expert Verification | Expert Verification |
| Evaluation Modes | Automatic Metrics | Automatic Metrics |
| Benchmarks | Sodium-Bench | Re-Bench |
| Metrics | Accuracy | Success rate |
| Quality Controls | Not reported | Not reported |
| Rater Population | Domain Experts | Domain Experts |
| Annotation Unit | Unknown | Unknown |
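
The same diff can be generated mechanically once both protocols are records; a minimal sketch, with the values transcribed from the table above:

```python
# Field-by-field protocol diff for the two top-ranked papers; values
# are transcribed from the table above.
FIELDS = ["human_feedback", "eval_modes", "benchmarks", "metrics",
          "quality_controls", "rater_population", "annotation_unit"]

sodium = {"human_feedback": "Expert Verification",
          "eval_modes": "Automatic Metrics",
          "benchmarks": "Sodium-Bench", "metrics": "Accuracy",
          "quality_controls": "Not reported",
          "rater_population": "Domain Experts",
          "annotation_unit": "Unknown"}
re_bench = {**sodium, "benchmarks": "Re-Bench", "metrics": "Success rate"}

for field in FIELDS:
    marker = "==" if sodium[field] == re_bench[field] else "!="
    print(f"{field:18} {sodium[field]:20} {marker} {re_bench[field]}")
```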

Suggested Reading Order

  1. Application-Driven Pedagogical Knowledge Optimization of Open-Source LLMs via Reinforcement Learning and Supervised Fine-Tuning

    Start here for detailed protocol reporting and quality-control evidence. Signals: automatic metrics + expert verification. Focus: accuracy. Abstract: We present an innovative multi-stage optimization strategy combining reinforcement learning…

  2. SODIUM: From Open Web Data to Queryable Databases

    Start here for detailed protocol reporting and quality-control evidence. Signals: automatic metrics + expert verification. Focus: Sodium-Bench / accuracy. Abstract: During research, domain experts often ask analytical questions.

  3. An Industrial-Scale Insurance LLM Achieving Verifiable Domain Mastery and Hallucination Control without Competence Trade-offs

    Start here for detailed protocol reporting and quality-control evidence. Signals: automatic metrics + expert verification. Focus: hallucination rate. Abstract: Adapting Large Language Models (LLMs) to high-stakes vertical domains…

  4. Measuring AI Ability to Complete Long Software Tasks

    Include a human-eval paper to calibrate against judge-based evaluation settings. Signals: automatic metrics + expert verification. Focus: Re-Bench / success rate. Abstract: Despite rapid progress on AI benchmarks, the real-world meaning of benchmark performance remains unclear.

  5. Evaluation of LLMs in retrieving food and nutritional context for RAG systems

    Adds automatic metrics with expert verification for broader protocol coverage within this hub. Signals: automatic metrics + expert verification. Focus: accuracy. Abstract: In this article, we evaluate four…

  6. An Expert Schema for Evaluating Large Language Model Errors in Scholarly Question-Answering Systems

    Adds automatic metrics with expert verification for broader protocol coverage within this hub. Signals: automatic metrics + expert verification. Focus: precision. Abstract: Large Language Models (LLMs) are transforming…

  7. LM-Lexicon: Improving Definition Modeling via Harmonizing Semantic Experts

    Adds automatic metrics with expert verification for broader protocol coverage within this hub. Signals: automatic metrics + expert verification. Focus: bleu. Abstract: We introduce LM-Lexicon, an innovative definition…

  8. Measuring Complexity at the Requirements Stage: Spectral Metrics as Development Effort Predictors

    Adds automatic metrics with expert verification for broader protocol coverage within this hub. Signals: automatic metrics + expert verification. Focus: cost. Abstract: Complexity in engineered systems presents one…


Known Limitations

  • No papers (0%) report quality controls; prioritize calibration/adjudication evidence.
  • Annotation unit is under-specified (10% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Snapshot

Human Feedback Mix

  • Expert Verification (10)
  • RLAIF or Synthetic Feedback (1)

Evaluation Modes

  • Automatic Metrics (10)

Top Benchmarks

  • Re-Bench (1)
  • Sodium-Bench (1)

Top Metrics

  • Accuracy (4)
  • Cost (2)
  • BLEU (1)
  • Hallucination rate (1)

Rater Population Mix

  • Domain Experts (10)

Quality Controls

None reported in this sample (0.0% coverage).

Coverage diagnostics (sample-based): human-feedback 100.0% · benchmarks 20.0% · metrics 90.0% · quality controls 0.0%.
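
Each diagnostic is just the share of sampled papers whose metadata populates a field; a minimal sketch, assuming the same hypothetical record shape as the earlier examples:

```python
# Sample-based coverage: fraction of papers with a non-empty value
# for a given metadata field. Field names are hypothetical.
def coverage(papers: list[dict], field: str) -> float:
    return sum(1 for p in papers if p.get(field)) / len(papers)

sample = [
    {"human_feedback": ["expert_verification"], "metrics": ["accuracy"]},
    {"human_feedback": ["expert_verification"], "metrics": ["cost"],
     "benchmarks": ["Re-Bench"]},
]
for field in ("human_feedback", "benchmarks", "metrics", "quality_controls"):
    print(f"{field}: {coverage(sample, field):.1%}")
```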

Top Papers

  • SODIUM: From Open Web Data to Queryable Databases

    Chuxuan Hu, Philip Li, Maxwell Yang, Daniel Kang · Mar 19, 2026 · Citations: 0

    Expert Verification · Automatic Metrics · Multi Agent

    Existing systems struggle with SODIUM tasks: we evaluate 6 advanced AI agents on SODIUM-Bench, with the strongest baseline achieving only 46.5% accuracy.

  • Measuring AI Ability to Complete Long Software Tasks

    Thomas Kwa, Ben West, Joel Becker, Amy Deng, Katharyn Garcia · Mar 18, 2025 · Citations: 0

    Expert Verification · Automatic Metrics · Tool Use

    Despite rapid progress on AI benchmarks, the real-world meaning of benchmark performance remains unclear.
