
HFEPX Metric Hub

Faithfulness Metric Papers


Updated from the current HFEPX corpus (Apr 12, 2026). 24 papers are grouped on this metric page. Common evaluation modes: Automatic Metrics, Simulation Env. Most common rater population: Domain Experts. Common annotation unit: Ranking. Frequent quality control: Inter-Annotator Agreement Reported. Frequently cited benchmark: Deceptarena. Common metric signal: faithfulness. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Mar 20, 2026.

Papers: 24 · Last published: Mar 20, 2026

When This Metric Page Is Useful

Useful for background comparison, but still validate benchmark and protocol details in the linked papers. Quality band: Medium.

Metric Coverage

45.8%

11 of the 24 sampled papers include metric names.

Benchmark Anchoring

12.5%

3 of the 24 papers include explicit dataset/benchmark anchors for fair comparison.

Quality Controls

4.2%

1 of the 24 papers reports calibration/adjudication/IAA controls.

  • None of the 24 papers in this sample is flagged as low-signal.
  • Use the protocol matrix below to avoid comparing metrics across incompatible eval setups; a minimal cohort-filtering sketch follows this list.
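
As a rough illustration of that filtering step, the sketch below groups protocol-matrix rows by benchmark and eval-mode combination and only compares papers within the same cohort. The dictionary layout and the abbreviated paper titles are assumptions for illustration; the hub does not export rows in this format.

```python
# Sketch: group protocol-matrix rows into comparable cohorts before comparing metrics.
# Row layout and abbreviated titles are assumptions; the hub does not export this format.
from collections import defaultdict

rows = [
    {"paper": "LLM-as-a-Judge for Time Series Explanations",
     "benchmark": "DROP", "eval_modes": ["llm_as_judge", "automatic_metrics"]},
    {"paper": "DeceptGuard",
     "benchmark": "Deceptarena", "eval_modes": ["automatic_metrics", "simulation_env"]},
    {"paper": "Reason and Verify",
     "benchmark": None, "eval_modes": ["automatic_metrics"]},
]

cohorts = defaultdict(list)
for row in rows:
    if row["benchmark"] is None:
        # Papers without a benchmark anchor cannot join a benchmark-matched cohort.
        continue
    cohorts[(row["benchmark"], frozenset(row["eval_modes"]))].append(row["paper"])

for (benchmark, modes), papers in cohorts.items():
    print(f"{benchmark} + {sorted(modes)}: {papers}")
```

Papers whose benchmark is "Not reported" fall out of every cohort, which is the practical cost of the thin benchmark anchoring noted above.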

Recommended next step: Treat this as directional signal only; metric reporting is present but benchmark anchoring is still thin.

Main limitation: Benchmark coverage is still thin, so avoid treating this page as a definitive guide to the metric.

What This Metric Page Tells You


  • 45.5% of papers report explicit human-feedback signals, led by pairwise preferences.
  • automatic metrics appear in 41.7% of papers in this hub (10/24).
  • Deceptarena is a recurring benchmark anchor for cross-paper comparisons on this page.

Metric Notes (Expanded)

Metric-Driven Protocol Takeaways

  • The most common quality-control signal is inter-annotator agreement reporting (4.2% of papers).
  • Rater context is mostly domain experts, and the most common annotation unit is ranking; use this to scope replication staffing.
  • Compare papers that report both human_eval and llm_as_judge to quantify judge-human agreement drift; a minimal agreement sketch follows this list.
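
A minimal sketch of that drift check, assuming you already have per-item labels from a human_eval pass and an llm_as_judge pass (the labels below are invented placeholders): Cohen's kappa gives a chance-corrected agreement score, and tracking it across benchmarks or over time surfaces drift.

```python
# Sketch: chance-corrected agreement between human_eval and llm_as_judge labels.
# The label lists are invented placeholders; substitute per-item judgments from a real run.
from collections import Counter

human_eval   = ["faithful", "unfaithful", "faithful", "faithful", "unfaithful", "faithful"]
llm_as_judge = ["faithful", "faithful",   "faithful", "faithful", "unfaithful", "unfaithful"]

def cohen_kappa(a, b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    expected = sum(counts_a[l] * counts_b[l] for l in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

print(f"judge-human kappa: {cohen_kappa(human_eval, llm_as_judge):.2f}")
```

A kappa that sinks as items get harder or as the benchmark changes is the judge-human agreement drift the bullet above refers to.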

Metric Interpretation

  • faithfulness is reported in all 11 metric-reporting papers (11 of the 24 hub papers name any metric); compare with a secondary metric before ranking methods.
  • accuracy is reported in 54.5% of metric-reporting papers (6 of 11); compare with a secondary metric before ranking methods. A small ranking-comparison sketch follows this list.
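
As a sketch of that secondary-metric check (method names and scores below are placeholders, not results from any paper in this hub), rank candidate methods under faithfulness and under accuracy and measure how much the two orderings disagree before committing to a single leaderboard.

```python
# Sketch: compare method rankings under faithfulness vs. a secondary metric (accuracy).
# All scores are placeholders; substitute values reported by the papers you compare.
faithfulness = {"method_a": 0.81, "method_b": 0.74, "method_c": 0.69, "method_d": 0.66}
accuracy     = {"method_a": 0.62, "method_b": 0.71, "method_c": 0.58, "method_d": 0.65}

def ranking(scores):
    """Method names ordered from best to worst score."""
    return [m for m, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)]

def kendall_tau(order_a, order_b):
    """Kendall rank correlation between two orderings of the same (untied) methods."""
    pos = {m: i for i, m in enumerate(order_b)}
    pairs = [(x, y) for i, x in enumerate(order_a) for y in order_a[i + 1:]]
    concordant = sum(pos[x] < pos[y] for x, y in pairs)
    return (2 * concordant - len(pairs)) / len(pairs)

r_faith, r_acc = ranking(faithfulness), ranking(accuracy)
print("faithfulness ranking:", r_faith)
print("accuracy ranking:    ", r_acc)
print(f"Kendall tau between rankings: {kendall_tau(r_faith, r_acc):.2f}")
```

A tau near 1 means the secondary metric would not change the ranking; a tau near 0 or below means the choice of metric is doing real work.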

Benchmark Context

  • Deceptarena appears in 1 of the 24 hub papers; use this anchor for benchmark-matched comparisons.
  • DROP appears in 1 of the 24 hub papers; use this anchor for benchmark-matched comparisons.

Start Here (Metric-Reliable First 6)

Ranked for metric reporting completeness and comparability.

Metric Protocol Matrix (Top 10)

Compare metric, benchmark, and evaluation context side by side.

Each entry lists the paper, its publication date, and the reported metrics, benchmarks, eval modes, and quality controls.

  • Measuring Faithfulness Depends on How You Measure: Classifier Sensitivity in LLM Chain-of-Thought Evaluation (Mar 20, 2026)
    Metrics: Kappa, Faithfulness · Benchmarks: Not reported · Eval Modes: Automatic Metrics · Quality Controls: Inter-Annotator Agreement Reported
  • PaperBanana: Automating Academic Illustration for AI Scientists (Jan 30, 2026)
    Metrics: Faithfulness, Conciseness · Benchmarks: Paperbananabench · Eval Modes: Automatic Metrics · Quality Controls: Not reported
  • LLM-as-a-Judge for Time Series Explanations (Apr 2, 2026)
    Metrics: Accuracy, Faithfulness · Benchmarks: DROP · Eval Modes: LLM as Judge, Automatic Metrics · Quality Controls: Not reported
  • DeceptGuard: A Constitutional Oversight Framework for Detecting Deception in LLM Agents (Mar 14, 2026)
    Metrics: Faithfulness · Benchmarks: Deceptarena · Eval Modes: Automatic Metrics, Simulation Env · Quality Controls: Not reported
  • PONTE: Personalized Orchestration for Natural Language Trustworthy Explanations (Mar 6, 2026)
    Metrics: Agreement, Faithfulness · Benchmarks: Not reported · Eval Modes: Human Eval · Quality Controls: Not reported
  • Reason and Verify: A Framework for Faithful Retrieval-Augmented Generation (Mar 10, 2026)
    Metrics: Accuracy, Faithfulness · Benchmarks: Not reported · Eval Modes: Automatic Metrics · Quality Controls: Not reported
  • From Evidence-Based Medicine to Knowledge Graph: Retrieval-Augmented Generation for Sports Rehabilitation and a Domain Benchmark (Jan 1, 2026)
    Metrics: Accuracy, Recall · Benchmarks: Not reported · Eval Modes: Automatic Metrics · Quality Controls: Not reported
  • Verify Before You Commit: Towards Faithful Reasoning in LLM Agents via Self-Auditing (Apr 9, 2026)
    Metrics: Faithfulness · Benchmarks: Not reported · Eval Modes: Automatic Metrics · Quality Controls: Not reported
  • Replayable Financial Agents: A Determinism-Faithfulness Assurance Harness for Tool-Using LLM Agents (Jan 17, 2026)
    Metrics: Accuracy, Cost · Benchmarks: Not reported · Eval Modes: Automatic Metrics · Quality Controls: Not reported
  • Counterfactual Simulation Training for Chain-of-Thought Faithfulness (Feb 24, 2026)
    Metrics: Accuracy, Faithfulness · Benchmarks: Not reported · Eval Modes: Automatic Metrics, Simulation Env · Quality Controls: Not reported

How To Use This Page

Checklist

  • Strong: Papers with explicit human feedback

    Coverage is strong (45.5% vs 45% target).

  • Gap: Papers reporting quality controls

    Coverage is a replication risk (9.1% vs 30% target).

  • Moderate: Papers naming benchmarks/datasets

    Coverage is usable but incomplete (27.3% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (100% vs 35% target).

  • Moderate: Papers with known rater population

    Coverage is usable but incomplete (27.3% vs 35% target).

  • Strong: Papers with known annotation unit

    Coverage is strong (45.5% vs 35% target).

Strengths

  • Strong human-feedback signal (45.5% of papers).
  • Contains both human-eval and LLM-as-judge protocols for head-to-head methodology comparison.
  • Agentic evaluation appears in 36.4% of papers.

Known Gaps

  • Only 9.1% of papers report quality controls; prioritize calibration/adjudication evidence.

Suggested Next Analyses

  • Compare papers that report both human_eval and llm_as_judge to quantify judge-human agreement drift.
  • Stratify by benchmark (Deceptarena vs DROP) before comparing methods.
  • Track metric sensitivity by reporting both faithfulness and accuracy.

Known Limitations

  • Only 9.1% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.
  • Cross-page comparisons should be benchmark- and metric-matched to avoid protocol confounding.

Coverage Snapshot

Top Metrics

  • Faithfulness (11)
  • Accuracy (6)
  • Agreement (2)
  • Conciseness (1)

Evaluation Modes

  • Automatic Metrics (10)
  • Simulation Env (2)
  • Human Eval (1)
  • Llm As Judge (1)

Top Benchmarks

  • Deceptarena (1)
  • DROP (1)
  • Paperbananabench (1)

Agentic Mix

  • Long Horizon (3)
  • Multi Agent (1)
