
Metric Hub

F1 + General Metric Papers

Updated from the current HFEPX corpus (Feb 27, 2026). This metric page groups 20 papers. Common evaluation modes: Automatic Metrics, Human Eval. Most common rater population: Domain Experts. Common annotation unit: Scalar. Frequent quality control: Inter-Annotator Agreement Reported. Frequently cited benchmark: Retrieval. Common metric signal: f1. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Feb 26, 2026.
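Inter-annotator agreement is the most frequent quality control in this cohort, so a quick agreement check is worth having on hand. A minimal sketch using scikit-learn's cohen_kappa_score, assuming two raters and binary labels; the label arrays are hypothetical, not drawn from any hub paper:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary labels from two raters over the same ten items.
rater_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

# Cohen's kappa corrects raw percent agreement for chance agreement;
# values above ~0.6 are conventionally read as substantial agreement.
kappa = cohen_kappa_score(rater_a, rater_b)
print(f"Cohen's kappa: {kappa:.3f}")
```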

Papers: 20 · Last published: Feb 26, 2026

Research Narrative

Grounded narrative · Model: deterministic-grounded · Source: persisted

Updated from the current HFEPX corpus (Feb 27, 2026). This page tracks 20 papers for F1 + General Metric Papers. Dominant protocol signals include automatic metrics, human evaluation, and simulation environments, with frequent benchmark focus on Retrieval and BrowseComp and metric focus on f1 and accuracy. Use the grounded sections below to prioritize reproducible protocol choices, benchmark-matched comparisons, and judge-vs-human evaluation checks.

Why This Matters For Eval Research

Protocol Takeaways

Benchmark Interpretation

  • Retrieval appears in 20% of hub papers (4/20); use this cohort for benchmark-matched comparisons.
  • BrowseComp appears in 5% of hub papers (1/20); use this cohort for benchmark-matched comparisons (a cohort-filtering sketch follows this list).
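Benchmark-matched comparison simply means filtering the cohort to papers that share a benchmark before contrasting their scores. A minimal sketch of that filter, assuming a hypothetical list of paper records; the field names and scores are illustrative, not the hub's export schema:

```python
# Hypothetical paper records; field names and values are illustrative only.
papers = [
    {"title": "Paper A", "benchmarks": ["Retrieval"], "f1": 0.81},
    {"title": "Paper B", "benchmarks": ["BrowseComp"], "f1": 0.77},
    {"title": "Paper C", "benchmarks": ["Retrieval"], "f1": 0.74},
]

def benchmark_cohort(papers, benchmark):
    """Return only the papers evaluated on the given benchmark."""
    return [p for p in papers if benchmark in p["benchmarks"]]

# Compare F1 only inside the Retrieval cohort, never across benchmarks.
for paper in benchmark_cohort(papers, "Retrieval"):
    print(paper["title"], paper["f1"])
```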

Metric Interpretation

  • f1 is reported in 100% of hub papers (20/20); compare with a secondary metric before ranking methods.
  • accuracy is reported in 40% of hub papers (8/20); compare with a secondary metric before ranking methods (see the sketch after this list).
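F1 is universal in this hub but can diverge from accuracy under class imbalance, so report both before ranking. A minimal sketch with scikit-learn; the gold labels and predictions are hypothetical:

```python
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical gold labels and predictions for a binary task.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]

# Rank methods only when the ordering is stable across both metrics.
print(f"F1:       {f1_score(y_true, y_pred):.3f}")
print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")
```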

Researcher Checklist

  • Close gap on Papers with explicit human feedback. Coverage is a replication risk (5% vs 45% target).
  • Close gap on Papers reporting quality controls. Coverage is a replication risk (15% vs 30% target).
  • Tighten coverage on Papers naming benchmarks/datasets. Coverage is usable but incomplete (25% vs 35% target).
  • Maintain strength on Papers naming evaluation metrics. Coverage is strong (100% vs 35% target).
  • Close gap on Papers with known rater population. Coverage is a replication risk (5% vs 35% target).
  • Close gap on Papers with known annotation unit. Coverage is a replication risk (10% vs 35% target); the flagging logic behind these labels is sketched after this list.
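The checklist reduces to a coverage-versus-target comparison. A minimal sketch of the flagging logic using the percentages above; note the half-target cutoff separating "usable but incomplete" from "replication risk" is an assumption reverse-engineered from the labels, not a rule the hub documents:

```python
# (coverage %, target %) per reporting dimension, from the checklist above.
dimensions = {
    "explicit human feedback": (5, 45),
    "quality controls": (15, 30),
    "benchmarks/datasets named": (25, 35),
    "evaluation metrics named": (100, 35),
    "rater population known": (5, 35),
    "annotation unit known": (10, 35),
}

def flag(coverage: float, target: float) -> str:
    """Map coverage onto the page's three labels.

    The half-target cutoff is an assumed heuristic, not a documented rule.
    """
    if coverage >= target:
        return "strong"
    if coverage > target / 2:
        return "usable but incomplete"
    return "replication risk"

for name, (coverage, target) in dimensions.items():
    print(f"{name}: {coverage}% vs {target}% target -> {flag(coverage, target)}")
```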


Suggested Reading Order

  1. A Mixture-of-Experts Model for Multimodal Emotion Recognition in Conversations

    Start here for detailed protocol reporting, including rater and quality-control evidence.

  2. Improving Neural Argumentative Stance Classification in Controversial Topics with Emotion-Lexicon Features

    Also offers detailed protocol reporting, including rater and quality-control evidence.

  3. Probing for Knowledge Attribution in Large Language Models

    Also offers detailed protocol reporting, including rater and quality-control evidence.

  4. Distill and Align Decomposition for Enhanced Claim Verification

    Include a human-eval paper to anchor calibration against automated judge settings.

  5. Peeking inside the Black-Box: Reinforcement Learning for Explainable and Accurate Relation Extraction

    Another human-eval paper for calibrating against automated judge settings.

  6. A Fusion of context-aware based BanglaBERT and Two-Layer Stacked LSTM Framework for Multi-Label Cyberbullying Detection

    Adds automatic metrics for broader coverage within this hub.

  7. Voices of the Mountains: Deep Learning-Based Vocal Error Detection System for Kurdish Maqams

    Adds automatic metrics for broader coverage within this hub.

  8. How to Train Your Deep Research Agent? Prompt, Reward, and Policy Optimization in Search-R1

    Adds automatic metrics for broader coverage within this hub.

Known Limitations

  • Only 15% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (5% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Links

human_eval vs automatic_metrics

both=2, left_only=0, right_only=18

2 papers use both Human Eval and Automatic Metrics.

automatic_metrics vs simulation_env

both=2, left_only=18, right_only=0

2 papers use both Automatic Metrics and Simulation Env.

human_eval vs simulation_env

both=0, left_only=2, right_only=2

0 papers use both Human Eval and Simulation Env.
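Each pairing above is a plain set overlap over paper IDs. A minimal sketch reproducing the first pairing's counts, with hypothetical IDs standing in for the hub's papers:

```python
# Hypothetical paper IDs carrying each protocol tag.
human_eval = {"p01", "p07"}
automatic_metrics = {f"p{i:02d}" for i in range(1, 21)}  # all 20 papers

def overlap(left, right):
    """Return the both/left_only/right_only counts shown on this page."""
    return {
        "both": len(left & right),
        "left_only": len(left - right),
        "right_only": len(right - left),
    }

print(overlap(human_eval, automatic_metrics))
# -> {'both': 2, 'left_only': 0, 'right_only': 18}
```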

Benchmark Brief

BrowseComp

Coverage: 1 paper (5%)

1 paper (5%) mentions BrowseComp.

Examples: Hybrid Deep Searcher: Scalable Parallel and Sequential Search Reasoning

Benchmark Brief

HotpotQA

Coverage: 1 paper (5%)

1 paper (5%) mentions HotpotQA.

Examples: RELOOP: Recursive Retrieval with Multi-Hop Reasoner and Planners for Heterogeneous QA
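The coverage figures in these briefs are mention counts over the corpus. A minimal sketch of how such counts can be derived, assuming hypothetical paper records with an abstract field; the real hub's matching logic may differ:

```python
# Hypothetical records; the real hub scans corpus metadata, not this list.
papers = [
    {"title": "Hybrid Deep Searcher", "abstract": "evaluated on BrowseComp"},
    {"title": "RELOOP", "abstract": "multi-hop QA over HotpotQA"},
    {"title": "Paper C", "abstract": "a dense retrieval pipeline"},
]

def benchmark_coverage(papers, benchmark):
    """Count papers whose abstract mentions the benchmark name."""
    hits = [p["title"] for p in papers if benchmark.lower() in p["abstract"].lower()]
    return len(hits), len(hits) / len(papers), hits

count, share, examples = benchmark_coverage(papers, "HotpotQA")
print(f"{count} paper(s) ({share:.0%}) mention HotpotQA. Examples: {examples}")
```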

Top Papers Reporting This Metric

Other Metric Hubs