
HFEPX Hub

CS.SE + Automatic Metrics Papers

Updated from the current HFEPX corpus (Mar 8, 2026). 10 papers are grouped on this hub page. Common evaluation modes: Automatic Metrics. Most common rater population: Domain Experts. Common annotation unit: Trajectory. Frequently cited benchmark: SWE-bench. Common metric signal: pass@1. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Feb 25, 2026.

Papers: 10 · Last published: Feb 25, 2026
Tags: cs.SE, Automatic Metrics

Researcher Quick Triage

This hub is best used for protocol triage and replication planning from abstract-level evidence. Quality band: Developing.

High-Signal Coverage

100.0%

10 of 10 sampled papers are not flagged as low-signal.

Replication-Ready Set

2

Benchmark + metric + eval mode explicitly present.

Judge/Human Comparability

0

Papers containing both `human_eval` and `llm_as_judge`.

  • 2 papers are replication-ready (benchmark + metric + explicit evaluation mode).
  • 0 papers support judge-vs-human agreement analysis.
  • 0 papers report explicit quality controls (calibration/adjudication/IAA).

Primary action: Use this page for scouting only; collect additional papers before attempting replication-critical comparisons.

Currently showing only replication-ready papers in the ranking and matrix sections (2 papers); a minimal filter sketch follows.
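
The replication-ready flag used above is a simple conjunction over three metadata fields. A minimal sketch, assuming hypothetical per-paper records with `benchmarks`, `metrics`, and `eval_modes` lists (the hub's actual metadata schema is not shown on this page):

```python
# Minimal sketch of the replication-ready triage filter described above.
# Field names (title, benchmarks, metrics, eval_modes) are hypothetical;
# the hub's real metadata schema is not documented on this page.

papers = [
    {"title": "SWE-Protégé", "eval_modes": ["Automatic Metrics"],
     "benchmarks": ["SWE-bench", "SWE-bench Verified"], "metrics": ["pass@1", "latency"]},
    {"title": "ToolMATH", "eval_modes": ["Automatic Metrics"],
     "benchmarks": [], "metrics": ["coherence"]},
]

def replication_ready(paper: dict) -> bool:
    # Replication-ready = benchmark + metric + evaluation mode all explicitly present.
    return bool(paper["benchmarks"]) and bool(paper["metrics"]) and bool(paper["eval_modes"])

print([p["title"] for p in papers if replication_ready(p)])  # ['SWE-Protégé']
```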

Why This Matters For Eval Research

  • Automatic metrics appear in 100% of papers in this hub.
  • SWE-bench is a recurring benchmark anchor for cross-paper comparisons on this page.
  • Long-horizon tasks appear in 40% of papers, indicating demand for agentic evaluation.

Protocol Takeaways

  • Quality-control reporting is sparse in this slice; prioritize papers with explicit calibration or adjudication steps.
  • Raters are mostly domain experts, and annotation is commonly at the trajectory level; use this to scope replication staffing.
  • Stratify by benchmark (SWE-bench vs SWE-bench Verified) before comparing methods.

Benchmark Interpretation

  • SWE-bench appears in 20% of hub papers (2/10); use this cohort for benchmark-matched comparisons.
  • SWE-bench Verified appears in 20% of hub papers (2/10); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • pass@1 is reported in 20% of hub papers (2/10); compare with a secondary metric before ranking methods (a pass@1 computation sketch follows this list).
  • coherence is reported in 10% of hub papers (1/10); compare with a secondary metric before ranking methods.
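
For reference, a minimal sketch of the standard unbiased pass@k estimator (1 - C(n-c, k)/C(n, k), averaged over tasks). Note that papers in this hub may report pass@1 simply as the fraction of tasks solved on a single attempt, so check each paper's definition; the counts below are illustrative.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    # Probability that at least one of k sampled solutions passes,
    # given n samples per task of which c passed.
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical per-task (samples, correct) counts; the benchmark score is the mean.
tasks = [(10, 3), (10, 0), (10, 10)]
pass_at_1 = sum(pass_at_k(n, c, k=1) for n, c in tasks) / len(tasks)
print(f"pass@1 = {pass_at_1:.3f}")  # 0.433 for these illustrative counts
```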

Researcher Checklist

  • Gap: Papers with explicit human feedback

    Coverage is a replication risk (0% vs 45% target).

  • Gap: Papers reporting quality controls

    Coverage is a replication risk (0% vs 30% target).

  • Gap: Papers naming benchmarks/datasets

    Coverage is a replication risk (20% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (40% vs 35% target).

  • Gap: Papers with known rater population

    Coverage is a replication risk (10% vs 35% target).

  • Gap: Papers with known annotation unit

    Coverage is a replication risk (10% vs 35% target). A sketch of this coverage-vs-target check follows the checklist.
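
Each checklist row compares observed coverage against a target threshold. A minimal sketch of that check, with percentages copied from the rows above and key names chosen for illustration:

```python
# Coverage vs. target, as reported in the checklist above (values are fractions).
checklist = {
    "explicit human feedback": (0.00, 0.45),
    "quality controls":        (0.00, 0.30),
    "benchmarks/datasets":     (0.20, 0.35),
    "evaluation metrics":      (0.40, 0.35),
    "rater population":        (0.10, 0.35),
    "annotation unit":         (0.10, 0.35),
}

for field, (coverage, target) in checklist.items():
    label = "Strong" if coverage >= target else "Gap"
    print(f"{label:6s} {field}: {coverage:.0%} coverage vs {target:.0%} target")
```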

Strengths

  • Agentic evaluation appears in 40% of papers.

Known Gaps

  • Only 0% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (10% coverage).
  • Annotation unit is under-specified (10% coverage).

Suggested Next Analyses

  • Stratify by benchmark (SWE-bench vs SWE-bench Verified) before comparing methods; a stratification sketch follows this list.
  • Track metric sensitivity by reporting both pass@1 and coherence.
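
A minimal sketch of the benchmark stratification suggested above. The rows, field names, and scores are illustrative placeholders, not numbers taken from the papers:

```python
from collections import defaultdict

# Illustrative result rows; field names and scores are placeholders.
results = [
    {"method": "A", "benchmark": "SWE-bench",          "pass_at_1": 0.31},
    {"method": "B", "benchmark": "SWE-bench",          "pass_at_1": 0.28},
    {"method": "A", "benchmark": "SWE-bench Verified", "pass_at_1": 0.44},
    {"method": "B", "benchmark": "SWE-bench Verified", "pass_at_1": 0.47},
]

# Group by benchmark first so methods are only ranked within the same split.
by_benchmark = defaultdict(list)
for row in results:
    by_benchmark[row["benchmark"]].append(row)

for benchmark, rows in sorted(by_benchmark.items()):
    ranked = sorted(rows, key=lambda r: r["pass_at_1"], reverse=True)
    print(benchmark, [(r["method"], r["pass_at_1"]) for r in ranked])
```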

Recommended Queries

Start with These 3

Use these when you need one protocol anchor, one benchmark anchor, and one recent comparison point before reading the wider hub.

Start Here (Best First 6)

Ranked for protocol completeness (human signal, benchmark + metric anchors, quality controls, and judge/human overlap).

Protocol Matrix (Top 12)

Use this to quickly compare protocol ingredients instead of scanning long prose.

Protocol Diff (Top Papers)

Fast side-by-side comparison for the highest-ranked papers in this hub.

Signal | SWE-Protégé: Learning to Selectively Collaborate Wi… | EVALOOOP: A Self-Consistency-Centered Framework for…
Human Feedback | Not reported | Not reported
Evaluation Modes | Automatic Metrics | Automatic Metrics
Benchmarks | SWE Bench, SWE Bench Verified | MBPP+, DROP
Metrics | Pass@1, Latency | Accuracy, Pass@1
Quality Controls | Not reported | Not reported
Rater Population | Domain Experts | Unknown
Annotation Unit | Unknown | Unknown
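
A minimal sketch of the same diff in code, with the two records transcribed from the table above and dictionary keys chosen for illustration:

```python
# Protocol records transcribed from the diff table; keys are illustrative.
swe_protege = {
    "Human Feedback": "Not reported", "Evaluation Modes": "Automatic Metrics",
    "Benchmarks": "SWE Bench, SWE Bench Verified", "Metrics": "Pass@1, Latency",
    "Quality Controls": "Not reported", "Rater Population": "Domain Experts",
    "Annotation Unit": "Unknown",
}
evaloop = {
    "Human Feedback": "Not reported", "Evaluation Modes": "Automatic Metrics",
    "Benchmarks": "MBPP+, DROP", "Metrics": "Accuracy, Pass@1",
    "Quality Controls": "Not reported", "Rater Population": "Unknown",
    "Annotation Unit": "Unknown",
}

# Print only the signals on which the two protocols differ.
for signal in swe_protege:
    if swe_protege[signal] != evaloop[signal]:
        print(f"{signal}: {swe_protege[signal]!r} vs {evaloop[signal]!r}")
```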

Suggested Reading Order

Use "Start Here" above for a faster pass; this extended list adds broader protocol coverage within the hub.

  1. SkillCraft: Can LLM Agents Learn to Use Tools Skillfully?

    Start here for detailed protocol reporting and quality-control evidence. Signals: automatic metrics. Focus: success rate. Abstract: We further propose a lightweight evaluation protocol that enables agents to auto-compose…

  2. SWE-Protégé: Learning to Selectively Collaborate With an Expert Unlocks Small Language Models as Software Engineering Agents

    Start here for detailed protocol reporting and quality-control evidence. Signals: automatic metrics. Focus: SWE-bench / pass@1. Abstract: Small language models (SLMs) offer compelling advantages in cost, latency, and…

  3. Structurally Aligned Subtask-Level Memory for Software Engineering Agents

    Start here for detailed protocol reporting and quality-control evidence. Signals: automatic metrics. Focus: SWE-bench. Abstract: Large Language Models (LLMs) have demonstrated significant potential as autonomous software engineering (SWE)…

  4. ToolMATH: A Math Tool Benchmark for Realistic Long-Horizon Multi-Tool Reasoning

    Adds automatic metrics for broader protocol coverage within this hub. Signals: automatic metrics. Focus: coherence. Abstract: We introduce ToolMATH, a math-grounded benchmark that evaluates tool-augmented language models in…

  5. SpecMind: Cognitively Inspired, Interactive Multi-Turn Framework for Postcondition Inference

    Adds automatic metrics for broader protocol coverage within this hub. Signals: automatic metrics. Focus: accuracy. Abstract: Specifications are vital for ensuring program correctness, yet writing them manually remains…

  6. Exploring LLMs for User Story Extraction from Mockups

    Adds automatic metrics for broader protocol coverage within this hub. Signals: automatic metrics. Focus: accuracy. Abstract: User stories are one of the most widely used artifacts in the…

  7. The Invisible Hand of AI Libraries Shaping Open Source Projects and Communities

    Adds automatic metrics for broader protocol coverage within this hub. Signals: automatic metrics. Focus: relevance. Abstract: In the early 1980s, Open Source Software emerged as a revolutionary concept.

  8. Imitation Game: Reproducing Deep Learning Bugs Leveraging an Intelligent Agent

    Adds automatic metrics for broader protocol coverage within this hub. Signals: automatic metrics. Focus: success rate. Abstract: Despite their wide adoption in various domains (e.g., healthcare, finance, software…

Known Limitations

  • Only 0% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (10% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Snapshot

Human Feedback Mix

  • None reported (0% of sampled papers include explicit human feedback)

Evaluation Modes

  • Automatic Metrics (10)

Top Benchmarks

  • SWE Bench (2)
  • SWE Bench Verified (2)

Top Metrics

  • Pass@1 (2)
  • Coherence (1)
  • Cost (1)
  • Latency (1)

Rater Population Mix

  • Domain Experts (1)

Quality Controls

  • None reported (0% of sampled papers report quality controls)

Coverage diagnostics (sample-based): human-feedback 0.0% · benchmarks 30.0% · metrics 90.0% · quality controls 0.0%.
