HFEPX Hub

Math Papers

Updated from the current HFEPX corpus (Apr 12, 2026); 78 papers are grouped on this hub page. Common evaluation modes: Automatic Metrics, Simulation Env. Most common rater population: Domain Experts. Common annotation unit: Trajectory. Frequent quality control: Calibration. Frequently cited benchmark: GSM8K. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Jul 15, 2025.

Papers: 78 · Last published: Jul 15, 2025 · Tag: Math

Researcher Quick Triage

This hub is best used for protocol triage and replication planning from abstract-level evidence. Quality band: High.

Analysis blocks below are computed from the currently loaded sample (60 of 78 total papers in this hub).

High-Signal Coverage

100.0%

60 / 60 sampled papers are not flagged as low-signal.

Replication-Ready Set

19

Benchmark + metric + eval mode explicitly present.

Judge/Human Comparability

0

Papers containing both `human_eval` and `llm_as_judge`.

  • 19 papers are replication-ready (benchmark + metric + explicit evaluation mode).
  • 0 papers support judge-vs-human agreement analysis.
  • 3 papers report explicit quality controls (calibration/adjudication/IAA).

Primary action: Start with the top 2 papers in “Start Here”, then validate assumptions in the protocol matrix.
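
The triage counts above can be reproduced from per-paper metadata with a filter along the lines of the minimal sketch below; the field names (`benchmarks`, `metrics`, `eval_modes`, `quality_controls`) are assumptions, not the hub's actual schema.

```python
# Minimal sketch, not the hub's pipeline: recompute the triage counts from
# per-paper metadata dicts. All field names here are assumed, not documented.
papers = [
    {"title": "Paper A", "benchmarks": ["GSM8K"], "metrics": ["accuracy"],
     "eval_modes": ["automatic_metrics"], "quality_controls": ["calibration"]},
    {"title": "Paper B", "benchmarks": [], "metrics": ["cost"],
     "eval_modes": ["human_eval", "llm_as_judge"], "quality_controls": []},
]

def replication_ready(paper):
    # Replication-ready per this hub: benchmark + metric + explicit eval mode.
    return bool(paper["benchmarks"] and paper["metrics"] and paper["eval_modes"])

def judge_human_comparable(paper):
    # Judge/human comparability: both human_eval and llm_as_judge are present.
    return {"human_eval", "llm_as_judge"} <= set(paper["eval_modes"])

print(sum(replication_ready(p) for p in papers))       # -> 1
print(sum(judge_human_comparable(p) for p in papers))  # -> 1
```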

Why This Matters For Eval Research

  • 53.8% of papers report explicit human-feedback signals, led by pairwise preferences.
  • Automatic metrics appear in 60.3% of papers in this hub.
  • GSM8K is a recurring benchmark anchor for cross-paper comparisons in this page.

Protocol Takeaways

  • The most common quality-control signal is rater calibration (2.6% of papers).
  • Raters are mostly domain experts and annotation is commonly at the trajectory level; use this to scope replication staffing.
  • Compare papers that report both human_eval and llm_as_judge to quantify judge-human agreement drift.

Benchmark Interpretation

  • GSM8K appears in 16.7% of hub papers (13/78); use this cohort for benchmark-matched comparisons.
  • LiveCodeBench appears in 5.1% of hub papers (4/78); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • Accuracy is reported in 41.0% of hub papers (32/78); compare with a secondary metric before ranking methods.
  • Cost is reported in 17.9% of hub papers (14/78); compare with a secondary metric before ranking methods.

Researcher Checklist

  • Strong: Papers with explicit human feedback

    Coverage is strong (53.8% vs 45% target).

  • Gap: Papers reporting quality controls

    Coverage is a replication risk (3.8% vs 30% target).

  • Strong: Papers naming benchmarks/datasets

    Coverage is strong (35.9% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (61.5% vs 35% target).

  • Gap: Papers with known rater population

    Coverage is a replication risk (14.1% vs 35% target).

  • Moderate: Papers with known annotation unit

    Coverage is usable but incomplete (32.1% vs 35% target).
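
The Strong/Moderate/Gap labels above compare coverage against a per-item target, but the exact banding thresholds are not published. The sketch below uses an assumed rule (Strong at or above target, Moderate within 75% of target, otherwise Gap) that happens to reproduce the labels shown; treat the 0.75 ratio as an illustrative guess.

```python
# Hedged sketch: one banding rule consistent with the checklist labels above.
# The 0.75 "moderate" ratio is an assumption, not documented by the hub.
def coverage_band(coverage_pct, target_pct, moderate_ratio=0.75):
    if coverage_pct >= target_pct:
        return "Strong"
    if coverage_pct >= moderate_ratio * target_pct:
        return "Moderate"
    return "Gap"

print(coverage_band(53.8, 45))  # Strong   (explicit human feedback)
print(coverage_band(32.1, 35))  # Moderate (annotation unit)
print(coverage_band(14.1, 35))  # Gap      (rater population)
```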

Strengths

  • Strong human-feedback signal (53.8% of papers).
  • Many papers provide measurable evaluation context (35.9% name benchmarks, 61.5% name metrics).
  • Contains both human-eval and LLM-as-judge protocols (in separate papers) for head-to-head methodology comparison.

Known Gaps

  • Only 3.8% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (14.1% coverage).
  • LLM-as-judge appears without enough inter-annotator agreement reporting.

Suggested Next Analyses

  • Compare papers that report both human_eval and llm_as_judge to quantify judge-human agreement drift.
  • Stratify by benchmark (GSM8K vs LiveCodeBench) before comparing methods.
  • Track metric sensitivity by reporting both accuracy and cost.
  • Add inter-annotator agreement checks when reproducing these protocols (a minimal agreement sketch follows this list).
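
For the agreement analyses suggested above, a chance-corrected statistic such as Cohen's kappa is a reasonable starting point. The sketch below is a minimal example on hypothetical pass/fail verdicts, not a procedure taken from any paper in this hub.

```python
# Minimal sketch: Cohen's kappa between two label sources, e.g. human_eval
# verdicts vs. llm_as_judge verdicts on the same items. Data is hypothetical.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    if expected == 1.0:  # both raters used a single identical label everywhere
        return 1.0
    return (observed - expected) / (1 - expected)

# Example: judge vs. human verdicts on ten trajectories (hypothetical data).
human = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail", "pass", "fail"]
judge = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass", "pass", "fail"]
print(round(cohens_kappa(human, judge), 3))  # ~0.565
```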

Recommended Queries

Start with These 3

Use these when you need one protocol anchor, one benchmark anchor, and one recent comparison point before reading the wider hub.

Start Here (Best First 6)

Ranked for protocol completeness (human signal, benchmark + metric anchors, quality controls, and judge/human overlap).
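
The hub does not publish its ranking formula, but a protocol-completeness score along the lines below would be consistent with the criteria just listed; the weights and field names are illustrative assumptions only.

```python
# Hedged sketch of a protocol-completeness score; weights are assumptions.
def protocol_completeness(paper):
    modes = set(paper.get("eval_modes", []))
    return (
        2 * bool(paper.get("human_feedback"))            # explicit human signal
        + 1 * bool(paper.get("benchmarks"))              # benchmark anchor
        + 1 * bool(paper.get("metrics"))                 # metric anchor
        + 2 * bool(paper.get("quality_controls"))        # calibration/adjudication/IAA
        + 2 * ({"human_eval", "llm_as_judge"} <= modes)  # judge/human overlap
    )

# Rank any list of paper-metadata dicts (same assumed schema as the earlier sketch).
example = [{"human_feedback": True, "benchmarks": ["GSM8K"], "metrics": ["accuracy"],
            "eval_modes": ["automatic_metrics"], "quality_controls": []}]
ranked = sorted(example, key=protocol_completeness, reverse=True)
```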

Protocol Matrix (Top 12)

Use this to quickly compare protocol ingredients instead of scanning long prose.

  • Stabilizing Iterative Self-Training with Verified Reasoning via Symbolic Recursive Self-Alignment (Mar 23, 2026). HF Signal: Yes · Eval Modes: Automatic Metrics · Benchmarks: GSM8K · Metrics: Accuracy · QC: Not Reported
  • PAVE: Premise-Aware Validation and Editing for Retrieval-Augmented LLMs (Mar 21, 2026). HF Signal: Yes · Eval Modes: Automatic Metrics · Benchmarks: Post Retrieval · Metrics: Accuracy · QC: Not Reported
  • Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters (Feb 11, 2026). HF Signal: Yes · Eval Modes: Not Reported · Benchmarks: LiveCodeBench, BrowseComp · Metrics: Latency, Cost · QC: Not Reported
  • $V_1$: Unifying Generation and Self-Verification for Parallel Reasoners (Mar 4, 2026). HF Signal: Yes · Eval Modes: Automatic Metrics · Benchmarks: SWE Bench, AIME · Metrics: Pass@1 · QC: Not Reported
  • Duel-Evolve: Reward-Free Test-Time Scaling via LLM Self-Preferences (Feb 25, 2026). HF Signal: Yes · Eval Modes: Automatic Metrics · Benchmarks: LiveCodeBench, Mathbench · Metrics: Accuracy · QC: Not Reported
  • Entropy trajectory shape predicts LLM reasoning reliability: A diagnostic study of uncertainty dynamics in chain-of-thought (Mar 19, 2026). HF Signal: No · Eval Modes: Automatic Metrics · Benchmarks: GSM8K · Metrics: Accuracy, Calibration error · QC: Calibration
  • Team of Thoughts: Efficient Test-time Scaling of Agentic Systems through Orchestrated Tool Calling (Feb 18, 2026). HF Signal: No · Eval Modes: Automatic Metrics · Benchmarks: LiveCodeBench · Metrics: Accuracy · QC: Calibration
  • GIFT: Group-Relative Implicit Fine-Tuning Integrates GRPO with DPO and UNA (Oct 27, 2025). HF Signal: Yes · Eval Modes: Automatic Metrics · Benchmarks: LMSYS Chatbot Arena, GSM8K · Metrics: MSE · QC: Not Reported
  • Let's Think in Two Steps: Mitigating Agreement Bias in MLLMs with Self-Grounded Verification (Jul 15, 2025). HF Signal: Yes · Eval Modes: Automatic Metrics, Simulation Env · Benchmarks: VisualWebArena, OSWorld · Metrics: Accuracy · QC: Not Reported
  • RASPRef: Retrieval-Augmented Self-Supervised Prompt Refinement for Large Reasoning Models (Mar 27, 2026). HF Signal: Yes · Eval Modes: Not Reported · Benchmarks: GSM8K · Metrics: Not Reported · QC: Not Reported
  • FOR-Prompting: From Objection to Revision via an Asymmetric Prompting Protocol (Oct 2, 2025). HF Signal: Yes · Eval Modes: Automatic Metrics · Benchmarks: GSM8K · Metrics: Accuracy · QC: Not Reported
  • TARo: Token-level Adaptive Routing for LLM Test-time Alignment (Mar 19, 2026). HF Signal: Yes · Eval Modes: Not Reported · Benchmarks: AlpacaEval · Metrics: Not Reported · QC: Not Reported

Protocol Diff (Top Papers)

Fast side-by-side comparison for the highest-ranked papers in this hub.

Papers compared, in order: (1) Stabilizing Iterative Self-Training with Verified Reasoning via Symbolic Recursive Self-Alignment, (2) PAVE: Premise-Aware Validation and Editing for Retrieval-Augmented LLMs, (3) Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters.

  • Human Feedback: Pairwise Preference | Critique Edit | Pairwise Preference
  • Evaluation Modes: Automatic Metrics | Automatic Metrics | Not reported
  • Benchmarks: GSM8K | Post Retrieval | LiveCodeBench, BrowseComp
  • Metrics: Accuracy | Accuracy | Latency, Cost
  • Quality Controls: Not reported | Not reported | Not reported
  • Rater Population: Unknown | Unknown | Domain Experts
  • Annotation Unit: Unknown | Unknown | Unknown
Suggested Reading Order

Use "Start Here" above for a faster pass; this section gives the extended reading order.

  1. Don't Overthink It: Inter-Rollout Action Agreement as a Free Adaptive-Compute Signal for LLM Agents

    Start here for detailed protocol reporting and quality-control evidence. Signals: automatic metrics. Focus: GSM8K / accuracy. Abstract excerpt: Inference-time compute scaling has emerged as a powerful technique for improving…

  2. Think$^{2}$: Grounded Metacognitive Reasoning in Large Language Models

    Include a human-eval paper to calibrate against judge-based evaluation settings. Signals: human evaluation + pairwise preferences. Focus: GSM8K. Abstract excerpt: Blinded human evaluations over 580 query pairs show an…

  3. Let's Think in Two Steps: Mitigating Agreement Bias in MLLMs with Self-Grounded Verification

    Include a human-eval paper to calibrate against judge-based evaluation settings. Signals: automatic metrics + pairwise preferences. Focus: VisualWebArena / accuracy. Abstract excerpt: Multimodal LLMs (MLLMs) offer a promising solution…

  4. Does LLM Alignment Really Need Diversity? An Empirical Study of Adapting RLVR Methods for Moral Reasoning

    Include an LLM-as-judge paper to test judge design and agreement assumptions. Signals: LLM-as-judge + rubric ratings. Focus: Morebench. Abstract excerpt: To enable stable RLVR training, we build a rubric-grounded…

  5. RuleForge: Automated Generation and Validation for Web Vulnerability Detection at Scale

    Include an LLM-as-judge paper to test judge design and agreement assumptions. Signals: LLM-as-judge + expert verification. Focus: auroc. Abstract excerpt: This paper focuses on RuleForge's architecture and operational deployment.

  6. Step 3.5 Flash: Open Frontier-Level Intelligence with 11B Active Parameters

    Adds evaluation-protocol evidence with pairwise preferences, broadening protocol coverage within this hub. Signals: pairwise preferences. Focus: LiveCodeBench / latency. Abstract excerpt: To reach frontier-level intelligence, we design…

Known Limitations

  • Only 3.8% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (14.1% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Snapshot

Human Feedback Mix

  • Pairwise Preference (19)
  • Critique Edit (9)
  • Expert Verification (5)
  • Rubric Rating (5)

Evaluation Modes

  • Automatic Metrics (47)
  • Simulation Env (7)
  • Human Eval (3)
  • LLM-as-Judge (3)

Top Benchmarks

  • GSM8K (13)
  • LiveCodeBench (4)
  • AIME (3)
  • MATH 500 (3)

Top Metrics

  • Accuracy (32)
  • Cost (14)
  • Coherence (5)
  • Inference cost (4)

Rater Population Mix

  • Domain Experts (10)
  • Mixed (1)

Quality Controls

  • Calibration (2)
  • Gold Questions (1)

Coverage diagnostics (sample-based): human-feedback 56.7% · benchmarks 43.3% · metrics 68.3% · quality controls 5.0%.
