
HFEPX Hub

Human Eval + Coding Papers


Updated from the current HFEPX corpus (Apr 27, 2026). 13 papers are grouped on this hub page. Common evaluation modes: Human Eval, Automatic Metrics. Most common rater population: Domain Experts. Most common annotation unit: Multi Dim Rubric. Most frequent quality control: Adjudication. Most frequently cited benchmark: APPS. Most common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Mar 31, 2026.

Papers: 13 · Last published: Mar 31, 2026
Tags: Human Eval · Coding

Researcher Quick Triage

This hub is best used for protocol triage and replication planning from abstract-level evidence. Quality band: Developing.

High-Signal Coverage

100.0%

13 / 13 sampled papers are not flagged as low-signal.

Replication-Ready Set

0

Benchmark + metric + eval mode explicitly present.

Judge/Human Comparability

1

Papers containing both `human_eval` and `llm_as_judge`.

  • 0 papers are replication-ready (benchmark + metric + explicit evaluation mode).
  • 1 paper supports judge-vs-human agreement analysis.
  • 4 papers report explicit quality controls (calibration/adjudication/IAA).

Primary action: Use this page for scouting only; collect additional papers before attempting replication-critical comparisons.


Why This Matters For Eval Research

  • 53.8% of papers report explicit human-feedback signals, led by pairwise preferences.
  • Human evaluation appears in 100% of papers in this hub.
  • APPS is a recurring benchmark anchor for cross-paper comparisons on this page.

Protocol Takeaways

  • 1 sampled paper reports both human evaluation and LLM-as-judge, supporting direct agreement checks.
  • The most common quality-control signal is adjudication (15.4% of papers).
  • Raters are mostly domain experts, and annotation commonly uses multi-dimensional rubrics; use this to scope replication staffing.

Benchmark Interpretation

  • APPS appears in 7.7% of hub papers (1/13); use this cohort for benchmark-matched comparisons.
  • Paperbench appears in 7.7% of hub papers (1/13); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • accuracy is reported in 23.1% of hub papers (3/13); compare with a secondary metric before ranking methods.
  • agreement is reported in 7.7% of hub papers (1/13); compare with a secondary metric before ranking methods.

Researcher Checklist

  • Strong: Papers with explicit human feedback

    Coverage is strong (53.8% vs 45% target).

  • Strong: Papers reporting quality controls

    Coverage is strong (30.8% vs 30% target).

  • Moderate: Papers naming benchmarks/datasets

    Coverage is usable but incomplete (30.8% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (46.2% vs 35% target).

  • Moderate: Papers with known rater population

    Coverage is usable but incomplete (30.8% vs 35% target).

  • Strong: Papers with known annotation unit

    Coverage is strong (46.2% vs 35% target).
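
The Strong/Moderate bands above track whether coverage meets each target. A hedged sketch of the implied rule (Python); the “Weak” cutoff is an assumption, since no checklist item on this page falls into that band:

```python
# Hedged sketch of the band rule the checklist above appears to follow:
# "Strong" when coverage meets the target, "Moderate" otherwise. The "Weak"
# cutoff is an assumption; no item on this page demonstrates that band.

def band(coverage: float, target: float) -> str:
    if coverage >= target:
        return "Strong"
    if coverage >= 0.5 * target:  # assumed threshold, not stated on this page
        return "Moderate"
    return "Weak"

# (coverage %, target %) pairs transcribed from the checklist above.
checklist = [
    ("Papers with explicit human feedback", 53.8, 45.0),
    ("Papers reporting quality controls",   30.8, 30.0),
    ("Papers naming benchmarks/datasets",   30.8, 35.0),
    ("Papers naming evaluation metrics",    46.2, 35.0),
    ("Papers with known rater population",  30.8, 35.0),
    ("Papers with known annotation unit",   46.2, 35.0),
]

for name, cov, tgt in checklist:
    print(f"{band(cov, tgt):<8} {name}: {cov}% vs {tgt:.0f}% target")
```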

Strengths

  • Strong human-feedback signal (53.8% of papers).
  • Quality-control evidence appears in 30.8% of papers.
  • Contains both human-eval and LLM-as-judge protocols for head-to-head methodology comparison.

Known Gaps

  • No dominant metadata gap detected in current extraction coverage.

Suggested Next Analyses

  • Compare papers that report both `human_eval` and `llm_as_judge` to quantify judge-human agreement drift (a minimal sketch follows this list).
  • Stratify by benchmark (APPS vs Paperbench) before comparing methods.
  • Track metric sensitivity by reporting both accuracy and agreement.
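
Before running that comparison, it helps to fix what "agreement" will mean operationally: raw agreement can stay high on skewed label distributions while a chance-corrected statistic collapses. A minimal sketch in plain Python; the verdict lists are hypothetical placeholders, not values extracted from any paper in this hub:

```python
# Minimal sketch (plain Python): contrast raw agreement with chance-corrected
# Cohen's kappa for a judge-vs-human comparison. The label lists below are
# hypothetical placeholders, not data taken from any paper in this hub.

def raw_agreement(a, b):
    """Fraction of items on which the two label sequences agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohen_kappa(a, b):
    """Cohen's kappa for nominal labels: (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = raw_agreement(a, b)  # observed agreement
    labels = set(a) | set(b)
    # expected chance agreement from each rater's label marginals
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)  # undefined if p_e == 1

# Hypothetical per-item verdicts from human raters and an LLM judge.
human = ["pass", "pass", "fail", "pass", "fail", "fail", "pass", "fail"]
judge = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "fail"]

print(f"raw agreement: {raw_agreement(human, judge):.3f}")  # 0.750
print(f"cohen kappa:   {cohen_kappa(human, judge):.3f}")    # 0.500
```

Reporting both numbers side by side also covers the metric-sensitivity bullet above: rank methods only when accuracy and the agreement statistic move in the same direction.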

Recommended Queries

Start with These 3

Use these when you need one protocol anchor, one benchmark anchor, and one recent comparison point before reading the wider hub.

Start Here (Best First 6)

Ranked for protocol completeness (human signal, benchmark + metric anchors, quality controls, and judge/human overlap).

Protocol Matrix (Top 12)

Use this to quickly compare protocol ingredients instead of scanning long prose.

Is this Idea Novel? An Automated Benchmark for Judgment of Research Ideas (Mar 11, 2026)
  HF Signal: Yes · Eval Modes: Human Eval · Benchmarks: Rinobench · Metrics: Not Reported · QC: Gold Questions

Grounding Arabic LLMs in the Doha Historical Dictionary: Retrieval-Augmented Understanding of Quran and Hadith (Mar 25, 2026)
  HF Signal: No (Not Reported) · Eval Modes: Human Eval, Llm As Judge · Benchmarks: Not Reported · Metrics: Accuracy, Kappa · QC: Inter Annotator Agreement Reported

CounselReflect: A Toolkit for Auditing Mental-Health Dialogues (Mar 31, 2026)
  HF Signal: Yes · Eval Modes: Human Eval · Benchmarks: Not Reported · Metrics: Not Reported · QC: Adjudication

IntelliAsk: Learning to Ask High-Quality Research Questions via RLVR (Jan 23, 2026)
  HF Signal: Yes · Eval Modes: Human Eval · Benchmarks: Writingbench · Metrics: Not Reported · QC: Not Reported

Automated Coding of Communication Data Using ChatGPT: Consistency Across Subgroups (Oct 23, 2025)
  HF Signal: Yes · Eval Modes: Human Eval, Automatic Metrics · Benchmarks: Not Reported · Metrics: Accuracy · QC: Not Reported

XtraGPT: Context-Aware and Controllable Academic Paper Revision via Human-AI Collaboration (May 16, 2025)
  HF Signal: Yes · Eval Modes: Human Eval · Benchmarks: Not Reported · Metrics: Coherence · QC: Not Reported

EasyAnimate: High-Performance Video Generation Framework with Hybrid Windows Attention and Reward Backpropagation (May 29, 2024)
  HF Signal: Yes · Eval Modes: Human Eval · Benchmarks: APPS · Metrics: Not Reported · QC: Not Reported

FrameRef: A Framing Dataset and Simulation Testbed for Modeling Bounded Rational Information Health (Feb 17, 2026)
  HF Signal: No (Not Reported) · Eval Modes: Human Eval, Simulation Env · Benchmarks: Not Reported · Metrics: Not Reported · QC: Adjudication

EvoScientist: Towards Multi-Agent Evolving AI Scientists for End-to-End Scientific Discovery (Mar 9, 2026)
  HF Signal: No (Not Reported) · Eval Modes: Human Eval · Benchmarks: Not Reported · Metrics: Relevance · QC: Not Reported

Learning to Predict Future-Aligned Research Proposals with Language Models (Mar 28, 2026)
  HF Signal: No (Not Reported) · Eval Modes: Human Eval, Automatic Metrics · Benchmarks: Not Reported · Metrics: Accuracy · QC: Not Reported

Incentivizing Agentic Reasoning in LLM Judges via Tool-Integrated Reinforcement Learning (Oct 27, 2025)
  HF Signal: Yes · Eval Modes: Human Eval · Benchmarks: Not Reported · Metrics: Not Reported · QC: Not Reported

Habibi: Laying the Open-Source Foundation of Unified-Dialectal Arabic Speech Synthesis (Jan 20, 2026)
  HF Signal: No (Not Reported) · Eval Modes: Human Eval · Benchmarks: Not Reported · Metrics: Jailbreak success rate · QC: Not Reported

Protocol Diff (Top Papers)

Fast side-by-side comparison for the highest-ranked papers in this hub.

Papers compared: (1) Is this Idea Novel? An Automated Benchmark for Judg… · (2) Grounding Arabic LLMs in the Doha Historical Dictio… · (3) CounselReflect: A Toolkit for Auditing Mental-Healt…

Signal | (1) | (2) | (3)
Human Feedback | Rubric Rating | Not reported | Rubric Rating, Expert Verification
Evaluation Modes | Human Eval | Human Eval, Llm As Judge | Human Eval
Benchmarks | Rinobench | Not reported | Not reported
Metrics | Not reported | Accuracy, Kappa | Not reported
Quality Controls | Gold Questions | Inter Annotator Agreement Reported | Adjudication
Rater Population | Domain Experts | Unknown | Domain Experts
Annotation Unit | Multi Dim Rubric | Unknown | Multi Dim Rubric

Suggested Reading Order

This section is intentionally expanded only when needed; use “Start Here” above for a faster pass.

  1. CounselReflect: A Toolkit for Auditing Mental-Health Dialogues

    Start here for detailed protocol reporting and quality-control evidence. Signals: human evaluation + rubric ratings. Abstract excerpt: The system integrates two families of evaluation signals: (i) 12 model-based metrics…

  2. Is this Idea Novel? An Automated Benchmark for Judgment of Research Ideas

    Start here for detailed protocol reporting and quality-control evidence. Signals: human evaluation + rubric ratings. Focus: Rinobench. Abstract excerpt: Yet, evaluation of these approaches remains largely inconsistent and is…

  3. Learning to Predict Future-Aligned Research Proposals with Language Models

    High citation traction makes this a strong baseline for protocol comparison. Signals: human evaluation. Focus: accuracy. Abstract excerpt: Large language models (LLMs) are increasingly used to assist ideation in…

  4. Grounding Arabic LLMs in the Doha Historical Dictionary: Retrieval-Augmented Understanding of Quran and Hadith

    High citation traction makes this a strong baseline for protocol comparison. Signals: human evaluation. Focus: accuracy. Abstract excerpt: Gemini also serves as an LLM-as-a-judge system for automatic evaluation in…

  5. IntelliAsk: Learning to Ask High-Quality Research Questions via RLVR

    Adds human evaluation with pairwise preferences for broader protocol coverage within this hub. Signals: human evaluation + pairwise preferences. Focus: Writingbench. Abstract excerpt: To address this gap, we curate…

  6. Automated Coding of Communication Data Using ChatGPT: Consistency Across Subgroups

    Adds human evaluation with rubric ratings for broader protocol coverage within this hub. Signals: human evaluation + rubric ratings. Focus: accuracy. Abstract excerpt: Prior research has established that ChatGPT…

  7. EasyAnimate: High-Performance Video Generation Framework with Hybrid Windows Attention and Reward Backpropagation

    Adds human evaluation with pairwise preferences for broader protocol coverage within this hub. Signals: human evaluation + pairwise preferences. Focus: APPS. Abstract excerpt: To enhance video generation quality, we…

  8. XtraGPT: Context-Aware and Controllable Academic Paper Revision via Human-AI Collaboration

    Adds human evaluation with pairwise preferences for broader protocol coverage within this hub. Signals: human evaluation + pairwise preferences. Focus: coherence. Abstract excerpt: Both automated preference assessments and human…

Known Limitations

  • No dominant metadata gap detected in current extraction coverage.
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.
  • Cross-page comparisons should be benchmark- and metric-matched to avoid protocol confounding.

Research Utility Snapshot

Human Feedback Mix

  • Pairwise Preference (4)
  • Rubric Rating (3)
  • Critique Edit (2)
  • Expert Verification (2)

Evaluation Modes

  • Human Eval (13)
  • Automatic Metrics (3)
  • Llm As Judge (1)
  • Simulation Env (1)

Top Benchmarks

  • APPS (1)
  • Paperbench (1)
  • Rinobench (1)
  • Writingbench (1)

Top Metrics

  • Accuracy (3)
  • Agreement (1)
  • Coherence (1)
  • Jailbreak success rate (1)

Rater Population Mix

  • Domain Experts (4)

Quality Controls

  • Adjudication (2)
  • Gold Questions (1)
  • Inter Annotator Agreement Reported (1)

Coverage diagnostics (sample-based): human-feedback 53.8% · benchmarks 30.8% · metrics 46.2% · quality controls 30.8%.
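
These fractions are simple arithmetic over the snapshot counts; a minimal sketch (Python), with per-facet paper counts transcribed from the lists above:

```python
# Sketch of how the coverage percentages above are derived. The per-facet
# paper counts are transcribed from the "Research Utility Snapshot" lists
# on this page; each facet counts distinct papers out of the 13 sampled.

TOTAL = 13
facets = {
    "human-feedback":   7,  # papers with at least one explicit HF signal
    "benchmarks":       4,  # APPS, Paperbench, Rinobench, Writingbench (1 each)
    "metrics":          6,  # accuracy (3) + agreement, coherence, jailbreak rate
    "quality controls": 4,  # adjudication (2) + gold questions + IAA reported
}

for name, count in facets.items():
    print(f"{name}: {count}/{TOTAL} = {count / TOTAL:.1%}")
# human-feedback: 7/13 = 53.8% · benchmarks: 4/13 = 30.8%
# metrics: 6/13 = 46.2% · quality controls: 4/13 = 30.8%
```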
