
HFEPX Hub

Web Browsing + Coding (Last 45 Days)


Updated from the current HFEPX corpus (Apr 9, 2026). This hub page groups 10 papers. Common evaluation modes: Automatic Metrics, Simulation Env. Most common rater population: Domain Experts. Common annotation unit: Multi Dim Rubric. Frequent quality control: Adjudication. Most frequently cited benchmark: BIRD. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Apr 7, 2026.
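
For readers who want to script such comparisons, the protocol fields summarized above map naturally onto a small per-paper record. The sketch below is illustrative only: the class and field names are assumptions made for this page, not the hub's actual schema or export format.

```python
from dataclasses import dataclass, field

# Illustrative only (not the hub's actual schema): one record per paper, capturing
# the protocol fields this page aggregates so setups can be compared side by side.
@dataclass
class ProtocolRecord:
    title: str
    eval_modes: list[str] = field(default_factory=list)        # e.g. "Automatic Metrics", "Simulation Env"
    human_feedback: list[str] = field(default_factory=list)    # e.g. "Pairwise Preference", "Rubric Rating"
    benchmarks: list[str] = field(default_factory=list)        # e.g. "BIRD", "WebArena"
    metrics: list[str] = field(default_factory=list)           # e.g. "accuracy", "cost"
    quality_controls: list[str] = field(default_factory=list)  # e.g. "Adjudication"
    rater_population: str | None = None                        # e.g. "Domain Experts"
    annotation_unit: str | None = None                         # e.g. "Multi Dim Rubric"

# Example record drawn from the Protocol Matrix further down this page.
counselreflect = ProtocolRecord(
    title="CounselReflect: A Toolkit for Auditing Mental-Health Dialogues",
    eval_modes=["Human Eval"],
    human_feedback=["Rubric Rating", "Expert Verification"],
    quality_controls=["Adjudication"],
    rater_population="Domain Experts",
    annotation_unit="Multi Dim Rubric",
)
```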

Papers: 10 · Last published: Apr 7, 2026
Tags: Web Browsing · Coding · Last 45d

Researcher Quick Triage

This hub is best used for protocol triage and replication planning from abstract-level evidence. Quality band: Developing.

High-Signal Coverage

100.0%

10 of 10 sampled papers are not flagged as low-signal.

Replication-Ready Set

2

Benchmark + metric + eval mode explicitly present.

Judge/Human Comparability

0

Papers containing both `human_eval` and `llm_as_judge`.

  • 2 papers are replication-ready (benchmark + metric + explicit evaluation mode).
  • 0 papers support judge-vs-human agreement analysis.
  • 1 paper reports explicit quality controls (calibration/adjudication/IAA).

Primary action: Use this page for scouting only; collect additional papers before attempting replication-critical comparisons.
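
As a concrete reading of the triage counts above, the sketch below counts a paper as replication-ready when it names at least one benchmark, one metric, and one evaluation mode, and as judge/human-comparable when its evaluation modes include both `human_eval` and `llm_as_judge`. The dict field names are hypothetical, not the hub's implementation or API.

```python
# A sketch of the triage counts above, assuming per-paper dicts with hypothetical
# field names (this is not the hub's implementation or API).
def triage(papers: list[dict]) -> dict:
    replication_ready = sum(
        1 for p in papers
        if p.get("benchmarks") and p.get("metrics") and p.get("eval_modes")
    )
    judge_vs_human = sum(
        1 for p in papers
        if {"human_eval", "llm_as_judge"} <= set(p.get("eval_modes", []))
    )
    quality_controlled = sum(1 for p in papers if p.get("quality_controls"))
    return {
        "replication_ready": replication_ready,
        "judge_human_comparable": judge_vs_human,
        "explicit_quality_controls": quality_controlled,
    }

# Two example rows from this hub (values taken from the Protocol Matrix below).
papers = [
    {"title": "When Users Change Their Mind", "eval_modes": ["simulation_env"],
     "benchmarks": ["WebArena", "Interruptbench"], "metrics": [], "quality_controls": []},
    {"title": "CounselReflect", "eval_modes": ["human_eval"],
     "benchmarks": [], "metrics": [], "quality_controls": ["adjudication"]},
]
print(triage(papers))
# {'replication_ready': 0, 'judge_human_comparable': 0, 'explicit_quality_controls': 1}
```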


Why This Matters For Eval Research

  • 40% of papers report explicit human-feedback signals, led by pairwise preferences.
  • Automatic metrics appear in 70% of papers in this hub.
  • BIRD serves as a benchmark anchor for cross-paper comparisons on this page.

Protocol Takeaways

  • The most common quality-control signal is adjudication (10% of papers).
  • Where reported, raters are mostly domain experts and annotation commonly uses multi-dimensional rubrics; use this to scope replication staffing.
  • Pair this hub with llm_as_judge pages to benchmark automated-vs-human evaluation tradeoffs.

Benchmark Interpretation

  • BIRD appears in 10% of hub papers (1/10); use this cohort for benchmark-matched comparisons.
  • Interruptbench appears in 10% of hub papers (1/10); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • accuracy is reported in 40% of hub papers (4/10); compare with a secondary metric before ranking methods.
  • cost is reported in 30% of hub papers (3/10); compare with a secondary metric before ranking methods.

Researcher Checklist

  • Moderate: Papers with explicit human feedback

    Coverage is usable but incomplete (40% vs 45% target).

  • Gap: Papers reporting quality controls

    Coverage is a replication risk (10% vs 30% target).

  • Moderate: Papers naming benchmarks/datasets

    Coverage is usable but incomplete (30% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (80% vs 35% target).

  • Gap: Papers with known rater population

    Coverage is a replication risk (10% vs 35% target).

  • Gap: Papers with known annotation unit

    Coverage is a replication risk (20% vs 35% target).
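
The Strong/Moderate/Gap bands above are consistent with a simple coverage-versus-target rule. The sketch below is a hypothetical reconstruction: the hub does not state its exact thresholds, so the 0.75 factor is an assumption chosen only to match the numbers shown.

```python
# Hypothetical banding rule consistent with the checklist numbers above; the hub
# does not state its thresholds, so the 0.75 factor below is an assumption.
def coverage_band(coverage: float, target: float) -> str:
    if coverage >= target:
        return "Strong"            # e.g. metrics: 80% vs 35% target
    if coverage >= 0.75 * target:
        return "Moderate"          # e.g. human feedback: 40% vs 45% target
    return "Gap"                   # e.g. quality controls: 10% vs 30% target

checks = {
    "explicit human feedback": (0.40, 0.45),
    "quality controls":        (0.10, 0.30),
    "benchmarks/datasets":     (0.30, 0.35),
    "evaluation metrics":      (0.80, 0.35),
    "rater population":        (0.10, 0.35),
    "annotation unit":         (0.20, 0.35),
}
for name, (coverage, target) in checks.items():
    print(f"{name}: {coverage_band(coverage, target)}")
```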

Strengths

  • Agentic evaluation appears in 100% of papers.

Known Gaps

  • Only 10% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (10% coverage).
  • Annotation unit is under-specified (20% coverage).

Suggested Next Analyses

  • Pair this hub with llm_as_judge pages to benchmark automated-vs-human evaluation tradeoffs.
  • Stratify by benchmark (BIRD vs Interruptbench) before comparing methods.
  • Track metric sensitivity by reporting both accuracy and cost.
  • Add inter-annotator agreement checks when reproducing these protocols.
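
For the inter-annotator agreement check suggested above, a minimal two-rater Cohen's kappa over categorical labels is often enough to start. The sketch below is a generic implementation, not part of this hub's tooling.

```python
from collections import Counter

# Minimal two-rater Cohen's kappa over categorical labels; a generic sketch for
# the inter-annotator agreement check suggested above.
def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    assert rater_a and len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(freq_a) | set(freq_b))
    if expected == 1.0:  # degenerate case: both raters always used the same single label
        return 1.0
    return (observed - expected) / (1.0 - expected)

# Example: two annotators applying a pass/fail rubric to the same six items.
a = ["pass", "pass", "fail", "pass", "fail", "pass"]
b = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```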

Recommended Queries

Start with These 3

Use these when you need one protocol anchor, one benchmark anchor, and one recent comparison point before reading the wider hub.

Start Here (Best First 6)

Ranked for protocol completeness (human signal, benchmark + metric anchors, quality controls, and judge/human overlap).

Protocol Matrix (10 Papers)

Use this to quickly compare protocol ingredients instead of scanning long prose.

When Users Change Their Mind: Evaluating Interruptible Agents in Long-Horizon Web Navigation (Apr 1, 2026)
  HF Signal: Yes · Eval Modes: Simulation Env · Benchmarks: WebArena, Interruptbench · Metrics: Not Reported · QC: Not Reported

CounselReflect: A Toolkit for Auditing Mental-Health Dialogues (Mar 31, 2026)
  HF Signal: Yes · Eval Modes: Human Eval · Benchmarks: Not Reported · Metrics: Not Reported · QC: Adjudication

LUDOBENCH: Evaluating LLM Behavioural Decision-Making Through Spot-Based Board Game Scenarios in Ludo (Apr 7, 2026)
  HF Signal: No · Eval Modes: Simulation Env · Benchmarks: Ludobench · Metrics: Dice · QC: Not Reported

LRC-WeatherNet: LiDAR, RADAR, and Camera Fusion Network for Real-time Weather-type Classification in Autonomous Driving (Mar 23, 2026)
  HF Signal: No · Eval Modes: Automatic Metrics · Benchmarks: BIRD · Metrics: Precision · QC: Not Reported

Vibe Code Bench: Evaluating AI Models on End-to-End Web Application Development (Mar 4, 2026)
  HF Signal: Yes · Eval Modes: Automatic Metrics · Benchmarks: Not Reported · Metrics: Accuracy, Agreement · QC: Not Reported

Sabiá-4 Technical Report (Mar 10, 2026)
  HF Signal: Yes · Eval Modes: Automatic Metrics · Benchmarks: Not Reported · Metrics: Accuracy, Cost · QC: Not Reported

AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning (Apr 7, 2026)
  HF Signal: No · Eval Modes: Automatic Metrics · Benchmarks: Not Reported · Metrics: Accuracy · QC: Not Reported

From Guessing to Placeholding: A Cost-Theoretic Framework for Uncertainty-Aware Code Completion (Apr 2, 2026)
  HF Signal: No · Eval Modes: Automatic Metrics · Benchmarks: Not Reported · Metrics: Cost · QC: Not Reported

A Benchmark for Deep Information Synthesis (Feb 24, 2026)
  HF Signal: No · Eval Modes: Automatic Metrics · Benchmarks: Not Reported · Metrics: F1 · QC: Not Reported

GUI-Libra: Training Native GUI Agents to Reason and Act with Action-aware Supervision and Partially Verifiable RL (Feb 25, 2026)
  HF Signal: No · Eval Modes: Automatic Metrics · Benchmarks: Not Reported · Metrics: Accuracy, Task success · QC: Not Reported

Protocol Diff (Top Papers)

Fast side-by-side comparison for the highest-ranked papers in this hub.

Papers compared: When Users Change Their Mind · CounselReflect · LUDOBENCH

Human Feedback: Critique Edit · Rubric Rating, Expert Verification · Not reported
Evaluation Modes: Simulation Env · Human Eval · Simulation Env
Benchmarks: WebArena, Interruptbench · Not reported · Ludobench
Metrics: Not reported · Not reported · Dice
Quality Controls: Not reported · Adjudication · Not reported
Rater Population: Unknown · Domain Experts · Unknown
Annotation Unit: Unknown · Multi Dim Rubric · Unknown

Suggested Reading Order

This is the extended reading order; use “Start Here” above for a faster pass.

  1. CounselReflect: A Toolkit for Auditing Mental-Health Dialogues

    Start here for detailed protocol reporting and quality-control evidence. Signals: human evaluation + rubric ratings. Abstract: The system integrates two families of evaluation signals: (i) 12 model-based metrics.

  2. AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning

    A strong baseline for protocol comparison within this hub. Signals: automatic metrics. Focus: accuracy. Abstract: Large Language Models (LLMs) increasingly rely on agentic capabilities-iterative retrieval, tool.

  3. LUDOBENCH: Evaluating LLM Behavioural Decision-Making Through Spot-Based Board Game Scenarios in Ludo

    A strong baseline for protocol comparison within this hub. Signals: simulation environments. Focus: Ludobench / dice. Abstract: We introduce LudoBench, a benchmark for evaluating LLM strategic.

  4. From Guessing to Placeholding: A Cost-Theoretic Framework for Uncertainty-Aware Code Completion

    A strong baseline for protocol comparison within this hub. Signals: automatic metrics. Focus: cost. Abstract: While Large Language Models (LLMs) have demonstrated exceptional proficiency in code.

  5. When Users Change Their Mind: Evaluating Interruptible Agents in Long-Horizon Web Navigation

    Include a paper with human critique/edit feedback to calibrate against judge-based evaluation settings. Signals: simulation environments + critique/edit feedback. Focus: WebArena. Abstract: As LLM agents transition from short, static problem solving.

  6. Vibe Code Bench: Evaluating AI Models on End-to-End Web Application Development

    Adds automatic metrics with pairwise preferences for broader protocol coverage within this hub. Signals: automatic metrics + pairwise preferences. Focus: accuracy. Abstract: We identify self-testing during generation as.

  7. Sabiá-4 Technical Report

    Adds automatic metrics with pairwise preferences for broader protocol coverage within this hub. Signals: automatic metrics + pairwise preferences. Focus: accuracy. Abstract: The models were developed through a.

  8. LRC-WeatherNet: LiDAR, RADAR, and Camera Fusion Network for Real-time Weather-type Classification in Autonomous Driving

    Adds automatic metrics for broader protocol coverage within this hub. Signals: automatic metrics. Focus: BIRD / precision. Abstract: Autonomous vehicles face major perception and navigation challenges in adverse.

Known Limitations

  • Only 10% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (10% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Snapshot

Human Feedback Mix

  • Pairwise Preference (2)
  • Critique Edit (1)
  • Expert Verification (1)
  • Rubric Rating (1)

Evaluation Modes

  • Automatic Metrics (7)
  • Simulation Env (2)
  • Human Eval (1)

Top Benchmarks

  • BIRD (1)
  • Interruptbench (1)
  • Ludobench (1)
  • WebArena (1)

Top Metrics

  • Accuracy (4)
  • Cost (3)
  • Agreement (1)
  • Dice (1)

Rater Population Mix

  • Domain Experts (1)

Quality Controls

  • Adjudication (1)

Coverage diagnostics (sample-based): human-feedback 40.0% · benchmarks 30.0% · metrics 80.0% · quality controls 10.0%.

Top Papers

  • CounselReflect: A Toolkit for Auditing Mental-Health Dialogues

    Yahan Li, Chaohao Du, Zeyang Li, Christopher Chun Kuizon, Shupeng Cheng · Mar 31, 2026 · Citations: 0

    Rubric Rating · Expert Verification · Human Eval · Web Browsing

    The system integrates two families of evaluation signals: (i) 12 model-based metrics produced by task-specific predictors, and (ii) rubric-based metrics that extend coverage via a literature-derived library (69 metrics) and user-defined…

  • When Users Change Their Mind: Evaluating Interruptible Agents in Long-Horizon Web Navigation

    Henry Peng Zou, Chunyu Miao, Wei-Chieh Huang, Yankai Chen, Yue Zhou · Apr 1, 2026 · Citations: 0

    Critique Edit · Simulation Env · Long Horizon

    As LLM agents transition from short, static problem solving to executing complex, long-horizon tasks in dynamic environments, the ability to handle user interruptions, such as adding requirements or revising goals, during mid-task execution…

  • Vibe Code Bench: Evaluating AI Models on End-to-End Web Application Development

    Hung Tran, Langston Nashold, Rayan Krishnan, Antoine Bigeard, Alex Gu · Mar 4, 2026 · Citations: 0

    Pairwise Preference · Automatic Metrics · Web Browsing

    We introduce Vibe Code Bench, a benchmark of 100 web application specifications (50 public validation, 50 held-out test) with 964 browser-based workflows comprising 10,131 substeps, evaluated against deployed applications by an autonomous…

  • Sabiá-4 Technical Report

    Thiago Laitz, Thales Sales Almeida, Hugo Abonizio, Roseval Malaquias Junior, Giovana Kerche Bonás · Mar 10, 2026 · Citations: 0

    Pairwise Preference · Automatic Metrics · Tool Use

    The models were developed through a four-stage training pipeline: continued pre-training on Portuguese and Brazilian legal corpora, long-context extension to 128K tokens, supervised fine-tuning on instruction data spanning chat, code, legal…

  • LUDOBENCH: Evaluating LLM Behavioural Decision-Making Through Spot-Based Board Game Scenarios in Ludo

    Ojas Jain, Dhruv Kumar · Apr 7, 2026 · Citations: 0

    Simulation Env · Multi Agent

    We introduce LudoBench, a benchmark for evaluating LLM strategic reasoning in Ludo, a stochastic multi-agent board game whose dice mechanics, piece capture, safe-square navigation, and home-path progression introduce meaningful planning…

  • LRC-WeatherNet: LiDAR, RADAR, and Camera Fusion Network for Real-time Weather-type Classification in Autonomous Driving

    Nour Alhuda Albashir, Lars Pernickel, Danial Hamoud, Idriss Gouigah, Eren Erdal Aksoy · Mar 23, 2026 · Citations: 0

    Automatic Metrics · Web Browsing

    Autonomous vehicles face major perception and navigation challenges in adverse weather such as rain, fog, and snow, which degrade the performance of LiDAR, RADAR, and RGB camera sensors.

  • A Benchmark for Deep Information Synthesis

    Debjit Paul, Daniel Murphy, Milan Gritta, Ronald Cardenas, Victor Prokhorov · Feb 24, 2026 · Citations: 0

    Automatic Metrics · Tool Use

    To address this, we introduce DEEPSYNTH, a novel benchmark designed to evaluate agents on realistic, time-consuming problems that combine information gathering, synthesis, and structured reasoning to produce insights.

  • AgentGL: Towards Agentic Graph Learning with LLMs via Reinforcement Learning

    Yuanfu Sun, Kang Li, Dongzhe Fan, Jiajin Liu, Qiaoyu Tan · Apr 7, 2026 · Citations: 0

    Automatic Metrics · Tool Use

    To bridge this gap, we introduce Agentic Graph Learning (AGL), a paradigm that reframes graph learning as an interleaved process of topology-aware navigation and LLM-based inference.

  • From Guessing to Placeholding: A Cost-Theoretic Framework for Uncertainty-Aware Code Completion

    Liang Zhu, Haolin Chen, Lidong Zhao, Xian Wu · Apr 2, 2026 · Citations: 0

    Automatic Metrics · Web Browsing

    Extensive evaluations across 1.5B–14B parameter models demonstrate that APC reduces expected editing costs from 19% to 50% while preserving standard HC performance.

  • GUI-Libra: Training Native GUI Agents to Reason and Act with Action-aware Supervision and Partially Verifiable RL

    Rui Yang, Qianhui Wu, Zhaoyang Wang, Hanyang Chen, Ke Yang · Feb 25, 2026 · Citations: 0

    Automatic Metrics · Long Horizon

    Open-source native GUI agents still lag behind closed-source systems on long-horizon navigation tasks.
