
HFEPX Hub

Coding + Pairwise Preference (Last 30 Days)


Updated from the current HFEPX corpus (Apr 9, 2026). 13 papers are grouped on this hub page. Common evaluation modes: Automatic Metrics, Simulation Env. Common annotation unit: Pairwise. Frequently cited benchmark: APPS. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Mar 25, 2026.

Papers: 13 · Last published: Mar 25, 2026
Tags: Coding · Pairwise Preference · Last 30d

Researcher Quick Triage

This hub is best used for protocol triage and replication planning from abstract-level evidence. Quality band: Developing.

High-Signal Coverage

100.0%

13 of 13 sampled papers are not flagged as low-signal.

Replication-Ready Set

3

Benchmark + metric + eval mode explicitly present.

Judge/Human Comparability

0

Papers containing both `human_eval` and `llm_as_judge`.

  • 3 papers are replication-ready (benchmark + metric + explicit evaluation mode).
  • 0 papers support judge-vs-human agreement analysis.
  • 0 papers report explicit quality controls (calibration/adjudication/IAA).

Primary action: Start with the top 2 papers in “Start Here”, then validate assumptions in the protocol matrix.
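For replication planning, the triage above reduces to a simple filter over abstract-level metadata. Below is a minimal sketch, assuming each paper is represented as a record with hypothetical field names; the hub's own pipeline may differ.

```python
# Minimal sketch of the replication-ready filter: benchmark + metric + eval
# mode all explicitly present. PaperRecord and its fields are assumptions,
# not the hub's actual schema.
from dataclasses import dataclass, field

@dataclass
class PaperRecord:
    title: str
    human_feedback: list[str] = field(default_factory=list)    # e.g. ["pairwise_preference"]
    eval_modes: list[str] = field(default_factory=list)        # e.g. ["automatic_metrics"]
    benchmarks: list[str] = field(default_factory=list)        # e.g. ["APPS"]
    metrics: list[str] = field(default_factory=list)           # e.g. ["task_success"]
    quality_controls: list[str] = field(default_factory=list)  # e.g. ["adjudication"]

def replication_ready(paper: PaperRecord) -> bool:
    """Benchmark, metric, and evaluation mode are all explicitly reported."""
    return bool(paper.benchmarks and paper.metrics and paper.eval_modes)

papers = [
    PaperRecord("Do Phone-Use Agents Respect Your Privacy?",
                ["pairwise_preference"], ["automatic_metrics"],
                ["APPS", "Myphonebench"], ["task_success"]),
    PaperRecord("FEAST", ["pairwise_preference"], [], [], ["cost"]),
]
print([p.title for p in papers if replication_ready(p)])
# -> ['Do Phone-Use Agents Respect Your Privacy?']
```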


Why This Matters For Eval Research

  • 100% of papers report explicit human-feedback signals, led by pairwise preferences.
  • The Automatic Metrics evaluation mode appears in 46.2% of papers in this hub.
  • APPS is a recurring benchmark anchor for cross-paper comparisons on this page.

Protocol Takeaways

  • Quality-control reporting is sparse in this slice; prioritize papers with explicit calibration or adjudication steps.
  • Rater pools are mostly unspecified, and the annotation unit is commonly pairwise; use this to scope replication staffing.
  • Stratify by benchmark (APPS vs Esdr-Bench) before comparing methods.

Benchmark Interpretation

  • APPS appears in 7.7% of hub papers (1/13); use this cohort for benchmark-matched comparisons.
  • Esdr-Bench appears in 7.7% of hub papers (1/13); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • Accuracy is reported in 30.8% of hub papers (4/13); compare with a secondary metric before ranking methods.
  • Cost is reported in 23.1% of hub papers (3/13); compare with a secondary metric before ranking methods.
Researcher Checklist

  • Strong: Papers with explicit human feedback

    Coverage is strong (100% vs 45% target).

  • Gap: Papers reporting quality controls

    Coverage is a replication risk (0% vs 30% target).

  • Moderate: Papers naming benchmarks/datasets

    Coverage is usable but incomplete (30.8% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (61.5% vs 35% target).

  • Gap: Papers with known rater population

    Coverage is a replication risk (0% vs 35% target).

  • Moderate: Papers with known annotation unit

    Coverage is usable but incomplete (30.8% vs 35% target).
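The exact cut-offs behind the Strong/Moderate/Gap labels are not stated on this page; the sketch below is an assumed banding rule that happens to reproduce the labels above, which may be useful when re-running the checklist on your own paper sample.

```python
# Hedged sketch of the checklist banding. The 0.5 * target cut-off for
# "Moderate" is an assumption inferred from the reported labels.
def coverage_band(coverage: float, target: float) -> str:
    if coverage >= target:
        return "Strong"
    if coverage >= 0.5 * target:
        return "Moderate"
    return "Gap"

checks = {
    "explicit human feedback": (100.0, 45.0),
    "quality controls":        (0.0, 30.0),
    "benchmarks/datasets":     (30.8, 35.0),
    "evaluation metrics":      (61.5, 35.0),
    "known rater population":  (0.0, 35.0),
    "known annotation unit":   (30.8, 35.0),
}
for name, (cov, tgt) in checks.items():
    print(f"{coverage_band(cov, tgt):<8s} {name}: {cov:.1f}% vs {tgt:.0f}% target")
```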

Strengths

  • Strong human-feedback signal (100% of papers).

Known Gaps

  • Only 0% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (0% coverage).

Suggested Next Analyses

  • Stratify by benchmark (APPS vs Esdr-Bench) before comparing methods.
  • Track metric sensitivity by reporting both accuracy and cost.
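Both suggested analyses are easy to script once papers are reduced to metadata records; the sketch below uses hypothetical field names and a toy subset of this hub's papers.

```python
# Sketch of the two suggested analyses: benchmark stratification and a
# dual-metric (accuracy + cost) sensitivity check. Data is a toy subset.
from collections import defaultdict

papers = [
    {"title": "Do Phone-Use Agents Respect Your Privacy?",
     "benchmarks": ["APPS", "Myphonebench"], "metrics": ["task_success"]},
    {"title": "Modeling and Benchmarking Spoken Dialogue Rewards",
     "benchmarks": ["Esdr-Bench"], "metrics": ["accuracy"]},
    {"title": "Sabiá-4 Technical Report",
     "benchmarks": [], "metrics": ["accuracy", "cost"]},
]

# 1) Stratify by benchmark so methods are only compared within matched cohorts.
by_benchmark = defaultdict(list)
for p in papers:
    for b in p["benchmarks"]:
        by_benchmark[b].append(p["title"])
print(dict(by_benchmark))

# 2) Flag papers that report both accuracy and cost for metric-sensitivity checks.
dual_metric = [p["title"] for p in papers
               if {"accuracy", "cost"} <= set(p["metrics"])]
print(dual_metric)  # -> ['Sabiá-4 Technical Report']
```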
Recommended Queries

Start with These 3

Use these when you need one protocol anchor, one benchmark anchor, and one recent comparison point before reading the wider hub.

Start Here (Best First 6)

Ranked for protocol completeness (human signal, benchmark + metric anchors, quality controls, and judge/human overlap).

Protocol Matrix (Top 12)

Use this to quickly compare protocol ingredients instead of scanning long prose.

Paper | Date | HF Signal | Eval Modes | Benchmarks | Metrics | QC
Modeling and Benchmarking Spoken Dialogue Rewards with Modality and Colloquialness | Mar 16, 2026 | Yes | Automatic Metrics | Esdr-Bench | Accuracy | Not Reported
Do Phone-Use Agents Respect Your Privacy? | Apr 1, 2026 | Yes | Automatic Metrics | APPS, Myphonebench | Task success | Not Reported
CausalRM: Causal-Theoretic Reward Modeling for RLHF from Observational User Feedbacks | Mar 19, 2026 | Yes | Automatic Metrics | Harmbench | Cost | Not Reported
VehicleMemBench: An Executable Benchmark for Multi-User Long-Term Memory in In-Vehicle Agents | Mar 25, 2026 | Yes | Simulation Env | Vehiclemembench | Not Reported | Not Reported
FEAST: Fully Connected Expressive Attention for Spatial Transcriptomics | Mar 26, 2026 | Yes | Not Reported | Not Reported | Cost | Not Reported
IslamicMMLU: A Benchmark for Evaluating LLMs on Islamic Knowledge | Mar 24, 2026 | Yes | Automatic Metrics | Not Reported | Accuracy | Not Reported
Truth as a Compression Artifact in Language Model Training | Mar 12, 2026 | Yes | Automatic Metrics | Not Reported | Accuracy | Not Reported
From Oracle to Noisy Context: Mitigating Contextual Exposure Bias in Speech-LLMs | Mar 25, 2026 | Yes | Not Reported | Not Reported | WER, Jailbreak success rate | Not Reported
Sabiá-4 Technical Report | Mar 10, 2026 | Yes | Automatic Metrics | Not Reported | Accuracy, Cost | Not Reported
Comparing Developer and LLM Biases in Code Evaluation | Mar 25, 2026 | Yes | Not Reported | Not Reported | Not Reported | Not Reported
From Isolated Scoring to Collaborative Ranking: A Comparison-Native Framework for LLM-Based Paper Evaluation | Mar 18, 2026 | Yes | Not Reported | Not Reported | Not Reported | Not Reported
You Didn't Have to Say It like That: Subliminal Learning from Faithful Paraphrases | Mar 10, 2026 | Yes | Not Reported | Not Reported | Not Reported | Not Reported

Protocol Diff (Top Papers)

Fast side-by-side comparison for the highest-ranked papers in this hub.

Signal | Modeling and Benchmarking Spoken Dialogue Rewards w… | Do Phone-Use Agents Respect Your Privacy? | CausalRM: Causal-Theoretic Reward Modeling for RLHF…
Human Feedback | Pairwise Preference | Pairwise Preference | Pairwise Preference
Evaluation Modes | Automatic Metrics | Automatic Metrics | Automatic Metrics
Benchmarks | Esdr-Bench | APPS, Myphonebench | Harmbench
Metrics | Accuracy | Task success | Cost
Quality Controls | Not reported | Not reported | Not reported
Rater Population | Unknown | Unknown | Unknown
Annotation Unit | Pairwise | Unknown | Unknown
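A diff like the one above can be generated directly from per-paper protocol records; the sketch below is one possible implementation with assumed field names, and it also flags whether the compared papers agree on each signal.

```python
# Sketch of a protocol diff over per-paper records (field names assumed).
FIELDS = ["human_feedback", "eval_modes", "benchmarks",
          "metrics", "quality_controls", "rater_population", "annotation_unit"]

def protocol_diff(papers: list[dict]) -> list[tuple[str, list[str], bool]]:
    rows = []
    for f in FIELDS:
        values = [", ".join(p.get(f, [])) or "Not reported" for p in papers]
        rows.append((f, values, len(set(values)) == 1))  # True = all papers agree
    return rows

a = {"human_feedback": ["Pairwise Preference"], "eval_modes": ["Automatic Metrics"],
     "benchmarks": ["Esdr-Bench"], "metrics": ["Accuracy"], "annotation_unit": ["Pairwise"]}
b = {"human_feedback": ["Pairwise Preference"], "eval_modes": ["Automatic Metrics"],
     "benchmarks": ["APPS", "Myphonebench"], "metrics": ["Task success"]}

for name, values, agree in protocol_diff([a, b]):
    print(f"{name:<18s} {' | '.join(values):<45s} {'same' if agree else 'differs'}")
```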
Suggested Reading Order

Use “Start Here” above for a faster pass.

  1. Do Phone-Use Agents Respect Your Privacy?

    Start here for the most complete protocol reporting in this slice. Signals: automatic metrics + pairwise preferences. Focus: APPS / task success. Abstract: Across five frontier models on 10 mobile…

  2. FEAST: Fully Connected Expressive Attention for Spatial Transcriptomics

    Start here for the most complete protocol reporting in this slice. Signals: pairwise preferences. Focus: cost. Abstract: To address this, we propose FEAST (Fully connected Expressive Attention for Spatial Transcriptomics)…

  3. Comparing Developer and LLM Biases in Code Evaluation

    Start here for the most complete protocol reporting in this slice. Signals: pairwise preferences. Abstract: As LLMs are increasingly used as judges in code applications, they should be evaluated in…

  4. VehicleMemBench: An Executable Benchmark for Multi-User Long-Term Memory in In-Vehicle Agents

    Include a human-eval paper to calibrate against judge-based evaluation settings. Signals: simulation environments + pairwise preferences. Focus: Vehiclemembench. Abstract: This evolution requires agents to continuously model multi-user preferences.

  5. Modeling and Benchmarking Spoken Dialogue Rewards with Modality and Colloquialness

    Include a human-eval paper to calibrate against judge-based evaluation settings. Signals: automatic metrics + pairwise preferences. Focus: Esdr-Bench / accuracy. Abstract: To address these challenges, we introduce SDiaReward…

  6. Sabiá-4 Technical Report

    Adds automatic metrics with pairwise preferences for broader protocol coverage within this hub. Signals: automatic metrics + pairwise preferences. Focus: accuracy. Abstract: The models were developed through a…

  7. CausalRM: Causal-Theoretic Reward Modeling for RLHF from Observational User Feedbacks

    Adds automatic metrics with pairwise preferences for broader protocol coverage within this hub. Signals: automatic metrics + pairwise preferences. Focus: Harmbench / cost. Abstract: We identify two fundamental…

  8. IslamicMMLU: A Benchmark for Evaluating LLMs on Islamic Knowledge

    Adds automatic metrics with pairwise preferences for broader protocol coverage within this hub. Signals: automatic metrics + pairwise preferences. Focus: accuracy. Abstract: The Quran track shows the widest…

Known Limitations

  • Only 0% of papers report quality controls; prioritize calibration/adjudication evidence.
  • Rater population is under-specified (0% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.
Research Utility Snapshot

Human Feedback Mix

  • Pairwise Preference (13)
  • Rubric Rating (1)

Evaluation Modes

  • Automatic Metrics (6)
  • Simulation Env (1)

Top Benchmarks

  • APPS (1)
  • Esdr Bench (1)
  • Harmbench (1)
  • Myphonebench (1)

Top Metrics

  • Accuracy (4)
  • Cost (3)
  • Jailbreak success rate (1)
  • Task success (1)

Rater Population Mix

  • Not reported in any sampled paper (0% coverage).

Quality Controls

  • Not reported in any sampled paper (0% coverage).

Coverage diagnostics (sample-based): human-feedback 100.0% · benchmarks 30.8% · metrics 61.5% · quality controls 0.0%.
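These figures are simple presence ratios over the sampled papers; a minimal sketch of the computation (field names assumed) is shown below.

```python
# Sketch of the coverage diagnostics: share of papers whose metadata exposes
# each protocol field. Field names and the toy sample are assumptions.
def coverage(papers: list[dict], field_name: str) -> float:
    present = sum(1 for p in papers if p.get(field_name))
    return 100.0 * present / len(papers)

papers = [  # toy sample; the hub computes these over all 13 papers
    {"human_feedback": ["pairwise_preference"], "benchmarks": ["APPS"],
     "metrics": ["task_success"], "quality_controls": []},
    {"human_feedback": ["pairwise_preference"], "benchmarks": [],
     "metrics": [], "quality_controls": []},
]
for f in ["human_feedback", "benchmarks", "metrics", "quality_controls"]:
    print(f"{f}: {coverage(papers, f):.1f}%")
```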

Top Papers

  • VehicleMemBench: An Executable Benchmark for Multi-User Long-Term Memory in In-Vehicle Agents

    Yuhao Chen, Yi Xu, Xinyun Ding, Xiang Fang, Shuochen Liu · Mar 25, 2026 · Citations: 0

    Pairwise Preference Simulation Env Tool Use

    With the growing demand for intelligent in-vehicle experiences, vehicle-based agents are evolving from simple assistants to long-term companions.

  • Modeling and Benchmarking Spoken Dialogue Rewards with Modality and Colloquialness

    Jingyu Lu, Yuhan Wang, Fan Zhuo, Xize Cheng, Changhao Pan · Mar 16, 2026 · Citations: 0

    Pairwise Preference Automatic Metrics

    To address these challenges, we introduce SDiaReward, an end-to-end multi-turn reward model trained on SDiaReward-Dataset, a novel collection of episode-level preference pairs explicitly targeting these gaps.

  • Do Phone-Use Agents Respect Your Privacy?

    Zhengyang Tang, Ke Ji, Xidong Wang, Zihan Ye, Xinyuan Wang · Apr 1, 2026 · Citations: 0

    Pairwise Preference Automatic Metrics

    We study whether phone-use agents respect privacy while completing benign mobile tasks.

  • CausalRM: Causal-Theoretic Reward Modeling for RLHF from Observational User Feedbacks

    Hao Wang, Licheng Pan, Zhichao Chen, Chunyuan Zheng, Zhixuan Chu · Mar 19, 2026 · Citations: 0

    Pairwise Preference Automatic Metrics

    Despite the success of reinforcement learning from human feedback (RLHF) in aligning language models, current reward modeling heavily relies on experimental feedback data collected from human annotators under controlled and costly…

  • Sabiá-4 Technical Report

    Thiago Laitz, Thales Sales Almeida, Hugo Abonizio, Roseval Malaquias Junior, Giovana Kerche Bonás · Mar 10, 2026 · Citations: 0

    Pairwise Preference Automatic Metrics Tool Use

    The models were developed through a four-stage training pipeline: continued pre-training on Portuguese and Brazilian legal corpora, long-context extension to 128K tokens, supervised fine-tuning on instruction data spanning chat, code, legal…

  • FEAST: Fully Connected Expressive Attention for Spatial Transcriptomics

    Taejin Jeong, Joohyeok Kim, Jinyeong Kim, Chanyoung Kim, Seong Jae Hwang · Mar 26, 2026 · Citations: 0

    Pairwise Preference

    To address this, we propose FEAST (Fully connected Expressive Attention for Spatial Transcriptomics), an attention-based framework that models the tissue as a fully connected graph, enabling the consideration of all pairwise interactions.

  • IslamicMMLU: A Benchmark for Evaluating LLMs on Islamic Knowledge

    Ali Abdelaal, Mohammed Nader Al Haffar, Mahmoud Fawzi, Walid Magdy · Mar 24, 2026 · Citations: 0

    Pairwise Preference Automatic Metrics

    We introduce IslamicMMLU, a benchmark of 10,013 multiple-choice questions spanning three tracks: Quran (2,013 questions), Hadith (4,000 questions), and Fiqh (jurisprudence, 4,000 questions).

  • Truth as a Compression Artifact in Language Model Training

    Konstantin Krestnikov · Mar 12, 2026 · Citations: 0

    Pairwise Preference Automatic Metrics

    In the random-error setting, models strongly prefer correct completions in paired evaluation: 83.1% accuracy at balanced data and 67.0% even when correct rules appear in only 10% of the corpus.

  • From Oracle to Noisy Context: Mitigating Contextual Exposure Bias in Speech-LLMs

    Xiaoyong Guo, Nanjie Li, Zijie Zeng, Kai Wang, Hao Huang · Mar 25, 2026 · Citations: 0

    Pairwise Preference

    We propose a unified training framework to improve robustness under realistic histories: (i) Teacher Error Knowledge by using Whisper large-v3 hypotheses as training-time history, (ii) Context Dropout to regularize over-reliance on history,…

  • Comparing Developer and LLM Biases in Code Evaluation

    Aditya Mittal, Ryan Shar, Zichu Wu, Shyam Agarwal, Tongshuang Wu · Mar 25, 2026 · Citations: 0

    Pairwise Preference Rubric Rating

    We present TRACE (Tool for Rubric Analysis in Code Evaluation), a framework that evaluates LLM judges' ability to predict human preferences and automatically extracts rubric items to reveal systematic biases in how humans and models weigh…

  • From Isolated Scoring to Collaborative Ranking: A Comparison-Native Framework for LLM-Based Paper Evaluation

    Pujun Zheng, Jiacheng Yao, Jinquan Zheng, Chenyang Gu, Guoxiu He · Mar 18, 2026 · Citations: 0

    Pairwise Preference

    Large language models (LLMs) are currently applied to scientific paper evaluation by assigning an absolute score to each paper independently.

  • You Didn't Have to Say It like That: Subliminal Learning from Faithful Paraphrases

    Isaia Gisler, Zhonghao He, Tianyi Qiu · Mar 10, 2026 · Citations: 0

    Pairwise Preference

    We investigate whether transmission occurs through natural language paraphrases with fixed semantic content, and whether content explicitly contradicting the teacher's preference can block it.

  • Bioalignment: Measuring and Improving LLM Disposition Toward Biological Systems for AI Safety

    Trent R Northen, Mingxun Wang · Mar 10, 2026 · Citations: 0

    Pairwise Preference

    A sample of 5 frontier and 5 open-weight models were measured using 50 curated Bioalignment prompts with a Kelly criterion-inspired evaluation framework.
