
Researcher Tools

Human Feedback and Eval Paper Explorer

A focused feed for RLHF, preference data, rater protocols, agent evaluation, and LLM-as-judge research. Every paper includes structured metadata for quick triage.

Total papers: 67

Featured Papers

Popular high-signal papers with direct links to full protocol pages.

Browse by Topic

Jump directly into tag and hub pages to explore deeper content clusters.

Popular Tags

Top Protocol Hubs

Weekly Eval Paper Digest

The top RLHF, evaluation, and human feedback papers — curated and summarized every Friday.


Start Here By Objective

Pick your immediate research objective and jump directly to high-signal pages, not generic search.


DSPO: Stable and Efficient Policy Optimization for Agentic Search and Reasoning

Chenyang Gu, Yewen Pu, Bruce Yang, Xiaofan Li, Huan Gao · Oct 10, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% · Moderate protocol signal · Freshness: Cold · Status: Ready
Demonstrations · Simulation Env · General
  • Current approaches either rely on prompting to elicit the model's innate agent capabilities, or suffer from performance ceilings and collapse when applying RL to complex interactive tasks, leaving their true agentic potential untapped.
  • To address this, we introduce Dynamic-filter Sequence-level Policy Optimization (DSPO), an improved RL algorithm designed for robust agent training through sequence-level optimization and dynamic sample filtering.
Open paper
Watch and Learn: Learning to Use Computers from Online Videos

Chan Hee Song, Yiwen Song, Palash Goyal, Yu Su, Oriana Riva, Hamid Palangi · Oct 6, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% · Moderate protocol signal · Freshness: Cold · Status: Ready
Demonstrations · Long Horizon · General
  • Computer-using agents (CUAs) must plan task workflows across diverse and evolving applications, yet progress is limited by the lack of large-scale, high-quality training data.
  • We present Watch & Learn (W&L), a framework that converts readily available Internet videos of human computer use into executable UI trajectories at scale.
Open paper
IA2: Alignment with ICL Activations Improves Supervised Fine-Tuning

Aayush Mishra, Daniel Khashabi, Anqi Liu · Sep 26, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% · High protocol signal · Freshness: Cold · Status: Ready
Demonstrations · Automatic Metrics · General
  • Performing IA2 as a priming step before SFT significantly improves the accuracy and calibration of model outputs, as shown by our extensive empirical results on 12 popular benchmarks and two model families.
Open paper
LaTeXTrans: Structured LaTeX Translation with Multi-Agent Coordination

Ziming Zhu, Chenglong Wang, Haosong Xv, Shunjie Xing, Yifu Huo, Fengning Tian · Aug 26, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% · High protocol signal · Freshness: Cold · Status: Ready
Demonstrations · Automatic Metrics · Multi Agent · Math · Coding
  • In this paper, we introduce LaTeXTrans, a collaborative multi-agent system designed to address this challenge.
  • LaTeXTrans ensures format preservation, structural fidelity, and terminology consistency through six specialized agents: 1) a Parser that decomposes LaTeX into translation-friendly units via placeholder substitution and syntax filtering; 2)…
Open paper
Incentivizing Strong Reasoning from Weak Supervision

Yige Yuan, Teng Xiao, Shuchang Tao, Xue Wang, Jinyang Gao, Bolin Ding · May 26, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% · Moderate protocol signal · Freshness: Cold · Status: Ready
Demonstrations · Automatic Metrics · Coding
  • Experiments across diverse benchmarks and model architectures demonstrate that weak reasoners can effectively incentivize reasoning in stronger student models, consistently improving performance across a wide range of reasoning tasks.
Open paper
Efficient Agent Training for Computer Use

Yanheng He, Jiahe Jin, Pengfei Liu · May 20, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% · Moderate protocol signal · Freshness: Cold · Status: Ready
Demonstrations · Long Horizon · Coding
  • We introduce PC Agent-E, an efficient agent training framework that significantly reduces reliance on large-scale human demonstrations.
  • Trained on these enriched trajectories, our PC Agent-E model achieved a remarkable 141% relative improvement, and even surpassed Claude 3.7 Sonnet by 10% in relative terms on WindowsAgentArena-V2, an improved benchmark we also released.
Open paper
Structured Agent Distillation for Large Language Model

Jun Liu, Zhenglun Kong, Peiyan Dong, Changdi Yang, Tianqi Li, Hao Tang · May 20, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% · Moderate protocol signal · Freshness: Cold · Status: Ready
Demonstrations · Simulation Env · General
  • Large language models (LLMs) exhibit strong capabilities as decision-making agents by interleaving reasoning and actions, as seen in ReAct-style frameworks.
  • We propose Structured Agent Distillation, a framework that compresses large LLM-based agents into smaller student models while preserving both reasoning fidelity and action consistency.
Open paper
On Discovering Algorithms for Adversarial Imitation Learning

Shashank Reddy Chirra, Jayden Teoh, Praveen Paruchuri, Pradeep Varakantham · Oct 1, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 50% · Moderate protocol signal · Freshness: Cold · Status: Ready
Demonstrations · Simulation Env · Coding
  • RA functions in adversarial imitation learning (AIL) are typically derived from divergence minimization objectives, relying heavily on human design and ingenuity.
  • Remarkably, DAIL generalises across unseen environments and policy optimization algorithms, outperforming the current state-of-the-art human-designed baselines.
Open paper
CausalARC: Abstract Reasoning with Causal World Models

Jacqueline Maasch, John Kalantari, Kia Khezeli · Sep 3, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 46% · Sparse protocol signal · Freshness: Cold · Status: Fallback
Demonstrations · Math
  • As a proof-of-concept, we illustrate the use of CausalARC for four language model evaluation settings: (1) abstract reasoning with test-time training, (2) counterfactual reasoning with in-context learning, (3) program synthesis, and (4)…
Open paper
AmbiSQL: Interactive Ambiguity Detection and Resolution for Text-to-SQL

Zhongjun Ding, Yin Lin, Tianjing Zeng, Rong Zhu, Bolin Ding, Jingren Zhou · Aug 21, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 46% · Sparse protocol signal · Freshness: Cold · Status: Fallback
Demonstrations · General
  • We provide 40 ambiguous queries collected from two real-world benchmarks that SIGMOD'26 attendees can use to explore how disambiguation improves SQL generation quality.
Open paper
NeuralOS: Towards Simulating Operating Systems via Neural Generative Models

Luke Rivard, Sun Sun, Hongyu Guo, Wenhu Chen, Yuntian Deng · Jul 11, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 46% · Sparse protocol signal · Freshness: Cold · Status: Fallback
Demonstrations · General
  • The model is trained on a dataset of Ubuntu XFCE recordings, which include both randomly generated interactions and realistic interactions produced by AI agents.
Open paper
Programming by Backprop: An Instruction is Worth 100 Examples When Finetuning LLMs

Jonathan Cook, Silvia Sapora, Arash Ahmadian, Akbir Khan, Tim Rocktaschel, Jakob Foerster · Jun 23, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 46% · Sparse protocol signal · Freshness: Cold · Status: Fallback
Demonstrations · Coding
  • Though execution of instructions in training data remains less reliable than when instructions are given in-context, our results demonstrate that procedural knowledge can be noisily 'programmed' into LLMs through PBB, with important…
Open paper
MOBODY: Model Based Off-Dynamics Offline Reinforcement Learning

Yihong Guo, Yu Yang, Pan Xu, Anqi Liu · Jun 10, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 46% · Sparse protocol signal · Freshness: Cold · Status: Fallback
Demonstrations · General
  • We evaluate MOBODY on a wide range of MuJoCo and Adroit benchmarks, demonstrating that it outperforms state-of-the-art off-dynamics RL baselines as well as policy learning methods based on different dynamics learning baselines, with…
Open paper
Training with Pseudo-Code for Instruction Following

Prince Kumar, Rudra Murthy, Riyaz Bhat, Danish Contractor · May 23, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 46% · Sparse protocol signal · Freshness: Cold · Status: Fallback
Demonstrations · Math · Coding
  • We evaluate our method on 12 publicly available benchmarks spanning instruction-following, mathematical reasoning, and commonsense reasoning, across six base models.
  • Our results show that models trained with pseudo-code follow instructions more reliably, achieving relative gains of 8-21% on instruction following benchmarks, while largely preserving and in some cases improving performance on…
Open paper
REFLEX: Metacognitive Reasoning for Reflective Zero-Shot Robotic Planning with Large Language Models

Wenjie Lin, Jin Wei-Kocsis, Jiansong Zhang, Byung-Cheol Min, Dongming Gan, Paul Asunda · May 20, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 46% · Sparse protocol signal · Freshness: Cold · Status: Fallback
Demonstrations · General
  • Inspired by human metacognitive learning and creative problem-solving, we address this limitation by exploring a fundamental question: Can LLMs be empowered with metacognitive capabilities to reason, reflect, and create, thereby enhancing…
  • We propose a more challenging robotic benchmark task and evaluate our framework on the existing benchmark and the novel task.
Open paper
