
Researcher Tools

Human Feedback and Eval Paper Explorer

A focused feed for RLHF, preference data, rater protocols, agent evaluation, and LLM-as-judge research. Every paper includes structured metadata for quick triage.

Total papers: 31 · Search mode: keyword
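The structured metadata on each card below (score, freshness, tags) lends itself to programmatic triage. A minimal sketch of that workflow, using hypothetical field names since the page exposes no documented API:

```python
# Hypothetical sketch: filtering this feed's per-paper metadata.
# The dict keys ("title", "score", "freshness", "tags") are assumptions
# modeled on the cards shown on this page, not a real data schema.

papers = [
    {"title": "TimeWarp: Evaluating Web Agents by Revisiting the Past",
     "score": 62, "freshness": "Hot",
     "tags": ["Demonstrations", "Web Browsing"]},
    {"title": "MoMaGen: Generating Demonstrations under Soft and Hard Constraints",
     "score": 53, "freshness": "Cold",
     "tags": ["Demonstrations", "Simulation Env"]},
]

def triage(papers, min_score=60, required_tag="Demonstrations"):
    """Keep papers at or above a score threshold that carry a required tag."""
    return [p for p in papers
            if p["score"] >= min_score and required_tag in p["tags"]]

shortlist = triage(papers)
print([p["title"] for p in shortlist])
# prints ['TimeWarp: Evaluating Web Agents by Revisiting the Past']
```

The same filter generalizes to any of the card fields shown on this page, e.g. restricting to `freshness == "Hot"` for recent work.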

Featured Papers

Popular high-signal papers with direct links to full protocol pages.


Orchestration-Free Customer Service Automation: A Privacy-Preserving and Flowchart-Guided Framework

Mengze Hong, Chen Jason Zhang, Zichang Guo, Hanlin Gu, Di Jiang, Li Qing · Feb 17, 2026

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 65% Moderate protocol signal Freshness: Hot Status: Ready
Demonstrations Automatic Metrics General
  • Existing approaches either rely on modular system designs with extensive agent orchestration or employ over-simplified instruction schemas, providing limited guidance and generalizing poorly.
  • We first define the components and evaluation metrics for TOFs, then formalize a cost-efficient flowchart construction algorithm to abstract procedural knowledge from service dialogues.
Open paper
TimeWarp: Evaluating Web Agents by Revisiting the Past

Md Farhan Ishmam, Kenneth Marino · Mar 5, 2026

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 62% Moderate protocol signal Freshness: Hot Status: Ready
Demonstrations Web Browsing General
  • The improvement of web agents on current benchmarks raises the question: Do today's agents perform just as well when the web changes?
  • We introduce TimeWarp, a benchmark that emulates the evolving web using containerized environments that vary in UI, design, and layout.
Open paper
IDP Accelerator: Agentic Document Intelligence from Extraction to Compliance Validation

Md Mofijul Islam, Md Sirajus Salekin, Joe King, Priyashree Roy, Vamsi Thilak Gudi, Spencer Romo · Feb 26, 2026

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 65% Moderate protocol signal Freshness: Hot Status: Fallback
Demonstrations Automatic Metrics Coding
  • We present IDP (Intelligent Document Processing) Accelerator, a framework enabling agentic AI for end-to-end document intelligence with four key components: (1) DocSplit, a novel benchmark dataset and multimodal classifier using BIO tagging…
Open paper
AuditBench: Evaluating Alignment Auditing Techniques on Models with Hidden Behaviors

Abhay Sheshadri, Aidan Ewart, Kai Fronsdal, Isha Gupta, Samuel R. Bowman, Sara Price · Feb 26, 2026

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 62% Moderate protocol signal Freshness: Hot Status: Fallback
Demonstrations General
  • We introduce AuditBench, an alignment auditing benchmark.
  • To demonstrate AuditBench's utility, we develop an investigator agent that autonomously employs a configurable set of auditing tools.
Open paper
FewMMBench: A Benchmark for Multimodal Few-Shot Learning

Mustafa Dogan, Ilker Kesen, Iacer Calixto, Aykut Erdem, Erkut Erdem · Feb 25, 2026

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 62% Moderate protocol signal Freshness: Hot Status: Fallback
Demonstrations General
  • In this paper, we introduce FewMMBench, a comprehensive benchmark designed to evaluate MLLMs under few-shot conditions, with a focus on In-Context Learning (ICL) and Chain-of-Thought (CoT) prompting.
Open paper
Optimizing In-Context Demonstrations for LLM-based Automated Grading

Yucheng Chu, Hang Li, Kaiqi Yang, Yasemin Copur-Gencturk, Kevin Haudek, Joseph Krajcik · Feb 28, 2026

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 58% Sparse protocol signal Freshness: Hot Status: Fallback
Rubric Rating Demonstrations General
  • GUIDE paves the way for trusted, scalable assessment systems that align closely with human pedagogical standards.
Open paper
Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 58% Sparse protocol signal Freshness: Hot Status: Fallback
Demonstrations General
  • Argumentative LLMs (ArgLLMs) are an existing approach leveraging Large Language Models (LLMs) and computational argumentation for decision-making, with the aim of making the resulting decisions faithfully explainable to and contestable by…
  • Here we propose a web-based system implementing ArgLLM-empowered agents for binary tasks.
Open paper
Risk-Aware World Model Predictive Control for Generalizable End-to-End Autonomous Driving

Jiangxin Sun, Feng Xue, Teng Long, Chang Liu, Jian-Fang Hu, Wei-Shi Zheng · Feb 26, 2026

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 58% Sparse protocol signal Freshness: Hot Status: Fallback
Demonstrations General
  • Practically, RaWMPC leverages a world model to predict the consequences of multiple candidate actions and selects low-risk actions through explicit risk evaluation.
  • Furthermore, to generate low-risk candidate actions at test time, we introduce a self-evaluation distillation method to distill risk-avoidance capabilities from the well-trained world model into a generative action proposal network without…
Open paper
Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 58% Sparse protocol signal Freshness: Hot Status: Fallback
Demonstrations General
  • Extensive experiments on five KGQA benchmark datasets demonstrate that our method achieves, to the best of our knowledge, state-of-the-art performance, outperforming not only open-source but even closed-source LLMs.
Open paper
Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 58% Sparse protocol signal Freshness: Hot Status: Fallback
Demonstrations Coding
  • Effective human-AI coordination requires artificial agents capable of exhibiting and responding to human-like behaviors while adapting to changing contexts.
  • Drawing inspiration from the theory of human cognitive processes, where inner speech guides action selection before execution, we propose MIMIC (Modeling Inner Motivations for Imitation and Control), a framework that uses language as an…
Open paper

Match reason: Matches selected tags (Demonstrations).

Score: 58% Sparse protocol signal Freshness: Hot Status: Fallback
Demonstrations Coding
  • Generative AI is reshaping knowledge work, yet existing research focuses predominantly on software engineering and the natural sciences, with limited methodological exploration for the humanities and social sciences.
  • Positioned as a "methodological experiment," this study proposes an AI Agent-based collaborative research workflow (Agentic Workflow) for humanities and social science research.
Open paper

Match reason: Matches selected tags (Demonstrations).

Score: 58% Sparse protocol signal Freshness: Hot Status: Fallback
Demonstrations General
  • This paper introduces Perspectives, an interactive extension of the Discourse Analysis Tool Suite designed to empower Digital Humanities (DH) scholars to explore and organize large, unstructured document collections.
  • Perspectives implements a flexible, aspect-focused document clustering pipeline with human-in-the-loop refinement capabilities.
Open paper
MoMaGen: Generating Demonstrations under Soft and Hard Constraints for Multi-Step Bimanual Mobile Manipulation

Chengshu Li, Mengdi Xu, Arpit Bahety, Hang Yin, Yunfan Jiang, Huang Huang · Oct 21, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% Moderate protocol signal Freshness: Cold Status: Ready
Demonstrations Simulation Env Long Horizon General
  • Imitation learning from large-scale, diverse human demonstrations has been shown to be effective for training robots, but collecting such data is costly and time-consuming.
  • This challenge intensifies for multi-step bimanual mobile manipulation, where humans must teleoperate both the mobile base and two high-DoF arms.
Open paper
SPACeR: Self-Play Anchoring with Centralized Reference Models

Wei-Jer Chang, Akshay Rangesh, Kevin Joseph, Matthew Strong, Masayoshi Tomizuka, Yihan Hu · Oct 20, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 53% Moderate protocol signal Freshness: Cold Status: Ready
Demonstrations Simulation Env Multi Agent General
  • Developing autonomous vehicles (AVs) requires not only safety and efficiency, but also realistic, human-like behaviors that are socially aware and predictable.
  • Achieving this requires sim agent policies that are human-like, fast, and scalable in multi-agent settings.
Open paper
AITutor-EvalKit: Exploring the Capabilities of AI Tutors

Numaan Naeem, Kaushal Kumar Maurya, Kseniia Petukhova, Ekaterina Kochmar · Dec 3, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 52% Sparse protocol signal Freshness: Warm Status: Fallback
Demonstrations General
  • We present AITutor-EvalKit, an application that uses language technology to evaluate the pedagogical quality of AI tutors and provides software for demonstration, evaluation, model inspection, and data visualization.
Open paper
Supervised Reinforcement Learning: From Expert Trajectories to Step-wise Reasoning

Yihe Deng, I-Hung Hsu, Jun Yan, Zifeng Wang, Rujun Han, Gufeng Zhang · Oct 29, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 50% Moderate protocol signal Freshness: Cold Status: Ready
Demonstrations Long Horizon Coding
  • Beyond reasoning benchmarks, SRL generalizes effectively to agentic software engineering tasks, establishing it as a robust and versatile training framework for reasoning-oriented LLMs.
Open paper
Learning to Answer from Correct Demonstrations

Nirmit Joshi, Gene Li, Siddharth Bhandari, Shiva Prasad Kasiviswanathan, Cong Ma, Nathan Srebro · Oct 17, 2025

Citations: 0

Match reason: Matches selected tags (Demonstrations).

Score: 50% Moderate protocol signal Freshness: Cold Status: Ready
Demonstrations Automatic Metrics General
Open paper
