HFEPX Hub
CS.CL + Human Eval Papers
Updated from the current HFEPX corpus (Apr 12, 2026); 84 papers are grouped on this hub page. Common evaluation modes: human evaluation and automatic metrics. Most common rater population: domain experts. Common annotation unit: multi-dimensional rubric. Frequent quality control: inter-annotator agreement reported. Frequently cited benchmark: RewardBench. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new evaluation experiments. The newest paper in this set is from Mar 22, 2026.
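Since "inter-annotator agreement reported" is listed as a frequent quality-control practice, here is a minimal sketch of one common agreement statistic, Cohen's kappa, for two raters over the same items. The rater names and label values below are hypothetical illustrations, not drawn from any paper in this set:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label rates.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels from two raters on six items.
a = ["good", "good", "bad", "good", "bad", "good"]
b = ["good", "bad", "bad", "good", "bad", "good"]
print(round(cohens_kappa(a, b), 3))  # → 0.667
```

Values near 0 indicate agreement no better than chance; values near 1 indicate near-perfect agreement. Note this sketch does not guard against the degenerate case where expected agreement equals 1.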