HFEPX Benchmark Hub
MMLU Benchmark Papers (Last 90 Days)
Updated from current HFEPX corpus (Mar 31, 2026). 11 papers are grouped in this benchmark page.
Common evaluation modes: Automatic Metrics, LLM-as-Judge. Most common rater population: Domain Experts. Common annotation unit: Trajectory. Frequently cited benchmark: MMLU. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Mar 28, 2026.