
Benchmark Hub

GSM8K Benchmark Papers (Last 180 Days)

Updated from the current HFEPX corpus (Feb 27, 2026). This page groups 10 papers for this benchmark. Common evaluation modes: Automatic Metrics, Human Eval. Frequent quality control: Calibration. Frequently cited benchmark: GSM8K. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Feb 26, 2026.

Papers: 10 · Last published: Feb 26, 2026

Research Narrative

Grounded narrative · Model: deterministic-grounded · Source: persisted

Updated from the current HFEPX corpus (Feb 27, 2026). This page tracks 10 papers for GSM8K Benchmark Papers (Last 180 Days). Dominant protocol signals include automatic metrics, human evaluation, and simulation environments; benchmark focus falls most often on GSM8K and HumanEval+, and metric focus on accuracy and latency. Use the grounded sections below to prioritize reproducible protocol choices, benchmark-matched comparisons, and judge-vs-human evaluation checks.

Why This Matters For Eval Research

Protocol Takeaways

Benchmark Interpretation

  • GSM8K appears in 100% of hub papers (10/10); use this cohort for benchmark-matched comparisons.
  • HumanEval+ appears in 30% of hub papers (3/10); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • accuracy is reported in 60% of hub papers (6/10); compare with a secondary metric before ranking methods.
  • latency is reported in 20% of hub papers (2/10); compare with a secondary metric before ranking methods.
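
Since accuracy dominates the metric signal in this hub, it helps to be explicit about how GSM8K accuracy is usually scored: exact match on the final numeric answer, which GSM8K reference solutions mark with a "####" delimiter. Below is a minimal Python sketch of that scoring; the function names and the last-number fallback for free-form model outputs are illustrative assumptions, not details taken from any paper in this hub.

```python
import re

# GSM8K reference solutions end with "#### <number>"; model outputs are
# free-form, so a common (assumed) convention is to take the last number
# in the generation as the predicted answer.
def extract_answer(text: str) -> str | None:
    # Prefer the explicit "#### 42" marker if present.
    marked = re.search(r"####\s*(-?[\d,]+(?:\.\d+)?)", text)
    if marked:
        return marked.group(1).replace(",", "")
    numbers = re.findall(r"-?\d+(?:\.\d+)?", text.replace(",", ""))
    return numbers[-1] if numbers else None

def gsm8k_accuracy(predictions: list[str], references: list[str]) -> float:
    """Exact-match accuracy over extracted final answers."""
    hits = 0
    for pred, ref in zip(predictions, references):
        p, r = extract_answer(pred), extract_answer(ref)
        hits += int(p is not None and p == r)
    return hits / len(references)

# Example: one correct, one wrong -> 0.5
print(gsm8k_accuracy(
    ["... so the total is 18.", "The answer is 7."],
    ["She makes $18 per day.\n#### 18", "#### 9"],
))
```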

Researcher Checklist

  • Close the gap on papers with explicit human feedback: coverage is a replication risk (10% vs 45% target).
  • Tighten coverage on papers reporting quality controls: coverage is usable but incomplete (20% vs 30% target).
  • Maintain strength on papers naming benchmarks/datasets: coverage is strong (100% vs 35% target).
  • Maintain strength on papers naming evaluation metrics: coverage is strong (70% vs 35% target).
  • Close the gap on papers with a known rater population: coverage is a replication risk (0% vs 35% target).
  • Close the gap on papers with a known annotation unit: coverage is a replication risk (0% vs 35% target).
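
Each checklist item applies the same rule: compare observed coverage against a target and label the result. A small sketch of that bookkeeping is below, with the coverage and target figures copied from the checklist; the three-way banding rule is an assumed reconstruction of the page's labels, not documented behavior.

```python
# Coverage figures copied from the checklist above; the banding rule
# (>= target -> strong, >= half the target -> incomplete, else -> risk)
# is an assumed reconstruction, not documented by the hub.
CHECKLIST = {
    "explicit human feedback":   (0.10, 0.45),
    "quality controls":          (0.20, 0.30),
    "named benchmarks/datasets": (1.00, 0.35),
    "named evaluation metrics":  (0.70, 0.35),
    "known rater population":    (0.00, 0.35),
    "known annotation unit":     (0.00, 0.35),
}

def status(coverage: float, target: float) -> str:
    if coverage >= target:
        return "strong"
    if coverage >= 0.5 * target:
        return "usable but incomplete"
    return "replication risk"

for item, (coverage, target) in CHECKLIST.items():
    print(f"{item}: {coverage:.0%} vs {target:.0%} target -> {status(coverage, target)}")
```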


Suggested Reading Order

  1. InnerQ: Hardware-aware Tuning-free Quantization of KV Cache for Large Language Models

    Start here for detailed protocol reporting, including rater and quality-control evidence.

  2. Black-Box Reliability Certification for AI Agents via Self-Consistency Sampling and Conformal Calibration

    Start here for detailed protocol reporting, including rater and quality-control evidence.

  3. Pyramid MoA: A Probabilistic Framework for Cost-Optimized Anytime Inference

    Start here for detailed protocol reporting, including rater and quality-control evidence.

  4. Think$^{2}$: Grounded Metacognitive Reasoning in Large Language Models

    Include a human-eval paper to anchor calibration against automated judge settings.

  5. SPQ: An Ensemble Technique for Large Language Model Compression

    Adds automatic metrics for broader coverage within this hub.

  6. TFL: Targeted Bit-Flip Attack on Large Language Model

    Adds automatic metrics for broader coverage within this hub.

  7. Weight space Detection of Backdoors in LoRA Adapters

    Adds automatic metrics for broader coverage within this hub.

  8. Scaling Beyond Masked Diffusion Language Models

    Adds automatic metrics for broader coverage within this hub.

Known Limitations

  • Rater population is under-specified (0% coverage).
  • Annotation unit is under-specified (0% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Links

human_eval vs automatic_metrics

both=0, left_only=1, right_only=9

0 papers use both Human Eval and Automatic Metrics.

automatic_metrics vs simulation_env

both=1, left_only=8, right_only=0

1 paper uses both Automatic Metrics and Simulation Env.

human_eval vs simulation_env

both=0, left_only=1, right_only=1

0 papers use both Human Eval and Simulation Env.
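
These overlap counts are plain set arithmetic over per-paper protocol tags. The sketch below shows one way to compute them, assuming each paper is tagged with protocol labels such as human_eval, automatic_metrics, and simulation_env; the example tag sets are illustrative only, not the hub's stored metadata.

```python
from dataclasses import dataclass

@dataclass
class OverlapReport:
    both: int
    left_only: int
    right_only: int

def protocol_overlap(papers: dict[str, set[str]], left: str, right: str) -> OverlapReport:
    """Count papers tagged with both protocols, only the left one, or only the right one."""
    left_set = {p for p, tags in papers.items() if left in tags}
    right_set = {p for p, tags in papers.items() if right in tags}
    return OverlapReport(
        both=len(left_set & right_set),
        left_only=len(left_set - right_set),
        right_only=len(right_set - left_set),
    )

# Illustrative tags only (not the hub's actual records).
papers = {
    "SPQ": {"automatic_metrics", "simulation_env"},
    "Think^2": {"human_eval"},
    "InnerQ": {"automatic_metrics"},
}
print(protocol_overlap(papers, "human_eval", "automatic_metrics"))
# -> OverlapReport(both=0, left_only=1, right_only=2)
```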

Metric Brief

perplexity

Coverage: 2 papers (20%)

2 papers (20%) mention perplexity.

Examples: SPQ: An Ensemble Technique for Large Language Model Compression, Scaling Beyond Masked Diffusion Language Models
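
Perplexity figures such as SPQ's WikiText-2 5.47 to 4.91 are only comparable when computed the same way: as the exponential of the mean per-token negative log-likelihood over the same text with the same tokenizer. A minimal sketch of that definition follows; the log-probabilities are placeholder values, not numbers from any hub paper.

```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """exp of the mean negative log-likelihood per token (natural log)."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Placeholder log-probabilities for a short sequence; lower perplexity
# means the model assigns higher probability to the held-out text.
print(perplexity([-1.2, -0.7, -2.1, -0.4]))  # ~3.0
```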

Top Papers On This Benchmark

  • InnerQ: Hardware-aware Tuning-free Quantization of KV Cache for Large Language Models

    Sayed Mohammadreza Tayaranian Hosseini, Amir Ardakani, Warren J. Gross · Feb 26, 2026

    Automatic Metrics

Our evaluation experiments on Llama models show that InnerQ maintains few-shot GSM8K performance comparable to non-quantized KV caches and surpasses prior KV cache quantization methods.

  • Black-Box Reliability Certification for AI Agents via Self-Consistency Sampling and Conformal Calibration

    Charafeddine Mouzouni · Feb 24, 2026

    Automatic Metrics

    We validate across five benchmarks, five models from three families, and both synthetic and real data.

  • Pyramid MoA: A Probabilistic Framework for Cost-Optimized Anytime Inference

    Arindam Khaled · Feb 23, 2026

    Automatic Metrics

    In this work, we propose "Pyramid MoA", a hierarchical Mixture-of-Agents architecture that uses a lightweight Router to dynamically escalate queries only when necessary.

  • Think$^{2}$: Grounded Metacognitive Reasoning in Large Language Models

    Abraham Paul Elenjical, Vivek Hruday Kavuri, Vasudeva Varma · Feb 21, 2026

    Human Eval

    We introduce a psychologically grounded metacognitive framework that operationalizes Ann Brown's regulatory cycle (Planning, Monitoring, and Evaluation) as a structured prompting architecture, and study its integration within a lightweight

  • SPQ: An Ensemble Technique for Large Language Model Compression

    Jiamin Yao, Eren Gultepe · Feb 20, 2026

Automatic Metrics · Simulation Env

    Applied to LLaMA-2-7B, SPQ achieves up to 75% memory reduction while maintaining or improving perplexity (e.g., WikiText-2 5.47 to 4.91) and preserving accuracy on downstream benchmarks such as C4, TruthfulQA, and GSM8K.

  • TFL: Targeted Bit-Flip Attack on Large Language Model

    Jingkai Guo, Chaitali Chakrabarti, Deliang Fan · Feb 19, 2026

    Automatic Metrics

    Large language models (LLMs) are increasingly deployed in safety and security critical applications, raising concerns about their robustness to model parameter fault injection attacks.

  • Weight space Detection of Backdoors in LoRA Adapters

    David Puertolas Merenciano, Ekaterina Vasyagina, Raghav Dixit, Kevin Zhu, Ruizhe Li · Feb 16, 2026

    Automatic Metrics

    We evaluate the method on 500 LoRA adapters -- 400 clean, and 100 poisoned for Llama-3.2-3B on instruction and reasoning datasets: Alpaca, Dolly, GSM8K, ARC-Challenge, SQuADv2, NaturalQuestions, HumanEval, and GLUE dataset.

  • Scaling Beyond Masked Diffusion Language Models

    Subham Sekhar Sahoo, Jean-Marie Lemercier, Zhihan Yang, Justin Deschenaux, Jingyu Liu · Feb 16, 2026

    Automatic Metrics

    Among discrete diffusion approaches, Masked diffusion currently dominates, largely driven by strong perplexity on language modeling benchmarks.

  • Search or Accelerate: Confidence-Switched Position Beam Search for Diffusion Language Models

    Mingyu Cao, Alvaro H. C. Correia, Christos Louizos, Shiwei Liu, Lu Yin · Feb 11, 2026

    Automatic Metrics

    Across mathematical reasoning and code generation benchmarks (GSM8K, MBPP, HumanEval) on Dream-7B and LLaDA-8B, SOAR improves generation quality while maintaining competitive inference speed, offering a practical way to balance quality and

  • SLM-MUX: Orchestrating Small Language Models for Reasoning

    Chenyu Wang, Zishen Wan, Hao Kang, Emma Chen, Zhiqiang Xie · Oct 6, 2025

    Automatic Metrics

    Additional experiments show that the core principle of SLM-MUX extends to open-ended generation tasks (e.g., HumanEval) and benefits other model classes, including frontier LLMs and domain-specific fine-tuned SLMs.

Other Benchmark Hubs