
PanCanBench: A Comprehensive Benchmark for Evaluating Large Language Models in Pancreatic Oncology

Yimin Zhao, Sheela R. Damle, Simone E. Dekker, Scott Geng, Karly Williams Silva, Jesse J Hubbard, Manuel F Fernandez, Fatima Zelada-Arenas, Alejandra Alvarez, Brianne Flores, Alexis Rodriguez, Stephen Salerno, Carrie Wright, Zihao Wang, Pang Wei Koh, Jeffrey T. Leek · Mar 2, 2026 · Citations: 0

How to use this page

High trust

Use this as a practical starting point for protocol research, then validate against the original paper.

Best use

Primary protocol reference for eval design

What to verify

Validate the evaluation procedure and quality controls in the full paper before operational use.

Evidence quality

High

Derived from extracted protocol signals and abstract evidence.

Abstract

Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety. As patients and clinicians increasingly use LLMs for guidance on complex conditions such as pancreatic cancer, evaluation must extend beyond general medical knowledge. Existing frameworks, such as HealthBench, rely on simulated queries and lack disease-specific depth. Moreover, high rubric-based scores do not ensure factual correctness, underscoring the need to assess hallucinations. We developed a human-in-the-loop pipeline to create expert rubrics for de-identified patient questions from the Pancreatic Cancer Action Network (PanCAN). The resulting benchmark, PanCanBench, includes 3,130 question-specific criteria across 282 authentic patient questions. We evaluated 22 proprietary and open-source LLMs using an LLM-as-a-judge framework, measuring clinical completeness, factual accuracy, and web-search integration. Models showed substantial variation in rubric-based completeness, with scores ranging from 46.5% to 82.3%. Factual errors were common, with hallucination rates (the percentages of responses containing at least one factual error) ranging from 6.0% for Gemini-2.5 Pro and GPT-4o to 53.8% for Llama-3.1-8B. Importantly, newer reasoning-optimized models did not consistently improve factuality: although o3 achieved the highest rubric score, it produced inaccuracies more frequently than other GPT-family models. Web-search integration did not inherently guarantee better responses. The average score changed from 66.8% to 63.9% for Gemini-2.5 Pro and from 73.8% to 72.8% for GPT-5 when web search was enabled. Synthetic AI-generated rubrics inflated absolute scores by 17.9 points on average while generally maintaining similar relative ranking.
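
The two headline measurements above are a rubric-based completeness score over question-specific criteria and a hallucination rate defined as the share of responses containing at least one factual error. A minimal sketch of how such aggregates could be computed from per-criterion judge verdicts, assuming a simple boolean-verdict layout that is not taken from the paper:

```python
# Hedged sketch: aggregate metrics as described in the abstract.
# The data layout (boolean per-criterion verdicts, per-response error counts)
# is an illustrative assumption, not the paper's implementation.

def completeness_score(criteria_met: list[bool]) -> float:
    """Fraction of question-specific rubric criteria the response satisfies."""
    return sum(criteria_met) / len(criteria_met) if criteria_met else 0.0

def hallucination_rate(error_counts_per_response: list[int]) -> float:
    """Share of responses containing at least one factual error."""
    if not error_counts_per_response:
        return 0.0
    flagged = sum(1 for n in error_counts_per_response if n > 0)
    return flagged / len(error_counts_per_response)

# Example: meeting 9 of 12 criteria gives 0.75 for that question;
# 3 flagged responses out of 50 gives a 6% hallucination rate.
print(completeness_score([True] * 9 + [False] * 3))  # 0.75
print(hallucination_rate([0] * 47 + [1, 2, 1]))      # 0.06
```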

Should You Rely On This Paper?

This paper has strong direct human-feedback and evaluation protocol signal and is suitable as a primary eval pipeline reference.

Best use

Primary protocol reference for eval design

Use if you need

A benchmark-and-metrics comparison anchor.

Main weakness

No major weakness surfaced.

Trust level

High

Usefulness score

77/100 • High

Use this as a primary source when designing or comparing eval protocols.

Human Feedback Signal

Detected

Evaluation Signal

Detected

Usefulness for eval research

High-confidence candidate

Extraction confidence 80%

What We Could Verify

These are the protocol signals we could actually recover from the available paper metadata. Use them to decide whether this paper is worth deeper reading.

Human Feedback Types

strong

Rubric Rating, Expert Verification

Directly usable for protocol triage.

"Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety."

Evaluation Modes

strong

LLM-as-a-Judge, Automatic Metrics

Includes extracted eval setup.

"Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety."

Quality Controls

missing

Not reported

No explicit QC controls found.

"Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety."

Benchmarks / Datasets

strong

PanCanBench, HealthBench

Useful for quick benchmark comparison.

"Existing frameworks, such as HealthBench, rely on simulated queries and lack disease-specific depth."

Reported Metrics

strong

Accuracy

Useful for evaluation criteria comparison.

"Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety."

Rater Population

strong

Domain Experts

Helpful for staffing comparability.

"Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety."

Human Feedback Details

  • Uses human feedback: Yes
  • Feedback types: Rubric Rating, Expert Verification
  • Rater population: Domain Experts
  • Unit of annotation: Multi-dimensional rubric (a sample criterion record is sketched after this list)
  • Expertise required: Medicine
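
For readers sketching a comparable annotation schema, one way to represent a single question-specific criterion is shown below. The field names are assumptions for illustration; the abstract does not publish the benchmark's schema.

```python
from dataclasses import dataclass

@dataclass
class RubricCriterion:
    """One expert-authored criterion tied to a single patient question.

    Field names are illustrative assumptions, not PanCanBench's schema.
    """
    question_id: str              # de-identified PanCAN patient question
    criterion_text: str           # what a complete answer should cover
    weight: float = 1.0           # optional importance weighting
    expert_verified: bool = True  # passed expert review in the loop
```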

Evaluation Details

  • Evaluation modes: LLM-as-a-Judge, Automatic Metrics (a minimal judge-loop sketch follows this list)
  • Agentic eval: None
  • Quality controls: Not reported
  • Evidence quality: High
  • Use this page as: Primary protocol reference for eval design
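
A minimal LLM-as-a-judge loop consistent with the modes listed above might look like the sketch below. The judge prompt, the `call_judge` helper, and the YES/NO protocol are placeholders, not the paper's grading setup.

```python
# Hedged sketch of an LLM-as-a-judge grading pass over rubric criteria.
# `call_judge` is a placeholder for whatever judge model/API you use.

def call_judge(prompt: str) -> str:
    raise NotImplementedError("plug in your judge model call here")

def grade_response(question: str, response: str, criteria: list[str]) -> list[bool]:
    """Ask the judge whether the response satisfies each rubric criterion."""
    verdicts = []
    for criterion in criteria:
        prompt = (
            f"Patient question:\n{question}\n\n"
            f"Candidate response:\n{response}\n\n"
            f"Criterion: {criterion}\n"
            "Does the response satisfy this criterion? Answer YES or NO."
        )
        verdicts.append(call_judge(prompt).strip().upper().startswith("YES"))
    return verdicts
```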

Protocol And Measurement Signals

Benchmarks / Datasets

PanCanBench, HealthBench

Reported Metrics

Accuracy

Research Brief

Metadata summary

Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety.

Based on abstract + metadata only. Check the source paper before making high-confidence protocol decisions.

Key Takeaways

  • Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety.
  • As patients and clinicians increasingly use LLMs for guidance on complex conditions such as pancreatic cancer, evaluation must extend beyond general medical knowledge.
  • Existing frameworks, such as HealthBench, rely on simulated queries and lack disease-specific depth.

Researcher Actions

  • Compare this paper against nearby papers in the same arXiv category before using it for protocol decisions.
  • Validate inferred eval signals (Automatic metrics) against the full paper.
  • Use related-paper links to find stronger protocol-specific references.

Caveats

  • Generated from abstract + metadata only; no PDF parsing.
  • Signals below are heuristic and may miss details reported outside the abstract.

Research Summary

Contribution Summary

  • Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety.
  • As patients and clinicians increasingly use LLMs for guidance on complex conditions such as pancreatic cancer, evaluation must extend beyond general medical knowledge.
  • We evaluated 22 proprietary and open-source LLMs using an LLM-as-a-judge framework, measuring clinical completeness, factual accuracy, and web-search integration.

Why It Matters For Eval

  • Large language models (LLMs) have achieved expert-level performance on standardized examinations, yet multiple-choice accuracy poorly reflects real-world clinical utility and safety.
  • We evaluated 22 proprietary and open-source LLMs using an LLM-as-a-judge framework, measuring clinical completeness, factual accuracy, and web-search integration.

Researcher Checklist

  • Pass: Human feedback protocol is explicit

    Detected: Rubric Rating, Expert Verification

  • Pass: Evaluation mode is explicit

    Detected: LLM-as-a-Judge, Automatic Metrics

  • Gap: Quality control reporting is missing

    No calibration/adjudication/IAA control explicitly detected.

  • Pass: Benchmark or dataset anchors are present

    Detected: PanCanBench, HealthBench

  • Pass: Metric reporting is present

    Detected: accuracy

Related Papers

Papers are ranked by protocol overlap, extraction signal alignment, and semantic proximity.
