
CuraView: A Multi-Agent Framework for Medical Hallucination Detection with GraphRAG-Enhanced Knowledge Verification

Severin Ye, Xiao Kong, Xiaopeng He, Guangsu Yan, Dongsuk Oh · May 5, 2026 · Citations: 0

How to use this page

Moderate trust

Use this for comparison and orientation, not as your only source.

Best use

Background context only

What to verify

Validate the evaluation procedure and quality controls in the full paper before operational use.

Evidence quality

Moderate

Derived from extracted protocol signals and abstract evidence.

Abstract

Discharge summaries require extracting critical information from lengthy electronic health records (EHRs), a process that is labor-intensive when performed manually. Large language models (LLMs) can improve generation efficiency; however, they are prone to producing faithfulness hallucinations, statements that contradict source records, posing direct risks to patient safety. To address this, we present CuraView, a multi-agent framework for sentence-level detection and evidence-grounded explanation of faithfulness hallucinations in discharge summaries. CuraView constructs a GraphRAG-based knowledge graph from patient-level EHRs and implements a closed-loop generation-detection pipeline with sentence-level evidence retrieval and classification spanning four evidence grades from strong support to direct contradiction (E1-E4), yielding structured and interpretable evidence chains. We evaluate CuraView on a subset of 250 patients from the Discharge-Me benchmark, with 50 patients held out for testing. Our fine-tuned Qwen3-14B detection model achieves an F1 of 0.831 on the safety-critical E4 metric (90.9% recall, 76.5% precision) and an F1 of 0.823 on E3+E4, representing a 50.0% relative improvement over the base model and outperforming RAGTruth-style and QAGS-style baselines. These results demonstrate that evidence-chain-based graph retrieval verification substantially improves the factual reliability of clinical documentation, while simultaneously producing reusable annotated datasets for downstream model training and distillation.
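The headline numbers in the abstract are internally consistent and can be sanity-checked: F1 is the harmonic mean of precision and recall, and the stated 50.0% relative improvement on E3+E4 implies a base-model F1 of roughly 0.549. A minimal check (the F1 formula is standard; the implied base score is an inference from the reported figures, not a number the abstract states):

```python
# Sanity-check the abstract's reported E4 scores.
precision = 0.765  # reported E4 precision (76.5%)
recall = 0.909     # reported E4 recall (90.9%)

# F1 is the harmonic mean of precision and recall.
f1_e4 = 2 * precision * recall / (precision + recall)
print(round(f1_e4, 3))  # 0.831, matching the reported E4 F1

# A 50.0% relative improvement on E3+E4 (F1 = 0.823) implies a
# base-model F1 of about 0.823 / 1.5 (an inference, not reported).
implied_base_f1 = 0.823 / 1.5
print(round(implied_base_f1, 3))  # ~0.549
```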

Should You Rely On This Paper?

This paper is adjacent to HFEPX scope and is best used for background context, not as a primary protocol reference.

Best use

Background context only

Use if you need

A benchmark-and-metrics comparison anchor.

Main weakness

No major weakness surfaced.

Trust level

Moderate

Usefulness score

25/100 • Low

Treat as adjacent context, not a core eval-method reference.

Human Feedback Signal

Not explicit in abstract metadata

Evaluation Signal

Detected

Usefulness for eval research

Adjacent candidate

Extraction confidence 55%

What We Could Verify

These are the protocol signals we could actually recover from the available paper metadata. Use them to decide whether this paper is worth deeper reading.

Human Feedback Types

missing

None explicit

No explicit feedback protocol extracted.

"Discharge summaries require extracting critical information from lengthy electronic health records (EHRs), a process that is labor-intensive when performed manually."

Evaluation Modes

strong

Automatic Metrics

Includes extracted eval setup.

"Discharge summaries require extracting critical information from lengthy electronic health records (EHRs), a process that is labor-intensive when performed manually."

Quality Controls

missing

Not reported

No explicit QC controls found.

"Discharge summaries require extracting critical information from lengthy electronic health records (EHRs), a process that is labor-intensive when performed manually."

Benchmarks / Datasets

strong

RAGTruth

Useful for quick benchmark comparison.

"Our fine-tuned Qwen3-14B detection model achieves an F1 of 0.831 on the safety-critical E4 metric (90.9% recall, 76.5% precision) and an F1 of 0.823 on E3+E4, representing a 50.0% relative improvement over the base model and outperforming RAGTruth-style and QAGS-style baselines."

Reported Metrics

strong

F1, Precision, Recall, Faithfulness

Useful for evaluation criteria comparison.

"Large language models (LLMs) can improve generation efficiency; however, they are prone to producing faithfulness hallucinations, statements that contradict source records, posing direct risks to patient safety."

Human Feedback Details

  • Uses human feedback: No
  • Feedback types: None
  • Rater population: Not reported
  • Expertise required: Medicine

Evaluation Details

  • Evaluation modes: Automatic Metrics
  • Agentic eval: Multi Agent
  • Quality controls: Not reported
  • Evidence quality: Moderate
  • Use this page as: Background context only

Protocol And Measurement Signals

Benchmarks / Datasets

RAGTruth

Reported Metrics

F1, Precision, Recall, Faithfulness

Research Brief

Metadata summary

Discharge summaries require extracting critical information from lengthy electronic health records (EHRs), a process that is labor-intensive when performed manually.

Based on abstract + metadata only. Check the source paper before making high-confidence protocol decisions.

Key Takeaways

  • Discharge summaries require extracting critical information from lengthy electronic health records (EHRs), a process that is labor-intensive when performed manually.
  • Large language models (LLMs) can improve generation efficiency; however, they are prone to producing faithfulness hallucinations, statements that contradict source records, posing direct risks to patient safety.
  • To address this, we present CuraView, a multi-agent framework for sentence-level detection and evidence-grounded explanation of faithfulness hallucinations in discharge summaries.

Researcher Actions

  • Compare this paper against nearby papers in the same arXiv category before using it for protocol decisions.
  • Validate inferred eval signals (Automatic metrics) against the full paper.
  • Use related-paper links to find stronger protocol-specific references.

Caveats

  • Generated from abstract + metadata only; no PDF parsing.
  • Signals below are heuristic and may miss details reported outside the abstract.


Research Summary

Contribution Summary

  • To address this, we present CuraView, a multi-agent framework for sentence-level detection and evidence-grounded explanation of faithfulness hallucinations in discharge summaries.
  • We evaluate CuraView on a subset of 250 patients from the Discharge-Me benchmark, with 50 patients held out for testing.
  • Our fine-tuned Qwen3-14B detection model achieves an F1 of 0.831 on the safety-critical E4 metric (90.9% recall, 76.5% precision) and an F1 of 0.823 on E3+E4, representing a 50.0% relative improvement over the base model and outperforming…

Why It Matters For Eval

  • To address this, we present CuraView, a multi-agent framework for sentence-level detection and evidence-grounded explanation of faithfulness hallucinations in discharge summaries.
  • We evaluate CuraView on a subset of 250 patients from the Discharge-Me benchmark, with 50 patients held out for testing.

Researcher Checklist

  • Gap: Human feedback protocol is explicit

    No explicit human feedback protocol detected.

  • Pass: Evaluation mode is explicit

    Detected: Automatic Metrics

  • Gap: Quality control reporting appears

    No calibration/adjudication/IAA control explicitly detected.

  • Pass: Benchmark or dataset anchors are present

    Detected: RAGTruth

  • Pass: Metric reporting is present

    Detected: f1, precision, recall, faithfulness
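The "Detected" metric signals in this checklist appear to come from a keyword scan over the abstract. A minimal sketch of that kind of heuristic (the keyword list and matching rule are assumptions for illustration, not this page's actual extractor):

```python
import re

# Hypothetical keyword scan mimicking the page's heuristic metric
# detection. The keyword list is an assumption, not the real extractor.
METRIC_KEYWORDS = ["f1", "precision", "recall", "faithfulness"]

def detect_metrics(abstract: str) -> list[str]:
    text = abstract.lower()
    return [kw for kw in METRIC_KEYWORDS if re.search(rf"\b{kw}\b", text)]

abstract = ("Our fine-tuned Qwen3-14B detection model achieves an F1 of "
            "0.831 on the safety-critical E4 metric (90.9% recall, 76.5% "
            "precision) ... faithfulness hallucinations ...")
print(detect_metrics(abstract))  # ['f1', 'precision', 'recall', 'faithfulness']
```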

Related Papers

Papers are ranked by protocol overlap, extraction signal alignment, and semantic proximity.
