DeCode: Decoupling Content and Delivery for Medical QA
Po-Jen Ko, Chen-Han Tsai, Yu-Shao Peng · Jan 5, 2026 · Citations: 0
Abstract
Large language models (LLMs) exhibit strong medical knowledge and can generate factually accurate responses. However, existing models often fail to account for individual patient contexts, producing answers that are clinically correct yet poorly aligned with patients' needs. In this work, we introduce DeCode (Decoupling Content and Delivery), a training-free, model-agnostic framework that adapts existing LLMs to produce contextualized answers in clinical settings. We evaluate DeCode on OpenAI HealthBench, a comprehensive and challenging benchmark designed to assess the clinical relevance and validity of LLM responses. DeCode boosts zero-shot performance from 28.4% to 49.8%, achieving a new state of the art over existing methods. These results demonstrate the effectiveness of DeCode in improving clinical question answering with LLMs.
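The abstract does not specify how DeCode decouples content from delivery; one plausible reading is a two-stage, training-free pipeline: first generate a clinically accurate draft, then rewrite it for the individual patient's context. The sketch below illustrates that reading only. The function names (`decode_answer`, `toy_llm`), the prompt wording, and the stand-in model are all assumptions for illustration, not the paper's actual method.

```python
def decode_answer(question: str, patient_context: str, llm) -> str:
    """Hypothetical two-stage pipeline: content first, delivery second.

    `llm` is any callable mapping a prompt string to a response string,
    which keeps the sketch model-agnostic and training-free.
    """
    # Stage 1 (content): draft a factually accurate answer,
    # ignoring delivery concerns such as tone or reading level.
    content = llm(f"Answer the medical question factually:\n{question}")

    # Stage 2 (delivery): adapt the draft to the patient's context,
    # instructing the model to preserve the medical content.
    delivery_prompt = (
        "Rewrite the answer below for this patient, preserving all "
        "medical facts.\n"
        f"Patient context: {patient_context}\n"
        f"Draft answer: {content}"
    )
    return llm(delivery_prompt)


def toy_llm(prompt: str) -> str:
    """Trivial stand-in (echoes the prompt's last line) so the sketch
    runs without any model or API access."""
    return prompt.splitlines()[-1]


if __name__ == "__main__":
    answer = decode_answer(
        "What does this medication do?",
        "adult patient, low health literacy",
        toy_llm,
    )
    print(answer)
```

Because both stages are plain prompt calls against an arbitrary `llm` callable, this structure would require no fine-tuning and could wrap any existing model, consistent with the "training-free, model-agnostic" claim.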