
Personalized Prediction of Perceived Message Effectiveness Using Large Language Model Based Digital Twins

Jasmin Han, Janardan Devkota, Joseph Waring, Amanda Luken, Felix Naughton, Roger Vilardaga, Jonathan Bricker, Carl Latkin, Meghan Moran, Yiqun Chen, Johannes Thrul · Feb 23, 2026 · Citations: 0

Data freshness

Check recency before relying on this page for active eval decisions. Use stale pages as context and verify against current hub results.

  • Extraction: Fresh
  • Metadata refreshed: Feb 23, 2026, 12:32 AM (Stale)
  • Extraction refreshed: Apr 13, 2026, 6:35 AM (Fresh)
  • Extraction source: Persisted extraction
  • Confidence: 0.80

Abstract

Perceived message effectiveness (PME) as rated by potential intervention end-users is important for selecting and optimizing personalized smoking cessation intervention messages for mobile health (mHealth) platform delivery. This study evaluates whether large language models (LLMs) can accurately predict PME for smoking cessation messages. We evaluated multiple models for predicting PME across three domains: content quality, coping support, and quitting support. The dataset comprised 3010 message ratings (5-point Likert scale) from 301 young adult smokers. We compared (1) supervised learning models trained on labeled data, (2) zero- and few-shot LLMs prompted without task-specific fine-tuning, and (3) LLM-based digital twins that incorporate individual characteristics and prior PME histories to generate personalized predictions. Model performance was assessed on three held-out messages per participant using accuracy, Cohen's kappa, and F1. LLM-based digital twins outperformed zero- and few-shot LLMs (by 12 percentage points on average) and supervised baselines (by 13 percentage points), achieving accuracies of 0.49 (content), 0.45 (coping), and 0.49 (quitting), with directional accuracies of 0.75, 0.66, and 0.70 on a simplified 3-point scale. Digital twin predictions showed greater dispersion across rating categories, indicating improved sensitivity to individual differences. Integrating personal profiles with LLMs captures person-specific differences in PME and outperforms supervised and zero- and few-shot approaches. Improved PME prediction may enable more tailored intervention content in mHealth. LLM-based digital twins show potential for supporting personalization of mobile smoking cessation and other health behavior change interventions.
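The abstract does not specify how the digital-twin prompts were built, so the following is only a rough sketch of the idea it describes: concatenate a participant's individual characteristics with their prior PME ratings and ask an LLM for a 1-5 rating of a new message. The Participant structure, the profile field names, and the prompt wording below are all hypothetical illustrations, not the paper's implementation, and the LLM call itself is left abstract.

    from dataclasses import dataclass

    @dataclass
    class Participant:
        profile: dict   # hypothetical fields, e.g. {"age": 24, "cigarettes_per_day": 10}
        history: list   # prior (message_text, rating_1_to_5) pairs

    def build_twin_prompt(p: Participant, new_message: str, domain: str) -> str:
        """Assemble a digital-twin prompt from a profile plus prior PME ratings."""
        profile_lines = "\n".join(f"- {k}: {v}" for k, v in p.profile.items())
        history_lines = "\n".join(f'- "{m}" -> {r}/5' for m, r in p.history)
        return (
            "You are simulating one specific young adult smoker.\n"
            f"Profile:\n{profile_lines}\n\n"
            f"Their past {domain} message ratings (1-5):\n{history_lines}\n\n"
            f'Predict their rating of this new message: "{new_message}"\n'
            "Answer with a single integer from 1 to 5."
        )

    def parse_rating(reply: str, default: int = 3) -> int:
        """Pull the first digit 1-5 out of the model reply."""
        return next((int(c) for c in reply if c in "12345"), default)

Under this setup, each of the three held-out messages per participant would be scored with one such prompt, with the held-out message excluded from the rating history.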

HFEPX Relevance Assessment

This paper carries strong, direct human-feedback and evaluation-protocol signals and is suitable as a primary eval-pipeline reference.

Best use

Primary benchmark and eval reference

Use if you need

A concrete protocol example with enough signal to inform rater workflow design.

Main weakness

No major weakness surfaced.

Trust level

High

Eval-Fit Score

75/100 • High

Use this as a primary source when designing or comparing eval protocols.

Human Feedback Signal

Detected

Evaluation Signal

Detected

HFEPX Fit

High-confidence candidate

Extraction confidence: High

Field Provenance & Confidence

Each key protocol field shows extraction state, confidence band, and data source, so you can decide whether to trust it directly or validate it against the full text.

Human Feedback Types

Strong

Rubric Rating

Confidence: High · Source: Persisted extraction (evidenced)

Directly usable for protocol triage.

Evidence snippet: The dataset comprised 3010 message ratings (5-point Likert scale) from 301 young adult smokers.

Evaluation Modes

Strong

Automatic Metrics

Confidence: High · Source: Persisted extraction (evidenced)

Includes extracted eval setup.

Evidence snippet: Model performance was assessed on three held-out messages per participant using accuracy, Cohen's kappa, and F1.

Quality Controls

Strong

Inter Annotator Agreement Reported

Confidence: High · Source: Persisted extraction (evidenced)

Calibration/adjudication style controls detected.

Evidence snippet: Model performance was assessed on three held-out messages per participant using accuracy, Cohen's kappa, and F1.

Benchmarks / Datasets

Missing

Not extracted

Confidence: Low · Source: Persisted extraction (missing)

No benchmark anchors detected.

Reported Metrics

Strong

Accuracy, F1, Kappa

Confidence: High · Source: Persisted extraction (evidenced)

Useful for evaluation criteria comparison.

Evidence snippet: Model performance was assessed on three held-out messages per participant using accuracy, Cohen's kappa, and F1.

Rater Population

Missing

Unknown

Confidence: Low · Source: Persisted extraction (missing)

Rater source not extracted, although the abstract identifies the raters as 301 young adult smokers.

Human Data Lens

  • Uses human feedback: Yes
  • Feedback types: Rubric Rating
  • Rater population: Unknown
  • Unit of annotation: Scalar
  • Expertise required: General
  • Extraction source: Persisted extraction

Evaluation Lens

  • Evaluation modes: Automatic Metrics
  • Agentic eval: None
  • Quality controls: Inter Annotator Agreement Reported
  • Confidence: 0.80
  • Flags: None

Protocol And Measurement Signals

Benchmarks / Datasets

No benchmark or dataset names were extracted from the available abstract.

Reported Metrics

accuracy, f1, kappa

Research Brief

Deterministic synthesis

Model performance was assessed on three held-out messages per participant using accuracy, Cohen's kappa, and F1. HFEPX signals include Rubric Rating and Automatic Metrics, with extraction confidence 0.80. Updated from the current HFEPX corpus.

Generated Apr 13, 2026, 6:35 AM · Grounded in abstract + metadata only

Key Takeaways

  • Model performance was assessed on three held-out messages per participant using accuracy, Cohen's kappa, and F1 (a sketch of this per-participant split follows this list).
  • Primary extracted protocol signals: Rubric Rating, Automatic Metrics.
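The takeaway above hinges on a grouped split: three messages are held out per participant rather than sampled at random from the pooled ratings. A minimal sketch of that split, assuming flat (participant_id, message_text, rating) records; the abstract does not say how the three held-out messages are chosen, so last-n below is an arbitrary stand-in.

    from collections import defaultdict

    def split_per_participant(records, n_test=3):
        """Hold out n_test rated messages per participant (grouped split)."""
        by_pid = defaultdict(list)
        for pid, message, rating in records:
            by_pid[pid].append((pid, message, rating))
        train, test = [], []
        for recs in by_pid.values():
            train.extend(recs[:-n_test])   # everything but the last n_test
            test.extend(recs[-n_test:])    # stand-in for the paper's unstated choice
        return train, test

With 3010 ratings from 301 participants (ten messages each), this yields roughly seven training and three test ratings per person.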

Researcher Actions

  • Compare its human-feedback setup against pairwise and rubric hubs.
  • Identify benchmark choices from full text before operationalizing conclusions.
  • Validate metric comparability (accuracy, f1, kappa); a comparison sketch follows this list.
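As a starting point for that comparability check, all three reported metrics can be computed on the same predictions. The sketch below uses scikit-learn (an assumption; the paper's tooling is not stated), assumes macro averaging for F1 since the abstract does not specify it, and the 5-to-3-point "directional" collapse shown (1-2 low, 3 neutral, 4-5 high) is likewise an assumed mapping.

    from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score

    def collapse_3pt(r):
        """Assumed 5-to-3-point mapping: 1-2 low, 3 neutral, 4-5 high."""
        return 0 if r <= 2 else (1 if r == 3 else 2)

    def report_metrics(y_true, y_pred):
        """All three reported metrics, plus directional accuracy on the collapse."""
        return {
            "accuracy": accuracy_score(y_true, y_pred),
            "kappa": cohen_kappa_score(y_true, y_pred),
            "f1_macro": f1_score(y_true, y_pred, average="macro"),  # averaging assumed
            "directional_accuracy": accuracy_score(
                [collapse_3pt(r) for r in y_true],
                [collapse_3pt(r) for r in y_pred],
            ),
        }

Because kappa corrects for chance agreement while accuracy does not, the two can rank the same models differently when the rating distribution is skewed; checking all three together is the point of the action item above.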

Caveats

  • Generated from title, abstract, and extracted metadata only; full-paper implementation details are not parsed.
  • Extraction confidence is probabilistic and should be validated for critical decisions.

Research Summary

Contribution Summary

  • Model performance was assessed on three held-out messages per participant using accuracy, Cohen's kappa, and F1.

Researcher Checklist

  • Pass: Human feedback protocol is explicit

    Detected: Rubric Rating

  • Pass: Evaluation mode is explicit

    Detected: Automatic Metrics

  • Pass: Quality control reporting appears

    Detected: Inter Annotator Agreement Reported

  • Gap: Benchmark or dataset anchors are present

    No benchmark/dataset anchor extracted from abstract.

  • Pass: Metric reporting is present

    Detected: accuracy, f1, kappa

