
Demonstrating ViviDoc: Generating Interactive Documents through Human-Agent Collaboration

Yinghao Tang, Yupeng Xie, Yingchaojie Feng, Tingfeng Lan, Wei Chen · Mar 2, 2026 · Citations: 0

Abstract

Interactive articles help readers engage with complex ideas through exploration, yet creating them remains costly, requiring both domain expertise and web development skills. Recent LLM-based agents can automate content creation, but naively applying them yields uncontrollable and unverifiable outputs. We present ViviDoc, a human-agent collaborative system that generates interactive educational documents from a single topic input. ViviDoc introduces a multi-agent pipeline (Planner, Executor, Evaluator) and the Document Specification (DocSpec), a human-readable intermediate representation that decomposes each interactive visualization into State, Render, Transition, and Constraint components. The DocSpec enables educators to review and refine generation plans before code is produced, bridging the gap between pedagogical intent and executable output. Expert evaluation and a user study show that ViviDoc substantially outperforms naive agentic generation and provides an intuitive editing experience. Our project homepage is available at https://vividoc-homepage.vercel.app/.
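The abstract names the DocSpec's four components (State, Render, Transition, Constraint) but does not publish a schema. As a hedged illustration only, one way such a spec might be modeled is shown below; the class and field names are assumptions, not the paper's actual format:

```python
from dataclasses import dataclass


@dataclass
class DocSpec:
    """Hypothetical sketch of one interactive-visualization spec.

    Field names mirror the four components named in the abstract;
    their concrete types here are assumptions, not the paper's schema.
    """
    state: dict         # variables the visualization tracks
    render: str         # human-readable description of what is drawn from the state
    transitions: list   # (event, effect) pairs: how interaction updates the state
    constraints: list   # invariants an Evaluator agent could check


# Example spec for a simple slider-driven chart (illustrative only)
spec = DocSpec(
    state={"slider_value": 0.5},
    render="bar whose height follows slider_value",
    transitions=[("slider_moved", "set slider_value to the slider position")],
    constraints=["0 <= slider_value <= 1"],
)
```

Because the representation is human-readable, an educator could review and edit such a record before the Executor turns it into code, which is the collaboration point the abstract emphasizes.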

HFEPX Relevance Assessment

This paper carries direct human-feedback and evaluation-protocol signal and may be useful for eval pipeline design.

Eval-Fit Score

40/100 • Low

Treat as adjacent context, not a core eval-method reference.

Human Feedback Signal

Detected

Evaluation Signal

Detected

HFEPX Fit

High-confidence candidate

Human Data Lens

  • Uses human feedback: Yes
  • Feedback types: Expert Verification
  • Rater population: Domain Experts
  • Unit of annotation: Unknown
  • Expertise required: Coding
  • Extraction source: Persisted extraction

Evaluation Lens

  • Evaluation modes: Not reported
  • Agentic eval: Multi Agent
  • Quality controls: Not reported
  • Confidence: 0.50
  • Flags: None
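The lens fields above form an extraction record with a confidence score. As a hedged sketch (the field names and the 0.7 threshold are assumptions, not the explorer's actual schema), such a record could be modeled as:

```python
from dataclasses import dataclass


@dataclass
class ExtractionLens:
    """Hypothetical record mirroring the Human Data / Evaluation Lens
    fields shown above; names and types are assumptions."""
    uses_human_feedback: bool
    feedback_types: tuple
    rater_population: str
    agentic_eval: str
    quality_controls: str
    confidence: float

    def meets_threshold(self, threshold: float = 0.7) -> bool:
        # The 0.7 cutoff is an illustrative assumption; the page
        # does not state how confidence gates its verdicts.
        return self.confidence >= threshold


lens = ExtractionLens(
    uses_human_feedback=True,
    feedback_types=("Expert Verification",),
    rater_population="Domain Experts",
    agentic_eval="Multi Agent",
    quality_controls="Not reported",
    confidence=0.50,
)
```

At a 0.7 cutoff, the 0.50 confidence reported above would not clear the threshold, which is why the caveats section recommends validating extractions before critical decisions.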

Protocol And Measurement Signals

Benchmarks / Datasets

No benchmark or dataset names were extracted from the available abstract.

Reported Metrics

No metric terms were extracted from the available abstract.

Research Brief

Deterministic synthesis

Recent LLM-based agents can automate content creation, but naively applying them yields uncontrollable and unverifiable outputs. HFEPX signals include Expert Verification and Multi Agent agentic evaluation (confidence 0.50). Updated from the current HFEPX corpus.

Generated Mar 4, 2026, 3:58 AM · Grounded in abstract + metadata only

Key Takeaways

  • Recent LLM-based agents can automate content creation, but naively applying them yields uncontrollable and unverifiable outputs.
  • We present ViviDoc, a human-agent collaborative system that generates interactive educational documents from a single topic input.

Researcher Actions

  • Compare its human-feedback setup against pairwise and rubric hubs.
  • Identify benchmark choices from full text before operationalizing conclusions.
  • Verify metric definitions before comparing against your eval pipeline.

Caveats

  • Generated from title, abstract, and extracted metadata only; full-paper implementation details are not parsed.
  • Extraction confidence is probabilistic and should be validated for critical decisions.

Research Summary

Contribution Summary

  • ViviDoc, a human-agent collaborative system that generates interactive educational documents from a single topic input.
  • A multi-agent pipeline (Planner, Executor, Evaluator) and the Document Specification (DocSpec), a human-readable intermediate representation that decomposes each interactive visualization into State, Render, Transition, and Constraint components.
  • Expert evaluation and a user study showing that ViviDoc substantially outperforms naive agentic generation and provides an intuitive editing experience.

Why It Matters For Eval

  • Expert evaluation and a user study are used to compare ViviDoc against naive agentic generation, an example of human-verified assessment of agentic outputs.
  • The DocSpec intermediate representation gives educators a reviewable checkpoint before code is produced, a pattern relevant to human-in-the-loop eval pipeline design.

Researcher Checklist

  • Pass: Human feedback protocol is explicit

    Detected: Expert Verification

  • Gap: Evaluation mode is explicit

    No clear evaluation mode extracted.

  • Gap: Quality control reporting appears

    No calibration/adjudication/IAA control explicitly detected.

  • Gap: Benchmark or dataset anchors are present

    No benchmark/dataset anchor extracted from abstract.

  • Gap: Metric reporting is present

    No metric terms extracted.
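The checklist above follows a simple rule: an item passes only when a concrete signal was extracted, and is a gap otherwise. A minimal illustration of that rule (the function and field names are assumptions, not the explorer's implementation):

```python
def checklist_status(detected_signals: dict) -> dict:
    """Hypothetical pass/gap logic mirroring the researcher checklist:
    an item is a Pass only when a non-empty signal was extracted."""
    items = {
        "human_feedback_protocol": detected_signals.get("feedback_types"),
        "evaluation_mode": detected_signals.get("evaluation_modes"),
        "quality_controls": detected_signals.get("quality_controls"),
        "benchmarks": detected_signals.get("benchmarks"),
        "metrics": detected_signals.get("metrics"),
    }
    return {name: ("Pass" if value else "Gap") for name, value in items.items()}


# For this paper, only Expert Verification was extracted, matching
# the one Pass and four Gaps shown above.
status = checklist_status({"feedback_types": ["Expert Verification"]})
```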

Related Papers

Papers are ranked by protocol overlap, extraction signal alignment, and semantic proximity.
