
HFEPX Hub

CS.MA + Multi Agent Papers

Updated from the current HFEPX corpus (Feb 27, 2026). This hub page groups 8 papers. Common evaluation modes: Automatic Metrics, Simulation Env. Most common rater population: Domain Experts. Most common annotation unit: Freeform. Most common quality control: Adjudication. Most-cited benchmark: Lawbench. Most common metric signal: accuracy. Use this page to compare protocol setups, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Feb 25, 2026.

Papers: 8 · Last published: Feb 25, 2026
Tags: cs.MA · Multi Agent

Research Narrative

Grounded narrative · Model: deterministic-grounded · Source: persisted

Updated from the current HFEPX corpus (Feb 27, 2026). This page tracks 8 papers for CS.MA + Multi Agent Papers. Dominant protocol signals are automatic metrics and simulation environments, with benchmark focus on Lawbench and LiveCodeBench and metric focus on accuracy and calibration. Use the grounded sections below to prioritize reproducible protocol choices, benchmark-matched comparisons, and judge-vs-human evaluation checks; a minimal sketch of such a check follows.
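As one example of a judge-vs-human evaluation check, the sketch below computes chance-corrected agreement (Cohen's kappa) between an LLM judge's labels and a human rater's labels on the same items. This is a minimal illustration, not part of the HFEPX tooling; the label values and data are invented.

```python
# Judge-vs-human agreement check: Cohen's kappa over paired labels.
# Illustrative only -- the labels below are invented, not hub data.
from collections import Counter

def cohens_kappa(judge_labels, human_labels):
    """Chance-corrected agreement between two raters on the same items."""
    assert judge_labels and len(judge_labels) == len(human_labels)
    n = len(judge_labels)
    observed = sum(j == h for j, h in zip(judge_labels, human_labels)) / n
    cj, ch = Counter(judge_labels), Counter(human_labels)
    expected = sum(cj[k] * ch[k] for k in cj.keys() | ch.keys()) / (n * n)
    return 1.0 if expected == 1 else (observed - expected) / (1 - expected)

judge = ["pass", "pass", "fail", "pass", "fail", "pass"]
human = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(f"kappa = {cohens_kappa(judge, human):.3f}")  # kappa = 0.667
```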

Why This Matters For Eval Research

Protocol Takeaways

Benchmark Interpretation

  • Lawbench appears in 12.5% of hub papers (1/8); use this cohort for benchmark-matched comparisons.
  • LiveCodeBench appears in 12.5% of hub papers (1/8); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • accuracy is reported in 25% of hub papers (2/8); compare with a secondary metric before ranking methods.
  • calibration is reported in 12.5% of hub papers (1/8); compare with a secondary metric before ranking methods. A sketch of the coverage computation behind these fractions follows this list.
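The coverage fractions quoted in the interpretation bullets above reduce to simple counting over tagged paper metadata. A minimal sketch follows; the dict shape and the paper-to-tag assignments are placeholders consistent with the 2/8 and 1/8 counts, not the actual HFEPX mapping.

```python
# Coverage-fraction computation over tagged paper metadata.
# Placeholder tags consistent with the counts above; not the real mapping.
papers = [
    {"title": "paper-1", "metrics": {"accuracy", "calibration"}},
    {"title": "paper-2", "metrics": {"accuracy"}},
] + [{"title": f"paper-{i}", "metrics": set()} for i in range(3, 9)]

def coverage(papers, tag):
    hits = sum(1 for p in papers if tag in p["metrics"])
    return hits, len(papers), 100.0 * hits / len(papers)

for tag in ("accuracy", "calibration"):
    hits, total, pct = coverage(papers, tag)
    print(f"{tag}: reported in {hits}/{total} papers ({pct:.1f}%)")
# accuracy: reported in 2/8 papers (25.0%)
# calibration: reported in 1/8 papers (12.5%)
```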

Researcher Checklist

  • Close the gap on papers with explicit human feedback: coverage is a replication risk (12.5% vs 45% target).
  • Tighten coverage on papers reporting quality controls: coverage is usable but incomplete (25% vs 30% target).
  • Tighten coverage on papers naming benchmarks/datasets: coverage is usable but incomplete (25% vs 35% target).
  • Maintain strength on papers naming evaluation metrics: coverage is strong (37.5% vs 35% target).
  • Close the gap on papers with known rater population: coverage is a replication risk (12.5% vs 35% target).
  • Close the gap on papers with known annotation unit: coverage is a replication risk (12.5% vs 35% target). A sketch of this coverage-vs-target triage follows this list.
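The status labels in this checklist appear to follow a simple coverage-vs-target triage. The sketch below reproduces all six labels under an assumed rule (at or above target is strong; at or above two-thirds of target is usable but incomplete; below that is a replication risk). The rule is inferred from the page's wording, not documented by HFEPX.

```python
# Coverage-vs-target triage for the checklist above.
# Thresholds are an assumption inferred from the page's wording.
CHECKS = [
    ("explicit human feedback",   12.5, 45.0),
    ("quality controls reported", 25.0, 30.0),
    ("benchmarks/datasets named", 25.0, 35.0),
    ("evaluation metrics named",  37.5, 35.0),
    ("rater population known",    12.5, 35.0),
    ("annotation unit known",     12.5, 35.0),
]

def status(coverage: float, target: float) -> str:
    if coverage >= target:
        return "strong"
    if coverage >= (2 / 3) * target:  # assumed two-thirds cutoff
        return "usable but incomplete"
    return "replication risk"

for name, cov, tgt in CHECKS:
    print(f"{name}: {status(cov, tgt)} ({cov}% vs {tgt}% target)")
```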


Suggested Reading Order

  1. Hierarchical LLM-Based Multi-Agent Framework with Prompt Optimization for Multi-Robot Task Planning

    Start here for detailed protocol reporting, including rater and quality-control evidence.

  2. Training Generalizable Collaborative Agents via Strategic Risk Aversion

    Continues the detailed protocol reporting, with rater and quality-control evidence.

  3. A Hierarchical Multi-Agent System for Autonomous Discovery in Geoscientific Data Archives

    Rounds out the protocol-reporting cohort, again with rater and quality-control evidence.

  4. Team of Thoughts: Efficient Test-time Scaling of Agentic Systems through Orchestrated Tool Calling

    Adds automatic metrics with expert verification for broader coverage within this hub.

  5. Colosseum: Auditing Collusion in Cooperative Multi-Agent Systems

    Adds simulation environments for broader coverage within this hub.

  6. Multimodal Multi-Agent Empowered Legal Judgment Prediction

    Adds simulation environments for broader coverage within this hub.

  7. From Competition to Coordination: Market Making as a Scalable Framework for Safe and Aligned Multi-Agent LLM Systems

    Adds automatic metrics for broader coverage within this hub.

  8. Multi-agent deep reinforcement learning with centralized training and decentralized execution for transportation infrastructure management

    Adds simulation environments for broader coverage within this hub.

Known Limitations

  • Rater population is under-specified (12.5% coverage).
  • Annotation unit is under-specified (12.5% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.

Research Utility Links

automatic_metrics vs simulation_env

No paper in this hub uses both Automatic Metrics and Simulation Env: 5 papers use only automatic metrics and 3 use only simulation environments (both = 0, left_only = 5, right_only = 3).
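For reference, the both/left_only/right_only counts reduce to set operations over the papers tagged with each protocol. The membership below is a placeholder consistent with the reported counts, not the actual paper-to-tag mapping.

```python
# Protocol-tag overlap as set operations. Placeholder membership only;
# chosen to match the reported counts (both=0, left_only=5, right_only=3).
auto_metrics = {f"paper-{i}" for i in (1, 2, 3, 4, 5)}  # 5 papers
sim_env = {f"paper-{i}" for i in (6, 7, 8)}             # 3 papers

both = auto_metrics & sim_env        # papers using both protocols
left_only = auto_metrics - sim_env   # automatic metrics only
right_only = sim_env - auto_metrics  # simulation environments only
print(f"both={len(both)}, left_only={len(left_only)}, right_only={len(right_only)}")
# both=0, left_only=5, right_only=3
```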

Benchmark Brief

Lawbench

Coverage: 1 paper (12.5%) mentions Lawbench.

Example: Multimodal Multi-Agent Empowered Legal Judgment Prediction

Benchmark Brief

LiveCodeBench

Coverage: 1 paper (12.5%) mentions LiveCodeBench.

Example: Team of Thoughts: Efficient Test-time Scaling of Agentic Systems through Orchestrated Tool Calling

Metric Brief

calibration

Coverage: 1 paper (12.5%) mentions calibration.

Example: Team of Thoughts: Efficient Test-time Scaling of Agentic Systems through Orchestrated Tool Calling

Metric Brief

success rate

Coverage: 1 paper (12.5%) mentions success rate.

Example: Hierarchical LLM-Based Multi-Agent Framework with Prompt Optimization for Multi-Robot Task Planning
