
HFEPX Hub

General + Expert Verification (Last 45 Days)

Updated from the current HFEPX corpus (Apr 19, 2026). 10 papers are grouped in this hub page. Common evaluation modes: automatic metrics, LLM-as-judge. Most common rater population: domain experts. Common annotation unit: pairwise. Frequently cited benchmark: Sodium-Bench. Common metric signal: accuracy. Use this page to compare protocol setup, judge behavior, and labeling design decisions before running new eval experiments. The newest paper in this set is from Apr 9, 2026.

Papers: 10 · Last published: Apr 9, 2026

Researcher Quick Triage

This hub is best used for protocol triage and replication planning from abstract-level evidence. Quality band: Developing.

High-Signal Coverage

100.0%

10 / 10 sampled papers are not flagged as low-signal.

Replication-Ready Set

1

Benchmark + metric + eval mode explicitly present.

Judge/Human Comparability

0

Papers containing both `human_eval` and `llm_as_judge`.

  • 1 paper is replication-ready (benchmark + metric + explicit evaluation mode).
  • 0 papers support judge-vs-human agreement analysis.
  • 0 papers report explicit quality controls (calibration/adjudication/IAA).

Primary action: Use this page for scouting only; collect additional papers before attempting replication-critical comparisons.
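
The three triage counts above reduce to simple predicates over abstract-level metadata. A minimal sketch of how they could be computed, assuming hypothetical record fields (`low_signal`, `eval_modes`, `benchmarks`, `metrics`) rather than the hub's actual schema:

```python
# Sketch only: hypothetical metadata records, not the hub's real schema.
papers = [
    {"title": "SODIUM", "low_signal": False,
     "eval_modes": {"automatic_metrics"},
     "benchmarks": {"sodium-bench"}, "metrics": {"accuracy"}},
    {"title": "Deep Research, Shallow Evaluation", "low_signal": False,
     "eval_modes": {"llm_as_judge"}, "benchmarks": set(), "metrics": set()},
    # ... remaining papers elided
]

# High-signal coverage: papers not flagged as low-signal.
high_signal = sum(not p["low_signal"] for p in papers)

# Replication-ready: benchmark + metric + explicit eval mode all present.
ready = sum(bool(p["benchmarks"] and p["metrics"] and p["eval_modes"])
            for p in papers)

# Judge/human comparability: both signals reported in the same paper.
comparable = sum({"human_eval", "llm_as_judge"} <= p["eval_modes"]
                 for p in papers)

print(f"high-signal {high_signal}/{len(papers)}, "
      f"replication-ready {ready}, judge/human comparable {comparable}")
```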

Why This Matters For Eval Research

  • 100% of papers report explicit human-feedback signals, led by expert verification.
  • Automatic metrics appear in 40% of papers in this hub.
  • Sodium-Bench is the hub's one named benchmark anchor for cross-paper comparisons (1/10 papers).

Protocol Takeaways

  • Quality-control reporting is sparse in this slice; prioritize papers with explicit calibration or adjudication steps.
  • Rater context is mostly domain experts, and annotation is commonly pairwise; use this to scope replication staffing.
  • Pair this hub with a human_eval-heavy hub to validate judge-model calibration.

Benchmark Interpretation

  • Sodium-Bench appears in 10% of hub papers (1/10); use this cohort for benchmark-matched comparisons.

Metric Interpretation

  • accuracy is reported in 30% of hub papers (3/10) and cost in 10% (1/10); compare rankings under a secondary metric before ranking methods (see the rank-correlation sketch below).
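
One concrete way to "compare with a secondary metric" is to check whether the two metrics even agree on method ordering before trusting a single-metric ranking. A minimal sketch using Spearman rank correlation on toy per-method scores (illustrative values, not drawn from this hub):

```python
def spearman(xs, ys):
    """Spearman rank correlation (assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Toy scores for four methods; cost is negated so both rank "higher = better".
accuracy = [0.81, 0.77, 0.85, 0.74]
neg_cost = [-1.2, -0.8, -2.5, -0.6]
print(f"rank agreement (accuracy vs -cost): {spearman(accuracy, neg_cost):.2f}")
```

A strongly negative value here would mean the accuracy ranking inverts under cost, i.e. a single-metric leaderboard hides a real trade-off.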

Researcher Checklist

Each item compares observed coverage in this hub against a target threshold; the Strong/Gap labeling rule is sketched after the list.

  • Strong: Papers with explicit human feedback

    Coverage is strong (100% vs 45% target).

  • Gap: Papers reporting quality controls

    Coverage is a replication risk (0% vs 30% target).

  • Gap: Papers naming benchmarks/datasets

    Coverage is a replication risk (10% vs 35% target).

  • Strong: Papers naming evaluation metrics

    Coverage is strong (40% vs 35% target).

  • Strong: Papers with known rater population

    Coverage is strong (100% vs 35% target).

  • Gap: Papers with known annotation unit

    Coverage is a replication risk (20% vs 35% target).
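
The Strong/Gap labels above follow the coverage-vs-target rule stated in each item. A minimal sketch that reproduces the labeling, with coverage and target values copied from the checklist:

```python
# Coverage and target values are the ones printed in the checklist above.
checks = {
    "explicit human feedback": (1.00, 0.45),
    "quality controls":        (0.00, 0.30),
    "benchmarks/datasets":     (0.10, 0.35),
    "evaluation metrics":      (0.40, 0.35),
    "known rater population":  (1.00, 0.35),
    "known annotation unit":   (0.20, 0.35),
}

for name, (coverage, target) in checks.items():
    label = "Strong" if coverage >= target else "Gap"
    print(f"{label}: {name} ({coverage:.0%} vs {target:.0%} target)")
```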

Strengths

  • Strong human-feedback signal (100% of papers).

Known Gaps

  • No papers (0%) report quality controls; prioritize calibration/adjudication evidence.
  • Annotation unit is under-specified (20% coverage).
  • Benchmark coverage is thin (10% of papers mention benchmarks/datasets).

Suggested Next Analyses

  • Pair this hub with a human_eval-heavy hub to validate judge-model calibration (see the agreement sketch below).
  • Track metric sensitivity by reporting both accuracy and cost.
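
Once a companion hub supplies papers with paired human and judge labels, the agreement analysis flagged above is straightforward. A minimal sketch of Cohen's kappa on toy pairwise-preference labels (illustrative data, not from this hub):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy paired labels: which response won each pairwise comparison.
human = ["A", "B", "A", "A", "B", "A", "B", "A"]
judge = ["A", "B", "A", "B", "B", "A", "A", "A"]
print(f"kappa = {cohens_kappa(human, judge):.2f}")  # chance-corrected agreement
```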

Recommended Queries

Start with These 3

Use these when you need one protocol anchor, one benchmark anchor, and one recent comparison point before reading the wider hub.

Start Here (Best First 6)

Ranked for protocol completeness (human signal, benchmark + metric anchors, quality controls, and judge/human overlap).
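
The ranking criterion above can be read as an additive score over protocol ingredients. A minimal sketch, assuming hypothetical boolean metadata fields and illustrative weights (this is not the hub's actual ranking formula):

```python
# Field names and weights are illustrative assumptions.
def completeness(p):
    return (
        2 * p["has_human_signal"]       # human-feedback signal
        + 1 * p["has_benchmark"]        # benchmark anchor
        + 1 * p["has_metric"]           # metric anchor
        + 2 * p["has_quality_controls"] # QC reporting
        + 1 * p["has_judge_and_human"]  # judge/human overlap
    )

papers = [
    {"title": "SODIUM", "has_human_signal": True, "has_benchmark": True,
     "has_metric": True, "has_quality_controls": False,
     "has_judge_and_human": False},
    # ... remaining papers elided
]

best_first = sorted(papers, key=completeness, reverse=True)[:6]
for p in best_first:
    print(completeness(p), p["title"])
```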

Protocol Matrix (All 10 Papers)

Use this to quickly compare protocol ingredients instead of scanning long prose.

Paper | Date | HF Signal | Eval Modes | Benchmarks | Metrics | QC
--- | --- | --- | --- | --- | --- | ---
SODIUM: From Open Web Data to Queryable Databases | Mar 19, 2026 | Yes | Automatic Metrics | Sodium-Bench | Accuracy | Not Reported
Application-Driven Pedagogical Knowledge Optimization of Open-Source LLMs via Reinforcement Learning and Supervised Fine-Tuning | Apr 7, 2026 | Yes | Automatic Metrics | Not Reported | Accuracy | Not Reported
An Industrial-Scale Insurance LLM Achieving Verifiable Domain Mastery and Hallucination Control without Competence Trade-offs | Mar 15, 2026 | Yes | Automatic Metrics | Not Reported | Hallucination rate | Not Reported
Evaluation of LLMs in retrieving food and nutritional context for RAG systems | Mar 10, 2026 | Yes | Automatic Metrics | Not Reported | Accuracy | Not Reported
Seeing but Not Thinking: Routing Distraction in Multimodal Mixture-of-Experts | Apr 9, 2026 | Yes | Not Reported | Not Reported | Not Reported | Not Reported
Selecting Decision-Relevant Concepts in Reinforcement Learning | Apr 6, 2026 | Yes | Not Reported | Not Reported | Not Reported | Not Reported
FourierMoE: Fourier Mixture-of-Experts Adaptation of Large Language Models | Apr 2, 2026 | Yes | Not Reported | Not Reported | Not Reported | Not Reported
Deep Research, Shallow Evaluation: A Case Study in Meta-Evaluation for Long-Form QA Benchmarks | Mar 6, 2026 | Yes | LLM-as-Judge | Not Reported | Not Reported | Not Reported
Fusing Semantic, Lexical, and Domain Perspectives for Recipe Similarity Estimation | Mar 10, 2026 | Yes | Not Reported | Not Reported | Not Reported | Not Reported
Elenchus: Generating Knowledge Bases from Prover-Skeptic Dialogues | Mar 7, 2026 | Yes | Not Reported | Not Reported | Not Reported | Not Reported

Protocol Diff (Top Papers)

Fast side-by-side comparison for the highest-ranked papers in this hub.

Signal | SODIUM: From Open Web Data to Queryable Databases | Application-Driven Pedagogical Knowledge Optimizati… | An Industrial-Scale Insurance LLM Achieving Verifia…
--- | --- | --- | ---
Human Feedback | Expert Verification | Expert Verification | Expert Verification, RLAIF or Synthetic Feedback
Evaluation Modes | Automatic Metrics | Automatic Metrics | Automatic Metrics
Benchmarks | Sodium-Bench | Not reported | Not reported
Metrics | Accuracy | Accuracy | Hallucination rate
Quality Controls | Not reported | Not reported | Not reported
Rater Population | Domain Experts | Domain Experts | Domain Experts
Annotation Unit | Unknown | Trajectory | Unknown
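
The diff above is a field-by-field comparison of per-paper protocol records. A minimal sketch that prints only the fields where the top papers disagree, using records mirroring the table (titles abbreviated):

```python
# Records mirror the Protocol Diff table above; titles are abbreviated.
records = {
    "SODIUM":        {"human_feedback": "Expert Verification",
                      "benchmarks": "Sodium-Bench", "metrics": "Accuracy",
                      "annotation_unit": "Unknown"},
    "Pedagogical":   {"human_feedback": "Expert Verification",
                      "benchmarks": "Not reported", "metrics": "Accuracy",
                      "annotation_unit": "Trajectory"},
    "Insurance LLM": {"human_feedback": "Expert Verification, RLAIF",
                      "benchmarks": "Not reported",
                      "metrics": "Hallucination rate",
                      "annotation_unit": "Unknown"},
}

fields = next(iter(records.values())).keys()
for field in fields:
    values = [r[field] for r in records.values()]
    if len(set(values)) > 1:            # show only disagreements
        print(f"{field}: " + " | ".join(values))
```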

Suggested Reading Order

Use “Start Here” above for a faster pass.

  1. Seeing but Not Thinking: Routing Distraction in Multimodal Mixture-of-Experts

    Start here for detailed protocol reporting. Signals: expert verification. Abstract: Multimodal Mixture-of-Experts (MoE) models have achieved remarkable performance on vision-language tasks.

  2. Application-Driven Pedagogical Knowledge Optimization of Open-Source LLMs via Reinforcement Learning and Supervised Fine-Tuning

    Start here for detailed protocol reporting. Signals: automatic metrics + expert verification. Focus: accuracy. Abstract: We present an innovative multi-stage optimization strategy combining reinforcement learning…

  3. Selecting Decision-Relevant Concepts in Reinforcement Learning

    Start here for detailed protocol reporting. Signals: expert verification. Abstract: Training interpretable concept-based policies requires practitioners to manually select which human-understandable concepts an agent should…

  4. SODIUM: From Open Web Data to Queryable Databases

    Include a human-eval paper to calibrate against judge-based evaluation settings. Signals: automatic metrics + expert verification. Focus: Sodium-Bench / accuracy. Abstract: During research, domain experts often ask analytical…

  5. Deep Research, Shallow Evaluation: A Case Study in Meta-Evaluation for Long-Form QA Benchmarks

    Include a human-eval paper to calibrate against judge-based evaluation settings. Signals: LLM-as-judge + pairwise preferences. Abstract: This has prompted evaluation frameworks that use LLM-as-judge protocols and claim verification…

  6. An Industrial-Scale Insurance LLM Achieving Verifiable Domain Mastery and Hallucination Control without Competence Trade-offs

    Adds automatic metrics with expert verification for broader protocol coverage within this hub. Signals: automatic metrics + expert verification. Focus: hallucination rate. Abstract: Adapting Large Language Models (LLMs)…

  7. Evaluation of LLMs in retrieving food and nutritional context for RAG systems

    Adds automatic metrics with expert verification for broader protocol coverage within this hub. Signals: automatic metrics + expert verification. Focus: accuracy. Abstract: In this article, we evaluate four…

  8. FourierMoE: Fourier Mixture-of-Experts Adaptation of Large Language Models

    Adds expert-verification protocol evidence for broader coverage within this hub. Signals: expert verification. Abstract: Parameter-efficient fine-tuning (PEFT) has emerged as a crucial paradigm for…

Known Limitations

  • No papers (0%) report quality controls; prioritize calibration/adjudication evidence.
  • Annotation unit is under-specified (20% coverage).
  • Narrative synthesis is grounded in metadata and abstracts only; full-paper implementation details are not parsed.
Research Utility Snapshot

Human Feedback Mix

  • Expert Verification (10)
  • Pairwise Preference (1)
  • RLAIF or Synthetic Feedback (1)

Evaluation Modes

  • Automatic Metrics (4)
  • LLM-as-Judge (1)

Top Benchmarks

  • Sodium-Bench (1)

Top Metrics

  • Accuracy (3)
  • Cost (1)
  • Hallucination rate (1)

Rater Population Mix

  • Domain Experts (10)

Quality Controls

  • None reported (0/10).

Coverage diagnostics (sample-based): human-feedback 100.0% · benchmarks 10.0% · metrics 40.0% · quality controls 0.0%.
