- SODIUM: From Open Web Data to Queryable Databases
Chuxuan Hu, Philip Li, Maxwell Yang, Daniel Kang · Mar 19, 2026 · Citations: 0
Expert Verification Automatic Metrics Multi Agent
Existing systems struggle with SODIUM tasks: we evaluate 6 advanced AI agents on SODIUM-Bench, with the strongest baseline achieving only 46.5% accuracy.
- Deep Research, Shallow Evaluation: A Case Study in Meta-Evaluation for Long-Form QA Benchmarks
Jena D. Hwang, Varsha Kishore, Amanpreet Singh, Dany Haddad, Aakanksha Naik · Mar 6, 2026 · Citations: 0
Pairwise Preference Expert Verification LLM As Judge
This has prompted evaluation frameworks that use LLM-as-judge protocols and claim verification, along with meta-evaluation frameworks that seek to validate these methods.
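Pairwise LLM-as-judge protocols of the kind this paper meta-evaluates are typically scored as win rates, often judging each pair in both presentation orders to average out position bias. A minimal sketch of that scoring loop, with a toy length-preferring judge standing in for an actual LLM call (the paper's own protocol is not reproduced here):

```python
def pairwise_winrate(answers_a, answers_b, judge_fn, swap=True):
    """Win rate of system A under a pairwise LLM-as-judge protocol.

    judge_fn(ans_a, ans_b) -> "A" or "B". With swap=True each pair is
    judged in both orders; a "B" verdict on the reversed pair counts as
    a win for the original answer A, which averages out position bias.
    """
    wins, total = 0.0, 0
    for a, b in zip(answers_a, answers_b):
        verdicts = [judge_fn(a, b) == "A"]
        if swap:
            verdicts.append(judge_fn(b, a) == "B")
        wins += sum(verdicts) / len(verdicts)
        total += 1
    return wins / total

# Toy judge that prefers the longer answer -- a well-known LLM-judge bias
# that meta-evaluation frameworks try to detect.
length_judge = lambda x, y: "A" if len(x) >= len(y) else "B"
```

Swapping orders here is the cheapest meta-evaluation check: if the win rate changes materially between `swap=True` and `swap=False`, the judge is position-sensitive.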
- Application-Driven Pedagogical Knowledge Optimization of Open-Source LLMs via Reinforcement Learning and Supervised Fine-Tuning
Navan Preet Singh, Xiaokun Wang, Anurag Garikipati, Madalina Ciobanu, Qingqing Mao · Apr 7, 2026 · Citations: 0
Expert Verification Automatic Metrics
These models achieve accuracy on the Cross-Domain Pedagogical Knowledge (CDPK) Benchmark high enough to establish new state-of-the-art (SOTA) results on the interactive Pedagogy Benchmark Leaderboard, significantly surpassing…
- An Industrial-Scale Insurance LLM Achieving Verifiable Domain Mastery and Hallucination Control without Competence Trade-offs
Qian Zhu, Xinnan Guo, Jingjing Huo, Jun Li, Pan Liu · Mar 15, 2026 · Citations: 0
Expert Verification RLAIF Or Synthetic Feedback Automatic Metrics
Additionally, we release INSEva, the most comprehensive insurance benchmark to date (39k+ samples).
- Evaluation of LLMs in retrieving food and nutritional context for RAG systems
Maks Požarnik Vavken, Matevž Ogrinc, Tome Eftimov, Barbara Koroušić Seljak · Mar 10, 2026 · Citations: 0
Expert Verification Automatic Metrics
In this article, we evaluate four Large Language Models (LLMs) and their effectiveness at retrieving data within a specialized Retrieval-Augmented Generation (RAG) system, using a comprehensive food composition database.
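Retrieval effectiveness in a RAG system of this kind is commonly scored with hit rate and mean reciprocal rank (MRR) over retrieved document IDs against gold references. A minimal sketch with hypothetical inputs, illustrating the standard metrics rather than the authors' exact protocol:

```python
def retrieval_metrics(retrieved, gold, k=5):
    """Hit rate and MRR at cutoff k.

    retrieved: per-query ranked lists of document IDs.
    gold: per-query sets of relevant document IDs.
    """
    hits, rr = 0, 0.0
    for docs, gold_ids in zip(retrieved, gold):
        topk = docs[:k]
        if any(d in gold_ids for d in topk):
            hits += 1
        for rank, d in enumerate(topk, start=1):
            if d in gold_ids:
                rr += 1.0 / rank  # reciprocal rank of first relevant doc
                break
    n = len(gold)
    return {"hit_rate": hits / n, "mrr": rr / n}
```

For a specialized corpus such as a food composition database, the gold sets would come from expert-verified query-to-entry mappings.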
- Seeing but Not Thinking: Routing Distraction in Multimodal Mixture-of-Experts
Haolei Xu, Haiwen Hong, Hongxing Li, Rui Zhou, Yang Zhang · Apr 9, 2026 · Citations: 0
Expert Verification
Experiments on three multimodal MoE models across six benchmarks demonstrate consistent improvements, with gains of up to 3.17% on complex visual reasoning tasks.
- Selecting Decision-Relevant Concepts in Reinforcement Learning
Naveen Raman, Stephanie Milani, Fei Fang · Apr 6, 2026 · Citations: 0
Expert Verification
Training interpretable concept-based policies requires practitioners to manually select which human-understandable concepts an agent should reason with when making sequential decisions.
- FourierMoE: Fourier Mixture-of-Experts Adaptation of Large Language Models
Juyong Jiang, Fan Wang, Hong Qi, Sunghun Kim, Jing Tang · Apr 2, 2026 · Citations: 0
Expert Verification
Extensive evaluations across 28 benchmarks, multiple model architectures, and scales demonstrate that FourierMoE consistently outperforms competitive baselines in both single-task and multi-task settings while using significantly fewer…
- Fusing Semantic, Lexical, and Domain Perspectives for Recipe Similarity Estimation
Denica Kjorvezir, Danilo Najkov, Eva Valencič, Erika Jesenko, Barbara Koroušić Seljak · Mar 10, 2026 · Citations: 0
Expert Verification
Evaluation against expert assessments enables estimating which similarity aspects (lexical, semantic, or nutritional) are most influential in expert decision-making.
- Elenchus: Generating Knowledge Bases from Prover-Skeptic Dialogues
Bradley P. Allen · Mar 7, 2026 · Citations: 0
Expert Verification
A human expert develops a bilateral position (commitments and denials) about a topic through prover-skeptic dialogue with a large language model (LLM) opponent.