Reasoning over mathematical objects: on-policy reward modeling and test-time aggregation
Pranjal Aggarwal, Marjan Ghazvininejad, Seungone Kim, Ilia Kulikov, Jack Lanchantin, Xian Li, Tianjian Li, Bo Liu, Graham Neubig, Anaelia Ovalle, Swarnadeep Saha, Sainbayar Sukhbaatar, Sean Welleck, Jason Weston, Chenxi Whitehouse, Adina Williams, Jing Xu, Ping Yu, Weizhe Yuan, Jingyu Zhang, Wenting Zhao · Mar 19, 2026 · Citations: 0
How to use this page
- Trust level: Low. Use this as background context only; do not make protocol decisions from this page alone.
- Best use: Background context only.
- What to verify: Read the full paper before copying any benchmark, metric, or protocol choices.
- Evidence quality: Low. Derived from extracted protocol signals and abstract evidence.
Abstract
The ability to precisely derive mathematical objects is a core requirement for downstream STEM applications, including mathematics, physics, and chemistry, where reasoning must culminate in formally structured expressions. Yet, current LM evaluations of mathematical and scientific reasoning rely heavily on simplified answer formats such as numerical values or multiple choice options due to the convenience of automated assessment. In this paper we provide three contributions for improving reasoning over mathematical objects: (i) we build and release training data and benchmarks for deriving mathematical objects, the Principia suite; (ii) we provide training recipes with strong LLM-judges and verifiers, where we show that on-policy judge training boosts performance; (iii) we show how on-policy training can also be used to scale test-time compute via aggregation. We find that strong LMs such as Qwen3-235B and o3 struggle on Principia, while our training recipes bring significant improvements across different LLM backbones and simultaneously improve results on existing numerical and MCQA tasks, demonstrating cross-format generalization of reasoning abilities.
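The abstract's third contribution, scaling test-time compute via aggregation, is not spelled out here. A common instantiation of the idea is to sample several candidate derivations, score each with a trained judge, and aggregate by pooling judge scores over candidates whose final expressions agree. The sketch below illustrates that general pattern only, not the paper's actual recipe; `generate_candidates`, `judge_score`, and `expressions_equivalent` are hypothetical stand-ins for a policy model, a trained LLM judge, and a symbolic equivalence check.

```python
def aggregate_at_test_time(problem, generate_candidates, judge_score,
                           expressions_equivalent, n_samples=8):
    """Judge-weighted aggregation over sampled derivations (illustrative sketch).

    generate_candidates(problem, n)  -> list of (derivation, final_expression)
    judge_score(problem, derivation) -> float in [0, 1]
    expressions_equivalent(a, b)     -> bool, e.g. a symbolic-math check
    """
    candidates = generate_candidates(problem, n_samples)

    # Cluster candidates whose final expressions are equivalent,
    # accumulating judge scores per cluster.
    clusters = []  # each entry: [representative_expression, total_judge_score]
    for derivation, expr in candidates:
        score = judge_score(problem, derivation)
        for cluster in clusters:
            if expressions_equivalent(cluster[0], expr):
                cluster[1] += score
                break
        else:
            clusters.append([expr, score])

    # Return the expression backed by the most judge mass
    # (a score-weighted vote rather than plain best-of-N selection).
    best_expr, _ = max(clusters, key=lambda c: c[1])
    return best_expr
```

Whether the paper uses score-weighted voting, best-of-N selection, or a different aggregation rule cannot be determined from the abstract alone.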
Abstract-only analysis — low confidence
All signals on this page are inferred from the abstract only and may be inaccurate. Do not use this page as a primary protocol reference.
- This paper appears adjacent to evaluation work, but does not look like a strong protocol reference.
- The available metadata is too thin to trust this as a primary source.
- The abstract does not clearly describe the evaluation setup.
- The abstract does not clearly name benchmarks or metrics.
Research Brief
Metadata summary: The ability to precisely derive mathematical objects is a core requirement for downstream STEM applications, including mathematics, physics, and chemistry, where reasoning must culminate in formally structured expressions.
Based on abstract + metadata only. Check the source paper before making high-confidence protocol decisions.
Key Takeaways
- The ability to precisely derive mathematical objects is a core requirement for downstream STEM applications, including mathematics, physics, and chemistry, where reasoning must culminate in formally structured expressions.
- Yet, current LM evaluations of mathematical and scientific reasoning rely heavily on simplified answer formats such as numerical values or multiple choice options due to the convenience of automated assessment.
- In this paper we provide three contributions for improving reasoning over mathematical objects: (i) we build and release training data and benchmarks for deriving mathematical objects, the Principia suite; (ii) we provide training recipes with strong LLM-judges and verifiers, where we show that on-policy judge training boosts performance (see the sketch after this list); (iii) we show how on-policy training can also be used to scale test-time compute via aggregation.
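A brief note on the "on-policy judge training" mentioned in the last takeaway: on-policy here typically means the judge is trained on responses sampled from the same policy it will later evaluate, so its training distribution matches what it sees at test time. The sketch below is a minimal illustration of that data-collection step under this assumed reading, not the paper's actual procedure; `policy_sample` and `verify` are hypothetical stand-ins for the policy model and a reference verifier.

```python
def collect_on_policy_judge_data(problems, policy_sample, verify, n_per_problem=4):
    """Build judge training examples from the current policy's own outputs.

    policy_sample(problem, n)   -> list of derivations sampled from the policy
    verify(problem, derivation) -> bool, e.g. symbolic match of the final
                                   expression against a gold reference
    Returns (problem, derivation, label) triples. A judge fine-tuned on these
    sees the same response distribution it will score later, which is the
    usual motivation for keeping the data on-policy.
    """
    examples = []
    for problem in problems:
        for derivation in policy_sample(problem, n_per_problem):
            label = 1 if verify(problem, derivation) else 0
            examples.append((problem, derivation, label))
    return examples
```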
Researcher Actions
- Compare this paper against nearby papers in the same arXiv category before using it for protocol decisions.
- Check the full text for explicit evaluation design choices (raters, protocol, and metrics).
- Use related-paper links to find stronger protocol-specific references.
Caveats
- Generated from abstract + metadata only; no PDF parsing.
- Signals below are heuristic and may miss details reported outside the abstract.