Pragmatic Inference for Moral Reasoning Acquisition: Generalization via Metapragmatic Links
Guangliang Liu, Xi Chen, Bocheng Chen, Xitong Zhang, Kristen Johnson · Sep 28, 2025 · Citations: 0
Abstract
While moral reasoning has emerged as a promising research direction for large language models (LLMs), achieving robust generalization remains a critical challenge. This challenge arises from the gap between what is said and what is morally implied. In this paper, we build on metapragmatic links and moral foundations theory to close this gap. Specifically, we develop a pragmatic-inference approach that enables LLMs, for a given moral situation, to acquire the metapragmatic links between moral reasoning objectives and the social variables that affect them. We adapt this approach to three different moral reasoning tasks to demonstrate its adaptability and generalizability. Experimental results show that our approach significantly enhances LLMs' generalization in moral reasoning, paving the way for future research to utilize pragmatic inference in various moral reasoning tasks.