In Generative AI We (Dis)Trust? Computational Analysis of Trust and Distrust in Reddit Discussions
Aria Pessianzadeh, Naima Sultana, Hildegarde Van den Bulck, David Gefen, Shahin Jabbari, Rezvaneh Rezapour · Oct 17, 2025
Abstract
The rise of generative AI (GenAI) has impacted many aspects of human life. As these systems become embedded in everyday practices, understanding public trust in them is essential for responsible adoption and governance. Prior work on trust in AI has largely drawn from psychology and human-computer interaction, but computational, large-scale, and longitudinal approaches to measuring trust and distrust in GenAI and large language models (LLMs) remain scarce. This paper presents the first computational study of trust and distrust in GenAI, using a multi-year Reddit dataset (2022–2025) spanning 39 subreddits and 230,576 posts. We combined crowd-sourced annotations of a representative sample with classification models to scale the analysis to the full dataset. We find that trust and distrust are nearly balanced over time, with trust modestly outweighing distrust and shifts occurring around major model releases. Technical performance and usability dominate as dimensions of (dis)trust, while personal experience is the most frequent reason shaping attitudes. Distinct patterns also emerge across trustor groups (e.g., experts, ethicists, and general users). Our results provide a methodological framework for large-scale trust analysis and insights into evolving public perceptions of GenAI.
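The abstract describes an annotate-then-scale pipeline: human labels on a representative sample are used to train a classifier that is then applied to the full corpus. Below is a minimal sketch of that pattern, assuming a simple TF-IDF plus logistic-regression classifier; the toy posts, the binary trust/distrust label scheme, and the model choice here are illustrative placeholders, not the authors' actual annotation scheme or pipeline.

```python
# Sketch of the annotate-then-scale pattern: train on a small
# human-labeled sample, validate on a held-out split, then apply
# the classifier to the unlabeled corpus. All data below is toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Hypothetical crowd-annotated sample: (post text, trust/distrust label).
annotated = [
    ("The new model nailed my code review, huge time saver", "trust"),
    ("It hallucinated three citations in my lit review", "distrust"),
    ("This release finally feels reliable for daily work", "trust"),
    ("I can't trust outputs I have to fact-check line by line", "distrust"),
]
texts, labels = zip(*annotated)

# Hold out part of the annotated sample to estimate how well labels scale.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels
)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), zero_division=0))

# Once validated, apply the classifier to the full unlabeled corpus.
unlabeled_posts = ["It rewrote my resume and it actually reads well"]
print(clf.predict(unlabeled_posts))
```

In practice a study at this scale would likely use a fine-tuned transformer and a richer label set (dimensions, reasons, trustor types, as the abstract suggests), but the train/validate/scale structure is the same.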