Anatomy of Unlearning: The Dual Impact of Fact Salience and Model Fine-Tuning
Borisiuk Anna, Andrey Savchenko, Alexander Panchenko, Elena Tutubalina · Feb 23, 2026 · Citations: 0
Abstract
Machine Unlearning (MU) enables Large Language Models (LLMs) to remove unsafe or outdated information. However, existing work assumes that all facts are equally forgettable and largely ignores whether the forgotten knowledge originates from pretraining or supervised fine-tuning (SFT). In this paper, we introduce DUAL (Dual Unlearning Evaluation across Training Stages), a benchmark of 28.6k Wikidata-derived triplets annotated with fact popularity using Wikipedia link counts and LLM-based salience scores. Our experiments show that pretrained and SFT models respond differently to unlearning. An SFT step on the forget data yields smoother forgetting, more stable tuning, and 10-50% higher retention, while direct unlearning on pretrained models remains unstable and prone to relearning or catastrophic forgetting.
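The abstract describes annotating each Wikidata triplet with a popularity signal built from Wikipedia link counts and an LLM salience score. A minimal sketch of how such a combined score could be computed is shown below; the field names, normalization constant, and equal weighting are illustrative assumptions, not the benchmark's actual annotation code.

```python
# Hypothetical sketch of a DUAL-style popularity annotation.
# The weighting scheme and normalization cap are assumptions, not the authors' method.
import math

def popularity_score(link_count: int, llm_salience: float, weight: float = 0.5) -> float:
    """Combine a Wikipedia link count with an LLM salience score in [0, 1]."""
    # Log-scale link counts so extremely popular entities do not dominate,
    # normalized against an assumed 1M-link ceiling.
    link_signal = min(math.log1p(link_count) / math.log1p(1_000_000), 1.0)
    return weight * link_signal + (1 - weight) * llm_salience

# Toy triplets with invented counts and salience values.
triplets = [
    {"subject": "Marie Curie", "relation": "award",
     "object": "Nobel Prize in Physics", "links": 12_000, "salience": 0.9},
    {"subject": "Obscure Entity", "relation": "born_in",
     "object": "Small Town", "links": 3, "salience": 0.1},
]
for t in triplets:
    t["popularity"] = popularity_score(t["links"], t["salience"])
```

Under this sketch, highly linked and highly salient facts receive scores near 1, which matches the paper's premise that not all facts are equally forgettable.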