
EAMET: Robust Massive Model Editing via Embedding Alignment Optimization

Yanbo Dai, Zhenlan Ji, Zongjie Li, Shuai Wang · May 17, 2025 · Citations: 0

Abstract

Model editing techniques are essential for efficiently updating knowledge in large language models (LLMs). However, the effectiveness of existing approaches degrades in massive editing scenarios, particularly when evaluated with practical metrics. Their robustness is also limited in context-rich settings or when editing multiple facts about the same subject simultaneously. We attribute these failures to embedding misalignment among knowledge items, which undermines editing reliability at scale. To address this, we propose EAMET (Embedding Alignment Model Editing in Transformers), which aligns the spaces of key and residual embeddings. Extensive experiments across six LLMs and three datasets demonstrate that EAMET consistently outperforms existing methods, achieving about 90% editing efficacy when editing 10k facts. Code and datasets are publicly available at https://ybdai7.github.io/eamet-page/.
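The abstract describes computing a batched edit over key and residual embeddings after aligning their spaces. As a rough, hypothetical sketch only (the paper does not publish its update rule here, so all names and the normalization step below are illustrative assumptions in the style of locate-then-edit methods), a ridge-regularized closed-form update over rescaled keys might look like:

```python
import numpy as np

def batch_edit_update(keys, residuals, lam=0.1):
    """Hypothetical sketch: find a weight update W with W @ k_i ≈ r_i
    for each (key, residual) pair, after rescaling keys so no single
    knowledge item dominates the shared embedding space.
    Illustrative only -- not the EAMET algorithm itself."""
    K = np.asarray(keys, dtype=float).T       # (d_k, n) key embeddings
    R = np.asarray(residuals, dtype=float).T  # (d_v, n) target residuals
    # crude "alignment": normalize each key to unit norm
    K = K / np.linalg.norm(K, axis=0, keepdims=True)
    # ridge-regularized least-squares solution for the batch of edits
    d_k = K.shape[0]
    W = R @ K.T @ np.linalg.inv(K @ K.T + lam * np.eye(d_k))
    return W

# tiny usage example with random vectors
rng = np.random.default_rng(0)
keys = rng.normal(size=(5, 8))       # 5 edits, key dimension 8
residuals = rng.normal(size=(5, 4))  # value dimension 4
W = batch_edit_update(keys, residuals)
print(W.shape)  # (4, 8)
```

The regularizer `lam` trades off edit fidelity against disturbing the model's existing weights; the normalization stands in for the alignment idea only at the level of intuition.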

Human Data Lens

  • Uses human feedback: No
  • Feedback types: None
  • Rater population: Unknown
  • Unit of annotation: Unknown
  • Expertise required: Coding

Evaluation Lens

  • Evaluation modes: Automatic Metrics
  • Agentic eval: None
  • Quality controls: Not reported
  • Confidence: 0.30
  • Flags: low_signal, possible_false_positive

Research Summary

Contribution Summary

  • Identifies embedding misalignment among knowledge items as the cause of degraded editing reliability in massive editing scenarios.
  • Proposes EAMET (Embedding Alignment Model Editing in Transformers), which aligns the spaces of key and residual embeddings.
  • Demonstrates across six LLMs and three datasets that EAMET outperforms existing methods, reaching about 90% editing efficacy at 10k edits.

Related Papers