SvfEye: A Semantic-Visual Fusion Framework with Multi-Scale Visual Context for Multimodal Reasoning
Yuxiang Shen, Hailong Huang, Zhenkun Gao, Xueheng Li, Man Zhou, Chengjun Xie, Haoxuan Che, Xuanhua He, Jie Zhang · Feb 26, 2026 · Citations: 0
Abstract
Multimodal Large Language Models (MLLMs) often struggle to accurately perceive fine-grained visual details, especially when targets are tiny or visually subtle. This challenge can be addressed through semantic-visual information fusion, which integrates global image context with fine-grained local evidence for multi-scale visual understanding. Recently, a paradigm termed "Thinking with Images" has emerged, enabling models to acquire high-resolution visual evidence by zooming or cropping image regions and fusing these local details with global context during reasoning. Although training-based approaches demonstrate the effectiveness of this capability, they require extensive computational resources and large-scale task-specific data. Consequently, lightweight training-free methods have been proposed as a practical alternative to incorporate local visual evidence during inference. However, existing training-free approaches still suffer from two key limitations. First, they indiscriminately extract and fuse local visual regions for all inputs regardless of necessity, introducing computational redundancy and perceptual noise. Second, they exhibit drift between semantic intent and visual attention, preventing accurate localization of user-focused regions. To address these challenges, we propose SvfEye, a training-free framework for adaptive visual-semantic fusion. SvfEye follows a two-stage pipeline with a confidence-based decision module to determine whether additional local visual information is needed, and a semantic-attention fusion module to identify informative local regions. Experiments show that SvfEye achieves substantial performance gains while obtaining an approximately 4.0x inference speedup over the state-of-the-art method ZoomEye.
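The two-stage pipeline described in the abstract can be sketched as control flow: a confidence-based gate decides whether a second, locally-informed pass is needed at all, and only then is a question-relevant region selected and fused with the global image. The sketch below is illustrative only; the names `MLLMOutput`, `svf_eye`, the `mllm`/`locate` callables, and the threshold `tau` are assumptions for exposition, not the paper's actual API, and a real implementation would derive confidence from the decoder's token probabilities and the region from semantic-visual attention maps.

```python
from dataclasses import dataclass


@dataclass
class MLLMOutput:
    answer: str
    confidence: float  # e.g. mean max-token probability over the answer (assumed metric)


def svf_eye(image, question, mllm, locate, tau=0.8):
    """Hypothetical sketch of a training-free two-stage pipeline.

    Stage 1: answer from the global image alone; if confidence >= tau,
    return immediately (avoids redundant cropping on easy inputs).
    Stage 2: otherwise, locate the question-relevant local region and
    answer again with global + local evidence fused.
    """
    out = mllm(image, question)          # single global forward pass
    if out.confidence >= tau:
        return out.answer                # no local visual evidence needed
    region = locate(image, question)     # stand-in for semantic-attention fusion
    refined = mllm((image, region), question)  # fuse global context with the crop
    return refined.answer
```

Because the gate skips stage 2 whenever the first pass is already confident, most inputs cost a single forward pass, which is consistent with the reported inference speedup over methods that always extract local regions.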