- SWE-Protégé: Learning to Selectively Collaborate With an Expert Unlocks Small Language Models as Software Engineering Agents
Patrick Tser Jern Kon, Archana Pradeep, Ang Chen, Alexander P. Ellis, Warren Hunt · Feb 25, 2026 · Citations: 0
Automatic Metrics · Long Horizon
Our approach combines supervised fine-tuning on expert-augmented trajectories with agentic reinforcement learning that explicitly discourages degenerative looping and unproductive expert collaboration.
- Structurally Aligned Subtask-Level Memory for Software Engineering Agents
Kangning Shen, Jingyuan Zhang, Chenxi Sun, Wencong Zeng, Yang Yue · Feb 25, 2026 · Citations: 0
Automatic Metrics · Long Horizon
Large Language Models (LLMs) have demonstrated significant potential as autonomous software engineering (SWE) agents.
- SkillCraft: Can LLM Agents Learn to Use Tools Skillfully?
Shiqi Chen, Jingze Gai, Ruochen Zhou, Jinghan Zhang, Tongyao Zhu · Feb 28, 2026 · Citations: 0
Automatic Metrics · Long Horizon
Real-world tool-using agents operate over long-horizon workflows with recurring structure and diverse demands, where effective behavior requires not only invoking atomic tools but also abstracting and reusing higher-level tool…
- ToolMATH: A Math Tool Benchmark for Realistic Long-Horizon Multi-Tool Reasoning
Hyeonje Choi, Jeongsoo Lee, Hyojun Lee, Jay-Yoon Lee · Feb 24, 2026 · Citations: 0
Automatic Metrics · Long Horizon
We introduce ToolMATH, a math-grounded benchmark that evaluates tool-augmented language models in realistic multi-tool environments where the output depends on calling schema-specified tools and sustaining multi-step execution.
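As context for the "schema-specified tools" this excerpt refers to, here is a minimal, generic sketch of the common function-calling pattern in which each tool is declared with a JSON-schema signature and invoked through a dispatcher. The tool name, schema, and dispatch logic are illustrative assumptions, not taken from the ToolMATH paper.

```python
# Illustrative only: one schema-specified tool plus a small dispatcher
# that validates arguments before executing the call.
import json
import math

TOOLS = {
    "power": {
        "description": "Raise a base to an exponent.",
        "parameters": {
            "type": "object",
            "properties": {
                "base": {"type": "number"},
                "exponent": {"type": "number"},
            },
            "required": ["base", "exponent"],
        },
    }
}

def call_tool(name: str, arguments: str) -> float:
    """Check the tool name and required arguments, then run the tool."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    args = json.loads(arguments)
    required = TOOLS[name]["parameters"]["required"]
    missing = [k for k in required if k not in args]
    if missing:
        raise ValueError(f"missing arguments: {missing}")
    return math.pow(args["base"], args["exponent"])

print(call_tool("power", '{"base": 2, "exponent": 10}'))  # 1024.0
```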
- Tool-Aware Planning in Contact Center AI: Evaluating LLMs through Lineage-Guided Query Decomposition
Varun Nathan, Shreyas Guha, Ayush Kumar · Feb 16, 2026 · Citations: 0
Critique Edit
We present a domain-grounded framework and benchmark for tool-aware plan generation in contact centers, where answering a business-insights query, our target use case, requires decomposing it into executable steps over structured tools…
- Vibe Code Bench: Evaluating AI Models on End-to-End Web Application Development
Hung Tran, Langston Nashold, Rayan Krishnan, Antoine Bigeard, Alex Gu · Mar 4, 2026 · Citations: 0
- Code Fingerprints: Disentangled Attribution of LLM-Generated Code
Jiaxun Guo, Ziyuan Yang, Mengyu Sun, Hui Wang, Jingfeng Lu · Mar 4, 2026 · Citations: 0
- SWE-CI: Evaluating Agent Capabilities in Maintaining Codebases via Continuous Integration
Jialong Chen, Xander Xu, Hu Wei, Chuan Chen, Bing Zhao · Mar 4, 2026 · Citations: 0
- CONCUR: Benchmarking LLMs for Concurrent Code Generation
Jue Huang, Tarek Mahmud, Corina Pasareanu, Guowei Yang · Mar 4, 2026 · Citations: 0
- SWE-rebench V2: Language-Agnostic SWE Task Collection at Scale
Ibragim Badertdinov, Maksim Nekrashevich, Anton Shevtsov, Alexander Golubev · Feb 27, 2026 · Citations: 0
- Utilizing LLMs for Industrial Process Automation
Salim Fares · Feb 26, 2026 · Citations: 0
- FireBench: Evaluating Instruction Following in Enterprise and API-Driven LLM Applications
Yunfan Zhang, Yijie Bei, Jetashree Ravi, Pawel Garbacki · Mar 5, 2026 · Citations: 0
However, existing instruction-following benchmarks predominantly evaluate natural language generation constraints that reflect the needs of chat assistants rather than enterprise users.
- MIP Candy: A Modular PyTorch Framework for Medical Image Processing
Tianhao Fu, Yucheng Chen · Feb 24, 2026 · Citations: 0
MIPCandy provides a complete, modular pipeline spanning data loading, training, inference, and evaluation, allowing researchers to obtain a fully functional processing workflow by implementing a single method, `build_network`, while retaining…
- Toward an Agentic Infused Software Ecosystem
Mark Marron · Feb 24, 2026 · Citations: 0
Fully leveraging the capabilities of AI agents in software development requires a rethinking of the software ecosystem itself.
- Airavat: An Agentic Framework for Internet Measurement
Alagappan Ramanathan, Eunju Kang, Dongsu Han, Sangeetha Abdu Jyothi · Feb 24, 2026 · Citations: 0
We present Airavat, the first agentic framework for Internet measurement workflow generation with systematic verification and validation.
- SpecMind: Cognitively Inspired, Interactive Multi-Turn Framework for Postcondition Inference
Cuong Chi Le, Minh V. T Pham, Tung Vu Duy, Cuong Duc Van, Huy N. Phan · Feb 24, 2026 · Citations: 0
Automatic Metrics
Our empirical evaluation shows that SpecMind significantly outperforms state-of-the-art approaches in both accuracy and completeness of generated postconditions.
- VeriSoftBench: Repository-Scale Formal Verification Benchmarks for Lean
Yutong Xin, Qiaochu Chen, Greg Durrett, Işil Dillig · Feb 20, 2026 · Citations: 0
However, most benchmarks for LLM-based proof automation are drawn from mathematics in the Mathlib ecosystem, whereas proofs in software verification are developed inside definition-rich codebases with substantial project-specific libraries.
- Exploring LLMs for User Story Extraction from Mockups
Diego Firmenich, Leandro Antonelli, Bruno Pazos, Fabricio Lozada, Leonardo Morales · Feb 19, 2026 · Citations: 0
Automatic Metrics
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.
- Hybrid-Gym: Training Coding Agents to Generalize Across Tasks
Yiqing Xie, Emmy Liu, Gaokai Zhang, Nachiket Kotalwar, Shubham Gandhi · Feb 18, 2026 · Citations: 0
Predominant benchmarks for assessing the quality of coding agents, such as SWE-Bench, focus on solving single GitHub issues.
- TAROT: Test-driven and Capability-adaptive Curriculum Reinforcement Fine-tuning for Code Generation with Large Language Models
Chansung Park, Juyong Jiang, Fan Wang, Sayak Paul, Jiasi Shen · Feb 17, 2026 · Citations: 0
TAROT systematically constructs, for each problem, a four-tier test suite (basic, intermediate, complex, edge), providing a controlled difficulty landscape for curriculum design and evaluation.
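The four-tier test suite described in this excerpt can be pictured as a per-problem grouping of tests by difficulty, with pass rates per tier driving curriculum ordering. A minimal sketch follows; the tier names come from the excerpt, while the data layout and scoring are assumptions for illustration, not TAROT's actual design.

```python
# Illustrative only: a per-problem four-tier test suite and a simple
# curriculum ordering based on how far a model currently gets through it.
from dataclasses import dataclass, field

TIERS = ("basic", "intermediate", "complex", "edge")

@dataclass
class TieredSuite:
    problem_id: str
    tests: dict[str, list[str]] = field(default_factory=dict)  # tier -> test case ids

    def pass_fraction(self, passed: set[str]) -> float:
        """Average pass rate across tiers, weighting each tier equally."""
        per_tier = [
            sum(t in passed for t in self.tests.get(tier, []))
            / max(len(self.tests.get(tier, [])), 1)
            for tier in TIERS
        ]
        return sum(per_tier) / len(TIERS)

def curriculum_order(suites: list[TieredSuite],
                     passed: dict[str, set[str]]) -> list[TieredSuite]:
    """Order problems so partially solved (easier) ones come first."""
    return sorted(
        suites,
        key=lambda s: s.pass_fraction(passed.get(s.problem_id, set())),
        reverse=True,
    )
```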
- TestExplora: Benchmarking LLMs for Proactive Bug Discovery via Repository-Level Test Generation
Steven Liu, Jane Luo, Xin Zhang, Aofan Liu, Hao Liu · Feb 11, 2026 · Citations: 0
To bridge this gap, we present TestExplora, a benchmark designed to evaluate LLMs as proactive testers within full-scale, realistic repository environments.
- SWE-Master: Unleashing the Potential of Software Engineering Agents via Post-Training
Huatong Song, Lisheng Huang, Shuang Sun, Jinhao Jiang, Ran Le · Feb 3, 2026 · Citations: 0
Long Horizon
In this technical report, we present SWE-Master, an open-source and fully reproducible post-training framework for building effective software engineering agents.
- The Invisible Hand of AI Libraries Shaping Open Source Projects and Communities
Matteo Esposito, Andrea Janes, Valentina Lenarduzzi, Davide Taibi · Jan 5, 2026 · Citations: 0
Automatic Metrics
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.
- Imitation Game: Reproducing Deep Learning Bugs Leveraging an Intelligent Agent
Mehil B Shah, Mohammad Masudur Rahman, Foutse Khomh · Dec 17, 2025 · Citations: 0
Automatic Metrics
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.
- Is Vibe Coding Safe? Benchmarking Vulnerability of Agent-Generated Code in Real-World Tasks
Songwen Zhao, Danqing Wang, Kexun Zhang, Jiaxuan Luo, Zhuo Li · Dec 2, 2025 · Citations: 0
To answer this question, we propose SUSVIBES, a benchmark consisting of 200 feature-request software engineering tasks from real-world open-source projects, which, when given to human programmers, led to vulnerable implementations.
- AgentHub: A Registry for Discoverable, Verifiable, and Reproducible AI Agents
Erik Pautsch, Tanmay Singla, Parv Kumar, Wenxin Jiang, Huiyun Peng · Oct 3, 2025 · Citations: 0
LLM as Judge
LLM-based agents are rapidly proliferating, yet the infrastructure for discovering, evaluating, and governing them remains fragmented compared to mature ecosystems like software package registries (e.g., npm) and model hubs (e.g., Hugging…
- FeatBench: Towards More Realistic Evaluation of Feature-level Code Generation
Haorui Chen, Chengze Li, Jia Li · Sep 26, 2025 · Citations: 0
To address these limitations, we propose a new benchmark, FeatBench, which introduces the following advances: (1) Realistic Task Inputs.
- CASCADE: LLM-Powered JavaScript Deobfuscator at Google
Shan Jiang, Pranoy Kovuri, David Tao, Zhixun Tan · Jul 23, 2025 · Citations: 0
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.
- On the Inference (In-)Security of Vertical Federated Learning: Efficient Auditing against Inference Tampering Attack
Chung-ju Huang, Ziqi Zhang, Yinggui Wang, Binghui Wang, Tao Wei · Jul 3, 2025 · Citations: 0
Automatic Metrics
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.
- Echoes of AI: Investigating the Downstream Effects of AI Assistants on Software Maintainability
Markus Borg, Dave Hewett, Nadim Hagatulah, Noric Couderc, Emma Söderberg · Jul 1, 2025 · Citations: 0
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.
- DesignBench: A Comprehensive Benchmark for MLLM-based Front-end Code Generation
Jingyu Xiao, Man Ho Lam, Ming Wang, Yuxuan Wan, Junliang Liu · Jun 6, 2025 · Citations: 0
However, existing front-end UI code generation benchmarks have the following limitations: (1) While framework-based development has become predominant in modern front-end programming, current benchmarks fail to incorporate mainstream…
- Automated Web Application Testing: End-to-End Test Case Generation with Large Language Models and Screen Transition Graphs
Nguyen-Khang Le, Quan Minh Bui, Minh Ngoc Nguyen, Hiep Nguyen, Trung Vo · Jun 3, 2025 · Citations: 0
Web Browsing
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.
- Advancing Software Quality: A Standards-Focused Review of LLM-Based Assurance Techniques
Avinash Patil · May 19, 2025 · Citations: 0
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.
- EVALOOOP: A Self-Consistency-Centered Framework for Assessing Large Language Model Robustness in Programming
Sen Fang, Weiyuan Ding, Mengshi Zhang, Zihao Chen, Bowen Xu · May 18, 2025 · Citations: 0
Automatic Metrics
However, adversarial attacks exhibit fundamental limitations that compromise fair robustness assessment: they yield contradictory evaluation outcomes, with different attack strategies tending to favor different models, and more…
- WAFFLE: Finetuning Multi-Modal Models for Automated Front-End Development
Shanchao Liang, Nan Jiang, Shangshu Qian, Lin Tan · Oct 24, 2024 · Citations: 0
Models fine-tuned with Waffle show up to 9.00 percentage points (pp) higher HTML match, 0.0982 higher CW-SSIM, 32.99 higher CLIP, and 27.12 pp higher LLEM on our new benchmark WebSight-Test and an existing benchmark, Design2Code,…
- A Problem-Oriented Perspective and Anchor Verification for Code Optimization
Tong Ye, Tengfei Ma, Xuhong Zhang, Hang Yu, Jianwei Yin · Jun 17, 2024 · Citations: 0
Abstract shows limited direct human-feedback or evaluation-protocol detail; use as adjacent methodological context.