ContextFlow: Training-Free Video Object Editing via Adaptive Context Enrichment

1 State Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China 2 The Hong Kong University of Science and Technology
*Corresponding Authors.
Teaser Image of the Project

ContextFlow achieves versatile, high-fidelity video object editing without any training. Our method demonstrates superior ability on a range of challenging object-level tasks, including object insertion (1st row), swapping (2nd row), and deletion (3rd row).

Abstract

Training-free video object editing aims to achieve precise object-level manipulation, including object insertion, swapping, and deletion. However, it faces significant challenges in maintaining fidelity and temporal consistency. Existing methods, often designed for U-Net architectures, suffer from two primary limitations: inaccurate inversion due to first-order solvers, and contextual conflicts caused by crude "hard" feature replacement. These issues are exacerbated in Diffusion Transformers (DiTs), where prior layer-selection heuristics are unsuitable, making effective guidance difficult. To address these limitations, we introduce ContextFlow, a novel training-free framework for DiT-based video object editing. Specifically, we first employ a high-order Rectified Flow solver to establish a robust editing foundation. The core of our framework is Adaptive Context Enrichment (specifying what to edit), a mechanism that resolves contextual conflicts: instead of replacing features, it enriches the self-attention context by concatenating Key-Value pairs from parallel reconstruction and editing paths, empowering the model to fuse information dynamically. To determine where to apply this enrichment (specifying where to edit), we propose a systematic, data-driven analysis that identifies task-specific vital layers. Based on a novel Guidance Responsiveness Metric, our method pinpoints the most influential DiT blocks for different tasks (e.g., insertion, swapping), enabling targeted and highly effective guidance. Extensive experiments show that ContextFlow significantly outperforms existing training-free methods and even surpasses several state-of-the-art training-based approaches, delivering temporally coherent, high-fidelity results.
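To make the context-enrichment idea concrete, here is a minimal PyTorch-style sketch of a self-attention call whose Key-Value context is extended with the reconstruction path's pairs rather than replaced by them. The tensor layout and the helper `enriched_self_attention` are our illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def enriched_self_attention(q_edit, k_edit, v_edit, k_recon, v_recon, num_heads):
    """Adaptive Context Enrichment (sketch): the editing path's queries attend
    over a context built from BOTH paths' Key-Value pairs, so the model can
    softly fuse information instead of hard-replacing features.

    All inputs: (batch, seq_len, dim); dim must be divisible by num_heads.
    """
    # Concatenate along the sequence axis: editing K/V first, then reconstruction K/V.
    k = torch.cat([k_edit, k_recon], dim=1)
    v = torch.cat([v_edit, v_recon], dim=1)

    def split_heads(x):  # (B, L, D) -> (B, H, L, D/H)
        b, l, d = x.shape
        return x.view(b, l, num_heads, d // num_heads).transpose(1, 2)

    out = F.scaled_dot_product_attention(split_heads(q_edit), split_heads(k), split_heads(v))
    b, h, l, dh = out.shape
    return out.transpose(1, 2).reshape(b, l, h * dh)  # back to (B, L, D)
```

Because the enriched keys and values simply extend the attention context, the softmax weighting decides per token how much of the reconstruction context to use, which is what makes the fusion adaptive rather than a hard overwrite.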

Method

Flowchart of our method
Overview of ContextFlow. Our method begins with high-fidelity video inversion using RF-Solver to obtain a shared noise latent z. A dual-path sampling process then decouples reconstruction from editing. The editing path is guided by our core mechanism, Adaptive Context Enrichment, which concatenates Key-Value pairs from the reconstruction path into the self-attention blocks of the editing path. This guidance is applied only to vital layers, identified via our Guidance Responsiveness analysis, and is active only during the first half of the denoising process to balance fidelity and consistency.
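The sketch below illustrates the dual-path sampling loop with time-gated enrichment described above. The model interface (`cache_kv_into`, `enrich_kv_from`, `layers`) is a hypothetical API invented for illustration, and a plain first-order Euler update stands in for the paper's higher-order RF-Solver step.

```python
import torch

@torch.no_grad()
def dual_path_sampling(model, z_shared, recon_prompt, edit_prompt,
                       timesteps, vital_layers, guided_fraction=0.5):
    """Dual-path denoising with time-gated Adaptive Context Enrichment (sketch).

    `model` is assumed to predict the rectified-flow velocity and to expose
    hooks for caching / injecting K-V pairs at the self-attention blocks in
    `vital_layers` (assumed API, for illustration only).
    """
    z_recon, z_edit = z_shared.clone(), z_shared.clone()
    n_guided = int(len(timesteps) * guided_fraction)

    for i, t in enumerate(timesteps):
        # Reconstruction path runs first, caching K/V at the vital layers.
        kv_cache = {}
        v_recon = model(z_recon, t, recon_prompt,
                        layers=vital_layers, cache_kv_into=kv_cache)

        # Editing path attends over its own K/V plus the cached pairs,
        # but only while guidance is active (first half of the schedule).
        enrich = kv_cache if i < n_guided else None
        v_edit = model(z_edit, t, edit_prompt,
                       layers=vital_layers, enrich_kv_from=enrich)

        # Euler step along the predicted velocity field
        # (the actual method uses a high-order RF solver).
        t_next = timesteps[i + 1] if i + 1 < len(timesteps) else 0.0
        dt = t_next - t
        z_recon = z_recon + dt * v_recon
        z_edit = z_edit + dt * v_edit

    return z_edit
```

Restricting enrichment to the vital layers and to early timesteps reflects the trade-off noted above: early, structure-defining steps benefit from the reconstruction context, while later steps are left free to refine the edit.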

Object Insertion

Object Swapping

Object Deletion

BibTeX

@misc{chen2025contextflowtrainingfreevideoobject,
      title={ContextFlow: Training-Free Video Object Editing via Adaptive Context Enrichment}, 
      author={Yiyang Chen and Xuanhua He and Xiujun Ma and Yue Ma},
      year={2025},
      eprint={2509.17818},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2509.17818}, 
}