A clean user image (top left) can be altered with neutral, violent, sexual, or misleading edits. Our DeContext injects imperceptible perturbations into the input image (bottom left), preventing identity preservation in the edited results while retaining visual quality.
Abstract
In-context diffusion models allow users to modify images with remarkable ease and realism. However, the same power raises serious privacy concerns: personal images can be easily manipulated for identity impersonation, misinformation, or other malicious uses, all without the owner's consent. While prior work has explored input perturbations to protect against misuse in personalized text-to-image generation, the robustness of modern, large-scale in-context DiT-based models remains largely unexamined. In this paper, we propose DeContext, a new method to safeguard input images from unauthorized in-context editing. Our key insight is that contextual information from the source image propagates to the output primarily through multimodal attention layers. By injecting small, targeted perturbations that weaken these cross-attention pathways, DeContext breaks this flow, effectively decoupling the output from the input. This simple defense is both efficient and robust. We further show that early denoising steps and specific transformer blocks dominate context propagation, which allows us to concentrate perturbations where they matter most. Experiments on Flux Kontext and Step1X-Edit show that DeContext consistently blocks unwanted image edits while preserving visual quality. These results highlight the effectiveness of attention-based perturbations as a powerful defense against image manipulation.
Method
Overview of our DeContext pipeline. Given a prompt, timestep, noisy target, and context image, DeContext perturbs the context to suppress its attention in the diffusion model. Iterative gradient updates minimize the attention activation, detaching the context so it no longer influences the generation.
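A minimal sketch of this perturbation loop, assuming a PyTorch-style diffusion-transformer editor with accessible attention maps. The names `edit_model`, `get_context_attention`, and the PGD-style hyperparameters below are illustrative assumptions, not the released implementation:

```python
import torch

def decontext_perturb(context_img, prompt_emb, edit_model,
                      eps=8 / 255, step_size=1 / 255, n_iters=100):
    """Perturb a context image so the editor's attention to it is suppressed.

    The perturbation delta is optimized with signed-gradient descent and
    kept inside an L_inf ball of radius eps, so the protected image stays
    visually close to the original.
    """
    delta = torch.zeros_like(context_img, requires_grad=True)

    for _ in range(n_iters):
        # Sample a diffusion timestep and a noisy target latent.
        t = torch.randint(0, edit_model.num_timesteps, (1,))
        noisy_target = torch.randn_like(edit_model.encode(context_img))

        # get_context_attention (assumed helper) returns the multimodal
        # attention weights flowing from target/text tokens to context tokens.
        attn = edit_model.get_context_attention(
            context_img + delta, noisy_target, prompt_emb, t)

        # Loss: total attention mass paid to the context image.
        loss = attn.mean()
        loss.backward()

        # Signed-gradient step, projected back into the eps-ball.
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()

    return (context_img + delta).clamp(0, 1)
```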
Moreover, we employ Concentrated Context Detachment: the optimization is restricted to early, high-noise timesteps and to the early-to-middle, context-heavy transformer blocks, which disrupts context propagation where it matters most.
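A sketch of how such a restriction could be wired into the loop above. The timestep fraction and block indices here are placeholders, not the values used in the paper:

```python
import torch

HIGH_NOISE_FRAC = 0.3          # assumed: only the highest-noise 30% of timesteps
CONTEXT_BLOCKS = range(4, 20)  # assumed: early-to-middle transformer blocks

def sample_concentrated_timestep(num_timesteps):
    """Draw a timestep from the high-noise end of the schedule
    (assuming larger t means more noise, as in DDPM-style schedules)."""
    hi = int(HIGH_NOISE_FRAC * num_timesteps)
    return torch.randint(num_timesteps - hi, num_timesteps, (1,))

def concentrated_attention_loss(per_block_attn):
    """Aggregate context attention only over the selected blocks.

    per_block_attn: list of attention maps (target -> context), one per block.
    """
    selected = [per_block_attn[i].mean() for i in CONTEXT_BLOCKS
                if i < len(per_block_attn)]
    return torch.stack(selected).mean()
```

Concentrating the loss this way also shortens the optimization, since gradients are only needed for a subset of timesteps and blocks.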
Comparison with Baselines
Unlike prior attacks on UNet-based T2I and I2I models that introduce visual artifacts, DeContext targets state-of-the-art diffusion transformers and effectively removes context identity while preserving high visual quality.
More Qualitative Results
Defense on human portraits.
Defense on item images.
Extension Results on Step1X-Edit.
BibTeX
@misc{shen2025decontextdefensesafeimage,
      title={DeContext as Defense: Safe Image Editing in Diffusion Transformers},
      author={Linghui Shen and Mingyue Cui and Xingyi Yang},
      year={2025},
      eprint={2512.16625},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.16625}
}