Extracting reusable filters from diffusion models for controlled image manipulation
Computes a semantic difference matrix from the text embeddings of two prompts, capturing the transformation between them in the diffusion model's latent space. Applying this difference as a filter to the model's conditioning latents makes prompt-driven changes visible and yields a transferable semantic shift that can be reused across different images.
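
The sketch below illustrates the core idea under some assumptions: Stable Diffusion v1.5 via Hugging Face `diffusers`, the "filter" taken as the elementwise difference between two prompt embeddings, and the transfer done by adding that difference to a third prompt's embedding before generation. Model id, prompts, and the linear-difference formulation are illustrative, not necessarily the project's exact implementation.

```python
# Minimal sketch: embedding-difference "filter" for a diffusion model.
# Assumes Stable Diffusion v1.5 and the diffusers `prompt_embeds` API;
# prompts and model id are placeholders.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

def embed(prompt: str) -> torch.Tensor:
    """Encode a prompt into the CLIP text-embedding space the UNet conditions on."""
    tokens = pipe.tokenizer(
        prompt,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    )
    with torch.no_grad():
        return pipe.text_encoder(tokens.input_ids.to(device))[0]

# The "filter": the difference between two prompt embeddings, capturing the
# semantic shift from the source concept to the target concept.
diff = embed("a photo of a house in winter") - embed("a photo of a house in summer")

# Reuse the shift on an unrelated prompt by adding the difference matrix to
# its embedding, then generating from the modified conditioning.
base = embed("a photo of a mountain cabin in summer")
image = pipe(prompt_embeds=base + diff, num_inference_steps=30).images[0]
image.save("cabin_winter_shift.png")
```

Because the difference lives in the text-conditioning space rather than in any single image, the same `diff` tensor can be added to the embedding of any other prompt, which is what makes the extracted shift reusable across images.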