HD-Painter: High Resolution Text-Guided Image Inpainting with Diffusion Models

Diffusion models have undoubtedly revolutionized the AI and ML industry, and their real-time applications have become an integral part of our everyday lives. After text-to-image models showcased their remarkable abilities, diffusion-based image manipulation techniques, such as controllable generation, specialized and personalized image synthesis, object-level image editing, prompt-conditioned variations, and editing, emerged as hot research topics due to their applications in the computer vision industry.

However, despite their impressive capabilities and exceptional results, text-to-image frameworks, particularly text-to-image inpainting frameworks, still have room for improvement. One shortcoming is their limited ability to understand the global scene, especially when denoising the image at high diffusion timesteps. To address this issue, researchers introduced HD-Painter, a completely training-free framework that accurately follows prompt instructions and scales coherently to high-resolution image inpainting. The HD-Painter framework employs a Prompt Aware Introverted Attention (PAIntA) layer, which leverages prompt information to enhance self-attention scores, resulting in better text-aligned generation.

To further improve prompt coherence, the HD-Painter model introduces a Reweighting Attention Score Guidance (RASG) approach. This approach seamlessly integrates a post-hoc sampling strategy into the general form of the DDIM process, preventing out-of-distribution latent shifts. Moreover, the HD-Painter framework includes a specialized super-resolution technique customized for inpainting, allowing it to scale up and complete missing regions within the image at resolutions up to 2K.

HD-Painter: Text-Guided Image Inpainting

Text-to-image diffusion models have been a major topic in the AI and ML industry in recent months, with models demonstrating impressive real-time capabilities across various practical applications. Pre-trained text-to-image generation models like DALL-E, Imagen, and Stable Diffusion have shown their suitability for image completion by merging the denoised (generated) unknown regions with the diffused known regions during the backward diffusion process. Despite producing visually appealing and well-harmonized outputs, existing models struggle to understand the global scene, particularly when denoising at high diffusion timesteps. By modifying pre-trained text-to-image diffusion models to incorporate additional context information, they can be fine-tuned for text-guided image completion.

Moreover, within diffusion models, text-guided inpainting and text-guided image completion are major areas of interest for researchers. This interest is driven by the fact that text-guided inpainting models can generate content in specific regions of an input image based on textual prompts, enabling applications such as retouching specific image regions, modifying subject attributes like colors or clothes, and adding or replacing objects. In summary, text-to-image diffusion models have recently achieved unprecedented success, owing to their exceptionally realistic and visually appealing generation capabilities.

However, a majority of existing frameworks exhibit prompt neglect in two scenarios. The first is background dominance, where the model completes the unknown region with background while ignoring the prompt; the second is nearby object dominance, where the model propagates known-region objects into the unknown region based on visual context likelihood rather than the input prompt. Both of these issues likely stem from the inability of vanilla inpainting diffusion to interpret the textual prompt accurately or to combine it with the contextual information obtained from the known region.

To tackle these roadblocks, the HD-Painter framework introduces the Prompt Aware Introverted Attention (PAIntA) layer, which uses prompt information to enhance the self-attention scores, ultimately resulting in better text-aligned generation. PAIntA uses the given textual conditioning to adjust the self-attention scores, aiming to reduce the impact of non-prompt-relevant information from the image region while increasing the contribution of the known pixels aligned with the prompt. To further enhance the text alignment of the generated results, the HD-Painter framework implements a post-hoc guidance method that leverages the cross-attention scores. However, vanilla post-hoc guidance can cause out-of-distribution shifts as a result of the additional gradient term in the diffusion equation, which ultimately degrades the quality of the generated output. To tackle this roadblock, the HD-Painter framework implements Reweighting Attention Score Guidance (RASG), a method that seamlessly integrates a post-hoc sampling strategy into the general form of the DDIM process. It allows the framework to generate visually plausible inpainting results by guiding the samples towards prompt-aligned latents while containing them in their trained domain.

By deploying both the RASG and PAIntA components in its architecture, the HD-Painter framework holds a significant advantage over existing inpainting and text-to-image diffusion models, including state-of-the-art ones, since it manages to resolve the persistent issue of prompt neglect. Moreover, both the RASG and PAIntA components offer plug-and-play functionality, making them compatible with other diffusion-based inpainting models to tackle the challenges mentioned above. Finally, by implementing a time-iterative blending technique and by leveraging the capabilities of high-resolution diffusion models, the HD-Painter pipeline can operate effectively for inpainting at resolutions up to 2K.

To sum it up, HD-Painter aims to make the following contributions to the field:

  1. It aims to resolve the prompt neglect issues of background dominance and nearby object dominance experienced by text-guided image inpainting frameworks by implementing the Prompt Aware Introverted Attention (PAIntA) layer in its architecture. 
  2. It aims to improve the text alignment of the output by implementing the Reweighting Attention Score Guidance (RASG) mechanism in its architecture, which allows the HD-Painter framework to perform post-hoc guided sampling while preventing out-of-distribution shifts. 
  3. It designs an effective, training-free text-guided image completion pipeline capable of outperforming the current state-of-the-art frameworks, and uses a simple yet effective inpainting-specialized super-resolution framework to perform text-guided image inpainting at up to 2K resolution. 

HD-Painter: Method and Architecture

Before we have a look at the architecture, it is important to understand the three fundamental concepts that form the foundation of the HD-Painter framework: image inpainting, post-hoc guidance in diffusion frameworks, and inpainting-specific architectural blocks. 

Image inpainting is an approach that aims to fill the missing regions within an image while preserving the visual appeal of the result. Traditional deep learning frameworks implemented methods that propagated deep features from the known regions. However, the introduction of diffusion models has driven the evolution of inpainting models, especially text-guided image inpainting frameworks. Typically, a pre-trained text-to-image diffusion model replaces the unmasked region of the latent with a noised version of the known region during the sampling process. Although this approach works to an extent, it degrades the quality of the generated output significantly, because the denoising network only sees the noised version of the known region. To tackle this hurdle, a few approaches fine-tune the pre-trained text-to-image model for text-guided image inpainting: a random mask is generated during training, and the denoising network is conditioned on the unmasked region via concatenation.
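The latent-replacement trick described above can be sketched in a few lines. This is a minimal numpy illustration under assumed toy shapes, not HD-Painter's actual implementation; the function names `noise_known_latent` and `blend_step` are hypothetical:

```python
import numpy as np

def noise_known_latent(x0_known, t, alphas_cumprod, rng):
    """Forward-diffuse the known-region latent to timestep t (DDPM q-sample)."""
    a_t = alphas_cumprod[t]
    eps = rng.standard_normal(x0_known.shape)
    return np.sqrt(a_t) * x0_known + np.sqrt(1.0 - a_t) * eps

def blend_step(x_t_denoised, x0_known, mask, t, alphas_cumprod, rng):
    """Keep the model's prediction only inside the hole (mask == 1);
    elsewhere paste in the appropriately noised known region."""
    x_t_known = noise_known_latent(x0_known, t, alphas_cumprod, rng)
    return mask * x_t_denoised + (1.0 - mask) * x_t_known
```

Because the denoising network never observes the clean known region, only its noised counterpart, this simple blend is exactly where the global-scene misunderstanding described above originates.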

Moving on, traditional deep learning models implemented specially designed layers for efficient inpainting; some frameworks were able to extract information effectively and produce visually appealing images by introducing special convolution layers to deal with the known regions of the image. Some frameworks even added a contextual attention layer to their architecture to reduce the heavy computational requirements of all-to-all self-attention for high-quality inpainting. 

Finally, post-hoc guidance methods are backward diffusion sampling methods that guide the next-step latent prediction towards minimizing a specific objective function. Post-hoc guidance methods are of great help when generating visual content, especially in the presence of additional constraints. However, they have a major drawback: they are known to cause image quality degradation, since they shift the latent generation process by a gradient term. 
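The gradient-shift mechanism can be illustrated with a toy example. Here `ddim_step_stub` is a hypothetical stand-in for one deterministic denoising step, and the objective is a simple analytic quadratic so its gradient is known in closed form; none of this is HD-Painter's actual code:

```python
import numpy as np

def ddim_step_stub(x_t):
    """Stand-in for one deterministic DDIM denoising step (hypothetical)."""
    return 0.98 * x_t

def guided_step(x_t, target, scale=0.1):
    """Vanilla post-hoc guidance: shift the next latent by the gradient of an
    objective. Here the objective is ||x - target||^2, so grad = 2*(x - target).
    The added gradient term is what can push latents out of the trained domain."""
    grad = 2.0 * (x_t - target)
    return ddim_step_stub(x_t) - scale * grad
```

The guidance does pull the sample towards the objective minimum, but the extra `- scale * grad` term is added outside the learned denoising trajectory, which is the source of the domain shift the article mentions.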

Coming to the architecture of HD-Painter, the framework first formulates the text-guided image completion problem, and then builds on two diffusion models, namely Stable Diffusion and Stable Inpainting. The HD-Painter model then introduces the PAIntA and RASG blocks, and finally the inpainting-specific super-resolution technique. 

Stable Diffusion and Stable Inpainting

Stable Diffusion is a diffusion model that operates within the latent space of an autoencoder. For text-to-image synthesis, the Stable Diffusion framework uses a textual prompt to guide the process. The denoising network has a UNet-like architecture, and cross-attention layers condition it on the textual prompts. Moreover, the Stable Diffusion model can perform image inpainting with some modifications and fine-tuning. To achieve this, the features of the masked image produced by the encoder are concatenated with the downscaled binary mask and the latents. The resulting tensor is then fed into the UNet architecture to obtain the estimated noise. The framework initializes the newly added convolutional filters with zeros, while the rest of the UNet is initialized from pre-trained Stable Diffusion checkpoints. 
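The input construction and zero-initialization described above can be sketched as follows. This is a schematic numpy version under the usual Stable Inpainting channel layout (4 latent + 1 mask + 4 masked-image channels = 9); the function names and the shapes in the test are assumptions for illustration:

```python
import numpy as np

def build_inpainting_unet_input(latent, mask_latent_res, masked_image_latent):
    """Channel-concatenate the noisy latent (4 ch), the binary mask downscaled
    to latent resolution (1 ch), and the encoded masked image (4 ch) -> 9 ch."""
    return np.concatenate([latent, mask_latent_res, masked_image_latent], axis=1)

def expand_first_conv_weight(pretrained_w):
    """Zero-initialize the filters for the 5 extra input channels and copy the
    pretrained weights for the original 4 latent channels, so the fine-tuned
    model starts out behaving like the pre-trained text-to-image model."""
    out_c, in_c, kh, kw = pretrained_w.shape   # in_c == 4 for Stable Diffusion
    w = np.zeros((out_c, in_c + 5, kh, kw), dtype=pretrained_w.dtype)
    w[:, :in_c] = pretrained_w
    return w
```

Zero-initializing the new filters means the extra conditioning channels contribute nothing at the start of fine-tuning, which keeps the initialization faithful to the pre-trained checkpoint.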

The figure above presents an overview of the HD-Painter framework, which consists of two stages. In the first stage, the HD-Painter framework performs text-guided image inpainting, while in the second stage it performs inpainting-specific super-resolution of the output. To fill in the missing regions while staying consistent with the input prompt, the model takes a pre-trained inpainting diffusion model, replaces its self-attention layers with PAIntA layers, and applies the RASG mechanism during the backward diffusion process. The model then decodes the final estimated latent, resulting in an inpainted image. HD-Painter then uses a super-resolution Stable Diffusion model to inpaint the original-size image, running the backward diffusion process of Stable Diffusion conditioned on the low-resolution inpainted image. After each step, the model blends the denoised predictions with the original image's encoding in the known region and derives the next latent. Finally, the model decodes the latent and applies Poisson blending to avoid edge artifacts. 

Prompt Aware Introverted Attention or PAIntA

Existing inpainting models like Stable Inpainting tend to rely more on the visual context around the inpainting area and ignore the user's input prompt. Based on user experience, this issue can be categorized into two classes: nearby object dominance and background dominance. The dominance of visual context over the input prompt can be attributed to the purely spatial and prompt-free nature of the self-attention layers. To tackle this issue, the HD-Painter framework introduces Prompt Aware Introverted Attention (PAIntA), which uses cross-attention matrices and the inpainting mask to adjust the output of the self-attention layers in the unknown region. 

The Prompt Aware Introverted Attention component first applies projection layers to obtain the queries, keys, and values, along with the similarity matrix. The model then adjusts the attention scores of the known pixels to mitigate the strong influence of the known region over the unknown region, defining a new similarity matrix by leveraging the textual prompt. 
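The idea can be sketched roughly as follows: compute the ordinary self-attention similarity, then scale the columns belonging to known pixels by a prompt-relevance weight derived from cross-attention. This is a simplified single-head numpy sketch, not the exact PAIntA formulation; the `prompt_relevance` vector stands in for the cross-attention-derived alignment scores:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def painta_attention(Q, K, V, known_mask, prompt_relevance):
    """Sketch of PAIntA's adjustment: downweight similarity columns of known
    pixels by their prompt relevance in [0, 1], so known pixels irrelevant to
    the prompt contribute less to the unknown region; unknown pixels keep
    weight 1. Q, K, V: (n, d); known_mask, prompt_relevance: (n,)."""
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)                 # (n, n) similarity matrix
    c = np.where(known_mask, prompt_relevance, 1.0)
    S_adj = S * c[None, :]                   # scale columns (keys), not rows
    return softmax(S_adj, axis=-1) @ V
```

With all relevance weights at 1 this reduces to standard self-attention, which matches the intuition that PAIntA only intervenes where known pixels conflict with the prompt.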

Reweighting Attention Score Guidance or RASG

The HD-Painter framework adopts a post-hoc sampling guidance method to enhance the alignment of the generation with the textual prompts even further. Through its objective function, the post-hoc sampling guidance approach aims to leverage the open-vocabulary segmentation properties of the cross-attention layers. However, vanilla post-hoc guidance has the potential to shift the domain of the diffusion latents, which can degrade the quality of the generated image. To tackle this issue, the HD-Painter model implements the Reweighting Attention Score Guidance (RASG) mechanism, which introduces gradient reweighting that results in latent domain preservation. 
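The reweighting idea can be sketched as follows: rather than adding a raw gradient term on top of the DDIM update, rescale the guidance gradient to unit standard deviation and inject it in place of the stochastic noise term, scaled by the step's noise magnitude. This is a simplified numpy interpretation, not RASG's exact formulation; `x_prev_det` is assumed to be the deterministic part of the DDIM step and `sigma_t` the stochastic noise scale:

```python
import numpy as np

def rasg_step(x_prev_det, grad, sigma_t):
    """Sketch of gradient reweighting: normalize the guidance gradient to
    (approximately) unit standard deviation and inject it where the DDIM
    stochastic noise would go, scaled by sigma_t. Because the injected term
    then has the same magnitude statistics as the expected noise, the latents
    stay close to their trained domain."""
    g = grad / (grad.std() + 1e-8)   # reweighted gradient, unit std
    return x_prev_det + sigma_t * g
```

The key property is that the injected term has standard deviation `sigma_t` regardless of how large the raw gradient is, which is what prevents the out-of-distribution drift of vanilla post-hoc guidance.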

HD-Painter: Experiments and Results

To analyze its performance, the HD-Painter framework is compared against current state-of-the-art models, including Stable Inpainting, GLIDE, and Blended Latent Diffusion (BLD), over 10,000 random samples where the prompt is chosen as the label of the selected instance mask. 

As can be observed, the HD-Painter framework outperforms existing frameworks on three different metrics by a significant margin, most notably an improvement of 1.5 points on the CLIP metric and a generation accuracy score roughly 10% higher than other state-of-the-art methods. 

Moving along, the following figure shows a qualitative comparison of the HD-Painter framework with other inpainting frameworks. As can be observed, the baseline models either reconstruct the missing regions as a continuation of the known-region objects, disregarding the prompt, or simply generate background. The HD-Painter framework, on the other hand, is able to generate the target objects successfully, owing to the PAIntA and RASG components in its architecture. 

Final Thoughts

In this article, we have talked about HD-Painter, a training-free, text-guided high-resolution inpainting approach that addresses challenges experienced by existing inpainting frameworks, including prompt neglect and nearby object and background dominance. The HD-Painter framework implements a Prompt Aware Introverted Attention (PAIntA) layer that uses prompt information to enhance the self-attention scores, ultimately resulting in better text-aligned generation. 

To improve prompt coherence even further, the HD-Painter model introduces a Reweighting Attention Score Guidance (RASG) approach that seamlessly integrates a post-hoc sampling strategy into the general form of the DDIM process to prevent out-of-distribution latent shifts. Moreover, the HD-Painter framework introduces a specialized super-resolution technique customized for inpainting that extends it to larger scales and allows the framework to complete the missing regions in an image at resolutions up to 2K.

