
Perp-Neg: Unveiling Image Potential with Negative Prompts and Stable Diffusion



Despite the remarkable capabilities demonstrated by recent advances in generating images from text with diffusion models, the generated images are not always guaranteed to convey the intended meaning of the original text prompt, as recent research has found. Generating images that faithfully align with the semantic content of a text query is a difficult task that requires a deep understanding of textual concepts and how they translate into visual representations.

Because detailed annotations are difficult to acquire, current text-to-image models struggle to fully comprehend the intricate relationship between text and images. Consequently, these models tend to generate images that resemble frequently occurring text-image pairs in the training data, so the generated images often lack requested attributes or contain undesired ones. While recent research has focused on addressing this issue by reintroducing missing objects or attributes to edit images based on carefully crafted text prompts, there has been limited exploration of techniques for removing redundant attributes or explicitly instructing the model to exclude unwanted objects using negative prompts.

Motivated by this research gap, a new approach has been proposed to address the limitations of the current algorithm for negative prompts. According to the authors of this work, the standard implementation of negative prompts can produce unsatisfactory results, particularly when there is an overlap between the main prompt and the negative prompts.
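To make this failure mode concrete, the sketch below shows one common way a negative prompt is folded into classifier-free guidance. It is a minimal, illustrative NumPy example (the function name, variable names, and weights are ours, not the authors' code): when the negative direction overlaps with the positive direction, subtracting it also cancels part of the main prompt's signal.

```python
import numpy as np

def standard_negative_guidance(eps_uncond, eps_pos, eps_neg, w_pos=7.5, w_neg=7.5):
    """One common way to combine a negative prompt in classifier-free guidance.

    eps_uncond, eps_pos, eps_neg are the denoiser's noise predictions for the
    unconditional input, the main prompt, and the negative prompt.
    """
    pos_dir = eps_pos - eps_uncond  # direction that adds the main prompt's semantics
    neg_dir = eps_neg - eps_uncond  # direction that adds the unwanted concept
    # Subtracting the *full* negative direction also removes whatever it shares
    # with the positive direction, weakening the main prompt when they overlap.
    return eps_uncond + w_pos * pos_dir - w_neg * neg_dir
```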


To address this issue, they propose a novel algorithm called Perp-Neg, which requires no training and can be applied to a pre-trained diffusion model. The architecture is reported below.

The name “Perp-Neg” is derived from the use of the perpendicular score estimated by the denoiser for the negative prompt. This choice of name reflects the key principle behind the Perp-Neg algorithm. Specifically, Perp-Neg employs a denoising process that is restricted to be perpendicular to the direction of the main prompt. This geometric constraint plays a crucial role in achieving the desired outcome.

By limiting the denoising process to be perpendicular to the main prompt, Perp-Neg effectively addresses the issue of undesired effects from the negative prompts. It ensures that the model focuses on eliminating aspects that are orthogonal, or unrelated, to the main semantics of the prompt. In other words, Perp-Neg enables the model to remove undesirable attributes or objects not aligned with the text's intended meaning while preserving the core essence of the main prompt.
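In code, that constraint amounts to a simple projection: keep only the component of the negative-prompt direction that is orthogonal to the main-prompt direction, and subtract that instead. The snippet below is a simplified sketch under this reading of the method (a single negative prompt, flattened NumPy vectors, illustrative weights), not the authors' reference implementation.

```python
import numpy as np

def perpendicular_component(v, u, eps=1e-8):
    """Return the part of v that is orthogonal to u (arrays of any shape)."""
    u_flat, v_flat = u.ravel(), v.ravel()
    proj = ((v_flat @ u_flat) / (u_flat @ u_flat + eps)) * u_flat
    return (v_flat - proj).reshape(v.shape)

def perp_neg_guidance(eps_uncond, eps_pos, eps_neg, w_pos=7.5, w_neg=7.5):
    """Perp-Neg-style guidance, sketched for a single negative prompt."""
    pos_dir = eps_pos - eps_uncond
    neg_dir = eps_neg - eps_uncond
    # Only the part of the negative direction orthogonal to the main prompt is
    # subtracted, so the semantics it shares with the main prompt are untouched.
    neg_perp = perpendicular_component(neg_dir, pos_dir)
    return eps_uncond + w_pos * pos_dir - w_neg * neg_perp
```

With several negative prompts, each one's perpendicular component would be subtracted with its own weight; the exact weighting scheme is given in the paper.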

This approach helps to enhance the overall quality and coherence of the generated images, ensuring a stronger alignment with the original text input.

Some results obtained with Perp-Neg are presented in the figure below.

Beyond image synthesis, Perp-Neg can also be extended to DreamFusion, an advanced text-to-3D model. In this context, the authors show its effectiveness in mitigating the Janus problem. The Janus (or multi-faced) problem refers to situations where a generated 3D object is primarily rendered according to its canonical view rather than from other perspectives. This problem occurs mainly because the training dataset is unbalanced: animals or people, for instance, are usually depicted from the front and only sporadically from the side or back.

This was a summary of Perp-Neg, a novel AI algorithm that leverages the geometrical properties of the score space to address the shortcomings of the current negative prompt algorithm. If you are interested, you can learn more about this technique in the links below.


Check out the Paper, Project, and GitHub.



Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He is currently working in the Christian Doppler Laboratory ATHENA, and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.


