A deep dive into Stable Diffusion and its inpainting variant for interior design
In the fast-paced world we live in, and especially after the pandemic, many of us realised that having a pleasant environment such as home to escape from reality is priceless and a goal worth pursuing.
Whether you are looking for a Scandinavian, minimalist, or glamorous style to brighten your home, it is not easy to imagine how each object will fit into a space filled with other pieces and colours. For that reason, we usually seek professional help to create those amazing 3D renders that help us understand how our future home will look.
Nevertheless, these 3D renders are expensive, and if our initial idea does not look as good as we thought, getting new images takes time and more money, both of which are scarce nowadays.
In this article, I explore the Stable Diffusion model, starting with a brief explanation of what it is, how it is trained, and what is needed to adapt it for inpainting. Finally, I finish the article with its application to a 3D render of my future home, where I change the kitchen island and cabinets to a different colour and material.
As always, the code is available on GitHub.
What is it?
Stable Diffusion [1] is a generative AI model released in 2022 by the CompVis group that produces photorealistic images from text and image prompts. It was primarily designed to generate images conditioned on text descriptions, but it can also be used for other tasks such as inpainting or video creation.
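To give a feel for how inpainting with Stable Diffusion looks in practice, here is a minimal sketch using the diffusers library. It assumes the publicly available runwayml/stable-diffusion-inpainting checkpoint and placeholder file names (kitchen_render.png, island_mask.png), not the exact setup from this article's repository.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Load a Stable Diffusion checkpoint fine-tuned for inpainting
# (assumption: the runwayml/stable-diffusion-inpainting weights).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

# Placeholder inputs: the original render and a mask where white marks
# the region to be repainted (e.g. the kitchen island).
image = Image.open("kitchen_render.png").convert("RGB").resize((512, 512))
mask = Image.open("island_mask.png").convert("RGB").resize((512, 512))

result = pipe(
    prompt="a kitchen island with a dark green marble countertop",
    image=image,
    mask_image=mask,
    num_inference_steps=50,
).images[0]
result.save("kitchen_inpainted.png")
```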
Its success comes from the Perceptual Image Compression step, which converts a high-dimensional image into a smaller latent space. This compression enables the model to run on low-resource machines, making it accessible to everyone, something that was impossible with the previous state-of-the-art models.
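To make the compression step concrete, the sketch below encodes an image with a Stable Diffusion VAE via the diffusers library: a 512x512 RGB image becomes a 4x64x64 latent, an 8x reduction in each spatial dimension. The sd-vae-ft-mse checkpoint and the file name room.jpg are illustrative assumptions, not necessarily what the article's own code uses.

```python
import torch
from PIL import Image
from torchvision import transforms
from diffusers import AutoencoderKL

# Load only the VAE component of Stable Diffusion
# (assumption: the publicly available stabilityai/sd-vae-ft-mse weights).
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# Preprocess a placeholder image to the [-1, 1] range the VAE expects.
to_tensor = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),                # [0, 1]
    transforms.Normalize([0.5], [0.5]),   # [-1, 1]
])
image = to_tensor(Image.open("room.jpg").convert("RGB")).unsqueeze(0)

# Encode into the latent space: (1, 3, 512, 512) -> (1, 4, 64, 64).
with torch.no_grad():
    latents = vae.encode(image).latent_dist.sample()
print(image.shape, "->", latents.shape)
```

The diffusion process then operates on these small latents instead of full-resolution pixels, which is what makes the model practical on modest hardware.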