AI Researchers from Bytedance and the King Abdullah University of Science and Technology Present a Novel Framework For Animating Hair Blowing in Still Portrait Photos

Hair is one of the most remarkable features of the human body, impressing with its dynamic qualities that bring scenes to life. Studies have consistently demonstrated that dynamic elements have a stronger appeal and fascination than static images. Social media platforms like TikTok and Instagram witness the daily sharing of vast numbers of portrait photos as people aspire to make their pictures both appealing and artistically charming. This drive fuels researchers' exploration into the realm of animating human hair in still images, aiming to provide a vivid, aesthetically pleasing, and delightful viewing experience.

Recent advancements in the field have introduced methods to infuse still images with dynamic elements, animating fluid substances such as water, smoke, and fire within the frame. Yet, these approaches have largely ignored the intricate nature of human hair in real-life photographs. This article focuses on the artistic transformation of human hair in portrait photography, which involves translating the image into a cinemagraph.

A cinemagraph is an innovative short video format that enjoys favor among professional photographers, advertisers, and artists. It finds utility in various digital media, including digital advertisements, social media posts, and landing pages. The fascination with cinemagraphs lies in their ability to merge the strengths of still images and videos. Certain areas within a cinemagraph feature subtle, repetitive motion in a short loop, while the rest stays static. This contrast between stationary and moving elements effectively captivates the viewer's attention.

By transforming a portrait photo into a cinemagraph, complete with subtle hair motion, the idea is to enhance the photo's allure without detracting from its static content, creating a more compelling and engaging visual experience.
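To make the cinemagraph idea concrete, here is a minimal, illustrative sketch (not the paper's method): pixels inside a chosen region mask, for example the hair, are displaced by a small periodic warp that loops seamlessly, while everything outside the mask stays frozen. The sinusoidal flow, amplitude, and frame count below are assumptions chosen purely for readability.

```python
# Toy illustration of a cinemagraph: a looping warp applied inside a region
# mask. NOT the paper's animation module; flow shape and amplitude are assumptions.
import numpy as np
import cv2

def toy_cinemagraph(image: np.ndarray, region_mask: np.ndarray,
                    num_frames: int = 30, amplitude: float = 3.0):
    """Yield frames where only the masked region sways; the rest stays static."""
    h, w = region_mask.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    mask = region_mask.astype(np.float32)
    for t in range(num_frames):
        phase = 2.0 * np.pi * t / num_frames          # full cycle -> seamless loop
        dx = amplitude * np.sin(phase + ys / 40.0)    # gentle horizontal sway
        map_x = xs + dx * mask                        # displace only inside the mask
        warped = cv2.remap(image, map_x, ys, cv2.INTER_LINEAR,
                           borderMode=cv2.BORDER_REFLECT)
        # composite: pixels outside the mask remain untouched
        yield np.where(mask[..., None] > 0, warped, image)
```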

Existing techniques and commercial software have been developed to generate high-fidelity cinemagraphs from input videos by selectively freezing certain video regions. Unfortunately, these tools are not suitable for processing still images. In contrast, there has been a growing interest in still-image animation. Most of these approaches have focused on animating fluid elements such as clouds, water, and smoke. Nevertheless, the dynamic behavior of hair, composed of fibrous materials, presents a particular challenge compared with fluid elements. Unlike fluid element animation, which has received extensive attention, the animation of human hair in real portrait photos has remained relatively unexplored.

Animating hair in a static portrait photo is difficult due to the intricate complexity of hair structures and dynamics. Unlike the smooth surfaces of the human body or face, hair comprises hundreds of thousands of individual strands, leading to complex and non-uniform structures. This complexity results in intricate motion patterns within the hair, including interactions with the head. While there are specialized techniques for modeling hair, such as using dense camera arrays and high-speed cameras, they are often costly and time-consuming, limiting their practicality for real-world hair animation.

The paper presented in this article introduces a novel AI method for automatically animating hair in a static portrait photo, eliminating the need for user intervention or complex hardware setups. The insight behind this approach lies in the human visual system's reduced sensitivity to individual hair strands and their motion in real portrait videos, compared with synthetic strands on a digitalized human in a virtual environment. The proposed solution is to animate "hair wisps" instead of individual strands, creating a visually pleasing viewing experience. To achieve this, the paper introduces a hair wisp animation module, enabling an efficient and automatic solution. An overview of this framework is illustrated below.

The key challenge in this context is how to extract these hair wisps. While related work, such as hair modeling, has focused on hair segmentation, those approaches primarily target the extraction of the entire hair region, which differs from the objective here. To extract meaningful hair wisps, the researchers innovatively frame hair wisp extraction as an instance segmentation problem, where an individual segment within a still image corresponds to a hair wisp. By adopting this problem definition, the researchers leverage instance segmentation networks to facilitate the extraction of hair wisps. This not only simplifies the hair wisp extraction problem but also enables the use of advanced networks for effective extraction. Moreover, the paper presents the creation of a hair wisp dataset containing real portrait photos to train the networks, along with a semi-annotation scheme to provide ground-truth annotations for the identified hair wisps. Some sample results from the paper are reported in the figure below, compared with state-of-the-art techniques.
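The authors' network and training details are not reproduced here, but the "wisp extraction as instance segmentation" framing can be sketched with an off-the-shelf instance segmentation model such as torchvision's Mask R-CNN, fine-tuned on a wisp-annotated portrait dataset, with each predicted instance mask treated as one hair wisp. The two-class setup, score threshold, and model choice below are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the problem framing only, not the authors' architecture:
# each predicted instance mask is treated as one hair wisp.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Two classes: 0 = background, 1 = "hair wisp". In practice this model would be
# fine-tuned on the wisp-annotated portrait dataset described in the paper.
model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

def extract_wisps(image: torch.Tensor, score_thresh: float = 0.5) -> torch.Tensor:
    """Return binary masks (one per detected wisp) for a 3xHxW image in [0, 1]."""
    with torch.no_grad():
        pred = model([image])[0]            # dict with boxes, labels, scores, masks
    keep = pred["scores"] > score_thresh
    return pred["masks"][keep, 0] > 0.5     # N x H x W boolean wisp masks
```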

This was a summary of a novel AI framework designed to transform still portraits into cinemagraphs by animating hair wisps with pleasing motion and without noticeable artifacts. If you are interested and want to learn more, please feel free to refer to the links cited below.


Check out the Paper and Project Page. All credit for this research goes to the researchers on this project. Also, don't forget to join our 31k+ ML SubReddit, 40k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.




Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He is currently working in the Christian Doppler Laboratory ATHENA, and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.


