Researchers at Korea University have developed a new speech synthesizer called HierSpeech++. The research aims to create synthetic speech that is robust, expressive, natural, and human-like. The team set out to achieve this without relying on a text-speech paired dataset and to address the shortcomings of existing models. HierSpeech++ was designed to bridge the gap between semantic and acoustic representations in speech synthesis, ultimately improving style adaptation.
Until now, LLM-based zero-shot speech synthesis has had notable limitations. HierSpeech++ was developed to overcome these limitations, improving robustness and expressiveness while also addressing slow inference speed. Using a text-to-vec framework that generates self-supervised speech and F0 representations from text and prosody prompts, HierSpeech++ has been shown to outperform LLM-based and diffusion-based models. These advances in speed, robustness, and quality establish HierSpeech++ as a strong zero-shot speech synthesizer.
HierSpeech++ uses a hierarchical framework for zero-shot speech generation. It employs a text-to-vec framework to derive self-supervised speech and F0 representations from text and prosody prompts. Speech is then produced by a hierarchical variational autoencoder conditioned on the generated vector, F0, and a voice prompt. The approach also includes an efficient speech super-resolution framework. A comprehensive evaluation covers various pre-trained models and implementations using objective and subjective metrics such as log-scale Mel error distance, perceptual evaluation of speech quality (PESQ), pitch, periodicity, voiced/unvoiced F1 score, naturalness mean opinion score (nMOS), and voice similarity MOS (sMOS).
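To make one of these objective metrics concrete, the voiced/unvoiced F1 score compares per-frame voicing decisions of synthesized speech against a reference. Below is a minimal, self-contained sketch; the function name `vuv_f1` and the toy frame decisions are illustrative, not taken from the paper:

```python
def vuv_f1(predicted, reference):
    """Voiced/unvoiced F1 score: harmonic mean of precision and recall
    over per-frame voicing decisions (True = voiced frame)."""
    tp = sum(p and r for p, r in zip(predicted, reference))
    fp = sum(p and not r for p, r in zip(predicted, reference))
    fn = sum(r and not p for p, r in zip(predicted, reference))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy per-frame voicing decisions for a synthesized and a reference utterance.
pred = [True, True, False, True, False, False]
ref = [True, True, False, False, False, True]
print(round(vuv_f1(pred, ref), 3))  # 0.667
```

In practice these decisions would come from an F0 extractor run on both waveforms; a high F1 indicates the synthesizer places voiced regions where the reference does.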
HierSpeech++ achieves superior naturalness in synthetic speech in zero-shot scenarios, with improvements in robustness, expressiveness, and speaker similarity. Subjective metrics such as naturalness mean opinion score (nMOS) and voice similarity MOS (sMOS) were used to evaluate the naturalness of the speech, and the results showed that HierSpeech++ even outperforms ground-truth speech. Incorporating a speech super-resolution framework from 16 kHz to 48 kHz further improved naturalness. Experimental results also demonstrated that the hierarchical variational autoencoder in HierSpeech++ is superior to LLM-based and diffusion-based models, making it a robust zero-shot speech synthesizer. Zero-shot text-to-speech with noisy prompts further validated the effectiveness of HierSpeech++ in generating speech from unseen speakers. The hierarchical synthesis framework also allows for versatile prosody and voice style transfer, making synthesized speech considerably more flexible.
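To see what the 16 kHz → 48 kHz super-resolution step changes at the signal level, here is a minimal sketch using plain linear interpolation. This is only a stand-in for the sample-rate change: HierSpeech++'s learned super-resolution module restores high-frequency detail rather than merely interpolating, and the function name `naive_upsample` is hypothetical:

```python
import numpy as np

def naive_upsample(wave_16k, factor=3):
    """Naive 16 kHz -> 48 kHz upsampling via linear interpolation.
    Produces factor-times as many samples but adds no new
    high-frequency content, unlike a learned super-resolution model."""
    n = len(wave_16k)
    x_old = np.arange(n)
    x_new = np.linspace(0, n - 1, n * factor)
    return np.interp(x_new, x_old, wave_16k)

# One second of toy audio at 16 kHz becomes one second at 48 kHz.
one_second_16k = np.random.randn(16000)
wave_48k = naive_upsample(one_second_16k)
print(len(wave_48k))  # 48000
```

A learned module operates in the same position of the pipeline, mapping the 16 kHz synthesizer output to a 48 kHz waveform, which is what the reported naturalness gains refer to.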
In conclusion, HierSpeech++ presents an efficient and powerful framework for achieving human-level quality in zero-shot speech synthesis. By disentangling semantic modeling, speech synthesis, and super-resolution, and by facilitating prosody and voice style transfer, it enhances the flexibility of synthesized speech. The system demonstrates improvements in robustness, expressiveness, naturalness, and speaker similarity even with a small-scale dataset, and it offers significantly faster inference speeds. The study also explores potential extensions to cross-lingual and emotion-controllable speech synthesis models.
Check out the Paper, Project, and GitHub. All credit for this research goes to the researchers of this project. Also, don't forget to join our 33k+ ML SubReddit, 41k+ Facebook Community, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more.
If you like our work, you'll love our newsletter.
Sana Hassan, a consulting intern at Marktechpost and a dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.