Emphasis Control for Parallel Neural TTS
State-of-the-art neural text-to-speech (TTS) methods can generate speech with high fidelity while maintaining high performance. However, the input text often does not contain enough information to reconstruct the desired speech signal with the intended meaning and nuances. As a result, TTS systems tend to generate the average prosody learned from the training database. This is a limitation when content must be conveyed in a specific style or emotion, or when the part of the sentence that carries new, non-derivable, or contrastive information needs to be highlighted.
What is TTS?
TTS stands for Text-to-Speech, the process of converting written text into speech using computer algorithms. TTS technology allows computers and other devices to generate speech from written text using a speech synthesis engine, enabling them to read text aloud or communicate information to users with visual impairments or learning disabilities, as well as to people who simply prefer to consume content in audio or video form.
The engine uses rules and patterns to create phonemes, the basic units of speech, and then combines them to produce words and sentences. This technology is used in a variety of applications, including assistive technology for persons with disabilities, language-learning software, digital assistants, and voice-enabled devices such as smart speakers and virtual assistants.
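As a minimal illustration of the text-to-phoneme step described above, the sketch below uses the open-source g2p_en package (an assumption on our part; the article does not prescribe a specific grapheme-to-phoneme tool) to convert a sentence into ARPAbet phonemes.

```python
# Minimal grapheme-to-phoneme sketch using the third-party g2p_en package.
# pip install g2p-en
from g2p_en import G2p

g2p = G2p()
phonemes = g2p("Text to speech converts written text into audible speech.")
print(phonemes)  # a list of ARPAbet phoneme symbols with word boundaries
```

A TTS front end would feed a phoneme sequence like this to the acoustic model described later in the article.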
What is Neural TTS?
Neural TTS, or Neural Text-to-Speech, is a type of speech synthesis technology that uses neural networks to generate natural-sounding speech from text. It is a branch of artificial intelligence that aims to produce speech similar to human speech in terms of intonation, rhythm, and tone.
Traditional TTS systems used rule-based methods to generate speech, which required the manual creation of linguistic rules, acoustic models, and other components. These systems often produced unnatural speech that lacked expressiveness.
The baseline model architecture is based on the FastSpeech parallel neural TTS model, which converts an input phoneme sequence to a Mel spectrogram. The model consists of an encoder that maps the phoneme sequence to phoneme encodings. The encoder starts with an embedding layer that converts the phonemes to phoneme embeddings, to which positional encodings are added. A series of feed-forward Transformer (FFT) blocks then converts these embeddings into the phoneme encodings. Each FFT block consists of a self-attention layer and 1-D convolution layers, along with layer normalization and dropout.
The phoneme encodings are then fed to feature predictors that estimate the phoneme-wise pitch, duration, and energy. The feature predictors consist of 1-D convolution layers, layer normalization, and dropout, similar to the variance adaptors used in FastSpeech 2. The predicted phoneme-wise pitch and energy values are quantized, passed through an embedding layer, and added to the phoneme encodings to form the decoder input. All feature predictors are trained using teacher forcing. The decoder inputs are upsampled according to the predicted phoneme-wise durations. The FastSpeech decoder, which consists of FFT blocks, is replaced here with a series of dilated convolution stacks, which improves inference speed. This model achieves synthesis roughly 150× faster than real time on a GPU and 100× faster than real time on a mobile device.
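To make the architecture above more concrete, here is a minimal PyTorch sketch of one phoneme-wise feature predictor; the same 1-D convolution + layer normalization + dropout pattern could be instantiated separately for pitch, energy, and duration. The layer sizes and names are illustrative assumptions, not the exact configuration of the model described here.

```python
import torch
import torch.nn as nn

class FeaturePredictor(nn.Module):
    """Predicts one scalar feature (e.g. pitch, energy, or log-duration) per phoneme."""

    def __init__(self, hidden_dim: int = 256, kernel_size: int = 3, dropout: float = 0.1):
        super().__init__()
        self.conv1 = nn.Conv1d(hidden_dim, hidden_dim, kernel_size, padding=kernel_size // 2)
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.conv2 = nn.Conv1d(hidden_dim, hidden_dim, kernel_size, padding=kernel_size // 2)
        self.norm2 = nn.LayerNorm(hidden_dim)
        self.dropout = nn.Dropout(dropout)
        self.proj = nn.Linear(hidden_dim, 1)  # one value per phoneme

    def forward(self, phoneme_encodings: torch.Tensor) -> torch.Tensor:
        # phoneme_encodings: (batch, num_phonemes, hidden_dim)
        x = phoneme_encodings.transpose(1, 2)          # (batch, hidden_dim, num_phonemes)
        x = self.dropout(torch.relu(self.conv1(x)).transpose(1, 2))
        x = self.norm1(x).transpose(1, 2)
        x = self.dropout(torch.relu(self.conv2(x)).transpose(1, 2))
        x = self.norm2(x)
        return self.proj(x).squeeze(-1)                # (batch, num_phonemes)

encodings = torch.randn(2, 17, 256)    # dummy encoder output: 2 utterances, 17 phonemes
pitch = FeaturePredictor()(encodings)  # phoneme-wise pitch predictions
print(pitch.shape)                     # torch.Size([2, 17])
```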
For example, to analyze how models with different emphasis features respond to varying degrees of emphasis control, we synthesized 192 test sentences for the VarianceBased, WaveletBased, and Combined models and each voice, using varying degrees of emphasis modification. This was done by adding different bias values to the predicted emphasis features for the phonemes of the emphasized word. Larger bias values should result in increased emphasis, while negative values should correspond to de-emphasis. For the VarianceBased method with two emphasis features, both features were modified by the same amount. Because the emphasis features influence the pitch, energy, and phoneme duration of the models, we measure the changes in these three features before and after modifying the emphasis features to assess the degree of emphasis control. This shows the change in the mean and standard deviation of pitch, energy, and phoneme duration after applying emphasis, averaged per phone in the emphasized words.
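The bias-based control described above can be sketched as a simple inference-time modification. The function and variable names below are invented for illustration; they assume the model has already produced per-phoneme emphasis features and that the phonemes of the emphasized word are known.

```python
import torch

def apply_emphasis_bias(emphasis_features: torch.Tensor,
                        emphasized_mask: torch.Tensor,
                        bias: float) -> torch.Tensor:
    """Add a constant bias to the predicted emphasis features of the emphasized word.

    emphasis_features: (num_phonemes, num_emphasis_features) predicted by the model
    emphasized_mask:   (num_phonemes,) boolean mask marking phonemes of the emphasized word
    bias:              positive -> more emphasis, negative -> de-emphasis
    """
    biased = emphasis_features.clone()
    biased[emphasized_mask] += bias  # the same offset is applied to every emphasis feature
    return biased

# Illustrative usage with dummy values (6 phonemes, 2 emphasis features).
features = torch.zeros(6, 2)
mask = torch.tensor([False, False, True, True, True, False])  # emphasized word spans phonemes 2-4
for bias in (-1.0, 0.0, 1.0, 2.0):
    controlled = apply_emphasis_bias(features, mask, bias)
    # The controlled features would then condition pitch, energy, and duration prediction.
    print(bias, controlled[mask].mean().item())
```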
What is Emphasis Control for Parallel Neural TTS?
Emphasis control is a technique used in text-to-speech (TTS) systems to adjust the prosodic features of synthesized speech, such as pitch, duration, and loudness, to convey particular emotions or emphasis. In parallel neural TTS, where neural networks generate speech from text, emphasis control can be achieved by conditioning the network on additional input features related to emphasis, such as the desired pitch or duration of certain words.
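A minimal way to realize this conditioning, assuming phoneme encodings like those of the FastSpeech-style model above, is to project a per-phoneme emphasis feature and add it to the encodings before decoding. The module below is an illustrative assumption, not the exact design of any specific system.

```python
import torch
import torch.nn as nn

class EmphasisConditioner(nn.Module):
    """Adds a learned projection of per-phoneme emphasis features to the phoneme encodings."""

    def __init__(self, hidden_dim: int = 256, num_emphasis_features: int = 1):
        super().__init__()
        self.proj = nn.Linear(num_emphasis_features, hidden_dim)

    def forward(self, phoneme_encodings: torch.Tensor, emphasis: torch.Tensor) -> torch.Tensor:
        # phoneme_encodings: (batch, num_phonemes, hidden_dim)
        # emphasis:          (batch, num_phonemes, num_emphasis_features)
        return phoneme_encodings + self.proj(emphasis)

conditioner = EmphasisConditioner()
encodings = torch.randn(1, 10, 256)
emphasis = torch.zeros(1, 10, 1)
emphasis[0, 3:6, 0] = 1.0          # emphasize phonemes 3-5
conditioned = conditioner(encodings, emphasis)
```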
One approach to emphasis control in parallel neural TTS is to use a multi-speaker TTS model, where each speaker is associated with a different emphasis style. This allows the system to synthesize speech with different emphasis styles by simply switching the speaker input. Another approach is to use an attention mechanism to selectively amplify or attenuate certain input features, such as the pitch or duration of certain words, based on a user-defined emphasis signal. This can be achieved by incorporating an attention module into the neural network architecture and training it to attend to the relevant input features based on the emphasis signal.
In addition to these approaches, recent research has also explored the use of adversarial training to learn emphasis styles from a small amount of labeled data, as well as the use of reinforcement learning to optimize the emphasis control policy based on user feedback. Overall, emphasis control is an important technique for improving the expressiveness and naturalness of synthesized speech in parallel neural TTS systems and remains an active area of research.
History of TTS
Text-to-speech (TTS) technology has been around for several decades and has undergone significant advancements and changes. One of the earliest developments in TTS can be traced back to Bell Labs in the 1930s, when a system was developed that could produce speech sounds by generating electrical signals. However, it was not until the 1960s that computer-based TTS systems came into existence.
In the early days of TTS, systems used rule-based methods to produce speech sounds. These systems used a set of rules and linguistic knowledge to determine the pronunciation of words and generate the corresponding sounds. However, they were limited in their ability to produce natural-sounding speech.
In the 1980s, a new approach to TTS known as the "concatenative" method was developed. This approach involved recording individual sounds or phonemes and then combining them to produce speech. It allowed for more natural-sounding speech and was used in many TTS systems throughout the 1990s and 2000s.
Recent advancements in machine learning and artificial intelligence have led to the development of ‘neural’ TTS systems. These systems use deep learning algorithms to generate speech, allowing for even more natural-sounding and expressive output. TTS technology is now used in a wide range of applications, from accessibility tools for persons with visual impairments to virtual assistance through chatbots.
Datasets used to deploy ML models for Parallel Neural TTS
These are a few of the several datasets that have been used for training parallel neural TTS systems:
- LJSpeech: A dataset of approximately 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books, paired with their corresponding transcripts. This dataset is widely used in the TTS community and is freely available for download.
Github source: https://github.com/keithito/tacotron/commit/67e16fcbf570852a828909633f22d5bbdea8ac19
- LibriTTS: A dataset of approximately 585 hours of read English speech derived from audiobooks, with accompanying text. This dataset is designed for TTS research and is available for download from the OpenSLR website.
Github source: https://github.com/tensorflow/datasets/blob/master/docs/catalog/libritts.md
- M-AILABS Speech Dataset: A dataset of roughly 1,000 hours of speech recordings from multiple speakers in several languages, paired with their corresponding text. This dataset is also freely available for download.
Github source: https://github.com/MycroftAI/mimic2/blob/master/datasets/mailabs.py
- Blizzard Challenge: A dataset of speech recordings and text transcriptions from multiple speakers in different languages. This dataset is commonly used for TTS research and is available for download from the Blizzard Challenge website.
Github source: https://github.com/Tomiinek/Blizzard2013_Segmentation
- Common Voice: A dataset of crowd-sourced speech recordings in multiple languages, paired with their corresponding text transcriptions. This dataset is freely available for download from the Mozilla Common Voice website.
Github source: https://github.com/common-voice/common-voice-bundler
These datasets have been used in various TTS systems such as Tacotron, Transformer TTS, FastSpeech, and more.
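As a small example of working with one of these datasets, the sketch below reads the LJSpeech metadata.csv file (a pipe-separated list of clip IDs and transcripts) and pairs each transcript with its audio path. The local directory path is an assumption; adjust it to wherever the dataset is extracted.

```python
import csv
from pathlib import Path

# Assumed local extraction path for the LJSpeech 1.1 release.
LJSPEECH_DIR = Path("LJSpeech-1.1")

def load_ljspeech(root: Path):
    """Yield (wav_path, normalized_transcript) pairs from LJSpeech's metadata.csv."""
    with open(root / "metadata.csv", encoding="utf-8") as f:
        for clip_id, raw_text, normalized_text in csv.reader(f, delimiter="|", quoting=csv.QUOTE_NONE):
            yield root / "wavs" / f"{clip_id}.wav", normalized_text

for wav_path, text in load_ljspeech(LJSPEECH_DIR):
    print(wav_path, text)
    break
```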
How does NVIDIA A100 help in parallel neural TTS?
The NVIDIA A100 GPU is a powerful computing resource that can help improve the performance and efficiency of parallel neural TTS systems, enabling faster training times and higher-quality speech synthesis in several ways:
- Increased Performance: The A100 GPU is designed specifically for high-performance computing, including deep learning workloads. It provides up to 20 times greater computing power than the previous generation of NVIDIA GPUs, which allows for faster training and inference times.
- Improved Efficiency: The A100 GPU uses NVIDIA's Tensor Cores technology, which is designed to accelerate the matrix math operations common in deep learning workloads. This results in improved efficiency and faster training times; a minimal mixed-precision training sketch that exercises the Tensor Cores is shown after this list.
- Large Memory Capacity: The A100 GPU features up to 80 GB of high-bandwidth memory, which allows for larger neural networks to be trained and stored in memory, improving the overall performance of the parallel TTS system.
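To benefit from the Tensor Cores mentioned above, TTS training loops are typically run in mixed precision. Below is a generic PyTorch sketch using torch.cuda.amp; the model, data, and optimizer are placeholders, not part of any specific TTS implementation.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Placeholder model and data standing in for a real TTS acoustic model and batches.
model = nn.Linear(256, 80).to(device)            # e.g. phoneme encodings -> Mel bins
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

for step in range(10):
    inputs = torch.randn(16, 256, device=device)
    targets = torch.randn(16, 80, device=device)

    optimizer.zero_grad()
    # autocast runs eligible ops in reduced precision so the A100's Tensor Cores are used.
    with torch.cuda.amp.autocast(enabled=(device == "cuda")):
        loss = nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```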
Launch A100 80GB Cloud GPU on E2E Cloud for training a TTS model
- Log in to Myaccount.
- Go to Compute > GPU > NVIDIA A100 80GB.
- Click on “Create” and choose your plan.
- Choose your required security, backup, and network settings and click on “Create My Node”.
- The launched plan will appear in your dashboard once it starts running.
After launching the A100 80GB Cloud GPU from the Myaccount portal, you can deploy any TTS model for emphasis control.
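Once the node is up, a quick sanity check confirms that the A100 is visible to your deep learning framework before you start training; the snippet below assumes PyTorch is installed on the node.

```python
import torch

# Verify that the launched A100 80GB GPU is visible to PyTorch.
if torch.cuda.is_available():
    name = torch.cuda.get_device_name(0)
    total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    print(f"GPU detected: {name} ({total_gb:.0f} GB)")
else:
    print("No GPU detected - check the NVIDIA drivers and CUDA installation.")
```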
E2E Networks is the leading accelerated cloud computing player, providing the latest Cloud GPUs at great value. Connect with us at sales@e2enetworks.com