Automatic speech recognition (ASR) is used by virtual assistants (VAs) to help users answer entity-centric queries. However, recognising spoken entities is a challenging task because of the vast and continuously changing set of named entities. Furthermore, when ASR is performed on-device, the resources available for recognition are limited.
In this blog post, we investigate the use of probabilistic grammars as language models within the finite-state transducer (FST) framework. We introduce a deterministic approximation to probabilistic grammars that avoids the explicit expansion of non-terminals at model creation time, integrates directly with the FST framework, and is complementary to n-gram models.
We also look at how entity-centric probabilistic grammars can be used as language models (LMs) in ASR to improve the recognition of entity-oriented queries, which are widespread in VA usage.
Framework
An entity-centric language model of spoken VA queries supplements a general language model trained on transcribed VA queries. The goal is to improve speech recognition of tail entities while respecting resource constraints (i.e., model size). ASR language models assign a non-zero probability P(w_k | w_1, ..., w_{k-1}) to a word w_k preceded by the left context w_1, ..., w_{k-1}, with the words w_i drawn from a fixed-size vocabulary V. During ASR decoding, the language model probability is combined with acoustic information to distinguish between competing hypotheses.
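To make this concrete, here is a minimal, hypothetical sketch of how an entity-centric LM could supplement a general LM by linear interpolation of their next-word probabilities. The dictionaries, probabilities, and interpolation weight lam are invented for illustration and are not the production combination scheme.

```python
from typing import Dict, Tuple

# history -> {next word: probability}; a drastically simplified stand-in for an n-gram LM
NGramLM = Dict[Tuple[str, ...], Dict[str, float]]

def interpolated_prob(word: str,
                      history: Tuple[str, ...],
                      general_lm: NGramLM,
                      entity_lm: NGramLM,
                      lam: float = 0.7) -> float:
    """P(word | history) = lam * P_general + (1 - lam) * P_entity (hypothetical weights)."""
    p_general = general_lm.get(history, {}).get(word, 1e-9)  # small floor for unseen events
    p_entity = entity_lm.get(history, {}).get(word, 1e-9)
    return lam * p_general + (1.0 - lam) * p_entity

# Toy usage: score a candidate next word after the context "hello VA play".
general_lm = {("hello", "VA", "play"): {"music": 0.5, "something": 0.3}}
entity_lm = {("hello", "VA", "play"): {"bohemian": 0.4, "rhapsody": 0.1}}
print(interpolated_prob("bohemian", ("hello", "VA", "play"), general_lm, entity_lm))
```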
The framework first shows how entity-oriented queries can be represented as a probabilistic grammar. It then covers non-deterministic graph-based grammar representations and their drawbacks, and finally the proposed method, φ-RTN, which offers a close, FST-compatible approximation to graph-based grammars.
Entity-centric Query Grammars
An analysis of a month's worth of anonymised English query logs from a popular VA reveals that more than 15% of media player requests instructing the VA to play a song, album, or artist use the "play entity" form. As a result, a large proportion of media player queries can be represented with a probabilistic grammar. The advantage of estimating a grammar from entity-centric queries derived from usage logs, which is then used for language model estimation, is that we can keep the query templates unchanged while updating the weighted entity list from external knowledge sources that correspond with usage. As a result, the VA can recognise new entities that are not yet present in the transcribed training data.
Denote by T and E the sets of query templates and entities, respectively, and consider an example grammar G (composed of templates T with slots for entities E) that we want to support. Domain experts reviewed the templates derived from usage logs, while the entity feed was extracted from an external knowledge source. We are particularly interested in increasing language model coverage for queries containing tail entities, i.e., those whose cumulative probability falls below the median.
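As an illustration only, the sketch below models a grammar of this kind as weighted templates with an ENTITY slot plus a weighted entity list; the templates, entities, and probabilities are made up. Swapping in a fresh entities dictionary refreshes coverage without touching the templates, which mirrors the advantage described above.

```python
# Toy probabilistic grammar: weighted templates with an ENTITY slot plus a
# weighted entity list. Templates, entities, and probabilities are invented.
templates = {                              # P(template)
    "hello VA play ENTITY": 0.6,
    "hello VA play the song ENTITY": 0.4,
}
entities = {                               # P(entity), e.g. from an external knowledge feed
    "bohemian rhapsody": 0.5,
    "let it be": 0.3,
    "some long tail song": 0.2,
}

def query_probability(query: str) -> float:
    """Score a query as P(template) * P(entity), summed over matching templates."""
    total = 0.0
    for template, p_template in templates.items():
        head = template.split("ENTITY")[0]  # text preceding the entity slot
        if query.startswith(head):
            entity = query[len(head):]
            total += p_template * entities.get(entity, 0.0)
    return total

print(query_probability("hello VA play let it be"))  # 0.6 * 0.3 = 0.18
# Updating only the `entities` dict extends coverage to new entities
# without changing the templates.
```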
Recursive Transition Networks
Encoding the grammar as a static FST by expanding every occurrence of the entity non-terminal in the templates results in models whose size is proportional to the cross product |T| × |E|. Recursive Transition Networks (RTNs) provide an alternative representation for encoding grammars and, more broadly, class-based language models, without storing the cross product.
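A quick back-of-the-envelope comparison, with invented counts, of what the cross product costs versus keeping templates and entities in separate sub-graphs:

```python
# Invented counts to illustrate the size argument: static expansion stores on
# the order of |T| * |E| phrases, while one shared sub-graph per non-terminal
# stores on the order of |T| + |E|.
num_templates = 200          # |T|
num_entities = 1_000_000     # |E|

expanded_size = num_templates * num_entities  # static FST expansion
shared_size = num_templates + num_entities    # templates and entities kept separate

print(f"expanded: {expanded_size:,} vs shared: {shared_size:,}")
```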
An RTN is made up of a family of FSTs, each associated with a non-terminal, where non-terminal labels on arcs inside the component FSTs are recursively replaced by the FSTs they point to. The non-terminal symbol S denotes the root non-terminal of the constituent FST that is explored first. Grammar G can be expressed by an RTN with a single level of recursion.
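The following toy structure (invented states and words) sketches such an RTN: a root FST S built from one template, plus a single shared sub-FST for the ENTITY non-terminal; an arc labelled "ENTITY" stands for recursive replacement by that sub-FST.

```python
# A toy RTN: each value is state -> list of (label, next_state); an "ENTITY"
# label is recursively replaced by the rtn["ENTITY"] sub-FST.
rtn = {
    "S": {
        0: [("hello", 1)],
        1: [("VA", 2)],
        2: [("play", 3)],
        3: [("ENTITY", 4)],  # non-terminal arc
        4: [],               # final state
    },
    "ENTITY": {
        0: [("bohemian", 1), ("let", 2)],
        1: [("rhapsody", 3)],
        2: [("it", 4)],
        4: [("be", 3)],
        3: [],               # final state
    },
}
```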
Unfortunately, RTNs are often non-deterministic when employed as decoding graphs in an FST decoder. An FST is non-deterministic if multiple outgoing arcs at a given state share the same input symbol. RTNs exhibit non-determinism because different paths through the graph can produce the same token sequence. Within the grammar, this would be the case at the state corresponding to the context "hello VA" (the wake word supplied by the templates), because the following token "play" can match either a template or an entity. Non-determinism is harmful to efficient ASR decoding, since numerous paths can lead to the same hypothesis, and should therefore be avoided.
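A small sketch of how this shows up in practice: if, at the state reached after "hello VA", both a template continuation and an entity name can begin with the word "play", two outgoing arcs share an input symbol and the state is non-deterministic. The states, arcs, and helper below are purely illustrative.

```python
from collections import defaultdict

# state -> list of (input_symbol, next_state); invented for illustration.
# After "hello VA", the word "play" could continue a template ("play ENTITY")
# or begin an entity name, so two arcs share the same input symbol.
arcs = {
    "after_hello_VA": [
        ("play", "template_play"),         # template continuation
        ("play", "entity_starting_play"),  # entity whose name starts with "play"
    ],
}

def nondeterministic_states(arcs: dict) -> list:
    """Return states with two or more outgoing arcs sharing the same input symbol."""
    bad = []
    for state, out in arcs.items():
        counts = defaultdict(int)
        for symbol, _ in out:
            counts[symbol] += 1
        if any(c > 1 for c in counts.values()):
            bad.append(state)
    return bad

print(nondeterministic_states(arcs))  # ['after_hello_VA']
```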
RTNs with Deterministic Approximation
We now describe the topology of our model, dubbed φ-RTN, as a two-level RTN, and explain how the model is made deterministic (approximating the actual RTN) by applying precedence rules and φ-transitions. Furthermore, we describe how we guarantee that the model is properly normalized.
Topology: A two-level RTN is used, with the root non-terminal S linked to a rigid grammar FST F_T computed over the templates. To account for out-of-domain utterances, the transition probabilities at each state are scaled by a discount factor 1 − α (with 0 < α < 1). Template FST F_T makes use of a non-terminal ENTITY, which is linked to entity FST F_E. FST F_E is represented by a standard, non-backoff n-gram model over entity names, with probabilities likewise scaled to accommodate out-of-domain entities. The remaining mass at each F_E state is used to transition out of F_E, back to F_T, and possibly to the unigram state s_P(w) in the case of out-of-domain entities.
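Below is a minimal sketch of the discounting step, assuming a simple per-state scaling: the explicit arc probabilities are multiplied by 1 − α, and whatever mass is left over (roughly α plus any final-state mass) is the residual mass available for φ-transitions. The value of α and the toy distribution are assumptions.

```python
alpha = 0.1  # discount hyper-parameter (assumed value)

def discount_state(arc_probs: dict) -> tuple:
    """Scale a state's explicit arc probabilities by (1 - alpha); the leftover
    mass (roughly alpha plus any final-state mass) is the residual mass used
    for phi-transitions out of F_E."""
    scaled = {word: (1.0 - alpha) * p for word, p in arc_probs.items()}
    residual = 1.0 - sum(scaled.values())
    return scaled, residual

scaled, residual = discount_state({"bohemian": 0.7, "let": 0.3})
print(scaled)    # arc probabilities scaled by 0.9
print(residual)  # ~0.1, i.e. alpha, since this toy state has no final mass
```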
Determinism: We address the RTN determinization issue described above by favouring regular symbols over non-terminal entry/exit. Within F_T, on observing the non-terminal ENTITY after context w_1, ..., w_i with associated state s, the appropriate sub-FST F_E is reached by following the φ-transition from state s. Note that in the absence of an entity, the φ-transition leads to the unigram state s_P. When we enter F_E, we remember the state in F_T that we need to return to. φ-transitions are likewise used within F_E to leave sub-FST F_E and return to template FST F_T. Final states in F_E are removed, and instead, each state's final probability mass is merged with the residual mass and divided among the regular symbols at that state.
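Below is a minimal sketch (with hypothetical data structures, not the actual decoder) of the precedence rule: an explicit arc for the next word always wins, and the φ-transition into F_E is followed only when no explicit arc matches, pushing the F_T return state onto a stack.

```python
def next_state(state, word, explicit_arcs, phi_arcs, return_stack):
    """One decoding step under the precedence rule (hypothetical structures).

    explicit_arcs: state -> {word: next_state}
    phi_arcs:      state -> (phi_target_state, return_state_in_F_T or None)
    """
    out = explicit_arcs.get(state, {})
    if word in out:                        # regular symbols take precedence
        return out[word]
    if state in phi_arcs:                  # otherwise, fall back along the phi-transition
        target, return_state = phi_arcs[state]
        if return_state is not None:       # entering F_E: remember where to re-enter F_T
            return_stack.append(return_state)
        return next_state(target, word, explicit_arcs, phi_arcs, return_stack)
    # Exiting F_E back to F_T via phi would pop return_stack here; omitted in this sketch.
    raise KeyError(f"no arc for {word!r} at state {state!r}")
```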
Normalization: To ensure that the final model is properly normalized at every state s, we use an approach similar to back-off n-gram models. More precisely, we weight the φ-transition leaving state s such that the probability mass attributed to unseen events at s, obtained by following φ-transitions recursively, is scaled to fit within the residual mass at s. Because we know the initial state in F_E that the φ-transition leads to, as well as the state in F_E that leads back to F_T after following it once more, the weights for φ-transitions leading from F_T to F_E can be computed statically when the model is built.
However, computing the weights for φ-transitions that exit F_E back to F_T (excluding the start state in F_E) is more complex because of the dependence between the state in F_E from which we are exiting and the state in F_T to which we are returning. These weights are therefore only partially pre-computed and finished at runtime.
At runtime, because we know the residual mass at state s in F_E, which equals α + P(final | s), we only need to evaluate the explicit events at the F_T state reached by the φ-transition exiting F_E in order to finish the partial weight. Because F_T is sparse in our case, the computation associated with this step is negligible. Note that φ-RTN does not accept templates with two or more consecutive non-terminals, since supporting them would require considerably more computation.
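The back-off-style weight can be sketched as follows: the φ-transition weight scales the target state's distribution so that the mass given to events not explicitly modelled at s fits exactly into the residual mass at s. For φ-transitions exiting F_E, the residual mass α + P(final | s) is known ahead of time, and only the explicit arcs of the sparse F_T target state need to be inspected at runtime. Names and numbers below are illustrative, not the actual implementation.

```python
def phi_weight(residual_mass_at_s: float,
               explicit_words_at_s: set,
               target_probs: dict) -> float:
    """Back-off-style weight: scale the phi target's distribution so the mass
    assigned to events not explicitly modelled at s fits the residual mass."""
    overlap = sum(p for w, p in target_probs.items() if w in explicit_words_at_s)
    return residual_mass_at_s / (1.0 - overlap)

# Exiting F_E: residual mass alpha + P(final | s) is known up front, and only
# the explicit arcs at the sparse F_T target state are inspected at runtime.
alpha, p_final = 0.1, 0.05                                     # assumed values
w = phi_weight(alpha + p_final, {"play"}, {"play": 0.4, "call": 0.3, "text": 0.3})
print(w)  # (0.1 + 0.05) / (1 - 0.4) = 0.25
```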
Conclusion
In this blog post, we looked at φ-RTN, a low-cost grammar-backed language model that interfaces with FST decoders and improves coverage for long-tail entities in VA ASR settings. φ-RTN avoids explicit non-terminal expansion at model creation time and keeps a single sub-graph per non-terminal, resulting in a small disk footprint. Despite its constraints, φ-RTN is complementary to language models trained on the expanded grammar, allowing us to benefit from both models and improving WER on long-tail entity queries by 10%. Future work will involve extending φ-RTN to non-English languages with specific grammatical agreement and morphology.