Google Neural Machine Translation (GNMT) was a neural machine translation (NMT) system developed by Google and introduced in November 2016 that used an artificial neural network to increase fluency and accuracy in Google Translate.[1][2][3][4] The neural network consisted of two main blocks, an encoder and a decoder, both of LSTM architecture with eight 1024-wide layers each, connected by a simple 1-layer, 1024-wide feedforward attention mechanism.[4][5] The total number of parameters has been variously described as over 160 million,[6] approximately 210 million,[7] 278 million[8] or 380 million.[9] The system used a WordPiece tokenizer and a beam search decoding strategy, and ran on Tensor Processing Units.
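The overall shape of such an encoder–decoder with attention can be sketched as below. This is a minimal illustration under stated assumptions, not Google's implementation: it assumes a 32,000-entry WordPiece vocabulary, uses an additive (feedforward) attention layer, and omits details reported for GNMT such as the bidirectional bottom encoder layer and residual connections between layers; all class and variable names are invented.

```python
import torch
import torch.nn as nn

# Assumed sizes for illustration; the 8-layer, 1024-wide figures follow
# the description above, the vocabulary size is a guess.
VOCAB_SIZE = 32000
HIDDEN = 1024
LAYERS = 8

class AdditiveAttention(nn.Module):
    """A single feedforward layer scoring the decoder state against each encoder output."""
    def __init__(self, hidden):
        super().__init__()
        self.proj = nn.Linear(2 * hidden, hidden)
        self.score = nn.Linear(hidden, 1)

    def forward(self, dec_state, enc_outputs):
        # dec_state: (batch, hidden); enc_outputs: (batch, src_len, hidden)
        expanded = dec_state.unsqueeze(1).expand(-1, enc_outputs.size(1), -1)
        energy = self.score(torch.tanh(self.proj(torch.cat([expanded, enc_outputs], dim=-1))))
        weights = torch.softmax(energy.squeeze(-1), dim=-1)     # (batch, src_len)
        context = torch.bmm(weights.unsqueeze(1), enc_outputs)  # (batch, 1, hidden)
        return context.squeeze(1)

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.tgt_embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.encoder = nn.LSTM(HIDDEN, HIDDEN, num_layers=LAYERS, batch_first=True)
        # Decoder input is the target embedding concatenated with the attention context.
        self.decoder = nn.LSTM(2 * HIDDEN, HIDDEN, num_layers=LAYERS, batch_first=True)
        self.attention = AdditiveAttention(HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, src_ids, tgt_ids):
        enc_outputs, state = self.encoder(self.src_embed(src_ids))
        logits = []
        for t in range(tgt_ids.size(1)):  # teacher-forced decoding during training
            dec_in = self.tgt_embed(tgt_ids[:, t])                # (batch, hidden)
            context = self.attention(state[0][-1], enc_outputs)   # attend over source
            step_in = torch.cat([dec_in, context], dim=-1).unsqueeze(1)
            dec_out, state = self.decoder(step_in, state)
            logits.append(self.out(dec_out.squeeze(1)))
        return torch.stack(logits, dim=1)  # (batch, tgt_len, vocab)

model = Seq2Seq()
src = torch.randint(0, VOCAB_SIZE, (2, 7))  # a batch of 2 token-ID sequences
tgt = torch.randint(0, VOCAB_SIZE, (2, 5))
print(model(src, tgt).shape)                # torch.Size([2, 5, 32000])
```

At inference time the decoder instead runs autoregressively, feeding back its own predictions, and GNMT searched over such continuations with beam search.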
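Beam search keeps the k highest-scoring partial translations at each decoding step rather than committing greedily to a single token. The following standalone sketch is hypothetical: `next_token_probs` stands in for a trained model's output distribution, and refinements described for GNMT, such as length normalization and a coverage penalty, are omitted.

```python
import math
from typing import Callable, List, Tuple

def beam_search(
    next_token_probs: Callable[[Tuple[int, ...]], List[Tuple[int, float]]],
    bos: int, eos: int, beam_width: int = 4, max_len: int = 50,
) -> Tuple[int, ...]:
    # Each hypothesis is (token sequence, cumulative log-probability).
    # Assumes next_token_probs returns a non-empty list of (token, prob > 0).
    beams: List[Tuple[Tuple[int, ...], float]] = [((bos,), 0.0)]
    finished: List[Tuple[Tuple[int, ...], float]] = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            for token, prob in next_token_probs(seq):
                candidates.append((seq + (token,), score + math.log(prob)))
        # Keep only the highest-scoring hypotheses; completed ones leave the beam.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates[:beam_width]:
            (finished if seq[-1] == eos else beams).append((seq, score))
        if not beams:
            break
    best = max(finished or beams, key=lambda c: c[1])
    return best[0]

# Toy distribution: after any prefix, prefer token 1, then EOS (token 2).
def toy_probs(prefix):
    return [(1, 0.6), (2, 0.4)] if len(prefix) < 3 else [(2, 1.0)]

print(beam_search(toy_probs, bos=0, eos=2))  # (0, 2)
```

Note that without length normalization the raw log-probability score favors shorter outputs, as the toy example shows; this is one motivation for the normalization used in GNMT's decoder.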
By 2020, the system had been replaced by another deep learning system based on a Transformer encoder and an RNN decoder.[10]
GNMT improved the quality of translation by applying an example-based machine translation (EBMT) method in which the system learns from millions of examples of language translation.[2] The proposed GNMT architecture was first tested on over a hundred languages supported by Google Translate.[2] With the large end-to-end framework, the system learned over time to create better, more natural translations.[1] GNMT attempted to translate whole sentences at a time, rather than just piece by piece.[1] The GNMT network could undertake interlingual machine translation by encoding the semantics of the sentence, rather than by memorizing phrase-to-phrase translations.[2][11]
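The multilingual extension of GNMT enabled this interlingua-like behavior, including zero-shot translation between language pairs never seen together in training, through a notably simple input convention: an artificial token naming the target language is prepended to the source sentence, and a single model is trained over many language pairs. A minimal sketch, with the token spelling assumed for illustration:

```python
# Hedged illustration of the multilingual target-token convention reported
# for GNMT; '<2es>'-style spellings are assumptions for illustration.
def to_multilingual_input(source_tokens, target_lang):
    """Prepend a token naming the target language, e.g. '<2es>' for Spanish."""
    return [f"<2{target_lang}>"] + source_tokens

# The same English sentence routed to two different target languages:
print(to_multilingual_input(["Hello", ",", "world", "!"], "es"))
# ['<2es>', 'Hello', ',', 'world', '!']
print(to_multilingual_input(["Hello", ",", "world", "!"], "ja"))
# ['<2ja>', 'Hello', ',', 'world', '!']
```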