Despite rapid improvements in machine learning technologies, real-time machine translation algorithms still make mistakes that humans would find unthinkable. A team of researchers from the Language Technologies Institute, New York University, and The University of Hong Kong recently published a paper demonstrating for the first time that certain algorithms can perform simultaneous speech translation substantially better than earlier approaches.
The paper, published Oct. 3, was featured in a recent article on Slator, which offers news and insights on demand drivers, funding, talent moves, technology and more in the translation and language technologies fields. In the article, LTI Assistant Professor Graham Neubig discussed the work, which relied on neural machine translation instead of more traditional segmentation-based algorithms for speech applications.
"In our experiments, we demonstrate for the first time that these algorithms are able to perform simultaneous translation very well, much better than previous segmentation-based algorithms," Neubig said. "We think that the main reason for this is that our method remembers all of the previously input words and considers all of them when choosing the next word to translate, which was not easy with previous segmentation-based methods."
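To make the idea in the quote concrete: in attention-style neural translation, every output word is chosen by scoring the decoder's current state against *all* source words heard so far, rather than only the words in the current segment. The sketch below is a minimal, generic illustration of that mechanism in plain Python; the vectors and dimensions are invented for demonstration and do not reproduce the authors' actual model.

```python
import math

def attention_context(decoder_state, encoder_states):
    """Score every source word seen so far against the decoder state,
    softmax the scores, and return a weighted sum (the context vector
    used to pick the next translated word)."""
    # One score per previously heard source word.
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    # Softmax over ALL past words, not just the latest segment.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [x / total for x in exps]
    # Weighted sum of the encoded source words.
    dim = len(decoder_state)
    return [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
            for i in range(dim)]

# Toy run: three source words heard so far, each encoded as a
# 3-dimensional vector (values made up for illustration).
encoder_states = [[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.5, 0.5, 1.0]]
decoder_state = [1.0, 0.0, 0.5]

context = attention_context(decoder_state, encoder_states)
print(len(context))  # 3
```

As more source words arrive, `encoder_states` simply grows by one row, so the model never discards earlier input, which is the property Neubig highlights above.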
For more information, including specifics on how neural machine translation works and was applied to the research, check out the Slator article.