A student in the LTI’s Master of Language Technologies program was recently honored with the Outstanding Paper Award at the 2017 Conference of the European Chapter of the Association for Computational Linguistics (EACL 2017). Adhiguna Kuncoro’s paper “What Do Recurrent Neural Network Grammars Learn About Syntax?” was one of just three of the 119 accepted long papers to receive the honor at EACL 2017, one of the world’s most prestigious conferences on natural language processing.
“We are very excited to win this award and present the paper in Valencia,” Kuncoro said of the honor, announced ahead of the April 3-7 conference in Valencia, Spain.
Kuncoro’s paper addresses one of the central mysteries surrounding neural network models in machine learning: although they are often uncannily effective when applied to the challenges of natural language processing, how they produce those results is not well understood.
“Many of these models are somewhat ‘opaque’ in that it's difficult to tell why they are doing such a good job, and whether what they are learning aligns with our linguistic intuitions,” explained LTI professor Dr. Graham Neubig, who worked with Kuncoro on the paper. “Adhi's paper takes a step towards solving this problem by designing a model that is more conducive to inspection of what it is learning, and we find that the model is in fact learning some things that align with what we know about syntax, and also some things that are different from our intuitions, but interesting nonetheless.”
Despite the achievement, Kuncoro emphasized that he and his team aren’t satisfied yet: “We’re definitely looking forward to designing more interpretable variants of neural networks for other NLP tasks as well.”