Friday, October 14, 2016 - 10:30am to 12:00pm
Location: 100 Porter Hall
Speaker: Dipanjan Das, Google Research
Two Case Studies in Semantic Inference
ABSTRACT
I will talk about two problems in semantic inference. First, I will describe a method for parsing natural language questions into logical forms that can be mapped to information stored in structured knowledge bases. The method relies on a deterministic mapping of syntactic dependency trees to logical forms, which can in turn be used for knowledge base inference. Our approach is inherently multilingual, relying only on automatic dependency parses. The second problem I will focus on is natural language inference (NLI), where the goal is to determine whether two sentences entail each other, contradict each other, or are unrelated. I will present a new "decomposable neural attention model" that is easily parallelizable on modern computer architectures such as GPUs and reaches state-of-the-art results on a recent NLI dataset while using almost an order of magnitude fewer model parameters than previous work.
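To make the first idea concrete, here is a minimal sketch of what a deterministic mapping from a dependency tree to a logical form can look like. The toy parse, the edge relations (`nsubj`, `dobj`), and the mapping rule are all illustrative assumptions, not the speaker's actual rules:

```python
# A toy dependency parse of "Who directed Inception?" as
# (child, relation, head) edges.
tokens = ["Who", "directed", "Inception"]
edges = [("Who", "nsubj", "directed"), ("Inception", "dobj", "directed")]

def to_logical_form(tokens, edges):
    """Deterministically map a single-verb dependency tree to a
    predicate(subject, object) logical form, with wh-words as variables."""
    # The root verb is the one token that never appears as a child.
    children = {child for child, _, _ in edges}
    root = next(t for t in tokens if t not in children)
    # Collect the root's arguments by relation; wh-words become variables.
    args = {rel: (f"?{child.lower()}" if child.lower() in {"who", "what"} else child)
            for child, rel, head in edges if head == root}
    return f"{root}({args.get('nsubj', '_')}, {args.get('dobj', '_')})"

print(to_logical_form(tokens, edges))  # directed(?who, Inception)
```

Because the rules operate only on the dependency structure, the same mapping applies to any language for which an automatic dependency parser is available, which is what makes the approach inherently multilingual.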
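For the second problem, the parameter economy of a decomposable attention model comes from replacing sequential encoders with per-token attention followed by small feed-forward steps, all of which parallelize well on GPUs. The NumPy sketch below shows the attend / compare / aggregate pattern only; the dimensions, random weights, and use of plain concatenation in place of learned feed-forward networks are illustrative assumptions, not the model presented in the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy embeddings for a premise (5 tokens) and a hypothesis (4 tokens).
d = 8
premise = rng.normal(size=(5, d))
hypothesis = rng.normal(size=(4, d))

# Attend: soft-align every premise token with every hypothesis token.
scores = premise @ hypothesis.T                # 5 x 4 alignment scores
beta = softmax(scores, axis=1) @ hypothesis    # hypothesis summary per premise token
alpha = softmax(scores.T, axis=1) @ premise    # premise summary per hypothesis token

# Compare: pair each token with its aligned summary (the real model feeds
# these pairs through a small learned feed-forward network).
v1 = np.concatenate([premise, beta], axis=1)       # 5 x 2d
v2 = np.concatenate([hypothesis, alpha], axis=1)   # 4 x 2d

# Aggregate: sum over tokens and classify into entailment / contradiction /
# neutral (random weights stand in for learned classifier parameters).
W = rng.normal(size=(4 * d, 3))
logits = np.concatenate([v1.sum(axis=0), v2.sum(axis=0)]) @ W
prediction = int(np.argmax(logits))
print(prediction)
```

Every step is a matrix multiplication or an elementwise operation over token pairs, so nothing forces a left-to-right scan of either sentence, which is what makes the decomposition easy to parallelize.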
BIO
Dipanjan Das is a Senior Research Scientist at Google focusing on learning semantic representations of language. He received a Ph.D. in 2012 from the Language Technologies Institute, School of Computer Science at Carnegie Mellon University. Before that, he completed an undergraduate degree in Computer Science and Engineering in 2005 from the Indian Institute of Technology, Kharagpur. His work on multilingual learning of sequence models received the best paper award at ACL 2011 and a best paper award honorable mention at EMNLP 2013.