Reasoning in NLP

Models that claim to understand language should also be able to demonstrate their ability to reason across various dimensions. My present goal is to evaluate, enhance, and explain the reasoning capabilities of such systems (language models).

Natural Language Inference

Large pre-trained language models achieve high performance on popular NLP benchmarks (GLUE, SuperGLUE), while failing on datasets that target specific linguistic and logical phenomena. We consolidate these interesting reasoning phenomena into a taxonomy of reasoning with respect to the NLI task. Our first work along this line, published in CoNLL 2020, showed that these models (BERT, RoBERTa) may not know how to perform certain types of reasoning, such as causal, numeric, spatial, and temporal reasoning; but they can identify the type of reasoning required for a new example.
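
To make this kind of evaluation concrete, here is a minimal sketch of probing an off-the-shelf NLI model on a targeted reasoning example. The checkpoint name ("roberta-large-mnli") is a publicly available MNLI model, and the premise-hypothesis pair is an illustrative numeric-reasoning probe of my own, not an actual item from our dataset.

```python
# Sketch: query an MNLI-finetuned RoBERTa on a numeric-reasoning NLI pair.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "The committee has seven members, four of whom are women."
hypothesis = "Fewer than half of the committee members are women."  # gold: contradiction

inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = logits.softmax(dim=-1).squeeze()

# roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
for label, p in zip(["contradiction", "neutral", "entailment"], probs):
    print(f"{label}: {p.item():.3f}")
```

Getting this pair right requires the model to compute that four out of seven is more than half; probes of this form expose whether a benchmark-strong model actually performs the underlying reasoning.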

In a follow-up adapting the CheckList methodology, we created a large CheckList-NLI dataset to test different reasoning capabilities individually yet collectively, including pragmatic ones. Through our test suite, we show that such a post-hoc evaluation provides a more comprehensive overview of the behavioral nature of language models. A thorough human study with Linguistic Olympiad participants shows that a behavioral summary leads to better explanations, and that RoBERTa's behavior is more predictable than BERT's. Currently, we are also exploring augmenting NLI datasets with verifiable proofs.
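
The sketch below illustrates the CheckList-style recipe in miniature (it is not the actual CheckList-NLI suite): expand a template into many premise-hypothesis pairs with a known gold label, then report the failure rate of a `predict` function assumed to wrap an NLI model, such as the one in the sketch above.

```python
# Sketch: a CheckList-style Minimum Functionality Test (MFT) for NLI.
from itertools import product

NAMES = ["Alice", "Bob", "Carol"]
CITIES = ["Paris", "Tokyo", "Delhi"]

def negation_mft():
    """Expand a template into trivially-labeled contradiction pairs."""
    for name, city in product(NAMES, CITIES):
        premise = f"{name} lives in {city}."
        hypothesis = f"{name} does not live in {city}."
        yield premise, hypothesis, "contradiction"

def failure_rate(predict, tests):
    """Fraction of templated tests the model gets wrong."""
    failures, total = 0, 0
    for premise, hypothesis, gold in tests:
        total += 1
        if predict(premise, hypothesis) != gold:
            failures += 1
    return failures / total

# Usage (predict is any function mapping a pair to an NLI label):
# rate = failure_rate(my_nli_predict, negation_mft())
```

Because each template isolates one capability, per-template failure rates compose into the kind of behavioral summary the human study evaluated.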

Summary and Extensions

Enhancing NLI: Multi-hop, Causality and Counterfactuals

As observed through the TaxiNLI family of work, language models struggle with many important reasoning types. With Deepanway Ghoshal and Monojit Choudhury, we explored a less annotation-intensive way to generate intermediate steps for complex reasoning examples in free-form NLI datasets. We observe that not only can we generate such multi-hop steps without end-to-end supervision, but the steps are also accurate, as augmenting them directly improves an NLI model's predictive ability.
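
One simple way such steps can be fed back into a model is sketched below: concatenate the generated steps with the premise before classification. The function names and the hard-coded toy step generator are hypothetical illustrations, not the paper's actual procedure.

```python
# Sketch: augment an NLI input with generated multi-hop intermediate steps.
from typing import Callable, List, Tuple

StepGenerator = Callable[[str, str], List[str]]

def augmented_nli_input(
    premise: str, hypothesis: str, generate_steps: StepGenerator
) -> Tuple[str, str]:
    """Append generated steps to the premise so a standard NLI classifier
    can condition on the explicit reasoning chain."""
    steps = generate_steps(premise, hypothesis)
    return " ".join([premise, *steps]), hypothesis

# Toy usage with a hard-coded step generator (purely illustrative):
toy_generator = lambda p, h: ["Four out of seven is more than half."]
print(augmented_nli_input(
    "The committee has seven members, four of whom are women.",
    "Fewer than half of the committee members are women.",
    toy_generator,
))
```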

Previously, I was interested in mapping natural language to formal language representations and reasoning over them. My proposed solutions for Question Answering and the Winograd Schema Challenge during my Ph.D. were motivated by the central idea of semantic parsing, followed by logical (or probabilistic logical) reasoning.

Semantic Parsing (K-Parser)

We (led by co-authors Arpit Sharma and Nguyen Vo) explored mapping natural language to a formal representation that enables logical reasoning. Through several papers (K-Parser IJCAI-15, K-Parser NAACL-15), we showed how such semantic parsing enables us to find event mentions and to solve Winograd Schema Challenge problems (even partially, but interpretably).
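
The following toy sketch (not K-Parser itself, which produces much richer graphs and uses systematically acquired knowledge) conveys the parse-then-reason idea: represent a Winograd sentence as event-argument triples, then resolve the pronoun with an explicit, inspectable rule.

```python
# Sketch: semantic parsing followed by rule-based pronoun resolution.
# Sentence: "The trophy didn't fit in the suitcase because it was too big."
parse = {
    ("fit", "agent"): "trophy",
    ("fit", "destination"): "suitcase",
    ("fit", "negated"): True,
    ("big", "subject"): "PRONOUN",
}

def resolve_pronoun(parse):
    """Hand-coded commonsense rule (for illustration only): if X does not
    fit in Y and something is 'too big', the too-big entity is X."""
    if parse.get(("fit", "negated")) and parse.get(("big", "subject")) == "PRONOUN":
        return parse[("fit", "agent")]
    return None

print(resolve_pronoun(parse))  # -> "trophy"
```

Because the rule is symbolic, the answer comes with an interpretable justification, which is exactly what end-to-end neural coreference lacks.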
