Reasoning in NLP

Models that claim to understand language should also be able to demonstrate their ability to reason across various dimensions. My present goal is to evaluate, enhance, and explain the reasoning capabilities of such systems (language models).

!!NEW!! Reasoning in LLMs

Our group has invested significantly in advancing the reasoning abilities of LLMs in multi-hop settings. The following drafts are in progress: 1) DeSelect$^+$: Efficient Leaf Selection to Improve Entailment Tree Generation, 2) A Comprehensive Survey of the Logical Reasoning Abilities of Large Language Models, along with a benchmark, and 3) Multi-step Logical Reasoning under Incomplete Knowledge.

References

Natural Language Inference

Large pre-trained language models show high performance on popular NLP benchmarks (GLUE, SuperGLUE), while performing poorly on datasets with targeted linguistic and logical phenomena. We consolidate these interesting reasoning phenomena into a taxonomy of reasoning w.r.t. the NLI task. Our first work along this line, published in CoNLL 2020, showed that these models (BERT, RoBERTa) may not know how to perform certain types of reasoning, such as causal, numeric, spatial, and temporal reasoning, but they can identify the type of reasoning required for a new example.

In a follow-up, we adapted the CheckList methodology to create a large CheckList-NLI dataset that tests different reasoning capabilities individually yet collectively, including pragmatic ones. Through our test suite, we show that such a post-hoc evaluation provides a more comprehensive overview of the behavioral nature of language models. A thorough human study with Linguistic Olympiad participants shows that the behavioral summary leads to better explanations, and that RoBERTa's behavior is more predictable than BERT's. Currently, we are also exploring augmenting NLI datasets with verifiable proofs.
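To picture the template-driven test generation: a single template, parameterized over names and quantities, expands into a block of minimal NLI test cases that all probe one capability (here, numeric reasoning). This is a toy sketch; the names, template, and labeling rule below are illustrative, not drawn from the actual CheckList-NLI dataset.

```python
# Toy sketch of CheckList-style template expansion for NLI.
# Template and fillers are illustrative, not from the real dataset.
from itertools import product

def generate_numeric_tests():
    """Instantiate a numeric-reasoning template into (premise, hypothesis, label) triples."""
    names = ["John", "Maria"]
    counts = [(5, 3), (2, 7)]  # (count in premise, count in hypothesis)
    tests = []
    for name, (owned, claimed) in product(names, counts):
        premise = f"{name} has {owned} apples."
        hypothesis = f"{name} has {claimed} apples."
        label = "entailment" if owned == claimed else "contradiction"
        tests.append((premise, hypothesis, label))
    return tests

for p, h, label in generate_numeric_tests():
    print(f"P: {p}  H: {h}  -> {label}")
```

Running a model over such a block and aggregating pass/fail rates per template is what yields the per-capability behavioral summary described above.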

Summary and Extensions:

Enhancing NLI: Multi-hop, Causality and Counterfactuals & Reasoning in LLMs

As observed through the TaxiNLI family of work, language models struggle with many important reasoning types. With Deepanway Ghoshal and Monojit Choudhury, we explored a less annotation-intensive way to generate intermediate steps for complex reasoning examples in free-form NLI datasets. We observe that not only can we generate such multi-hop steps without end-to-end supervision, but the steps are also accurate: augmenting them directly improves an NLI model's predictive ability.
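One simple way to see how generated intermediate steps can be "augmented directly" is to attach them to the premise before feeding the example to an off-the-shelf NLI model. The data structure and example text below are hypothetical, not taken from the paper's data or method.

```python
# Toy sketch: a multi-hop NLI example with generated intermediate steps,
# augmented into the premise. Example text is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class MultiHopNLIExample:
    premise: str
    hypothesis: str
    steps: list = field(default_factory=list)  # generated intermediate inferences

    def augmented_premise(self) -> str:
        """Concatenate the generated steps onto the premise, one simple way
        to expose them to an existing NLI model without retraining."""
        return " ".join([self.premise] + self.steps)

ex = MultiHopNLIExample(
    premise="A man is playing a guitar on stage.",
    hypothesis="A person is performing music.",
    steps=["A man is a person.", "Playing a guitar is performing music."],
)
print(ex.augmented_premise())
```

With the intermediate hops spelled out in the premise, the final entailment decision becomes a sequence of single-hop inferences rather than one long leap.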

References


Previously, I was interested in mapping natural language to a formal language representation and reasoning with it. My proposed solutions for Question Answering and the Winograd Schema Challenge during my Ph.D. were motivated by the central idea of semantic parsing, followed by logical (or probabilistic logical) reasoning.

Semantic Parsing (K-Parser)

We (led by co-authors Arpit Sharma and Nguyen Vo) explored mapping natural language to a formal representation that enables logical reasoning. Through several papers (K-Parser IJCAI-15, K-Parser NAACL-15), we showed how such semantic parsing enables us to find event mentions and (even partially, but interpretably) solve Winograd Schema Challenge problems.
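To give a feel for what "mapping natural language to a formal representation" means here: a sentence becomes a small graph of an event node with typed edges to its participants, over which a logical reasoner can operate. The parse and relation names below are invented for illustration only; they are not K-Parser's actual output format.

```python
# Toy illustration of a sentence mapped to event-centric triples,
# loosely in the spirit of semantic parsers such as K-Parser.
# The relation vocabulary and parse are hypothetical.

def parse_toy(sentence: str):
    """Hand-coded 'parse' for one sentence, returning (head, relation, tail) triples."""
    if sentence == "John gave Mary a book":
        return [
            ("gave", "instance_of", "give"),  # event node typed by its verb class
            ("gave", "agent", "John"),        # who performed the action
            ("gave", "recipient", "Mary"),    # who received
            ("gave", "object", "book"),       # what was transferred
        ]
    raise ValueError("toy parser only covers one sentence")

for triple in parse_toy("John gave Mary a book"):
    print(triple)
```

Once the sentence is in this triple form, questions like "who received the book?" reduce to pattern matching and rule application over the graph, which is what makes the downstream reasoning interpretable.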

Somak Aditya
Assistant Professor

My research interests include integrating knowledge and enabling higher-order reasoning in AI.

Publications

Tricking LLMs into Disobedience: Understanding, Analyzing, and Preventing Jailbreaks | In LREC-COLING 2024 (In Print).

PDF nlp


Stuck in the Quicksand of Numeracy, Far from AGI Summit: Evaluating LLMs' Mathematical Competency through Ontology-guided Perturbations | In ArXiv 2024.

PDF Code nlp symbolicmath


Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models | In ArXiv 2023.

PDF Code Dataset nlp


Prover: Generating Intermediate Steps for NLI with Commonsense Knowledge Retrieval and Next-Step Prediction | In AACL-IJCNLP 2023 (Main).

PDF Code Poster nlp neurosymbolic


SYNC: A Structurally guided Hard Negative Curricula for Efficient Neural Code Search | In AACL-IJCNLP 2023 (Main).

Code nlp neurosymbolic


LoNLI: An Extensible Framework for Testing Diverse Logical Reasoning Capabilities for NLI | In LREV 2023 (In Print).

PDF Dataset nlp


A Robust Information-Masking Approach for Domain Counterfactual Generation | In ACL 2023 (Long Paper Findings).

PDF Code nlp


Multilingual CheckList: Generation and Evaluation | In AACL-IJCNLP 2022 (Long Paper Findings).

PDF nlp


Vector Space Interpolation for Query Expansion | In AACL-IJCNLP 2022 (Short Paper).

PDF nlp


LITMUS Predictor: An AI Assistant for Building Reliable, High-Performing and Fair Multilingual NLP Systems | In AAAI 2022 Demonstrations.

PDF nlp


Analyzing the Effects of Reasoning Types on Cross-Lingual Transfer Performance | In EMNLP 2021 MRL Workshop.

PDF Dataset nlp


Predicting joint intent-slot structure | In USPTO 2021.

PDF nlp


Trusting RoBERTa over BERT: Insights from CheckListing the Natural Language Inference Task | In ArXiv 2021.

PDF Dataset nlp


Creating a knowledge graph based on text-based knowledge corpora | In USPTO 2021.

PDF nlp


TaxiNLI: Taking a Ride up the NLU Hill | In CoNLL 2020.

PDF Dataset Slides nlp


Uncovering Relations for Marketing Knowledge Representation | In AAAI 2020, StarAI Workshop.

PDF nlp


Integrating Knowledge and Reasoning in Image Understanding | In IJCAI 2019.

PDF vision nlp


Spatial Knowledge Distillation to aid Visual Reasoning | In IEEE WACV 2019.

PDF vision nlp neurosymbolic


Explicit Reasoning over End-to-End Neural Architectures | In AAAI 2018.

PDF Code vision nlp neurosymbolic