Nitish Joshi

According to our database, Nitish Joshi authored at least 12 papers between 2019 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Personas as a Way to Model Truthfulness in Language Models.
CoRR, 2023

Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples.
CoRR, 2023

Testing the General Deductive Reasoning Capacity of Large Language Models Using OOD Examples.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems (NeurIPS 2023), 2023

Measuring Inductive Biases of In-Context Learning with Underspecified Demonstrations.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
Nuisances via Negativa: Adjusting for Spurious Correlations via Data Augmentation.
CoRR, 2022

QuALITY: Question Answering with Long Input Texts, Yes!
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Are All Spurious Features in Natural Language Alike? An Analysis through a Causal Lens.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

An Investigation of the (In)effectiveness of Counterfactually Augmented Data.
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022

2021
Experience of neural machine translation between Indian languages.
Machine Translation, 2021

2020
Coupled Training of Sequence-to-Sequence Models for Accented Speech Recognition.
Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020

2019
Cross-Lingual Training for Automatic Question Generation.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019

Explore, Propose, and Assemble: An Interpretable Model for Multi-Hop Reading Comprehension.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019
