Srinivasan Iyer

Affiliations:
  • Meta AI Research (formerly Facebook AI Research), Seattle, WA, USA
  • University of Washington, Paul G. Allen Computer Science and Engineering, Seattle, WA, USA (PhD 2019)


According to our database, Srinivasan Iyer has authored at least 34 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Instruction-tuned Language Models are Better Knowledge Learners.
CoRR, 2024

2023
LIMA: Less Is More for Alignment.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

LEVER: Learning to Verify Language-to-Code Generation with Execution.
Proceedings of the International Conference on Machine Learning, 2023

Demystifying Prompts in Language Models via Perplexity Estimation.
Findings of the Association for Computational Linguistics: EMNLP 2023, 2023

Methods for Measuring, Updating, and Visualizing Factual Beliefs in Language Models.
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023

Complementary Explanations for Effective In-Context Learning.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
OPT-IML: Scaling Language Model Instruction Meta Learning through the Lens of Generalization.
CoRR, 2022

QUASER: Question Answering with Scalable Extractive Rationalization.
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '22), 2022

AnswerSumm: A Manually-Curated Dataset and Pipeline for Answer Summarization.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Improving In-Context Few-Shot Learning via Self-Supervised Training.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022


ToKen: Task Decomposition and Knowledge Infusion for Few-Shot Hate Speech Detection.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
Efficient Large Scale Language Modeling with Mixtures of Experts.
CoRR, 2021

Do Language Models Have Beliefs? Methods for Detecting, Updating, and Visualizing Model Beliefs.
CoRR, 2021

EASE: Extractive-Abstractive Summarization with Explanations.
CoRR, 2021

Multi-Perspective Abstractive Answer Summarization.
CoRR, 2021

RECONSIDER: Improved Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering.
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2021

Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval.
Proceedings of the 9th International Conference on Learning Representations, 2021

DeLighT: Deep and Light-weight Transformer.
Proceedings of the 9th International Conference on Learning Representations, 2021

FiD-Ex: Improving Sequence-to-Sequence Models for Extractive Rationale Generation.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

Do Explanations Help Users Detect Errors in Open-Domain QA? An Evaluation of Spoken vs. Visual Explanations.
Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

2020
Human Evaluation of Spoken vs. Visual Explanations for Open-Domain QA.
CoRR, 2020

RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering.
CoRR, 2020

DeLighT: Very Deep and Light-weight Transformer.
CoRR, 2020

Efficient One-Pass End-to-End Entity Linking for Questions.
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020

2019
Learning to Map Natural Language to General Purpose Source Code.
PhD thesis, 2019

Learning Programmatic Idioms for Scalable Semantic Parsing.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

JuICe: A Large Scale Distantly Supervised Dataset for Open Domain Context-based Code Generation.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

2018
Learning to Map Context-Dependent Sentences to Executable Formal Queries.
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018

Mapping Language to Code in Programmatic Context.
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018

Neural Semantic Parsing.
Proceedings of ACL 2018, Tutorial Abstracts, 2018

2017
Neural AMR: Sequence-to-Sequence Models for Parsing and Generation.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017

Learning a Neural Semantic Parser from User Feedback.
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, 2017

2016
Summarizing Source Code using a Neural Attention Model.
Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, 2016
