Adhiguna Kuncoro

According to our database, Adhiguna Kuncoro authored at least 23 papers between 2016 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2024
DiPaCo: Distributed Path Composition.
CoRR, 2024

2023
DiLoCo: Distributed Low-Communication Training of Language Models.
CoRR, 2023

On "Scientific Debt" in NLP: A Case for More Rigour in Language Model Pre-Training Research.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

A Natural Bias for Language Generation Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023

2022
Transformer Grammars: Augmenting Transformer Language Models with Syntactic Inductive Biases at Scale.
Trans. Assoc. Comput. Linguistics, 2022

A Systematic Investigation of Commonsense Knowledge in Large Language Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2021
Scaling Language Models: Methods, Analysis & Insights from Training Gopher.
CoRR, 2021

A Systematic Investigation of Commonsense Understanding in Large Language Models.
CoRR, 2021

Pitfalls of Static Language Modelling.
CoRR, 2021

Mind the Gap: Assessing Temporal Generalization in Neural Language Models.
Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

IndoNLG: Benchmark and Resources for Evaluating Indonesian Natural Language Generation.
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021

2020
Syntactic Structure Distillation Pretraining for Bidirectional Encoders.
Trans. Assoc. Comput. Linguistics, 2020

2019
Unsupervised Recurrent Neural Network Grammars.
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019

Text Genre and Training Data Size in Human-like Parsing.
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, 2019

Scalable Syntax-Aware Language Models Using Knowledge Distillation.
Proceedings of the 57th Conference of the Association for Computational Linguistics, 2019

2018
Memory Architectures in Recurrent Neural Network Language Models.
Proceedings of the 6th International Conference on Learning Representations, 2018

Finding syntax in human encephalography with beam search.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018

LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better.
Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, 2018

2017
DyNet: The Dynamic Neural Network Toolkit.
CoRR, 2017

What Do Recurrent Neural Network Grammars Learn About Syntax?
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, 2017

2016
Dependency Parsing with LSTMs: An Empirical Evaluation.
CoRR, 2016

Recurrent Neural Network Grammars.
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016

Distilling an Ensemble of Greedy Dependency Parsers into One MST Parser.
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 2016
