Mirac Suzgun

According to our database, Mirac Suzgun authored at least 18 papers between 2018 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Meta-Prompting: Enhancing Language Models with Task-Agnostic Scaffolding.
CoRR, 2024

Large Legal Fictions: Profiling Legal Hallucinations in Large Language Models.
CoRR, 2024

Do Language Models Know When They're Hallucinating References?
Findings of the Association for Computational Linguistics: EACL 2024, 2024

2023
A Benchmark for Learning to Translate a New Language from One Grammar Book.
CoRR, 2023

Safety-Tuned LLaMAs: Lessons From Improving the Safety of Large Language Models that Follow Instructions.
CoRR, 2023

string2string: A Modern Python Library for String-to-String Algorithms.
CoRR, 2023

The Harvard USPTO Patent Dataset: A Large-Scale, Well-Structured, and Multi-Purpose Corpus of Patent Applications.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems (NeurIPS 2023), 2023

Language models are multilingual chain-of-thought reasoners.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

When Do Pre-Training Biases Propagate to Downstream Tasks? A Case Study in Text Summarization.
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, 2023

Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

Follow the Wisdom of the Crowd: Effective Text Generation via Minimum Bayes Risk Decoding.
Findings of the Association for Computational Linguistics: ACL 2023, 2023

2022
Holistic Evaluation of Language Models.
CoRR, 2022

Scaling Instruction-Finetuned Language Models.
CoRR, 2022

Monte Carlo Tree Search for Interpreting Stress in Natural Language.
Proceedings of the Second Workshop on Language Technology for Equality, 2022

Prompt-and-Rerank: A Method for Zero-Shot and Few-Shot Arbitrary Textual Style Transfer with Small Language Models.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

2019
Memory-Augmented Recurrent Neural Networks Can Learn Generalized Dyck Languages.
CoRR, 2019

LSTM Networks Can Perform Dynamic Counting.
CoRR, 2019

2018
On Evaluating the Generalization of LSTM Models in Formal Languages.
CoRR, 2018
