Luca Di Liello

According to our database, Luca Di Liello authored at least 12 papers between 2020 and 2023.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2023
Semantic Loss Functions for Neuro-Symbolic Structured Prediction.
Proceedings of the Compendium of Neurosymbolic Artificial Intelligence, 2023

Structural Self-Supervised Objectives for Transformers.
CoRR, 2023

Context-Aware Transformer Pre-Training for Answer Sentence Selection.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2023

2022
TorchMetrics - Measuring Reproducibility in PyTorch.
J. Open Source Softw., 2022

Effective Pre-Training Objectives for Transformer-based Autoencoders.
CoRR, 2022

Paragraph-based Transformer Pre-training for Multi-Sentence Inference.
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022

Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection.
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022

Effective Pretraining Objectives for Transformer-based Autoencoders.
Proceedings of the Findings of the Association for Computational Linguistics: EMNLP 2022, 2022

2021
Efficient pre-training objectives for Transformers.
CoRR, 2021

Language Transfer for Identifying Diagnostic Paragraphs in Clinical Notes.
Proceedings of the Eighth Italian Conference on Computational Linguistics, 2021

2020
Efficient Generation of Structured Objects with Constrained Adversarial Networks.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Cross-Language Transformer Adaptation for Frequently Asked Questions.
Proceedings of the Seventh Italian Conference on Computational Linguistics, 2020

