Duygu Nur Yaldiz

ORCID: 0009-0008-1340-5978

According to our database, Duygu Nur Yaldiz authored at least 12 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
TruthTorchLM: A Comprehensive Library for Predicting Truthfulness in LLM Outputs.
CoRR, July 2025

Conformal Prediction Adaptive to Unknown Subpopulation Shifts.
CoRR, June 2025

Backdoor Defense in Diffusion Models via Spatial Attention Unlearning.
CoRR, April 2025

Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs.
Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, New Mexico, USA, April 29 - May 4, 2025

Un-considering Contextual Information: Assessing LLMs' Understanding of Indexical Elements.
Findings of the Association for Computational Linguistics, 2025

Reconsidering LLM Uncertainty Estimation Methods in the Wild.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
Do Not Design, Learn: A Trainable Scoring Function for Uncertainty Estimation in Generative LLMs.
CoRR, 2024

Predicting Uncertainty of Generative LLMs with MARS: Meaning-Aware Response Scoring.
Proceedings of the IEEE International Symposium on Information Theory, 2024

Federated Orthogonal Training: Mitigating Global Catastrophic Forgetting in Continual Federated Learning.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning.
Computer Vision - ECCV 2024, 2024

MARS: Meaning-Aware Response Scoring for Uncertainty Estimation in Generative LLMs.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Secure Federated Learning against Model Poisoning Attacks via Client Filtering.
CoRR, 2023

