Peng Ding

Orcid: 0000-0001-7814-6606

Affiliations:
  • Nanjing University, National Key Laboratory for Novel Software Technology, Nanjing, China
  • Yunnan University, School of Information Science and Engineering, Yunnan, China


According to our database, Peng Ding has authored at least 8 papers between 2017 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.


Bibliography

2025
Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement.
CoRR, May, 2025

Why Not Act on What You Know? Unleashing Safety Potential of LLMs via Self-Aware Guard Enhancement.
Findings of the Association for Computational Linguistics, 2025

2024
A Wolf in Sheep's Clothing: Generalized Nested Jailbreak Prompts can Fool Large Language Models Easily.
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024

Hallu-PI: Evaluating Hallucination in Multi-modal Large Language Models within Perturbed Inputs.
Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 28 October 2024, 2024

2018
An Attentive Neural Sequence Labeling Model for Adverse Drug Reactions Mentions Extraction.
IEEE Access, 2018

YNU Deep at SemEval-2018 Task 12: A BiLSTM Model with Neural Attention for Argument Reasoning Comprehension.
Proceedings of The 12th International Workshop on Semantic Evaluation, 2018

YNU_Deep at SemEval-2018 Task 11: An Ensemble of Attention-based BiLSTM Models for Machine Comprehension.
Proceedings of The 12th International Workshop on Semantic Evaluation, 2018

2017
YNUDLG at IJCNLP-2017 Task 5: A CNN-LSTM Model with Attention for Multi-choice Question Answering in Examinations.
Proceedings of the IJCNLP 2017, Shared Tasks, Taipei, Taiwan, November 27, 2017
