Zongru Wu
ORCID: 0000-0002-5387-7821
According to our database, Zongru Wu authored at least 17 papers between 2023 and 2025.
Bibliography
2025
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review.
IEEE Trans. Neural Networks Learn. Syst., August, 2025
IEEE Internet Things J., June, 2025
Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents.
CoRR, May, 2025
CoRR, May, 2025
Smoothing Grounding and Reasoning for MLLM-Powered GUI Agents with Query-Oriented Pivot Tasks.
CoRR, March, 2025
Investigating the Adaptive Robustness with Knowledge Conflicts in LLM-based Multi-Agent Systems.
CoRR, February, 2025
SynGhost: Invisible and Universal Task-agnostic Backdoor Attack via Syntactic Transfer.
Proceedings of the Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, New Mexico, USA, April 29 - May 4, 2025
Gracefully Filtering Backdoor Samples for Generative Large Language Models without Retraining.
Proceedings of the 31st International Conference on Computational Linguistics, 2025
Proceedings of the Findings of the Association for Computational Linguistics, 2025
2024
CoRR, 2024
TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models.
CoRR, 2024
MKF-ADS: Multi-Knowledge Fusion Based Self-supervised Anomaly Detection System for Control Area Network.
CoRR, 2024
Syntactic Ghost: An Imperceptible General-purpose Backdoor Attacks on Pre-trained Language Models.
CoRR, 2024
Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024
2023
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review.
CoRR, 2023