Zongru Wu

ORCID: 0000-0002-5387-7821

According to our database, Zongru Wu authored at least 17 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review.
IEEE Trans. Neural Networks Learn. Syst., August, 2025

Transferable and Robust Dynamic Adversarial Attack Against Object Detection Models.
IEEE Internet Things J., June, 2025

On the Adaptive Psychological Persuasion of Large Language Models.
CoRR, June, 2025

Hidden Ghost Hand: Unveiling Backdoor Vulnerabilities in MLLM-Powered Mobile GUI Agents.
CoRR, May, 2025

GEM: Gaussian Embedding Modeling for Out-of-Distribution Detection in GUI Agents.
CoRR, May, 2025

OS-Kairos: Adaptive Interaction for MLLM-Powered GUI Agents.
CoRR, March, 2025

Smoothing Grounding and Reasoning for MLLM-Powered GUI Agents with Query-Oriented Pivot Tasks.
CoRR, March, 2025

Investigating the Adaptive Robustness with Knowledge Conflicts in LLM-based Multi-Agent Systems.
CoRR, February, 2025

SynGhost: Invisible and Universal Task-agnostic Backdoor Attack via Syntactic Transfer.
Findings of the Association for Computational Linguistics: NAACL 2025, Albuquerque, New Mexico, USA, April 2025

Gracefully Filtering Backdoor Samples for Generative Large Language Models without Retraining.
Proceedings of the 31st International Conference on Computational Linguistics, 2025

OS-Kairos: Adaptive Interaction for MLLM-Powered GUI Agents.
Findings of the Association for Computational Linguistics, 2025

2024
Transferring Backdoors between Large Language Models by Knowledge Distillation.
CoRR, 2024

TrojanRAG: Retrieval-Augmented Generation Can Be Backdoor Driver in Large Language Models.
CoRR, 2024

MKF-ADS: Multi-Knowledge Fusion Based Self-supervised Anomaly Detection System for Control Area Network.
CoRR, 2024

Syntactic Ghost: An Imperceptible General-purpose Backdoor Attacks on Pre-trained Language Models.
CoRR, 2024

Acquiring Clean Language Models from Backdoor Poisoned Datasets by Downscaling Frequency Space.
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2024

2023
Backdoor Attacks and Countermeasures in Natural Language Processing Models: A Comprehensive Security Review.
CoRR, 2023

