Ziqi Zhou

ORCID: 0009-0000-6785-7306

Affiliations:
  • Huazhong University of Science and Technology, Wuhan, China


According to our database, Ziqi Zhou authored at least 23 papers between 2022 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Towards Reliable Forgetting: A Survey on Machine Unlearning Verification, Challenges, and Future Directions.
CoRR, June, 2025

Spa-VLM: Stealthy Poisoning Attacks on RAG-based VLM.
CoRR, May, 2025

DarkHash: A Data-Free Backdoor Attack Against Deep Hashing.
IEEE Trans. Inf. Forensics Secur., 2025

BadRobot: Jailbreaking Embodied LLM Agents in the Physical World.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

PB-UAP: Hybrid Universal Adversarial Attack for Image Segmentation.
Proceedings of the 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2025

Test-Time Backdoor Detection for Object Detection Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

Breaking Barriers in Physical-World Adversarial Examples: Improving Robustness and Transferability via Robust Feature.
Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25), 2025

Detecting and Corrupting Convolution-based Unlearnable Examples.
Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25), 2025

NumbOD: A Spatial-Frequency Fusion Attack Against Object Detectors.
Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence (AAAI-25), 2025

2024
PB-UAP: Hybrid Universal Adversarial Attack For Image Segmentation.
CoRR, 2024

TrojanRobot: Backdoor Attacks Against Robotic Manipulation in the Physical World.
CoRR, 2024

BadRobot: Jailbreaking LLM-based Embodied AI in the Physical World.
CoRR, 2024

Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness.
CoRR, 2024

Securely Fine-tuning Pre-trained Encoders Against Adversarial Examples.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

DarkSAM: Fooling Segment Anything Model to Segment Nothing.
Proceedings of the Advances in Neural Information Processing Systems 38 (NeurIPS 2024), 2024

Unlearnable 3D Point Clouds: Class-wise Transformation Is All You Need.
Proceedings of the Advances in Neural Information Processing Systems 38 (NeurIPS 2024), 2024

Transferable Adversarial Facial Images for Privacy Protection.
Proceedings of the 32nd ACM International Conference on Multimedia (MM 2024), Melbourne, VIC, Australia, 2024

Detector Collapse: Backdooring Object Detection to Catastrophic Overload or Blindness in the Physical World.
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024

ECLIPSE: Expunging Clean-Label Indiscriminate Poisons via Sparse Diffusion Purification.
Proceedings of the Computer Security - ESORICS 2024, 2024

2023
Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations.
CoRR, 2023

AdvCLIP: Downstream-agnostic Adversarial Examples in Multimodal Contrastive Learning.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

Downstream-agnostic Adversarial Examples.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
BadHash: Invisible Backdoor Attacks against Deep Hashing with Clean Label.
Proceedings of the 30th ACM International Conference on Multimedia (MM '22), Lisboa, Portugal, 2022
