Shengwei An

According to our database, Shengwei An authored at least 27 papers between 2015 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
LOTUS: Evasive and Resilient Backdoor Attacks through Sub-Partitioning.
CoRR, 2024

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia.
CoRR, 2024

Inspecting Prediction Confidence for Detecting Black-Box Backdoor Attacks.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift.
CoRR, 2023

Hard-label Black-box Universal Adversarial Patch Attack.
Proceedings of the 32nd USENIX Security Symposium, 2023

PELICAN: Exploiting Backdoors of Naturally Trained Deep Learning Models In Binary Code Analysis.
Proceedings of the 32nd USENIX Security Symposium, 2023

ImU: Physical Impersonating Attack for Face Recognition System with Natural Style Changes.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

Django: Detecting Trojans in Object Detection Models via Gaussian Focus Calibration.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense.
Proceedings of the 30th Annual Network and Distributed System Security Symposium, 2023

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

MEDIC: Remove Model Backdoors via Importance Driven Cloning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Backdoor Vulnerabilities in Normally Trained Deep Learning Models.
CoRR, 2022

Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer.
CoRR, 2022

DECK: Model Hardening for Defending Pervasive Backdoors.
CoRR, 2022

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense.
CoRR, 2022

Model Orthogonalization: Class Distance Hardening in Neural Networks for Better Security.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

Piccolo: Exposing Complex Backdoors in NLP Transformer Models.
Proceedings of the 43rd IEEE Symposium on Security and Privacy, 2022

MIRROR: Model Inversion for Deep Learning Network with High Fidelity.
Proceedings of the 29th Annual Network and Distributed System Security Symposium, 2022

Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense.
Proceedings of the International Conference on Machine Learning, 2022

An Invisible Black-Box Backdoor Attack Through Frequency Domain.
Proceedings of the Computer Vision - ECCV 2022, 2022

Better Trigger Inversion Optimization in Backdoor Scanning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

2021
Backdoor Attack through Frequency Domain.
CoRR, 2021

Backdoor Scanning for Deep Neural Networks through K-Arm Optimization.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Augmented example-based synthesis using relational perturbation properties.
Proc. ACM Program. Lang., 2020

2016
Verifying Distributed Controllers with Local Invariants.
Proceedings of the 2016 IEEE International Conference on Software Quality, Reliability and Security (QRS), 2016

2015
An Event-Based Formal Framework for Dynamic Software Update.
Proceedings of the 2015 IEEE International Conference on Software Quality, Reliability and Security (QRS), 2015
