Siyuan Cheng

ORCID: 0009-0006-0903-6917

Affiliations:
  • Purdue University, Department of Computer Science, West Lafayette, IN, USA


According to our database, Siyuan Cheng authored at least 39 papers between 2020 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
CodeMirage: A Multi-Lingual Benchmark for Detecting AI-Generated and Paraphrased Source Code from Production-Level LLMs.
CoRR, June 2025

SOFT: Selective Data Obfuscation for Protecting LLM Fine-tuning against Membership Inference Attacks.
CoRR, June 2025

BAIT: Large Language Model Backdoor Scanning by Inverting Attack Target.
Proceedings of the IEEE Symposium on Security and Privacy, 2025

A Systematic Threat Modeling of LLM Applications.
Proceedings of the 33rd ACM International Conference on the Foundations of Software Engineering, 2025

Unleashing the Power of Generative Model in Recovering Variable Names from Stripped Binary.
Proceedings of the 32nd Annual Network and Distributed System Security Symposium, 2025

CENSOR: Defense Against Gradient Inversion via Orthogonal Subspace Bayesian Sampling.
Proceedings of the 32nd Annual Network and Distributed System Security Symposium, 2025

CO-SPY: Combining Semantic and Pixel Features to Detect Synthetic Images by AI.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

System Prompt Hijacking via Permutation Triggers in LLM Supply Chains.
Findings of the Association for Computational Linguistics, 2025

2024
ASPIRER: Bypassing System Prompts With Permutation-based Backdoors in LLMs.
CoRR, 2024

DIGIMON: Diagnosis and Mitigation of Sampling Skew for Reinforcement Learning based Meta-Planner in Robot Navigation.
CoRR, 2024

Rapid Optimization for Jailbreaking LLMs via Subconscious Exploitation and Echopraxia.
CoRR, 2024

Opening A Pandora's Box: Things You Should Know in the Era of Custom GPTs.
CoRR, 2024

Rethinking the Invisible Protection against Unauthorized Image Usage in Stable Diffusion.
Proceedings of the 33rd USENIX Security Symposium, 2024

On Large Language Models' Resilience to Coercive Interrogation.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Exploring the Orthogonality and Linearity of Backdoor Attacks.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

OdScan: Backdoor Scanning for Object Detection Models.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

BiScope: AI-generated Text Detection by Checking Memorization of Preceding Tokens.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

ROCAS: Root Cause Analysis of Autonomous Driving Accidents via Cyber-Physical Co-mutation.
Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 2024

UNIT: Backdoor Mitigation via Automated Neural Distribution Tightening.
Computer Vision - ECCV 2024, 2024

Lotus: Evasive and Resilient Backdoor Attacks through Sub-Partitioning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Exploring Inherent Backdoors in Deep Learning Models.
Proceedings of the Annual Computer Security Applications Conference, 2024

Elijah: Eliminating Backdoors Injected in Diffusion Models via Distribution Shift.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
Make Them Spill the Beans! Coercive Knowledge Extraction from (Production) LLMs.
CoRR, 2023

LmPa: Improving Decompilation by Synergy of Large Language Model and Program Analysis.
CoRR, 2023

Hard-label Black-box Universal Adversarial Patch Attack.
Proceedings of the 32nd USENIX Security Symposium, 2023

ImU: Physical Impersonating Attack for Face Recognition System with Natural Style Changes.
Proceedings of the 44th IEEE Symposium on Security and Privacy, 2023

PEM: Representing Binary Program Semantics for Similarity Analysis via a Probabilistic Execution Model.
Proceedings of the 31st ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2023

Django: Detecting Trojans in Object Detection Models via Gaussian Focus Calibration.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

BEAGLE: Forensics of Deep Learning Backdoor Attack for Better Defense.
Proceedings of the 30th Annual Network and Distributed System Security Symposium, 2023

Improving Binary Code Similarity Transformer Models by Semantics-Driven Instruction Deemphasis.
Proceedings of the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2023

FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

MEDIC: Remove Model Backdoors via Importance Driven Cloning.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Detecting Backdoors in Pre-trained Encoders.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
Backdoor Vulnerabilities in Normally Trained Deep Learning Models.
CoRR, 2022

DECK: Model Hardening for Defending Pervasive Backdoors.
CoRR, 2022

2021
Backdoor Scanning for Deep Neural Networks through K-Arm Optimization.
Proceedings of the 38th International Conference on Machine Learning, 2021

Towards Feature Space Adversarial Attack by Style Perturbation.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

Deep Feature Space Trojan Attack of Neural Networks by Controlled Detoxification.
Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence, 2021

2020
Towards Feature Space Adversarial Attack.
CoRR, 2020
