Kangjie Chen

ORCID: 0000-0001-5099-7054

According to our database, Kangjie Chen authored at least 31 papers between 2017 and 2025.

Bibliography

2025
Quantifying and Alleviating Co-Adaptation in Sparse-View 3D Gaussian Splatting.
CoRR, August 2025

Towards Effective Prompt Stealing Attack against Text-to-Image Diffusion Models.
CoRR, August 2025

Impact-driven Context Filtering For Cross-file Code Completion.
CoRR, August 2025

Coward: Toward Practical Proactive Federated Backdoor Defense via Collision-based Watermark.
CoRR, August 2025

BadLingual: A Novel Lingual-Backdoor Attack against Large Language Models.
CoRR, May 2025

Picky LLMs and Unreliable RMs: An Empirical Study on Safety Alignment after Instruction Tuning.
CoRR, February 2025

HRAvatar: High-Quality and Relightable Gaussian Head Avatar.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

2024
Boosting Black-Box Attack to Deep Neural Networks With Conditional Diffusion Models.
IEEE Trans. Inf. Forensics Secur., 2024

Towards Action Hijacking of Large Language Model-based Agent.
CoRR, 2024

SLGaussian: Fast Language Gaussian Splatting in Sparse Views.
CoRR, 2024

ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users.
CoRR, 2024

MIP: CLIP-based Image Reconstruction from PEFT Gradients.
CoRR, 2024

ART: Automatic Red-teaming for Text-to-Image Models to Protect Benign Users.
Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

EvilEdit: Backdooring Text-to-Image Diffusion Models in One Second.
Proceedings of the 32nd ACM International Conference on Multimedia, MM 2024, Melbourne, VIC, Australia, 28 October 2024, 2024

BadEdit: Backdooring Large Language Models by Model Editing.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Protecting Confidential Virtual Machines from Hardware Performance Counter Side Channels.
Proceedings of the 54th Annual IEEE/IFIP International Conference on Dependable Systems and Networks, 2024

2023
ADS-Lead: Lifelong Anomaly Detection in Autonomous Driving Systems.
IEEE Trans. Intell. Transp. Syst., January 2023

Omnipotent Adversarial Training for Unknown Label-noisy and Imbalanced Datasets.
CoRR, 2023

Extracting Cloud-based Model with Prior Knowledge.
CoRR, 2023

GuardHFL: Privacy Guardian for Heterogeneous Federated Learning.
Proceedings of the International Conference on Machine Learning, 2023

Clean-image Backdoor: Attacking Multi-label Models with Poisoned Labels Only.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Multi-target Backdoor Attacks for Code Pre-trained Models.
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023

2022
ShiftNAS: Towards Automatic Generation of Advanced Multiplication-Less Neural Networks.
CoRR, 2022

BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
Vulnerability Assessment of Deep Reinforcement Learning Models for Power System Topology Optimization.
IEEE Trans. Smart Grid, 2021

A Unified Anomaly Detection Methodology for Lane-Following of Autonomous Driving Systems.
Proceedings of the 2021 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom), New York City, NY, USA, September 30, 2021

Temporal Watermarks for Deep Reinforcement Learning Models.
Proceedings of the AAMAS '21: 20th International Conference on Autonomous Agents and Multiagent Systems, 2021

Stealing Deep Reinforcement Learning Models for Fun and Profit.
Proceedings of the ASIA CCS '21: ACM Asia Conference on Computer and Communications Security, 2021

2020
Stealing Deep Reinforcement Learning Models for Fun and Profit.
CoRR, 2020

Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2017
Defending Against Man-In-The-Middle Attack in Repeated Games.
Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017

