Yiming Li

ORCID: 0000-0002-2258-265X

Affiliations:
  • Zhejiang University, ZJU-HIC, Hangzhou Global Scientific and Technological Innovation Center, China
  • Tsinghua University, Computer Science and Technology, Tsinghua Shenzhen International Graduate School, China (PhD 2013)


According to our database, Yiming Li authored at least 110 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.


Bibliography

2025
Cowpox: Towards the Immunity of VLM-based Multi-Agent Systems.
CoRR, August, 2025

Towards Effective Prompt Stealing Attack against Text-to-Image Diffusion Models.
CoRR, August, 2025

Coward: Toward Practical Proactive Federated Backdoor Defense via Collision-based Watermark.
CoRR, August, 2025

BadReasoner: Planting Tunable Overthinking Backdoors into Large Reasoning Models for Fun or Profit.
CoRR, July, 2025

DREAM: Scalable Red Teaming for Text-to-Image Generative Systems via Distribution Modeling.
CoRR, July, 2025

Towards Resilient Safety-driven Unlearning for Diffusion Models against Downstream Fine-tuning.
CoRR, July, 2025

BURN: Backdoor Unlearning via Adversarial Boundary Analysis.
CoRR, July, 2025

DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective.
CoRR, July, 2025

Rethinking Data Protection in the (Generative) Artificial Intelligence Era.
CoRR, July, 2025

Holmes: Towards Effective and Harmless Model Ownership Verification to Personalized Large Vision Models via Decoupling Common Features.
CoRR, July, 2025

MOVE: Effective and Harmless Ownership Verification via Embedded External Features.
IEEE Trans. Pattern Anal. Mach. Intell., June, 2025

CertDW: Towards Certified Dataset Ownership Verification via Conformal Prediction.
CoRR, June, 2025

Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment.
CoRR, May, 2025

BitHydra: Towards Bit-flip Inference Cost Attack against Large Language Models.
CoRR, May, 2025

Towards Dataset Copyright Evasion Attack against Personalized Text-to-Image Diffusion Models.
CoRR, May, 2025

Inception: Jailbreak the Memory Mechanism of Text-to-Image Generation Systems.
CoRR, April, 2025

PT-Mark: Invisible Watermarking for Text-to-image Diffusion Models via Semantic-aware Pivotal Tuning.
CoRR, April, 2025

CBW: Towards Dataset Ownership Verification for Speaker Verification via Clustering-based Backdoor Watermarking.
CoRR, March, 2025

Towards Label-Only Membership Inference Attack against Pre-trained Large Language Models.
CoRR, February, 2025

Towards Copyright Protection for Knowledge Bases of Retrieval-augmented Language Models via Ownership Verification with Reasoning.
CoRR, February, 2025

Safety at Scale: A Comprehensive Survey of Large Model Safety.
CoRR, February, 2025

FIT-Print: Towards False-claim-resistant Model Ownership Verification via Targeted Fingerprint.
CoRR, January, 2025

ARMOR: Shielding Unlearnable Examples against Data Augmentation.
CoRR, January, 2025

PointNCBW: Toward Dataset Ownership Verification for Point Clouds via Negative Clean-Label Backdoor Watermark.
IEEE Trans. Inf. Forensics Secur., 2025

CoAS: Composite Audio Steganography Based on Text and Speech Synthesis.
IEEE Trans. Inf. Forensics Secur., 2025

FLARE: Toward Universal Dataset Purification Against Backdoor Attacks.
IEEE Trans. Inf. Forensics Secur., 2025

Towards Sample-Specific Backdoor Attack With Clean Labels via Attribute Trigger.
IEEE Trans. Dependable Secur. Comput., 2025

Evading backdoor defenses: Concealing genuine backdoors through scapegoat strategy.
Comput. Secur., 2025

Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models.
Proceedings of the IEEE Symposium on Security and Privacy, 2025

Prompt Inversion Attack Against Collaborative Inference of Large Language Models.
Proceedings of the IEEE Symposium on Security and Privacy, 2025

Explanation as a Watermark: Towards Harmless and Multi-bit Model Ownership Verification via Watermarking Feature Attribution.
Proceedings of the 32nd Annual Network and Distributed System Security Symposium, 2025

A Benchmark for Semantic Sensitive Information in LLMs Outputs.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Probe before You Talk: Towards Black-box Defense against Backdoor Unalignment for Large Language Models.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

VideoShield: Regulating Diffusion-based Video Generation Models via Watermarking.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

REFINE: Inversion-Free Backdoor Defense via Model Reprogramming.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

SleeperMark: Towards Robust Watermark against Fine-Tuning Text-to-image Diffusion Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

Understanding the Dark Side of LLMs' Intrinsic Self-Correction.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
Regional Adversarial Training for Better Robust Generalization.
Int. J. Comput. Vis., October, 2024

Node-Level Graph Regression With Deep Gaussian Process Models.
IEEE Trans. Artif. Intell., June, 2024

Portfolio Selection via Graph-Aware Gaussian Processes With Generalized Gaussian Likelihood.
IEEE Trans. Artif. Intell., February, 2024

Backdoor Learning: A Survey.
IEEE Trans. Neural Networks Learn. Syst., January, 2024

Backdoor Attack With Sparse and Invisible Trigger.
IEEE Trans. Inf. Forensics Secur., 2024

Toward Stealthy Backdoor Attacks Against Speech Recognition via Elements of Sound.
IEEE Trans. Inf. Forensics Secur., 2024

Understanding the Dark Side of LLMs' Intrinsic Self-Correction.
CoRR, 2024

SuperMark: Robust and Training-free Image Watermarking via Diffusion-based Super-Resolution.
CoRR, 2024

FLARE: Towards Universal Dataset Purification against Backdoor Attacks.
CoRR, 2024

SoK: On the Role and Future of AIGC Watermarking in the Era of Gen-AI.
CoRR, 2024

Demonstration Attack against In-Context Learning for Code Intelligence.
CoRR, 2024

PointNCBW: Towards Dataset Ownership Verification for Point Clouds via Negative Clean-label Backdoor Watermark.
CoRR, 2024

TAPI: Towards Target-Specific and Adversarial Prompt Injection against Code LLMs.
CoRR, 2024

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers.
CoRR, 2024

Model-agnostic Origin Attribution of Generated Images with Few-shot Examples.
CoRR, 2024

ZeroMark: Towards Dataset Ownership Verification without Disclosing Watermark.
Proceedings of the Advances in Neural Information Processing Systems 38: Annual Conference on Neural Information Processing Systems 2024, 2024

Defending Against Backdoor Attacks by Layer-wise Feature Analysis (Extended Abstract).
Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, 2024

Purifying Quantization-conditioned Backdoors via Layer-wise Activation Correction with Distribution Approximation.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

IBD-PSC: Input-level Backdoor Detection via Parameter-oriented Scaling Consistency.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

Towards Faithful XAI Evaluation via Generalization-Limited Backdoor Watermark.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Towards Reliable and Efficient Backdoor Trigger Inversion via Decoupling Benign Features.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Which Model Generated This Image? A Model-Agnostic Approach for Origin Attribution.
Proceedings of the Computer Vision - ECCV 2024, 2024

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Causal Interventional Prediction System for Robust and Explainable Effect Forecasting.
Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, 2024

BadActs: A Universal Backdoor Defense in the Activation Space.
Proceedings of the Findings of the Association for Computational Linguistics, 2024

2023
Not All Samples Are Born Equal: Towards Effective Clean-Label Backdoor Attacks.
Pattern Recognit., July, 2023

Black-Box Dataset Ownership Verification via Backdoor Watermarking.
IEEE Trans. Inf. Forensics Secur., 2023

Towards Stealthy Backdoor Attacks against Speech Recognition via Elements of Sound.
CoRR, 2023

Backdoor Attack with Sparse and Invisible Trigger.
CoRR, 2023

BackdoorBox: A Python Toolbox for Backdoor Learning.
CoRR, 2023

Defending Against Backdoor Attacks by Layer-wise Feature Analysis.
Proceedings of the Advances in Knowledge Discovery and Data Mining, 2023

Domain Watermark: Effective and Harmless Dataset Copyright Protection is Closed at Hand.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Towards Robust Model Watermark via Reducing Parametric Vulnerability.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

Backdoor Defense via Suppressing Model Shortcuts.
Proceedings of the IEEE International Conference on Acoustics, 2023

BATT: Backdoor Attack with Transformation-Based Triggers.
Proceedings of the IEEE International Conference on Acoustics, 2023

Untargeted Backdoor Attack Against Object Detection.
Proceedings of the IEEE International Conference on Acoustics, 2023

Generating Transferable 3D Adversarial Point Cloud via Random Perturbation Factorization.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Semi-supervised robust training with generalized perturbed neighborhood.
Pattern Recognit., 2022

Multinomial random forest.
Pattern Recognit., 2022

A Fine-Grained Differentially Private Federated Learning Against Leakage From Gradients.
IEEE Internet Things J., 2022

Black-box Ownership Verification for Dataset Protection via Backdoor Watermarking.
CoRR, 2022

MOVE: Effective and Harmless Ownership Verification via Embedded External Features.
CoRR, 2022

Untargeted Backdoor Watermark: Towards Harmless and Stealthy Dataset Copyright Protection.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Backdoor Defense via Decoupling the Training Process.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Few-Shot Backdoor Attacks on Visual Object Tracking.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Adaptive Local Implicit Image Function for Arbitrary-Scale Super-Resolution.
Proceedings of the 2022 IEEE International Conference on Image Processing, 2022

Defending against Model Stealing via Verifying Embedded External Features.
Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence, 2022

2021
Regional Adversarial Training for Better Robust Generalization.
CoRR, 2021

Backdoor Attack in the Physical World.
CoRR, 2021

Hidden Backdoor Attack against Semantic Segmentation Models.
CoRR, 2021

Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits.
Proceedings of the 9th International Conference on Learning Representations, 2021

Invisible Backdoor Attack with Sample-Specific Triggers.
Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, 2021

Backdoor Attack Against Speaker Verification.
Proceedings of the IEEE International Conference on Acoustics, 2021

t-k-means: A Robust and Stable k-means Variant.
Proceedings of the IEEE International Conference on Acoustics, 2021

Visual Privacy Protection via Mapping Distortion.
Proceedings of the IEEE International Conference on Acoustics, 2021

2020
TNT: An Interpretable Tree-Network-Tree Learning Framework using Knowledge Distillation.
Entropy, 2020

Backdoor Attack with Sample-Specific Triggers.
CoRR, 2020

Open-sourced Dataset Protection via Backdoor Watermarking.
CoRR, 2020

Rectified Decision Trees: Exploring the Landscape of Interpretable and Effective Machine Learning.
CoRR, 2020

Backdoor Learning: A Survey.
CoRR, 2020

Rethinking the Trigger of Backdoor Attack.
CoRR, 2020

Toward Adversarial Robustness via Semi-supervised Robust Training.
CoRR, 2020

Multitask Deep Learning for Edge Intelligence Video Surveillance System.
Proceedings of the 18th IEEE International Conference on Industrial Informatics, 2020

Generalized Local Aggregation for Large Scale Gaussian Process Regression.
Proceedings of the 2020 International Joint Conference on Neural Networks, 2020

Adversarial Defense Via Local Flatness Regularization.
Proceedings of the IEEE International Conference on Image Processing, 2020

Targeted Attack for Deep Hashing Based Retrieval.
Proceedings of the Computer Vision - ECCV 2020, 2020

2019
Adversarial Defense Via Local Flatness Regularization.
CoRR, 2019

t-k-means: A k-means Variant with Robustness and Stability.
CoRR, 2019

Rectified Decision Trees: Towards Interpretability, Compression and Empirical Soundness.
CoRR, 2019

Multinomial Random Forests: Fill the Gap between Theoretical Consistency and Empirical Soundness.
CoRR, 2019

UA-DRN: Unbiased Aggregation of Deep Neural Networks for Regression Ensemble.
Aust. J. Intell. Inf. Process. Syst., 2019
