Yihao Huang

ORCID: 0000-0002-5784-770X

Affiliations:
  • Nanyang Technological University, School of Computer Science and Engineering, Singapore
  • East China Normal University, Shanghai Key Lab of Trustworthy Computing, China (PhD 2022)


According to our database, Yihao Huang authored at least 68 papers between 2019 and 2025.

Bibliography

2025
Towards Effective Prompt Stealing Attack against Text-to-Image Diffusion Models.
CoRR, August, 2025

Seeing It Before It Happens: In-Generation NSFW Detection for Diffusion-Based Text-to-Image Models.
CoRR, August, 2025

Time-variant Image Inpainting via Interactive Distribution Transition Estimation.
CoRR, June, 2025

Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment.
CoRR, May, 2025

A Vision for Auto Research with LLM Agents.
CoRR, April, 2025

A Comprehensive Survey in LLM(-Agent) Full Stack Safety: Data, Training and Deployment.
CoRR, April, 2025

Privacy Protection Against Personalized Text-to-Image Synthesis via Cross-image Consistency Constraints.
CoRR, April, 2025

Evolution-based Region Adversarial Prompt Learning for Robustness Enhancement in Vision-Language Models.
CoRR, March, 2025

PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for Text-to-Image Models.
CoRR, January, 2025

Scale-Invariant Adversarial Attack Against Arbitrary-Scale Super-Resolution.
IEEE Trans. Inf. Forensics Secur., 2025

PATFinger: Prompt-Adapted Transferable Fingerprinting against Unauthorized Multimodal Dataset Usage.
Proceedings of the 48th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2025

Understanding the Effectiveness of Coverage Criteria for Large Language Models: A Special Angle from Jailbreak Attacks.
Proceedings of the 47th IEEE/ACM International Conference on Software Engineering, 2025

Improved Techniques for Optimization-Based Jailbreaking on Large Language Models.
Proceedings of the Thirteenth International Conference on Learning Representations, 2025

Efficient Universal Goal Hijacking with Semantics-guided Prompt Organization.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Perception-Guided Jailbreak Against Text-to-Image Models.
Proceedings of the Thirty-Ninth AAAI Conference on Artificial Intelligence, 2025

2024
Dodging DeepFake Detection via Implicit Spatial-Domain Notch Filtering.
IEEE Trans. Circuits Syst. Video Technol., August, 2024

Natural & Adversarial Bokeh Rendering via Circle-of-Confusion Predictive Network.
IEEE Trans. Multim., 2024

Texture Re-Scalable Universal Adversarial Perturbation.
IEEE Trans. Inf. Forensics Secur., 2024

Historical Embedding-Guided Efficient Large-Scale Federated Graph Learning.
Proc. ACM Manag. Data, 2024

Concept Guided Co-saliency Object Detection.
CoRR, 2024

What External Knowledge is Preferred by LLMs? Characterizing and Exploring Chain of Evidence in Imperfect Context.
CoRR, 2024

Heuristic-Induced Multimodal Risk Distribution Jailbreak Attack for Multimodal Large Language Models.
CoRR, 2024

Global Challenge for Safe and Secure LLMs Track 1.
CoRR, 2024

Semantic-Aligned Adversarial Evolution Triangle for High-Transferability Vision-Language Attack.
CoRR, 2024

Efficient and Effective Universal Adversarial Attack against Vision-Language Pre-training Models.
CoRR, 2024

Investigating Coverage Criteria in Large Language Models: An In-Depth Study Through Jailbreak Attacks.
CoRR, 2024

RT-Attack: Jailbreaking Text-to-Image Models via Random Token.
CoRR, 2024

NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing.
CoRR, 2024

Text Modality Oriented Image Feature Extraction for Detecting Diffusion-based DeepFake.
CoRR, 2024

Semantic-guided Prompt Organization for Universal Goal Hijacking against LLMs.
CoRR, 2024

MIP: CLIP-based Image Reconstruction from PEFT Gradients.
CoRR, 2024

Improving Robustness of LiDAR-Camera Fusion Model against Weather Corruption from Fusion Strategy Perspective.
CoRR, 2024

Is Aggregation the Only Choice? Federated Learning via Layer-wise Model Recombination.
Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2024

RUNNER: Responsible UNfair NEuron Repair for Enhancing Deep Neural Network Fairness.
Proceedings of the 46th IEEE/ACM International Conference on Software Engineering, 2024

FedCross: Towards Accurate Federated Learning via Multi-Model Cross-Aggregation.
Proceedings of the 40th IEEE International Conference on Data Engineering, 2024

Architecture-Agnostic Iterative Black-Box Certified Defense Against Adversarial Patches.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2024

CFP: A Reinforcement Learning Framework for Comprehensive Fairness-Performance Trade-Off in Machine Learning.
Proceedings of the Artificial Neural Networks and Machine Learning - ICANN 2024, 2024

CosalPure: Learning Concept from Group Images for Robust Co-Saliency Detection.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models.
Proceedings of the Thirty-Eighth AAAI Conference on Artificial Intelligence, 2024

2023
A Mutation-Based Method for Multi-Modal Jailbreaking Attack Detection.
CoRR, 2023

TranSegPGD: Improving Transferability of Adversarial Examples on Semantic Segmentation.
CoRR, 2023

AdapterFL: Adaptive Heterogeneous Federated Learning for Resource-constrained Mobile Computing Systems.
CoRR, 2023

Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation.
CoRR, 2023

Towards Better Fairness-Utility Trade-off: A Comprehensive Measurement-Based Reinforcement Learning Framework.
CoRR, 2023

On the Robustness of Segment Anything.
CoRR, 2023

FedMR: Federated Learning via Model Recombination.
CoRR, 2023

Zero-Day Backdoor Attack against Text-to-Image Diffusion Models via Personalization.
CoRR, 2023

GitFL: Uncertainty-Aware Real-Time Asynchronous Federated Learning Using Version Control.
Proceedings of the IEEE Real-Time Systems Symposium, 2023

ALA: Naturalness-aware Adversarial Lightness Attack.
Proceedings of the 31st ACM International Conference on Multimedia, 2023

Evading DeepFake Detectors via Adversarial Statistical Consistency.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

2022
FakeLocator: Robust Localization of GAN-Based Face Manipulations.
IEEE Trans. Inf. Forensics Secur., 2022

Countering Malicious DeepFakes: Survey, Battleground, and Horizon.
Int. J. Comput. Vis., 2022

GitFL: Adaptive Asynchronous Federated Learning using Version Control.
CoRR, 2022

FedCross: Towards Accurate Federated Learning via Multi-Model Cross Aggregation.
CoRR, 2022

ALA: Adversarial Lightness Attack via Naturalness-aware Regularizations.
CoRR, 2022

Masked Faces with Faced Masks.
Proceedings of the Computer Vision - ECCV 2022 Workshops, 2022

2021
AdvBokeh: Learning to Adversarially Defocus Blur.
CoRR, 2021

AdvFilter: Predictive Perturbation-aware Filtering against Adversarial Attack via Multi-domain Learning.
Proceedings of the MM '21: ACM Multimedia Conference, 2021

2020
FakeRetouch: Evading DeepFakes Detection via the Guidance of Deliberate Noise.
CoRR, 2020

FakeLocator: Robust Localization of GAN-Based Face Manipulations via Semantic Segmentation Networks with Bells and Whistles.
CoRR, 2020

FREPA: an automated and formal approach to requirement modeling and analysis in aircraft control domain.
Proceedings of the ESEC/FSE '20: 28th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering, 2020

DeepSonar: Towards Effective and Robust Detection of AI-Synthesized Fake Voices.
Proceedings of the MM '20: The 28th ACM International Conference on Multimedia, 2020

Amora: Black-box Adversarial Morphing Attack.
Proceedings of the MM '20: The 28th ACM International Conference on Multimedia, 2020

FakePolisher: Making DeepFakes More Detection-Evasive by Shallow Reconstruction.
Proceedings of the MM '20: The 28th ACM International Conference on Multimedia, 2020

FakeSpotter: A Simple yet Robust Baseline for Spotting AI-Synthesized Fake Faces.
Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020

2019
Amora: Black-box Adversarial Morphing Attack.
CoRR, 2019

Prema: A Tool for Precise Requirements Editing, Modeling and Analysis.
Proceedings of the 34th IEEE/ACM International Conference on Automated Software Engineering, 2019

A Domain Experts Centric Approach to Formal Requirements Modeling and V&V of Embedded Control Software.
Proceedings of the 26th Asia-Pacific Software Engineering Conference, 2019

