Kuofeng Gao

ORCID: 0000-0002-5667-8238

According to our database, Kuofeng Gao authored at least 28 papers between 2021 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
Grounding Language with Vision: A Conditional Mutual Information Calibrated Decoding Strategy for Reducing Hallucinations in LVLMs.
CoRR, May, 2025

Wolf Hidden in Sheep's Conversations: Toward Harmless Data-Based Backdoor Attacks for Jailbreaking Large Language Models.
CoRR, May, 2025

Your Language Model Can Secretly Write Like Humans: Contrastive Paraphrase Attacks on LLM-Generated Text Detectors.
CoRR, May, 2025

Towards Dataset Copyright Evasion Attack against Personalized Text-to-Image Diffusion Models.
CoRR, May, 2025

Making Them a Malicious Database: Exploiting Query Code to Jailbreak Aligned Large Language Models.
CoRR, February, 2025

PointNCBW: Toward Dataset Ownership Verification for Point Clouds via Negative Clean-Label Backdoor Watermark.
IEEE Trans. Inf. Forensics Secur., 2025

Protecting Your Video Content: Disrupting Automated Video-based LLM Annotations.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025

QueryAttack: Jailbreaking Aligned Large Language Models Using Structured Non-natural Query Language.
Proceedings of the Findings of the Association for Computational Linguistics, 2025

VLMInferSlow: Evaluating the Efficiency Robustness of Large Vision-Language Models as a Service.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

Benchmarking Open-ended Audio Dialogue Understanding for Large Audio-Language Models.
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2025

2024
Imperceptible and Robust Backdoor Attack in 3D Point Cloud.
IEEE Trans. Inf. Forensics Secur., 2024

Denial-of-Service Poisoning Attacks against Large Language Models.
CoRR, 2024

Embedding Self-Correction as an Inherent Ability in Large Language Models for Enhanced Mathematical Reasoning.
CoRR, 2024

PointNCBW: Towards Dataset Ownership Verification for Point Clouds via Negative Clean-label Backdoor Watermark.
CoRR, 2024

Video Watermarking: Safeguarding Your Video from (Unauthorized) Annotations by Video-based LLMs.
CoRR, 2024

Deconstructing The Ethics of Large Language Models from Long-standing Issues to New-emerging Dilemmas.
CoRR, 2024

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers.
CoRR, 2024

Adversarial Robustness for Visual Grounding of Multimodal Large Language Models.
CoRR, 2024

Energy-Latency Manipulation of Multi-modal Large Language Models via Verbose Samples.
CoRR, 2024

FMM-Attack: A Flow-based Multi-modal Adversarial Attack on Video-based LLMs.
CoRR, 2024

Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images.
Proceedings of the Twelfth International Conference on Learning Representations, 2024

Not All Prompts Are Secure: A Switchable Backdoor Attack Against Pre-trained Vision Transformers.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
Backdoor Defense via Adaptively Splitting Poisoned Dataset.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023

Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning.
Proceedings of the 34th British Machine Vision Conference 2023, 2023

2022
Practical protection against video data leakage via universal adversarial head.
Pattern Recognit., 2022

Hardly Perceptible Trojan Attack Against Neural Networks with Bit Flips.
Proceedings of the Computer Vision - ECCV 2022, 2022

2021
Clean-label Backdoor Attack against Deep Hashing based Retrieval.
CoRR, 2021

