Tianshuo Cong

ORCID: 0000-0003-3189-8223

According to our database, Tianshuo Cong authored at least 21 papers between 2019 and 2025.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2025
LoRA-Leak: Membership Inference Attacks Against LoRA Fine-tuned Language Models.
CoRR, July, 2025

Watermarking LLM-Generated Datasets in Downstream Tasks.
CoRR, June, 2025

FragFake: A Dataset for Fine-Grained Detection of Edited Images with Vision Language Models.
CoRR, May, 2025

Beyond the Tip of Efficiency: Uncovering the Submerged Threats of Jailbreak Attacks in Small Language Models.
CoRR, February, 2025

SoK: Benchmarking Poisoning Attacks and Defenses in Federated Learning.
CoRR, February, 2025

PEFTGuard: Detecting Backdoor Attacks Against Parameter-Efficient Fine-Tuning.
Proceedings of the IEEE Symposium on Security and Privacy, 2025

Safety Misalignment Against Large Language Models.
Proceedings of the 32nd Annual Network and Distributed System Security Symposium, 2025

Beyond the Tip of Efficiency: Uncovering the Submerged Threats of Jailbreak Attacks in Small Language Models.
Findings of the Association for Computational Linguistics, 2025

CL-Attack: Textual Backdoor Attacks via Cross-Lingual Triggers.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts.
Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-25), 2025

2024
Safety Misalignment Against Large Language Models.
Dataset, November, 2024

On Evaluating The Performance of Watermarked Machine-Generated Texts Under Adversarial Attacks.
CoRR, 2024

Jailbreak Attacks and Defenses Against Large Language Models: A Survey.
CoRR, 2024

JailbreakEval: An Integrated Toolkit for Evaluating Jailbreak Attempts Against Large Language Models.
CoRR, 2024

Test-Time Poisoning Attacks Against Test-Time Adaptation Models.
Proceedings of the IEEE Symposium on Security and Privacy, 2024

Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging.
Proceedings of the 1st ACM Workshop on Large AI Systems and Models with Privacy and Safety Analysis, 2024

2023
Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models.
CoRR, 2023

2022
Construction of generalized-involutory MDS matrices.
IACR Cryptol. ePrint Arch., 2022

SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders.
Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, 2022

2020
New Subquadratic Algorithms for Constructing Lightweight Hadamard MDS Matrices (Full Version).
IACR Cryptol. ePrint Arch., 2020

2019
Big Data Driven Oriented Graph Theory Aided tagSNPs Selection for Genetic Precision Therapy.
IEEE Access, 2019
