Ling Shi
ORCID: 0000-0002-2023-0247
Affiliations:
- Nanyang Technological University, Singapore
- National University of Singapore, Singapore (PhD 2014)
According to our database, Ling Shi authored at least 35 papers between 2008 and 2025.
Bibliography
2025
Seeing It Before It Happens: In-Generation NSFW Detection for Diffusion-Based Text-to-Image Models.
CoRR, August, 2025
Circumventing Safety Alignment in Large Language Models Through Embedding Space Toxicity Attenuation.
CoRR, July, 2025
Exposing the Ghost in the Transformer: Abnormal Detection for Large Language Models via Hidden State Forensics.
CoRR, April, 2025
Breaking the Loop: Detecting and Mitigating Denial-of-Service Vulnerabilities in Large Language Models.
CoRR, March, 2025
Detecting LLM Fact-conflicting Hallucinations Enhanced by Temporal-logic-based Reasoning.
CoRR, February, 2025
Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models.
Proceedings of the 2025 IEEE Symposium on Security and Privacy, 2025
Understanding the Effectiveness of Coverage Criteria for Large Language Models: A Special Angle from Jailbreak Attacks.
Proceedings of the 47th IEEE/ACM International Conference on Software Engineering, 2025
2024
Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection.
Proc. ACM Softw. Eng., 2024
Drowzee: Metamorphic Testing for Fact-Conflicting Hallucination Detection in Large Language Models.
Proc. ACM Program. Lang., 2024
CoRR, 2024
Efficient and Effective Universal Adversarial Attack against Vision-Language Pre-training Models.
CoRR, 2024
Investigating Coverage Criteria in Large Language Models: An In-Depth Study Through Jailbreak Attacks.
CoRR, 2024
NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing.
CoRR, 2024
Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models.
CoRR, 2024
HalluVault: A Novel Logic Programming-aided Metamorphic Testing Framework for Detecting Fact-Conflicting Hallucinations in Large Language Models.
CoRR, 2024
Groot: Adversarial Testing for Generative Text-to-Image Models with Tree-based Semantic Transformation.
CoRR, 2024
CoRR, 2024
GlitchProber: Advancing Effective Detection and Mitigation of Glitch Tokens in Large Language Models.
Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 2024
Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 2024
Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering, 2024
DistillSeq: A Framework for Safety Alignment Testing in Large Language Models using Knowledge Distillation.
Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, 2024
Proceedings of the Application of Formal Methods, 2024
2021
IEEE Trans. Software Eng., 2021
Proceedings of the Leveraging Applications of Formal Methods, Verification and Validation, 2021
Proceedings of the 28th Asia-Pacific Software Engineering Conference, 2021
2019
Proceedings of the 2019 International Symposium on Theoretical Aspects of Software Engineering, 2019
2018
A UTP semantics for communicating processes with shared variables and its formal encoding in PVS.
Formal Aspects Comput., 2018
Proceedings of the 23rd International Conference on Engineering of Complex Computer Systems, 2018
2013
ACM Trans. Softw. Eng. Methodol., 2013
Proceedings of the Formal Methods and Software Engineering, 2013
2012
Proceedings of the Formal Methods and Software Engineering, 2012
2010
Modeling and Verification of Transmission Protocols: A Case Study on CSMA/CD Protocol.
Proceedings of the Fourth International Conference on Secure Software Integration and Reliability Improvement, 2010
2008
Proceedings of the Second IEEE/IFIP International Symposium on Theoretical Aspects of Software Engineering, 2008