Lin Li

Orcid: 0000-0001-6369-2663

Affiliations:
  • King's College London, Department of Informatics, UK


According to our database, Lin Li authored at least 14 papers between 2023 and 2025.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.

Bibliography

2025
Emerging Cyber Attack Risks of Medical AI Agents.
CoRR, April, 2025

AROID: Improving Adversarial Robustness Through Online Instance-Wise Data Augmentation.
Int. J. Comput. Vis., February, 2025

Robust shortcut and disordered robustness: Improving adversarial training through adaptive smoothing.
Pattern Recognit., 2025

Advancing robots with greater dynamic dexterity: A large-scale multi-view and multi-modal dataset of human-human throw&catch of arbitrary objects.
Int. J. Robotics Res., 2025

2024
Artificial Intelligence for Biomedical Video Generation.
CoRR, 2024

OODRobustBench: a Benchmark and Large-Scale Analysis of Adversarial Robustness under Distribution Shift.
Proceedings of the Forty-first International Conference on Machine Learning, 2024

One Prompt Word is Enough to Boost Adversarial Robustness for Pre-Trained Vision-Language Models.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024

2023
Large AI Models in Health Informatics: Applications, Challenges, and the Future.
IEEE J. Biomed. Health Informatics, December, 2023

Understanding and combating robust overfitting via input loss landscape analysis and regularization.
Pattern Recognit., April, 2023

OODRobustBench: benchmarking and analyzing adversarial robustness under distribution shift.
CoRR, 2023

VisionFM: a Multi-Modal Multi-Task Vision Foundation Model for Generalist Ophthalmic Artificial Intelligence.
CoRR, 2023

Improved Adversarial Training Through Adaptive Instance-wise Loss Smoothing.
CoRR, 2023

Large AI Models in Health Informatics: Applications, Challenges, and the Future.
CoRR, 2023

Data augmentation alone can improve adversarial training.
Proceedings of the Eleventh International Conference on Learning Representations, 2023
