Chawin Sitawarin

Affiliations:
  • University of California, Berkeley, USA


According to our database, Chawin Sitawarin authored at least 24 papers between 2017 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Vulnerability Detection with Code Language Models: How Far Are We?
CoRR, 2024

PAL: Proxy-Guided Black-Box Attack on Large Language Models.
CoRR, 2024

StruQ: Defending Against Prompt Injection with Structured Queries.
CoRR, 2024

2023
Jatmo: Prompt Injection Defense by Task-Specific Finetuning.
CoRR, 2023

Mark My Words: Analyzing and Evaluating Language Model Watermarks.
CoRR, 2023

Defending Against Transfer Attacks From Public Models.
CoRR, 2023

OODRobustBench: benchmarking and analyzing adversarial robustness under distribution shift.
CoRR, 2023

SPDER: Semiperiodic Damping-Enabled Object Representation.
CoRR, 2023

Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems.
Proceedings of the International Conference on Machine Learning, 2023

Part-Based Models Improve Adversarial Robustness.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

REAP: A Large-Scale Realistic Adversarial Patch Benchmark.
Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023

2022
Demystifying the Adversarial Robustness of Random Transformation Defenses.
Proceedings of the International Conference on Machine Learning, 2022

2021
Adversarial Examples for k-Nearest Neighbor Classifiers Based on Higher-Order Voronoi Diagrams.
Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021

SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing.
Proceedings of the 14th ACM Workshop on Artificial Intelligence and Security (AISec@CCS 2021), 2021

2020
Improving Adversarial Robustness Through Progressive Hardening.
CoRR, 2020

Minimum-Norm Adversarial Examples on KNN and KNN-Based Models.
Proceedings of the 2020 IEEE Security and Privacy Workshops, 2020

2019
Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples.
CoRR, 2019

On the Robustness of Deep K-Nearest Neighbors.
Proceedings of the 2019 IEEE Security and Privacy Workshops, 2019

Analyzing the Robustness of Open-World Machine Learning.
Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, 2019

2018
DARTS: Deceiving Autonomous Cars with Toxic Signs.
CoRR, 2018

Rogue Signs: Deceiving Traffic Sign Recognition with Malicious Ads and Logos.
CoRR, 2018

Enhancing robustness of machine learning systems via data transformations.
Proceedings of the 52nd Annual Conference on Information Sciences and Systems, 2018

Not All Pixels are Born Equal: An Analysis of Evasion Attacks under Locality Constraints.
Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, 2018

2017
Beyond Grand Theft Auto V for Training, Testing and Enhancing Deep Learning in Self Driving Cars.
CoRR, 2017
