Yigitcan Kaya

According to our database, Yigitcan Kaya authored at least 15 papers between 2018 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
The Limitations of Deep Learning Methods in Realistic Adversarial Settings.
PhD thesis, 2023

Adversarial Robustness of Learning-based Static Malware Classifiers.
CoRR, 2023

2022
Generating Distributional Adversarial Examples to Evade Statistical Detectors.
Proceedings of the International Conference on Machine Learning, 2022

2021
Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

When Does Data Augmentation Help With Membership Inference Attacks?
Proceedings of the 38th International Conference on Machine Learning, 2021

A Panda? No, It's a Sloth: Slowdown Attacks on Adaptive Multi-Exit Neural Network Inference.
Proceedings of the 9th International Conference on Learning Representations, 2021

2020
On the Effectiveness of Regularization Against Membership Inference Attacks.
CoRR, 2020

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping.
CoRR, 2020

How to 0wn the NAS in Your Spare Time.
Proceedings of the 8th International Conference on Learning Representations, 2020

2019
Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks.
Proceedings of the 28th USENIX Security Symposium, 2019

Shallow-Deep Networks: Understanding and Mitigating Network Overthinking.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
How to Stop Off-the-Shelf Deep Neural Networks from Overthinking.
CoRR, 2018

Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks.
CoRR, 2018

When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks.
Proceedings of the 27th USENIX Security Symposium, 2018

Too Big to FAIL: What You Need to Know Before Attacking a Machine Learning System.
Security Protocols XXVI, 2018
