Liam Fowl

According to our database, Liam Fowl authored at least 27 papers between 2019 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Generating Potent Poisons and Backdoors from Scratch with Guided Diffusion.
CoRR, 2024

2023
Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Panning for Gold in Federated Learning: Targeted Text Extraction under Arbitrarily Large-Scale Aggregation.
Proceedings of the Eleventh International Conference on Learning Representations, 2023

Exploring Sequence-to-Sequence Transformer-Transducer Models for Keyword Spotting.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023

2022
Thinking Two Moves Ahead: Anticipating Other Users Improves Backdoor Attacks in Federated Learning.
CoRR, 2022

Execute Order 66: Targeted Data Poisoning for Reinforcement Learning.
CoRR, 2022

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch.
Proceedings of the Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, 2022

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification.
Proceedings of the International Conference on Machine Learning, 2022

Robbing the Fed: Directly Obtaining Private Data in Federated Learning with Modified Models.
Proceedings of the Tenth International Conference on Learning Representations, 2022

Can Neural Nets Learn the Same Model Twice? Investigating Reproducibility and Double Descent from the Decision Boundary Perspective.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022

Poisons that are learned faster are more effective.
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2022

2021
Preventing Unauthorized Use of Proprietary Data: Poisoning for Secure Dataset Release.
CoRR, 2021

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations.
CoRR, 2021

What Doesn't Kill You Makes You Robust(er): Adversarial Training against Poisons and Backdoors.
CoRR, 2021

Adversarial Examples Make Strong Poisons.
Proceedings of the Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching.
Proceedings of the 9th International Conference on Learning Representations, 2021

Strong Data Augmentation Sanitizes Poisoning and Backdoor Attacks Without an Accuracy Tradeoff.
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021

2020
Random Network Distillation as a Diversity Metric for Both Image and Text Generation.
CoRR, 2020

MetaPoison: Practical General-purpose Clean-label Data Poisoning.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Adversarially Robust Few-Shot Learning: A Meta-Learning Approach.
Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks.
Proceedings of the 37th International Conference on Machine Learning, 2020

Understanding Generalization Through Visualizations.
Proceedings of the "I Can't Believe It's Not Better!" at NeurIPS Workshops, 2020

Headless Horseman: Adversarial Attacks on Transfer Learning Models.
Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020

Deep k-NN Defense Against Clean-Label Data Poisoning Attacks.
Proceedings of the Computer Vision - ECCV 2020 Workshops, 2020

Adversarially Robust Distillation.
Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020

2019
Robust Few-Shot Learning with Adversarially Queried Meta-Learners.
CoRR, 2019

Strong Baseline Defenses Against Clean-Label Poisoning Attacks.
CoRR, 2019
