Dylan Slack

ORCID: 0000-0003-4186-2937

According to our database, Dylan Slack authored at least 19 papers between 2019 and 2023.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2023
Explaining machine learning models with interactive natural language conversations using TalkToModel.
Nat. Mach. Intell., August 2023

TABLET: Learning From Instructions For Tabular Data.
CoRR, 2023

Post Hoc Explanations of Language Models Can Improve Language Models.
Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
TalkToModel: Understanding Machine Learning Models With Open Ended Dialogues.
CoRR, 2022

SAFER: Data-Efficient and Safe Reinforcement Learning via Skill Acquisition.
CoRR, 2022

Rethinking Explainability as a Dialogue: A Practitioner's Perspective.
CoRR, 2022

2021
Feature Attributions and Counterfactual Explanations Can Be Manipulated.
CoRR, 2021

Counterfactual Explanations Can Be Manipulated.
CoRR, 2021

Defuse: Harnessing Unrestricted Adversarial Examples for Debugging Models Beyond Test Accuracy.
CoRR, 2021

Reliable Post hoc Explanations: Modeling Uncertainty in Explainability.
Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Counterfactual Explanations Can Be Manipulated.
Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

On the Lack of Robust Interpretability of Neural Text Classifiers.
Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, 2021

2020
Differentially Private Language Models Benefit from Public Pre-training.
CoRR, 2020

How Much Should I Trust You? Modeling Uncertainty of Black Box Explanations.
CoRR, 2020

Fairness warnings and fair-MAML: learning fairly with minimal data.
Proceedings of the FAT* '20: Conference on Fairness, Accountability, and Transparency, 2020

Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods.
Proceedings of the AIES '20: AAAI/ACM Conference on AI, Ethics, and Society, 2020

2019
Fair Meta-Learning: Learning How to Learn Fairly.
CoRR, 2019

How can we fool LIME and SHAP? Adversarial Attacks on Post hoc Explanation Methods.
CoRR, 2019

Assessing the Local Interpretability of Machine Learning Models.
CoRR, 2019
