Susan A. Murphy

ORCID: 0000-0002-2032-4286

Affiliations:
  • Harvard University, Cambridge, MA, USA
  • University of Michigan, Ann Arbor, MI, USA


According to our database, Susan A. Murphy authored at least 74 papers between 2005 and 2024.


Bibliography

2024
reBandit: Random Effects based Online RL algorithm for Reducing Cannabis Use.
CoRR, 2024

Monitoring Fidelity of Online Reinforcement Learning Algorithms in Clinical Trials.
CoRR, 2024

Non-Stationary Latent Auto-Regressive Bandits.
CoRR, 2024

Online Uniform Risk Times Sampling: First Approximation Algorithms, Learning Augmentation with Full Confidence Interval Integration.
CoRR, 2024

Reinforcement Learning Interventions on Boundedly Rational Human Agents in Frictionful Tasks.
CoRR, 2024

2023
A randomized trial of a mobile health intervention to augment cardiac rehabilitation.
npj Digit. Medicine, 2023

Estimating causal effects with optimization-based methods: A review and empirical comparison.
Eur. J. Oper. Res., 2023

Dyadic Reinforcement Learning.
CoRR, 2023

Online learning in bandits with predicted context.
CoRR, 2023

Effect-Invariant Mechanisms for Policy Generalization.
CoRR, 2023

Contextual Bandits with Budgeted Information Reveal.
CoRR, 2023

Did we personalize? Assessing personalization by an online reinforcement learning algorithm using resampling.
CoRR, 2023

Assessing the Impact of Context Inference Error and Partial Observability on RL Methods for Just-In-Time Adaptive Interventions.
Proceedings of the Conference on Uncertainty in Artificial Intelligence, 2023

The Unintended Consequences of Discount Regularization: Improving Regularization in Certainty Equivalence Reinforcement Learning.
Proceedings of the International Conference on Machine Learning, 2023

Reward Design for an Online Reinforcement Learning Algorithm Supporting Oral Self-Care.
Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023

2022
Modeling Mobile Health Users as Reinforcement Learning Agents.
CoRR, 2022

Doubly robust nearest neighbors in factor models.
CoRR, 2022

Statistical Inference After Adaptive Sampling in Non-Markovian Environments.
CoRR, 2022

Counterfactual inference for sequential experimental design.
CoRR, 2022

Designing Reinforcement Learning Algorithms for Digital Interventions: Pre-Implementation Guidelines.
Algorithms, 2022

Data-driven Interpretable Policy Construction for Personalized Mobile Health.
Proceedings of the IEEE International Conference on Digital Health, 2022

2021
IntelligentPooling: practical Thompson sampling for mHealth.
Mach. Learn., 2021

Comparison and Unification of Three Regularization Methods in Batch Reinforcement Learning.
CoRR, 2021

Online structural kernel selection for mobile health.
CoRR, 2021

Statistical Inference with M-Estimators on Bandit Data.
CoRR, 2021

Statistical Inference with M-Estimators on Adaptively Collected Data.
Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, 2021

Power Constrained Bandits.
Proceedings of the Machine Learning for Healthcare Conference, 2021

2020
Personalized HeartSteps: A Reinforcement Learning Algorithm for Optimizing Physical Activity.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2020

Fast Physical Activity Suggestions: Efficient Hyperparameter Learning in Mobile Health.
CoRR, 2020

The Micro-Randomized Trial for Developing Digital Interventions: Experimental Design Considerations.
CoRR, 2020

Translating Behavioral Theory into Technological Interventions: Case Study of an mHealth App to Increase Self-reporting of Substance-Use Related Data.
CoRR, 2020

Streamlined Empirical Bayes Fitting of Linear Mixed Models in Mobile Health.
CoRR, 2020

Rapidly Personalizing Mobile Health Treatment Policies with Limited Data.
CoRR, 2020

Inference for Batched Bandits.
Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, 2020

2019
ReVibe: A Context-assisted Evening Recall Approach to Improve Self-report Adherence.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2019

Off-Policy Estimation of Long-Term Average Outcomes with Applications to Mobile Health.
CoRR, 2019

A smartphone-based behavioural activation application using recommender system.
Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, 2019

2018
Just-in-Time but Not Too Much: Determining Treatment Timing in Mobile Health.
Proc. ACM Interact. Mob. Wearable Ubiquitous Technol., 2018

Personalizing Intervention Probabilities By Pooling.
CoRR, 2018

2017
Control Engineering Methods for the Design of Robust Behavioral Treatments.
IEEE Trans. Control. Syst. Technol., 2017

Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K).
IEEE Pervasive Comput., 2017

Addressing the Computational Challenges of Personalized Medicine (Dagstuhl Seminar 17472).
Dagstuhl Reports, 2017

An Actor-Critic Contextual Bandit Algorithm for Personalized Mobile Health Interventions.
CoRR, 2017

Action Centered Contextual Bandits.
Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

iSurvive: An Interpretable, Event-time Prediction Model for mHealth.
Proceedings of the 34th International Conference on Machine Learning, 2017

eWrapper: operationalizing engagement strategies in mHealth.
Adjunct Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers, 2017

SARA: a mobile app to engage users in health data collection.
Adjunct Proceedings of the 2017 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2017 ACM International Symposium on Wearable Computers, 2017

From Ads to Interventions: Contextual Bandits in Mobile Health.
Mobile Health - Sensors, Analytic Methods, and Applications, 2017

Design Lessons from a Micro-Randomized Pilot Study in Mobile Health.
Mobile Health - Sensors, Analytic Methods, and Applications, 2017

From Markers to Interventions: The Case of Just-in-Time Stress Intervention.
Mobile Health - Sensors, Analytic Methods, and Applications, 2017

Introduction to Part III: Markers to mHealth Predictors.
Mobile Health - Sensors, Analytic Methods, and Applications, 2017

Introduction to Part IV: Predictors to mHealth Interventions.
Mobile Health - Sensors, Analytic Methods, and Applications, 2017

Introduction to Part II: Sensors to mHealth Markers.
Mobile Health - Sensors, Analytic Methods, and Applications, 2017

Introduction to Part I: mHealth Applications and Tools.
Mobile Health - Sensors, Analytic Methods, and Applications, 2017

2016
A Batch, Off-Policy, Actor-Critic Algorithm for Optimizing the Average Reward.
CoRR, 2016

2015
Center of excellence for mobile sensor data-to-knowledge (MD2K).
J. Am. Medical Informatics Assoc., 2015

2014
Budgeted Learning for Developing Personalized Treatment.
Proceedings of the 13th International Conference on Machine Learning and Applications, 2014

2013
Stratégies d'échantillonnage pour l'apprentissage par renforcement batch [Sampling strategies for batch reinforcement learning].
Rev. d'Intelligence Artif., 2013

Batch mode reinforcement learning based on the synthesis of artificial trajectories.
Ann. Oper. Res., 2013

A robust MPC approach to the design of behavioural treatments.
Proceedings of the 52nd IEEE Conference on Decision and Control, 2013

2012
Linear fitted-Q iteration with multiple reward functions.
J. Mach. Learn. Res., 2012

2011
Estimation Monte Carlo sans modèle de politiques de décision [Model-free Monte Carlo estimation of decision policies].
Rev. d'Intelligence Artif., 2011

Informing sequential clinical decision-making through reinforcement learning: an empirical study.
Mach. Learn., 2011

Active Learning for Developing Personalized Treatment.
Proceedings of UAI 2011, 2011

Active exploration by searching for experiments that falsify the computed control policy.
Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning, 2011

Active learning for personalizing treatment.
Proceedings of the 2011 IEEE Symposium on Adaptive Dynamic Programming And Reinforcement Learning, 2011

2010
Model-Free Monte Carlo-like Policy Evaluation.
Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 2010

Efficient Reinforcement Learning with Multiple Reward Functions for Randomized Controlled Trial Analysis.
Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010

Towards Min Max Generalization in Reinforcement Learning.
Proceedings of the Second International Conference on Agents and Artificial Intelligence, 2010

A Cautious Approach to Generalization in Reinforcement Learning.
Proceedings of ICAART 2010 - International Conference on Agents and Artificial Intelligence, Volume 1, 2010

2009
Inferring bounds on the performance of a control policy from a sample of trajectories.
Proceedings of the IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, 2009

2008
Small Sample Inference for Generalization Error in Classification Using the CUD Bound.
Proceedings of UAI 2008, 2008

2007
Variable Selection for Optimal Decision Making.
Proceedings of Artificial Intelligence in Medicine, 2007

2005
A Generalization Error for Q-Learning.
J. Mach. Learn. Res., 2005
