Harsha Nori

ORCID: 0000-0002-5442-1359

According to our database, Harsha Nori authored at least 24 papers between 2018 and 2024.

Bibliography

2024
Interpretable Predictive Models to Understand Risk Factors for Maternal and Fetal Outcomes.
J. Heal. Informatics Res., March 2024

Elephants Never Forget: Testing Language Models for Memorization of Tabular Data.
CoRR, 2024

Differentially Private Synthetic Data via Foundation Model APIs 2: Text.
CoRR, 2024

Data Science with LLMs and Interpretable Models.
CoRR, 2024

2023
Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine.
CoRR, 2023

LLMs Understand Glass-Box Models, Discover Surprises, and Suggest Repairs.
CoRR, 2023

Differentially Private Synthetic Data via Foundation Model APIs 1: Images.
CoRR, 2023

Capabilities of GPT-4 on Medical Challenge Problems.
CoRR, 2023

Sparks of Artificial General Intelligence: Early experiments with GPT-4.
CoRR, 2023

Supporting Human-AI Collaboration in Auditing LLMs with LLMs.
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, 2023

2022
Using Interpretable Machine Learning to Predict Maternal and Fetal Outcomes.
CoRR, 2022

Primo: Practical Learning-Augmented Systems with Interpretable Models.
Proceedings of the 2022 USENIX Annual Technical Conference, 2022

Interpretability, Then What? Editing Machine Learning Models to Reflect Human Knowledge and Values.
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), Washington, DC, USA, 2022

Why Data Scientists Prefer Glassbox Machine Learning: Algorithms, Differential Privacy, Editing and Bias Mitigation.
Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), Washington, DC, USA, 2022

Differentially Private Estimation of Heterogeneous Causal Effects.
Proceedings of the 1st Conference on Causal Learning and Reasoning, 2022

2021
Summarize with Caution: Comparing Global Feature Attributions.
IEEE Data Eng. Bull., 2021

GAM Changer: Editing Generalized Additive Models with Interactive Visualization.
CoRR, 2021

Using Explainable Boosting Machines (EBMs) to Detect Common Flaws in Data.
Proceedings of the Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021

Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning.
Proceedings of the 3rd Workshop on Data Science with Human in the Loop, 2021

Accuracy, Interpretability, and Differential Privacy via Explainable Boosting.
Proceedings of the 38th International Conference on Machine Learning, 2021

2020
Intelligible and Explainable Machine Learning: Best Practices and Practical Challenges.
Proceedings of the 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '20), 2020

2019
InterpretML: A Unified Framework for Machine Learning Interpretability.
CoRR, 2019

An Algorithmic Framework For Differentially Private Data Analysis on Trusted Processors.
Advances in Neural Information Processing Systems 32 (NeurIPS 2019), 2019

2018
Comparing Population Means Under Local Differential Privacy: With Significance and Power.
Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, 2018

