Sahand Negahban

According to our database, Sahand Negahban authored at least 34 papers between 2008 and 2021.

Bibliography

2021
Connectome-Based Predictive Modelling With Missing Connectivity Data Using Robust Matrix Completion.
Proceedings of the 18th IEEE International Symposium on Biomedical Imaging, 2021

2020
Feature Selection using Stochastic Gates.
Proceedings of the 37th International Conference on Machine Learning, 2020

Tree-projected gradient descent for estimating gradient-sparse parameters on graphs.
Proceedings of the Conference on Learning Theory, 2020

2019
Warm-starting Contextual Bandits: Robustly Combining Supervised and Bandit Feedback.
Proceedings of the 36th International Conference on Machine Learning, 2019

2018
Learning from Comparisons and Choices.
J. Mach. Learn. Res., 2018

Understanding adversarial training: Increasing local stability of supervised models through robust optimization.
Neurocomputing, 2018

Alternating Linear Bandits for Online Matrix-Factorization Recommendation.
CoRR, 2018

Deep supervised feature selection using Stochastic Gates.
CoRR, 2018

Regional Differences in Predicting Risk of 30-Day Readmissions for Heart Failure.
Proceedings of the Nursing Informatics 2018, 2018

Predicting Risk of 30-Day Readmissions Using Two Emerging Machine Learning Methods.
Proceedings of the Nursing Informatics 2018, 2018

2017
Prediction of Adverse Events in Patients Undergoing Major Cardiovascular Procedures.
IEEE J. Biomed. Health Informatics, 2017

Rank Centrality: Ranking from Pairwise Comparisons.
Oper. Res., 2017

On Approximation Guarantees for Greedy Low Rank Optimization.
CoRR, 2017

Minimax Estimation of Bandable Precision Matrices.
Proceedings of the Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 2017

On Approximation Guarantees for Greedy Low Rank Optimization.
Proceedings of the 34th International Conference on Machine Learning, 2017

Scalable Greedy Feature Selection via Weak Submodularity.
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017

2016
Restricted Strong Convexity Implies Weak Submodularity.
CoRR, 2016

2015
Understanding Adversarial Training: Increasing Local Stability of Neural Nets through Robust Optimization.
CoRR, 2015

Individualized rank aggregation using nuclear norm regularization.
Proceedings of the 53rd Annual Allerton Conference on Communication, Control, and Computing, 2015

2014
Stochastic optimization and sparse statistical recovery: An optimal algorithm for high dimensions.
Proceedings of the 48th Annual Conference on Information Sciences and Systems, 2014

2012
Restricted Strong Convexity and Weighted Matrix Completion: Optimal Bounds with Noise.
J. Mach. Learn. Res., 2012

Fast global convergence of gradient methods for solving regularized M-estimation.
Proceedings of the IEEE Statistical Signal Processing Workshop, 2012

Iterative ranking from pair-wise comparisons.
Proceedings of the Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems, 2012

Stochastic optimization and sparse statistical recovery: Optimal algorithms for high dimensions.
Proceedings of the Advances in Neural Information Processing Systems 25: 26th Annual Conference on Neural Information Processing Systems, 2012

Scaling multiple-source entity resolution using statistically efficient transfer learning.
Proceedings of the 21st ACM International Conference on Information and Knowledge Management, 2012

Learning sparse Boolean polynomials.
Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, 2012

2011
Simultaneous Support Recovery in High Dimensions: Benefits and Perils of Block ℓ1/ℓ∞-Regularization.
IEEE Trans. Inf. Theory, 2011

Fast global convergence of gradient methods for high-dimensional statistical recovery.
CoRR, 2011

Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions.
Proceedings of the 28th International Conference on Machine Learning, 2011

2010
Fast global convergence rates of gradient methods for high-dimensional statistical recovery.
Proceedings of the Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems, 2010

Estimation of (near) low-rank matrices with noise and high-dimensional scaling.
Proceedings of the 27th International Conference on Machine Learning (ICML-10), 2010

2009
Simultaneous support recovery in high dimensions: Benefits and perils of block ℓ1/ℓ∞-regularization.
CoRR, 2009

A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers.
Proceedings of the Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems, 2009

2008
Phase transitions for high-dimensional joint support recovery.
Proceedings of the Advances in Neural Information Processing Systems 21, 2008
