Ingo Steinwart

ORCID: 0000-0002-4436-7109

Affiliations:
  • Universität Stuttgart, Germany


According to our database, Ingo Steinwart authored at least 65 papers between 2001 and 2024.

Collaborative distances:
  • Dijkstra number of four.
  • Erdős number of four.

Bibliography

2024
Better by Default: Strong Pre-Tuned MLPs and Boosted Trees on Tabular Data.
CoRR, 2024

Conditioning of Banach Space Valued Gaussian Random Variables: An Approximation Approach Based on Martingales.
CoRR, 2024

2023
Adaptive Clustering Using Kernel Density Estimators.
J. Mach. Learn. Res., 2023

A Framework and Benchmark for Deep Batch Active Learning for Regression.
J. Mach. Learn. Res., 2023

Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension.
Proceedings of the Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, 2023

2022
Intrinsic Dimension Adaptive Partitioning for Kernel Methods.
SIAM J. Math. Data Sci., June, 2022

Training Two-Layer ReLU Networks with Gradient Descent is Inconsistent.
J. Mach. Learn. Res., 2022

Improved Classification Rates for Localized SVMs.
J. Mach. Learn. Res., 2022

Physics-Informed Gaussian Process Regression Generalizes Linear PDE Solvers.
CoRR, 2022

Utilizing Expert Features for Contrastive Learning of Time-Series Representations.
Proceedings of the International Conference on Machine Learning, 2022

SOSP: Efficiently Capturing Global Correlations by Second-Order Structured Pruning.
Proceedings of the Tenth International Conference on Learning Representations, 2022

2021
A closer look at covering number bounds for Gaussian kernels.
J. Complex., 2021

Which Minimizer Does My Neural Network Converge To?
Proceedings of the Machine Learning and Knowledge Discovery in Databases. Research Track, 2021

2020
Sobolev Norm Learning Rates for Regularized Least-Squares Algorithms.
J. Mach. Learn. Res., 2020

Reproducing Kernel Hilbert Spaces Cannot Contain all Continuous Functions on a Compact Metric Space.
CoRR, 2020

2019
Learning rates for kernel-based expectile regression.
Mach. Learn., 2019

Best-scored Random Forest Classification.
CoRR, 2019

Global Minima of DNNs: The Plenty Pantry.
CoRR, 2019

A Sober Look at Neural Network Initializations.
CoRR, 2019

2018
Kernel Density Estimation for Dynamical Systems.
J. Mach. Learn. Res., 2018

Optimal Learning with Anisotropic Gaussian SVMs.
CoRR, 2018

2017
A short note on the comparison of interpolation widths, entropy numbers, and Kolmogorov widths.
J. Approx. Theory, 2017

An SVM-like approach for expectile regression.
Comput. Stat. Data Anal., 2017

liquidSVM: A Fast and Versatile SVM package.
CoRR, 2017

Spatial Decompositions for Large Scale SVMs.
Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, 2017

2016
Learning Theory Estimates with Observations from General Stationary Stochastic Processes.
Neural Comput., 2016

Optimal Learning Rates for Localized SVMs.
J. Mach. Learn. Res., 2016

Learning with Hierarchical Gaussian Kernels.
CoRR, 2016

2015
Towards an axiomatic approach to hierarchical clustering of measures.
J. Mach. Learn. Res., 2015

2014
Fast learning from α-mixing observations.
J. Multivar. Anal., 2014

Elicitation and Identification of Properties.
Proceedings of The 27th Conference on Learning Theory, 2014

2013
Some Remarks on the Statistical Analysis of SVMs and Related Methods.
Proceedings of the Empirical Inference - Festschrift in Honor of Vladimir N. Vapnik, 2013

2012
Consistency and Rates for Clustering with DBSCAN.
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012

2011
Training SVMs Without Offset.
J. Mach. Learn. Res., 2011

Adaptive Density Level Set Clustering.
Proceedings of the COLT 2011, 2011

Optimal learning rates for least squares SVMs using Gaussian kernels.
Proceedings of the Advances in Neural Information Processing Systems 24: 25th Annual Conference on Neural Information Processing Systems 2011, 2011

2010
Radial kernels and their reproducing kernel Hilbert spaces.
J. Complex., 2010

Universal Kernels on Non-Standard Input Spaces.
Proceedings of the Advances in Neural Information Processing Systems 23: 24th Annual Conference on Neural Information Processing Systems 2010, 2010

Using support vector machines for anomalous change detection.
Proceedings of the IEEE International Geoscience & Remote Sensing Symposium, 2010

2009
Learning from dependent observations.
J. Multivar. Anal., 2009

Oracle inequalities for support vector machines that are based on random entropy numbers.
J. Complex., 2009

Fast Learning from Non-i.i.d. Observations.
Proceedings of the Advances in Neural Information Processing Systems 22: 23rd Annual Conference on Neural Information Processing Systems 2009, 2009

Optimal Rates for Regularized Least Squares Regression.
Proceedings of the COLT 2009, 2009

2008
Sparsity of SVMs that use the epsilon-insensitive loss.
Proceedings of the Advances in Neural Information Processing Systems 21, 2008

Support Vector Machines.
Information science and statistics, Springer, ISBN: 978-0-387-77241-7, 2008

2007
Stability of Unstable Learning Algorithms.
Mach. Learn., 2007

Robust learning from bites for data mining.
Comput. Stat. Data Anal., 2007

How SVMs can estimate quantiles and the median.
Proceedings of the Advances in Neural Information Processing Systems 20, 2007

Gaps in Support Vector Optimization.
Proceedings of the Learning Theory, 20th Annual Conference on Learning Theory, 2007

2006
An Explicit Description of the Reproducing Kernel Hilbert Spaces of Gaussian RBF Kernels.
IEEE Trans. Inf. Theory, 2006

QP Algorithms with Guaranteed Accuracy and Run Time for Support Vector Machines.
J. Mach. Learn. Res., 2006

An Oracle Inequality for Clipped Regularized Risk Minimizers.
Proceedings of the Advances in Neural Information Processing Systems 19, 2006

Function Classes That Approximate the Bayes Risk.
Proceedings of the Learning Theory, 19th Annual Conference on Learning Theory, 2006

2005
Consistency of support vector machines and other regularized kernel classifiers.
IEEE Trans. Inf. Theory, 2005

A Classification Framework for Anomaly Detection.
J. Mach. Learn. Res., 2005

Fast Rates for Support Vector Machines.
Proceedings of the Learning Theory, 18th Annual Conference on Learning Theory, 2005

2004
On Robustness Properties of Convex Risk Minimization Methods for Pattern Recognition.
J. Mach. Learn. Res., 2004

Entropy of convex hulls - some Lorentz norm results.
J. Approx. Theory, 2004

Fast Rates to Bayes for Kernel Machines.
Proceedings of the Advances in Neural Information Processing Systems 17, 2004

Density Level Detection is Classification.
Proceedings of the Advances in Neural Information Processing Systems 17, 2004

2003
On the Optimal Parameter Choice for ν-Support Vector Machines.
IEEE Trans. Pattern Anal. Mach. Intell., 2003

Sparseness of Support Vector Machines.
J. Mach. Learn. Res., 2003

Sparseness of Support Vector Machines - Some Asymptotically Sharp Bounds.
Proceedings of the Advances in Neural Information Processing Systems 16, 2003

2002
Support Vector Machines are Universally Consistent.
J. Complex., 2002

2001
On the Influence of the Kernel on the Consistency of Support Vector Machines.
J. Mach. Learn. Res., 2001

