Robert J. Durrant

According to our database, Robert J. Durrant authored at least 18 papers between 2008 and 2018.

Collaborative distances:
  • Dijkstra number of five.
  • Erdős number of four.
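A collaborative distance such as an Erdős number is simply the length of the shortest chain of co-authorships linking two researchers, i.e. a shortest path in the co-authorship graph. As a minimal sketch, breadth-first search over a toy graph (the names and edges below are placeholders, not real co-author data) computes such a distance:

```python
from collections import deque

def collab_distance(graph, start, target):
    """Shortest co-authorship chain between two authors via BFS.

    graph maps each author to a list of their co-authors.
    Returns the path length, or None if no chain connects them.
    """
    if start == target:
        return 0
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        author, dist = queue.popleft()
        for coauthor in graph.get(author, ()):
            if coauthor == target:
                return dist + 1
            if coauthor not in seen:
                seen.add(coauthor)
                queue.append((coauthor, dist + 1))
    return None

# Hypothetical toy co-authorship graph:
graph = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B", "D"],
    "D": ["C"],
}
print(collab_distance(graph, "A", "D"))  # prints 3
```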


Foreword: special issue for the journal track of the 9th Asian Conference on Machine Learning (ACML 2017).
Mach. Learn., 2018

Maximum Gradient Dimensionality Reduction.
Proceedings of the 24th International Conference on Pattern Recognition, 2018

Foreword: special issue for the journal track of the 8th Asian conference on machine learning (ACML 2016).
Mach. Learn., 2017

Maximum Margin Principal Components.
CoRR, 2017

Toward Large-Scale Continuous EDA: A Random Matrix Theory Perspective.
Evol. Comput., 2016

How effective is Cauchy-EDA in high dimensions?
Proceedings of the IEEE Congress on Evolutionary Computation, 2016

Random projections as regularizers: learning a linear discriminant from fewer observations than dimensions.
Mach. Learn., 2015

Learning in high dimensions with projected linear discriminants.
PhD thesis, 2013

Sharp Generalization Error Bounds for Randomly-projected Classifiers.
Proceedings of the 30th International Conference on Machine Learning, 2013

Towards large scale continuous EDA: a random matrix theory perspective.
Proceedings of the Genetic and Evolutionary Computation Conference, 2013

Dimension-Adaptive Bounds on Compressive FLD Classification.
Proceedings of the Algorithmic Learning Theory - 24th International Conference, 2013

Random Projections as Regularizers: Learning a Linear Discriminant Ensemble from Fewer Observations than Dimensions.
Proceedings of the Asian Conference on Machine Learning, 2013

A tight bound on the performance of Fisher's linear discriminant in randomly projected data spaces.
Pattern Recognit. Lett., 2012

Error bounds for Kernel Fisher Linear Discriminant in Gaussian Hilbert space.
Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics, 2012

Compressed Fisher linear discriminant analysis: classification of randomly projected data.
Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2010

A Bound on the Performance of LDA in Randomly Projected Data Spaces.
Proceedings of the 20th International Conference on Pattern Recognition, 2010

When is 'nearest neighbour' meaningful: A converse theorem and implications.
J. Complex., 2009

Learning with Lq<1 vs L1-Norm Regularisation with Exponentially Many Irrelevant Features.
Proceedings of the Machine Learning and Knowledge Discovery in Databases, 2008