Classification of Superstatistical Features in High Dimensions

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

We characterise the learning of a mixture of two clouds of data points with generic centroids via empirical risk minimisation in the high-dimensional regime, under the assumptions of generic convex loss and convex regularisation. Each cloud of data points is obtained by sampling from a possibly uncountable superposition of Gaussian distributions, whose variance has a generic probability density ϱ. Our analysis therefore covers a large family of data distributions, including power-law-tailed distributions with no covariance. We study the generalisation performance of the obtained estimator, analyse the role of regularisation, and examine the dependence of the separability transition on the scale parameters of the distribution.
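The data model described above (each point drawn from a Gaussian whose variance is itself drawn from a density ϱ) can be sketched in a few lines. The sketch below is illustrative only: the specific choice of an inverse-gamma ϱ (which yields Student-t-like, power-law-tailed marginals), the centroid, and all numerical parameters are assumptions for demonstration, not the paper's setup.

```python
import numpy as np

def sample_superstatistical_cloud(n, d, centroid, rho_sampler, rng):
    """Draw n points in dimension d around `centroid`: for each point,
    first draw a variance from the density rho (via `rho_sampler`),
    then sample isotropic Gaussian noise with that variance."""
    variances = rho_sampler(n, rng)  # one variance per point, ~ rho
    noise = rng.standard_normal((n, d)) * np.sqrt(variances)[:, None]
    return centroid[None, :] + noise

rng = np.random.default_rng(0)
d = 500
mu = np.ones(d) / np.sqrt(d)  # hypothetical generic centroid

# Illustrative rho: inverse-gamma variance mixture, giving
# power-law-tailed marginals (an assumption, not the paper's choice)
def inv_gamma(n, rng):
    return 1.0 / rng.gamma(shape=1.5, scale=1.0, size=n)

X_plus = sample_superstatistical_cloud(1000, d, +mu, inv_gamma, rng)
X_minus = sample_superstatistical_cloud(1000, d, -mu, inv_gamma, rng)
```

Binary classification is then performed on the labelled union of the two clouds; the heavy-tailed variance mixture is what distinguishes this setting from the standard Gaussian mixture.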
Original language: English
Title of host publication: 2023 Conference on Neural Information Processing Systems
Publication status: Accepted/In press - 21 Sept 2023
