King's College London

Research portal

A memory-based method to select the number of relevant components in Principal Component Analysis

Research output: Contribution to journal › Article › peer-review

Standard

A memory-based method to select the number of relevant components in Principal Component Analysis. / Verma, Anshul; Vivo, Pierpaolo; Di Matteo, Tiziana.

In: Journal of Statistical Mechanics (JSTAT), 2019.

Research output: Contribution to journal › Article › peer-review

Harvard

Verma, A, Vivo, P & Di Matteo, T 2019, 'A memory-based method to select the number of relevant components in Principal Component Analysis', Journal of Statistical Mechanics (JSTAT). https://doi.org/10.1088/1742-5468/ab3bc4

APA

Verma, A., Vivo, P., & Di Matteo, T. (2019). A memory-based method to select the number of relevant components in Principal Component Analysis. Journal of Statistical Mechanics (JSTAT). https://doi.org/10.1088/1742-5468/ab3bc4

Vancouver

Verma A, Vivo P, Di Matteo T. A memory-based method to select the number of relevant components in Principal Component Analysis. Journal of Statistical Mechanics (JSTAT). 2019. https://doi.org/10.1088/1742-5468/ab3bc4

Author

Verma, Anshul; Vivo, Pierpaolo; Di Matteo, Tiziana. / A memory-based method to select the number of relevant components in Principal Component Analysis. In: Journal of Statistical Mechanics (JSTAT). 2019.

BibTeX

@article{df15f5a1eb13411cbcd1dc2a5d777daa,
title = "A memory-based method to select the number of relevant components in Principal Component Analysis",
abstract = "We propose a new data-driven method to select the optimal number of relevant components in Principal Component Analysis (PCA). This new method applies to correlation matrices whose time autocorrelation function decays more slowly than an exponential, giving rise to long memory effects. In comparison with other available methods present in the literature, our procedure does not rely on subjective evaluations and is computationally inexpensive. The underlying basic idea is to use a suitable factor model to analyse the residual memory after sequentially removing more and more components, and stopping the process when the maximum amount of memory has been accounted for by the retained components. We validate our methodology on both synthetic and real financial data, and find in all cases a clear and computationally superior answer entirely compatible with available heuristic criteria, such as cumulative variance and cross-validation.",
author = "Anshul Verma and Pierpaolo Vivo and {Di Matteo}, Tiziana",
year = "2019",
doi = "10.1088/1742-5468/ab3bc4",
language = "English",
journal = "Journal of Statistical Mechanics (JSTAT)",

}
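
The abstract above describes the method only at a high level: sequentially remove principal components, use a factor model to measure the memory left in the residuals, and stop once the retained components account for the bulk of the long-memory structure. The Python sketch below is a loose illustration of that idea under simplifying assumptions, not the authors' actual procedure: it replaces the paper's factor-model analysis with a crude memory proxy (summed absolute autocorrelations of each residual series) and removes components by direct projection; the names autocorr_memory and residual_memory_profile are hypothetical.

import numpy as np

def autocorr_memory(x, max_lag=50):
    # Crude long-memory proxy: sum of absolute sample autocorrelations up to max_lag.
    x = x - x.mean()
    var = np.dot(x, x)
    return sum(abs(np.dot(x[:-lag], x[lag:]) / var) for lag in range(1, max_lag + 1))

def residual_memory_profile(X, max_lag=50):
    # X: (T, N) array of N time series of length T.
    # Returns the average residual memory after projecting out the top-k
    # principal components of the correlation matrix, for k = 0 .. N-1.
    T, N = X.shape
    Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardise each series
    C = np.corrcoef(Z, rowvar=False)                # N x N correlation matrix
    eigval, eigvec = np.linalg.eigh(C)
    eigvec = eigvec[:, np.argsort(eigval)[::-1]]    # sort by decreasing eigenvalue
    profile = []
    for k in range(N):
        P = eigvec[:, :k]                           # top-k components
        resid = Z - Z @ P @ P.T                     # remove their contribution
        profile.append(np.mean([autocorr_memory(resid[:, i], max_lag)
                                for i in range(N)]))
    return np.array(profile)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.cumsum(rng.standard_normal((1000, 10)), axis=0)  # toy persistent series
    print(np.round(residual_memory_profile(X), 2))

In the spirit of the stopping rule sketched in the abstract, one would retain the smallest number of components beyond which this residual-memory profile stops decreasing appreciably.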

RIS (suitable for import to EndNote)

TY - JOUR

T1 - A memory-based method to select the number of relevant components in Principal Component Analysis

AU - Verma, Anshul

AU - Vivo, Pierpaolo

AU - Di Matteo, Tiziana

PY - 2019

Y1 - 2019

N2 - We propose a new data-driven method to select the optimal number of relevant components in Principal Component Analysis (PCA). This new method applies to correlation matrices whose time autocorrelation function decays more slowly than an exponential, giving rise to long memory effects. In comparison with other available methods present in the literature, our procedure does not rely on subjective evaluations and is computationally inexpensive. The underlying basic idea is to use a suitable factor model to analyse the residual memory after sequentially removing more and more components, and stopping the process when the maximum amount of memory has been accounted for by the retained components. We validate our methodology on both synthetic and real financial data, and find in all cases a clear and computationally superior answer entirely compatible with available heuristic criteria, such as cumulative variance and cross-validation.

AB - We propose a new data-driven method to select the optimal number of relevant components in Principal Component Analysis (PCA). This new method applies to correlation matrices whose time autocorrelation function decays more slowly than an exponential, giving rise to long memory effects. In comparison with other available methods present in the literature, our procedure does not rely on subjective evaluations and is computationally inexpensive. The underlying basic idea is to use a suitable factor model to analyse the residual memory after sequentially removing more and more components, and stopping the process when the maximum amount of memory has been accounted for by the retained components. We validate our methodology on both synthetic and real financial data, and find in all cases a clear and computationally superior answer entirely compatible with available heuristic criteria, such as cumulative variance and cross-validation.

U2 - 10.1088/1742-5468/ab3bc4

DO - 10.1088/1742-5468/ab3bc4

M3 - Article

JO - Journal of Statistical Mechanics (JSTAT)

JF - Journal of Statistical Mechanics (JSTAT)

ER -

