Replica symmetry breaking in supervised and unsupervised Hebbian networks

Linda Albanese*, Andrea Alessandrelli, Alessia Annibale, Adriano Barra

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Hebbian neural networks with multi-node interactions, often called Dense Associative Memories, have recently attracted considerable interest in the statistical mechanics community, as they have been shown to outperform their pairwise counterparts on a number of features, including resilience against adversarial attacks, pattern retrieval from extremely weak signals, and supra-linear storage capacities. However, their analysis has so far been carried out within a replica-symmetric theory. In this manuscript, we relax the assumption of replica symmetry and analyse these systems at one step of replica-symmetry breaking, focusing on two different prescriptions for the interactions that we refer to as supervised and unsupervised learning. We derive the phase diagram of the model using two different approaches, namely Parisi’s hierarchical ansatz for the relationship between different replicas within the replica approach, and the so-called telescope ansatz within Guerra’s interpolation method. Our results show that replica-symmetry breaking does not alter the threshold for learning and slightly increases the maximal storage capacity. Further, we derive analytically the instability line of the replica-symmetric theory, using a generalization of the De Almeida and Thouless approach.
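To make the object of study concrete, the following is a minimal sketch of a Dense Associative Memory with p-body Hebbian couplings and zero-temperature retrieval dynamics. All parameters (N, K, p, the noise level) and the specific energy convention H = -N Σ_μ m_μ^p are illustrative assumptions for a toy example; they do not reproduce the supervised/unsupervised prescriptions analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, p = 30, 3, 4  # neurons, stored patterns, interaction order (all illustrative)

# Random binary patterns (hypothetical example data, entries in {-1, +1})
xi = rng.choice([-1, 1], size=(K, N))

def energy(sigma):
    # Dense Hebbian energy under one common convention: H = -N * sum_mu m_mu^p,
    # where m_mu = (xi^mu . sigma) / N is the Mattis overlap with pattern mu.
    m = xi @ sigma / N
    return -N * np.sum(m ** p)

def retrieve(sigma, sweeps=20):
    # Zero-temperature sequential dynamics: flip a spin only if it lowers the energy.
    sigma = sigma.copy()
    for _ in range(sweeps):
        for i in rng.permutation(N):
            trial = sigma.copy()
            trial[i] *= -1
            if energy(trial) < energy(sigma):
                sigma = trial
    return sigma

# Start from a noisy version of pattern 0 (~15% of spins flipped) and relax.
noisy = xi[0] * rng.choice([1, -1], size=N, p=[0.85, 0.15])
fixed = retrieve(noisy)
print(np.dot(fixed, xi[0]) / N)  # Mattis overlap with pattern 0; close to 1 on successful retrieval
```

For p = 2 this reduces to the standard pairwise Hopfield model; larger p is what gives dense networks their enlarged basins of attraction and supra-linear storage, at the cost of an energy that couples p spins at a time.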

Original language: English
Article number: 165003
Journal: Journal of Physics A: Mathematical and Theoretical
Issue number: 16
Publication status: Published - 19 Apr 2024


Keywords:
  • dense associative memory
  • Guerra’s interpolation
  • replica symmetry breaking
  • replica trick
  • supervised learning
  • unsupervised learning

