Brain Inspired Learning for Neural Networks

Jayani Hewavitharana, Amida Anand, Peter Giese, Carolina Ierardi, Kathleen Steinhofel*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › peer-review

Abstract

Artificial neural networks (ANNs) have achieved remarkable success in various AI applications, yet their learning mechanisms remain fundamentally different from those of biological systems. While conventional ANNs rely on global weight updates via backpropagation, biological learning operates through more localised, energy-efficient synaptic modifications. Inspired by these principles, this study investigates two alternative learning rules: one modelled after the long-term potentiation (LTP) characteristic of young brains, and one after the multi-innervated spines (MIS) observed in ageing brains. We implement these learning mechanisms in a bipartite artificial neural network and analyse their impact on learning speed, network adaptation, and the specificity of output representations. Our results demonstrate that LTP-based learning facilitates rapid convergence, with 85% of input patterns achieving the learning objective in the minimum required number of iterations. In contrast, MIS-based learning is slower and incremental but enables weight redistribution, supporting greater flexibility. Despite these differences, both learning mechanisms lead to highly distinct representational spaces, with approximately 79% of output patterns being unique. Moreover, the representations learned by the two approaches overlap by only 21%, highlighting their fundamentally different learning trajectories and their potential benefits for ensemble learning and hybrid architectures.
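The abstract contrasts a fast, local LTP-style rule with a slower MIS-style rule whose distinguishing feature is weight redistribution. The following minimal NumPy sketch illustrates how such a contrast might be expressed in a bipartite network; it is not the paper's implementation, and the layer sizes, step sizes (eta), clipping bounds, and per-neuron weight budget are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_OUT = 16, 8  # layer sizes of the bipartite network (assumed)
W = rng.uniform(0.0, 0.1, size=(N_OUT, N_IN))  # input-to-output weights

def ltp_update(W, x, y, eta=0.5):
    """LTP-like rule: a large, one-shot potentiation of co-active synapses.

    x and y are binary activity vectors of the input and output layers.
    The large step size mimics the rapid convergence the abstract reports
    for LTP-based learning (a sketch, not the paper's exact rule).
    """
    W = W + eta * np.outer(y, x)   # Hebbian co-activity term
    return np.clip(W, 0.0, 1.0)   # keep weights in a bounded range

def mis_update(W, x, y, eta=0.05):
    """MIS-like rule: small incremental steps with weight redistribution.

    After each small Hebbian step, every output neuron's total synaptic
    weight is renormalised to a fixed budget, so strengthening one synapse
    weakens the others -- a toy stand-in for redistribution across
    multi-innervated spines.
    """
    W = W + eta * np.outer(y, x)
    budget = 1.0  # fixed per-neuron synaptic budget (an assumption)
    return budget * W / W.sum(axis=1, keepdims=True)

# Example: one update step of each rule on a random binary pattern.
x = (rng.random(N_IN) > 0.5).astype(float)    # input pattern
y = (W @ x > np.median(W @ x)).astype(float)  # thresholded output activity
W_ltp = ltp_update(W.copy(), x, y)
W_mis = mis_update(W.copy(), x, y)
```

Under this toy formulation, repeated mis_update calls strengthen co-active synapses only at the expense of their neighbours, which is one simple way to obtain the slower, redistribution-driven adaptation the abstract attributes to MIS-based learning.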
Original language: English
Title of host publication: Lecture Notes in Computer Science
Publisher: Springer
Publication status: Accepted/In press - 22 Mar 2025
