Neuro-BOTs: Biologically-Informed Transformers for Brain Imaging Analysis

Federico Turkheimer, Daniel Martins, Erik Fagerholm, Giuseppe De Alteriis, Massimiliano Facca, Manuela Moretto, Lucia Batzu, Silvia Rota, Milan Brazdil, Paul Expert, Mattia Veronesse

Research output: Working paper/Preprint

Abstract

Transformer models have revolutionized natural language processing by enabling flexible, parallel processing of complex input–output relationships. Here, we adapt this architecture to brain imaging through a biologically informed framework called Neuro-BOTs. Unlike traditional Transformers that learn attention weights purely from data, Neuro-BOTs incorporate prior neurobiological knowledge at each stage of the encoder: molecular maps (e.g., neurotransmitters), cellular distributions (e.g., mitochondrial density), and large-scale structural connectivity. These priors act as spatial filters—analogous to attention weights—that guide the model’s interpretation of brain features. We apply this approach to a binary classification task using resting-state fMRI data from Parkinson’s disease patients and healthy controls. Among several biologically defined attention layers, the noradrenergic map significantly improved classification accuracy from 71.3% to 89.7%. While based on a limited sample, this approach demonstrates that embedding multiscale biological priors into Transformer-based architectures can improve both predictive performance and neurobiological interpretability. More broadly, we propose that such models open a pathway toward viewing brain inference as a form of translation, with applications across clinical, preclinical, and multimodal domains.
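The core idea of a biological prior acting as a spatial filter on attention can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a toy parcellation of five brain regions, random region features standing in for resting-state fMRI summaries, and an invented `prior` vector standing in for a normalized noradrenergic density map. The prior is injected as an additive log-bias on the attention scores, so high-prior regions receive more attention mass while rows still normalize to 1.

```python
import numpy as np

rng = np.random.default_rng(0)

n_regions = 5   # toy brain parcels
d = 8           # feature dimension per parcel

# Toy region features (stand-in for rs-fMRI summaries per parcel)
X = rng.normal(size=(n_regions, d))

# Hypothetical biological prior per region (illustrative values only,
# not real receptor-density data)
prior = np.array([0.9, 0.2, 0.6, 0.1, 0.8])

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Standard scaled dot-product attention scores between regions
scores = (X @ X.T) / np.sqrt(d)

# Biologically informed attention: adding log(prior) to the scores is
# equivalent to multiplying the unnormalized attention by the prior,
# so the map acts as a fixed spatial filter on where attention can go.
attn = softmax(scores + np.log(prior)[None, :], axis=-1)

print(attn.shape)          # (5, 5)
print(attn.sum(axis=1))    # each row sums to 1
```

In a full model this filtered attention matrix would sit inside an encoder layer, with one such layer per biological map (molecular, cellular, connectomic), and the classification head trained on the resulting region embeddings.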
Original language: English
Publication status: Published - 10 Jun 2025
