TY - UNPB
T1 - Neuro-BOTs: Biologically-Informed Transformers for Brain Imaging Analysis
AU - Turkheimer, Federico
AU - Martins, Daniel
AU - Fagerholm, Erik
AU - De Alteriis, Giuseppe
AU - Facca, Massimiliano
AU - Moretto, Manuela
AU - Batzu, Lucia
AU - Rota, Silvia
AU - Brazdil, Milan
AU - Expert, Paul
AU  - Veronese, Mattia
PY - 2025/6/10
Y1 - 2025/6/10
N2  - Transformer models have revolutionized natural language processing by enabling flexible, parallel processing of complex input–output relationships. Here, we adapt this architecture to brain imaging through a biologically informed framework called Neuro-BOTs. Unlike traditional Transformers that learn attention weights purely from data, Neuro-BOTs incorporate prior neurobiological knowledge at each stage of the encoder: molecular maps (e.g., neurotransmitters), cellular distributions (e.g., mitochondrial density), and large-scale structural connectivity. These priors act as spatial filters—analogous to attention weights—that guide the model’s interpretation of brain features. We apply this approach to a binary classification task using resting-state fMRI data from Parkinson’s disease patients and healthy controls. Among several biologically defined attention layers, the noradrenergic map significantly improved classification accuracy from 71.3% to 89.7%. While based on a limited sample, this approach demonstrates that embedding multiscale biological priors into Transformer-based architectures can improve both predictive performance and neurobiological interpretability. More broadly, we propose that such models open a pathway toward viewing brain inference as a form of translation, with applications across clinical, preclinical, and multimodal domains.
M3 - Preprint
BT - Neuro-BOTs: Biologically-Informed Transformers for Brain Imaging Analysis
ER -