'TRIBE v2': Meta's new AI model can predict brain activity
TRIBE v2 could help treat neurological disorders


Mar 27, 2026
10:31 am

What's the story

Meta has unveiled an innovative artificial intelligence (AI) model, TRIBE v2 (Trimodal Brain Encoder), that predicts neural responses to sight, sound, and language. The foundation model uses pre-trained audio, video, and text embeddings to predict brain activity. The tech giant hopes this groundbreaking development will help create digital twins for neural activity and accelerate breakthroughs in treating neurological disorders.

Model features

It is trained on data from over 700 volunteers

TRIBE v2 feeds its pre-trained audio, video, and text embeddings through a transformer, producing a shared representation across all stimuli, tasks, and individuals. Meta trained the system on brain imaging data from over 700 volunteers, a major improvement over earlier versions that used only a handful of subjects. The participants were exposed to various media such as podcasts, movies, images, and text while their brain activity was recorded using functional magnetic resonance imaging (fMRI).
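The brain-encoding setup described above can be illustrated with a toy sketch. This is not Meta's actual model: the per-modality embeddings are random stand-ins for frozen pre-trained encoders, the fusion step is a simple concatenation plus linear head instead of a transformer, and the "brain responses" are synthetic. It only shows the general recipe: embed the stimulus in each modality, fuse, and fit a map to recorded voxel activity.

```python
import numpy as np

rng = np.random.default_rng(0)

T, EMB, N_VOXELS = 20, 4, 8   # timesteps, per-modality embedding dim, toy voxel count

# Stand-ins for frozen pre-trained encoders: one embedding per fMRI timestep.
audio = rng.standard_normal((T, EMB))
video = rng.standard_normal((T, EMB))
text = rng.standard_normal((T, EMB))

# Fuse the three modalities by concatenation (TRIBE uses a transformer;
# a linear map over concatenated features is the simplest stand-in).
X = np.concatenate([audio, video, text], axis=1)        # (T, 3*EMB)

# Synthetic "recorded" brain responses generated from a hidden linear map,
# so the toy problem is actually learnable.
W_true = rng.standard_normal((3 * EMB, N_VOXELS))
Y = X @ W_true + 0.1 * rng.standard_normal((T, N_VOXELS))

# Fit the encoding head by least squares.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_pred = X @ W_hat                                       # predicted fMRI (T, N_VOXELS)

# Standard encoding-model metric: per-voxel Pearson correlation.
def voxel_corr(a, b):
    a = (a - a.mean(0)) / a.std(0)
    b = (b - b.mean(0)) / b.std(0)
    return (a * b).mean(0)

print(voxel_corr(Y_pred, Y).mean())   # near 1 on this lightly-noised toy
```

Real encoding models follow the same shape, just with pre-trained encoders in place of the random embeddings and roughly 100,000 voxels per scan instead of eight.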

Model performance

Model offers 70-fold increase in resolution

TRIBE v2 learns patterns from fMRI data and predicts what a brain scan would look like without actually running the scan. Meta claims this new model offers a 70-fold increase in resolution over similar systems, along with significant improvements in speed and accuracy. This enables "zero-shot prediction," or the ability to predict brain responses for new individuals, languages, and tasks without retraining the model.
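"Zero-shot prediction" here just means reusing the already-fitted encoder on new inputs without retraining. Continuing the toy setup above (random weights standing in for a trained model, dimensions chosen arbitrarily), the sketch below applies frozen weights to features of a brand-new stimulus:

```python
import numpy as np

rng = np.random.default_rng(1)
D, V = 12, 8   # fused feature dim and toy voxel count (assumptions, not real scales)

# Pretend W was learned on training subjects and stimuli (here: random weights).
W = rng.standard_normal((D, V))

# "Zero-shot": embed a new stimulus with the same frozen encoders and reuse W
# unchanged; no retraining on the new individual, language, or task.
new_stimulus_features = rng.standard_normal((5, D))   # 5 new timesteps
predicted_fmri = new_stimulus_features @ W            # (5, V)
print(predicted_fmri.shape)   # (5, 8)
```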
