
How AI Tools Are Shaping the Future of Neuroscience

[Image: Digital brain connected to circuits on a microchip, symbolizing the integration of AI in neuroscience. Credit: iStock.]

Read time: 4 minutes

AI has become a focal point of scientific inquiry and innovation, finding applications in fields as diverse as medicine, engineering and environmental science.


In neuroscience, its potential is particularly intriguing.  


The brain is often described as one of the most complex systems in nature. Decoding its vast networks of neurons and understanding how they produce thoughts, emotions and behaviors requires interpreting immense datasets and conducting intricate experiments.


AI is increasingly being used as a critical tool in neuroscience, helping researchers tackle complex challenges in understanding brain function. Large language models (LLMs) can process vast amounts of data while identifying patterns across scientific literature. This enables researchers to generate new hypotheses and explore potential outcomes in ways that were previously unattainable.


Dr. Xiaoliang “Ken” Luo, a computational neuroscience and machine learning researcher at the Foresight Institute, has been at the forefront of this effort. In a recent study published in Nature Human Behaviour, Dr. Luo and his team demonstrated that LLMs could surpass human experts in predicting neuroscience research results. Their work introduced BrainBench, a benchmarking tool, and BrainGPT, a specialized LLM fine-tuned on neuroscience literature, which achieved 86% accuracy in predicting experimental outcomes. These advances highlight the potential of AI to accelerate scientific discovery and refine research methodologies.
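To make the benchmarking idea concrete, the sketch below shows one way a two-option benchmark of this kind can be scored: the model "chooses" whichever version of an abstract it finds less surprising (i.e., assigns lower perplexity). This is a minimal illustration rather than the study's code; the model name and abstract texts are placeholders.

    # Minimal sketch: score a causal language model on a two-alternative
    # benchmark item by comparing perplexities. Placeholders throughout;
    # this is not the study's actual code.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # placeholder; any causal LM works
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name).eval()

    def perplexity(text: str) -> float:
        """Exponentiated mean token-level cross-entropy under the model."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean negative log-likelihood
        return torch.exp(loss).item()

    original = "Abstract text reporting the actual experimental result..."
    altered = "The same abstract with the result changed to an alternative..."

    # The model "predicts" the version it finds less surprising.
    model_is_correct = perplexity(original) < perplexity(altered)
    print("Model picked the real result:", model_is_correct)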


In this Q&A, Dr. Luo discusses the broader implications of AI in neuroscience, ethical considerations and how tools like BrainGPT may shape the future of research in the field.

Rhianna-lily Smith (RLS):

What do you see as the most promising applications of AI in neuroscience?


Xiaoliang “Ken” Luo, PhD (XL):

I personally see two main applications of AI in neuroscience that could offer great potential benefits.


The first is a data-driven approach, where AI models like neural networks serve as powerful tools for building mechanistic understandings of the brain. These models can help explain the mechanisms behind neural activity related to various cognitive functions.


The second, which is more relevant to the central focus of our publication, is a more meta-level application. Given the power of LLMs to synthesize vast amounts of information, I believe there's huge potential in leveraging generative AI to help neuroscientists more efficiently digest the literature, identify trends and understudied problems in the field and even inspire future research directions.



RLS:

Neuroscience often inspires advances in AI, and vice versa. How do you think this interplay between studying the brain and building AI systems will shape the future of both fields?


XL:

The relationship between neuroscience and AI has evolved in fascinating ways. While neural networks were initially inspired by the brain's architecture, recent AI advances have largely been driven by engineering breakthroughs in computing power and data processing rather than biological insights. However, I believe we're entering an exciting new phase of convergence between these fields.


Current research comparing artificial and biological systems has revealed intriguing similarities in information processing and learning patterns. This bidirectional exchange offers unique opportunities: AI models can serve as testable mechanistic models of brain function, while neuroscience principles could help us develop more interpretable and robust AI systems.


That said, this bio-inspired approach is just one of many valuable paths forward in AI development. The key is finding the right balance between learning from biology and pursuing purely engineering-based solutions.



RLS:
Your study demonstrates that LLMs can outperform human experts in predicting study results. Does this suggest that AI might develop a form of 'scientific intuition,' and how might that differ from human intuition?

XL:

That's an interesting perspective. While “scientific intuition” is challenging to define precisely, my guess is that it stems from years of research experience and from synthesizing connections across the literature.


The fact that LLMs trained on scientific papers can outperform human experts at prediction tasks suggests they may develop knowledge synthesis capabilities that differ from human approaches. An interesting research direction would be investigating how these models integrate information across neuroscience subdomains, which could reveal underlying patterns in the field and inspire new scientific connections.



RLS:
How could tools like BrainGPT help address fundamental questions about brain function and cognition, especially in areas where direct experimentation is challenging?

XL:

I should clarify a few things about BrainGPT. As we show in the paper, you could further fine-tune pre-trained LLMs, at a relatively small cost, on neuroscience publications to build a model – which we call BrainGPT – that is better at predicting which study result is more likely.
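As a rough illustration of what fine-tuning "at a relatively small cost" can look like in practice, here is a minimal sketch using low-rank (LoRA) adapters via the Hugging Face peft library. The base model, hyperparameters and data handling are placeholders, not the settings used in the paper.

    # Minimal sketch: parameter-efficient fine-tuning of a causal LM with LoRA.
    # Base model and hyperparameters are illustrative placeholders.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "gpt2"  # placeholder base model
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    lora = LoraConfig(
        r=8,                        # rank of the low-rank adapter matrices
        lora_alpha=16,              # adapter scaling factor
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora)
    model.print_trainable_parameters()  # only a small fraction of weights train

    # Training would then run a standard next-token objective over a corpus
    # of neuroscience abstracts (dataset loading and the Trainer loop omitted).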


This success suggests the potential for LLMs to synthesize scientific literature, and we hope they might eventually help identify new connections and suggest novel theories about the brain and cognition. We are actively exploring how BrainGPT might serve as a stepping stone toward developing systems that could help scientists navigate unexplored theoretical possibilities in neuroscience.

I would say that direct experimental evidence remains irreplaceable and fundamental to scientific progress. Generative models like BrainGPT must be grounded in concrete experimental data.

However, these models could help scientists explore potential research outcomes more efficiently. In an ideal world, scientists would test every possible hypothesis. But with limited resources and time, testing all possibilities becomes impractical. We envision that systems built upon BrainGPT could help researchers explore alternative scenarios and outcomes without conducting every conceivable experiment. By suggesting which experiments might be most promising and predicting possible results, such systems could help optimize resource allocation in scientific research.



RLS:
Do you see parallels between how LLMs process information and how the brain does, or are these fundamentally different systems? 

XL:

Definitely. I think LLMs can provide inspiration for how human cognition works, but I would be cautious about interpreting the success of LLMs as direct evidence of human-like processing mechanisms.

There is a growing body of research on whether LLMs learn like humans, and opinions are mixed.

We have a recent paper out (a follow-up to the Nature Human Behaviour paper) showing that LLMs trained on forward and backward text perform equivalently on the BrainBench task. This is particularly telling, since no human language has evolved to run backward (e.g., apple → elppa). The main takeaway is that LLMs are more general pattern-learning machines than human brains: they are excellent at extracting predictive patterns from sufficiently structured input – even reversed text – but this doesn't mean they employ human-like information processing.
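For intuition, the preprocessing behind such a forward/backward comparison can be as simple as reversing each training string before training a second model on it, echoing the apple → elppa example. The snippet below is an illustrative sketch, not the follow-up paper's pipeline.

    # Minimal sketch: build a "backward" corpus by reversing each string,
    # as in the apple -> elppa example. Training code is omitted.
    def reverse_text(text: str) -> str:
        """Character-level reversal of a training string."""
        return text[::-1]

    corpus = ["The neuron fired.", "Dopamine modulates reward learning."]
    backward_corpus = [reverse_text(s) for s in corpus]
    print(backward_corpus[0])  # ".derif noruen ehT"

    # A causal LM trained on backward_corpus sees input with no
    # natural-language counterpart, yet can still extract its predictive
    # statistical structure.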



RLS:
Do you think there are any ethical concerns researchers should be aware of, particularly regarding over-reliance on AI predictions?

XL:

While LLMs have achieved remarkable capabilities, they remain tools to enhance scientific work rather than replace human judgment. These systems excel at processing vast bodies of literature and exploring possibilities, helping scientists work more efficiently.


However, scientists must maintain their critical thinking and decision-making autonomy, knowing when to accept AI predictions and when to challenge them. Given the inherent biases in AI training data, these tools should augment rather than override human expertise. 



RLS:
While your study focuses on neuroscience, you mention that the methodology could be applied universally across sciences. What fields do you think might benefit most from this approach, and why?

X“L:
This approach would be particularly valuable in complex, interconnected fields like biology, where discoveries often require synthesizing information across multiple domains. As we show in the current paper, LLMs excel at identifying patterns and connections across diverse bodies of knowledge, making them especially useful for researchers navigating interdisciplinary challenges.