The Expanding Role of AI in Science
eBook
Published: January 7, 2026
From accelerating drug development to enhancing climate solutions, AI is reshaping how research is conducted and applied.
Yet, for all its promise, AI’s integration into science comes with challenges such as scaling models, maintaining data integrity, and ensuring unbiased insights. The need to balance innovation with responsible deployment is more pressing than ever.
With real-world examples and expert insights, this eBook explores the practical, ethical and technical dimensions of AI in modern research.
Download this eBook to discover:
- How AI drives innovation in disease prediction, drug discovery, and materials science
- The tools, techniques, and research models shaping AI’s impact in labs and clinics
- Key ethical and implementation challenges faced by today’s research leaders
CONTENTS
Could AI Help Predict the Next Pandemic?
How AI Is Transforming Cancer Prevention, Diagnosis and Treatment
How Is AI Shaping Proteomics and Multiomics?
How Will AI Change the Food and Drink Industry?
How Is AI Accelerating the Discovery of New Materials?
How AI Tools Are Shaping the Future of Neuroscience
FOREWORD
Artificial intelligence (AI) is steadily becoming an integral part of scientific research and applied
innovation. Across a wide range of disciplines, AI is being adopted as a tool to support data
analysis, streamline complex processes and improve decision-making.
This eBook brings together a series of articles that examine how AI is contributing to progress in
varied fields such as cancer research, neuroscience, materials science, proteomics and the food
and drink industry. The examples presented here illustrate how AI is being used to improve early
disease detection, inform treatment planning, identify new materials, better understand biological
systems and much more.
This collection highlights practical applications, emerging challenges and areas of ongoing
research, encouraging consideration of where and how AI can be most effectively used in science.
The Technology Networks editorial team
Could AI Help Predict
the Next Pandemic?
Artificial intelligence can monitor disease outbreaks and could help
prepare us for future pandemics
Blake Forman
The COVID-19 pandemic highlighted the speed
at which infectious diseases can spread — and the
importance of an equally agile and robust array of tools
to predict, monitor and control their proliferation. New
and long-standing artificial intelligence (AI) tools were
deployed during the pandemic to help fill this role.
Lessons learned from this period have shown that AI
can be successfully utilized in early-warning systems
for infection, outbreak detection, epidemiological
forecasting and resource allocation. With new
pathogens of concern and new strains of old viruses a
constant threat, building on these AI-powered tools and
incorporating them into public health is a key priority.
This article outlines examples of where AI has
been utilized to predict disease outbreaks and how
AI models could help inform future strategies for
controlling the spread of infectious diseases to prevent
possible pandemics.
AI’s contribution to pandemic
preparedness
In August 2024, the World Health Organization
(WHO) updated its list of pathogens that could spark
the next pandemic, which grew to include more than
30 pathogens. The microorganisms were selected
based on available evidence showing them to be
highly transmissible and virulent, with limited access
to vaccines and treatments. While some pathogens
on the list may never cause an epidemic, the growing
number of pathogens of concern highlights the need
for new tools to help predict and control the spread of
infectious diseases.
Recognizing the utility of AI in preparing for future
pandemics, the US Centers for Disease Control
and Prevention (CDC) Center for Forecasting and
Outbreak Analytics launched Insight Net in 2023. The
US network hopes to transform the analytic capacities
for infectious disease outbreaks by combining machine
learning and AI with the best available technologies
and academic research. Similarly, the WHO Hub for
Pandemic and Epidemic Intelligence is working towards
implementing AI in surveillance programs.
A key lesson from the COVID-19 pandemic was that
effective preparedness relies on monitoring known
pathogens and anticipating viral mutations that
can evade host immune responses. To address this,
researchers at Harvard Medical School (HMS) and
the University of Oxford have developed an AI tool
named EVEscape.
To build the tool, the researchers took their existing
generative model EVE – which can predict mutations
in viral proteins that won’t interfere with the virus’s
function – and added biological and structural details
about the virus. Together, this data allows EVEscape
to predict the variants most likely to occur as a
virus evolves.
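Purely as an illustration of that combination, the sketch below scores candidate mutations by multiplying a sequence-model "fitness" term (does the mutation preserve viral function?) with structural and biological terms (is the site accessible to antibodies, and how different is the substitution?). The mutation names, scores and the product form are assumptions made for this example, not the published EVEscape model.

```python
# Toy ranking of candidate mutations in the spirit of the description above.
# All values are invented placeholders.

from dataclasses import dataclass

@dataclass
class CandidateMutation:
    name: str             # e.g., a spike protein substitution
    fitness: float        # sequence-model estimate that function is preserved (0-1)
    accessibility: float  # structural estimate of antibody accessibility (0-1)
    dissimilarity: float  # how chemically different the substitution is (0-1)

def escape_score(m: CandidateMutation) -> float:
    """A mutation is a likely escape variant only if it is tolerated by the
    virus, sits where antibodies bind and changes that site substantially."""
    return m.fitness * m.accessibility * m.dissimilarity

candidates = [
    CandidateMutation("mutation A", fitness=0.9, accessibility=0.2, dissimilarity=0.3),
    CandidateMutation("mutation B", fitness=0.8, accessibility=0.9, dissimilarity=0.7),
    CandidateMutation("mutation C", fitness=0.7, accessibility=0.5, dissimilarity=0.4),
]

for m in sorted(candidates, key=escape_score, reverse=True):
    print(f"{m.name}: escape score = {escape_score(m):.2f}")
```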
In a study published in the journal Nature, the
researchers demonstrated that EVEscape is as accurate
as high-throughput experimental scans at anticipating
variations for SARS-CoV-2 and is generalizable to other
viruses including influenza, HIV and understudied
viruses with pandemic potential.
The researchers continue to utilize EVEscape to predict
future variants of SARS-CoV-2 and publish a biweekly
variant report. They are now working on broadening this
work to include other pathogens with pandemic potential.
“We want to know if we can anticipate the variation in
viruses and forecast new variants — because if we can,
that’s going to be extremely important for designing
vaccines and therapies,” said Debora Marks, professor of
systems biology at the Blavatnik Institute at HMS.
Disease surveillance in disaster
contexts
In addition to predicting how diseases may evolve,
AI can also help track and contain epidemics. Early-warning systems for disease surveillance have greatly
benefited from incorporating AI algorithms that can
analyze text for signals of infectious disease events with
high accuracy and at unprecedented speeds.
A study, published in the journal Emerging Infectious
Diseases, utilized open-source data from EPIWATCH
– an AI early-warning system – to analyze the effects
of the Russia-Ukraine war on infectious disease
epidemiology.
Conflict situations can increase the risk of epidemics,
and disruptions to public health surveillance create
extra challenges in tracking them. In the study,
researchers demonstrated the value of using AI-powered open-source intelligence to gather information
about unfolding epidemics in a conflict zone where
formal surveillance was reduced.
The researchers analyzed patterns of infectious diseases
and syndromes before (November 1, 2021–February
23, 2022) and during (February 24–July 31, 2022)
the conflict. Case numbers for the most frequently
"We want to know if
we can anticipate the
variation in viruses and
forecast new variants
— because if we can,
that’s going to be
extremely important
for designing vaccines
and therapies."
TECHNOLOGYNETWORKS.COM
THE EXPANDING ROLE OF AI IN SCIENCE 6
reported diseases were compared with numbers from
formal sources.
The researchers found increases in overall infectious
disease reports. In addition, compared with formal
surveillance, the researchers were able to extract
more complete case data for the eight most reported
infectious diseases.
While the study has some limitations, such as the lack
of data from smaller regions in Ukraine, it demonstrates
how open-source health intelligence systems can be
valuable for making real-time public health decisions
during disasters.
Informing strategies against future
pandemics
AI-driven approaches complement human-curated ones,
providing new insights to help health professionals
make more informed decisions during outbreaks. A
team of engineers at the University of Houston recently
developed an AI tool to identify hotspots of infection
linked to air traffic. This model could help policymakers
decide on air traffic controls during pandemics. The
study was published in the journal Scientific Reports.
To explore how air traffic impacts the spread of disease,
the researchers developed a graph neural network
(GNN)-based framework called Dynamic Weighted
GraphSAGE. “The uniqueness of this work is that the
graph can accommodate dynamic weights, reflecting
aviation pattern changes over time. We also used
directed graphs to accurately capture the directionality
and asymmetry of flight traffic between regions,” lead
researcher and associate professor at the University
of Houston Dr. Hien Van Nguyen, told Technology
Networks. “These unique features make our GNN very
suitable for modeling the spatial and temporal changes
in air traffic, and from this, we can predict the spreading
of infectious disease cases via air travel.”
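As a rough illustration of that description, the sketch below performs a single message-passing step on a directed graph whose edge weights change between time steps, so that only incoming flight volume influences a region's embedding. The region features, edge weights and layer sizes are invented, and the actual Dynamic Weighted GraphSAGE framework is a trained deep model rather than this single NumPy step.

```python
# Illustrative message passing on a directed, weighted graph with time-varying
# edge weights. Not the authors' implementation; all numbers are invented.

import numpy as np

rng = np.random.default_rng(0)

n_regions, n_features = 4, 8
x = rng.normal(size=(n_regions, n_features))     # per-region features (e.g., case counts)

edges = [(0, 1), (1, 0), (2, 1), (3, 2)]         # directed src -> dst flight routes
weights_by_step = {                              # edge weights change over time ("dynamic")
    0: [1.0, 0.2, 0.5, 0.8],
    1: [0.3, 0.9, 0.4, 0.1],
}

W_self = rng.normal(size=(n_features, n_features)) * 0.1
W_neigh = rng.normal(size=(n_features, n_features)) * 0.1

def sage_step(h, edges, weights):
    """GraphSAGE-style update: weighted mean over *incoming* edges, combined
    with each node's own features, so direction and asymmetry are preserved."""
    agg = np.zeros_like(h)
    norm = np.zeros(len(h))
    for (src, dst), w in zip(edges, weights):
        agg[dst] += w * h[src]
        norm[dst] += w
    agg /= np.maximum(norm, 1e-8)[:, None]
    return np.tanh(h @ W_self + agg @ W_neigh)

h = x
for t in (0, 1):                                 # apply the step with each time slice's weights
    h = sage_step(h, edges, weights_by_step[t])
print(h.shape)                                   # (4, 8): updated region embeddings
```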
The researchers’ analysis found that air traffic
significantly drove COVID-19 infection during the
pandemic. By performing sensitivity analysis on the
model, the researchers could observe changes in the
spreading patterns. From this analysis, they found that
Western Europe, the Middle East and North America
were regions with the highest sensitivity to these changes
and therefore concluded they had a disproportionately
large impact on the spread of the virus.
“This highlights the importance of looking at air travel
patterns to potentially curb the spreading of airborne
diseases,” said Nguyen. “In the past, governments have
tried this, but I believe our model provides a more
systematic, data-driven way to decide how much air
traffic we want to cut and to predict how that would
likely impact the spreading pattern of disease."
“The model we’ve developed can be used as a
data-driven tool for policymakers to evaluate the
effectiveness of travel restrictions on the spread of
airborne diseases,” said Nguyen.
The next steps for this research will be to verify the
model on other types of infectious diseases with
different spreading rates and other properties that
might impact spreading predictions. “Various infectious
diseases can be applicable here because our framework
"The model we’ve
developed can be
used as a data-driven
tool for policymakers
to evaluate the
effectiveness of travel
restrictions on the
spread of airborne
diseases."
TECHNOLOGYNETWORKS.COM
THE EXPANDING ROLE OF AI IN SCIENCE 7
is not restricted. It has no assumptions related to
COVID-19, and we can apply this model to influenza
or any airborne disease influenced by human travel
migration patterns,” explained Nguyen. “In the future,
we could potentially use this framework for early
warning, not just predicting the spread of a disease but
to detect spikes and unusual patterns.”
Insights from the study were used to search through
air traffic restriction policies for controlling the
pandemic, identifying policies and strategies that
effectively reduced predicted global COVID-19 cases
while requiring smaller reductions in air traffic.
Nguyen concluded, “We can apply the tool for other
interventions such as where to increase health
infrastructure, and where to deploy resources to
anticipate the increased number of patients. So, in
addition to helping with decisions on air travel, we can
identify high-impact regions and quantify the potential
outcomes of strategies aimed at helping these regions.”
Future outlooks for AI in infectious
disease monitoring
AI has become an established technology in many areas
of medicine. Emerging technologies such as quantum
computing, biosensors, augmented intelligence and large
language models are all predicted to play an increasing
role in infectious disease surveillance in the future.
While AI continues to improve surveillance
infrastructures, limitations such as the prevalence of
databases that underrepresent select populations and
the potential for predictions that aren’t generalizable
still need to be overcome. In addition, data privacy is
a significant concern regarding using AI in infectious
disease surveillance. As models incorporate data
streams from sources such as wearable health
technology, connected health devices and smartphones
that may be linked to open social media, approaches to
preserve privacy will be a priority.
AI will likely continue to improve disease surveillance
infrastructure, but future pandemics remain a
possibility. Experiences with AI during the COVID-19
pandemic have shown its utility in this space; however,
it still cannot replace the collective intelligence required
to prevent emerging infectious diseases. Pandemic
preparedness continues to require the combined efforts
of collaborative surveillance networks.
MEET THE INTERVIEWEE
Dr. Hien Van Nguyen is an associate professor in the Department
of Electrical and Computer Engineering at the University of
Houston. His research focuses on the intersection of artificial
intelligence, computer vision and biomedical image analysis.
How AI Is Transforming Cancer Prevention, Diagnosis and Treatment
Isabel Ely, PhD
Artificial intelligence (AI) is rapidly transforming
cancer research, driving innovations from early
detection and diagnosis to treatment planning and
drug discovery. AI houses the ability to process vast
datasets at unprecedented speed and accuracy, allowing
enhancements to image interpretation, uncovering
novel biomarkers, predicting treatment responses and
supporting personalized medicine.
Recent collaborations between research institutes
and tech companies, alongside AI-powered diagnostic
approvals from regulatory bodies, reflect the field’s
momentum. As these technologies evolve, they hold the
promise of improving patient outcomes, streamlining
workflows and redefining how clinicians approach
cancer care.
The use of AI in cancer prevention
Prevention and early detection remain the most
effective strategies for reducing cancer-related mortality.
However, achieving widespread early detection is
challenging due to the multitude of cancer risk factors,
the variability in when and how early warning signs
present and difficulties in knowing when to initiate
screening. Access to cancer screening is also hindered
by health inequalities such as proximity to healthcare
providers, socioeconomic barriers and medical literacy.
The application of AI holds promise for overcoming
these obstacles, potentially transforming cancer
prevention and early detection. For example,
researchers developed an AI tool – encompassing data
from 6.2 million patients over 41 years – capable of
identifying individuals at the highest risk of developing
pancreatic cancer up to 3 years before diagnosis.
Another study focused on leveraging large language
models to monitor electronic health records with the
goal of better understanding the social determinants
of health – factors that can play a critical role in cancer
prevention, early detection and treatment outcomes.
By extracting and analyzing nuanced information
embedded in clinical notes and patient histories,
the large language models could help clinicians
proactively address risk factors that traditional models
might overlook.
The integration of AI tools that can pinpoint individuals
at heightened risk for certain cancers represents a
significant advancement in precision medicine. Such
tools enable clinicians to target diagnostic testing toward
high-risk populations, reducing unnecessary procedures
and associated anxiety for lower-risk individuals.
AI applications in cancer imaging
and analysis
Technological advances in medical imaging and
minimally invasive biomarkers hold promise in
addressing challenges across the spectrum of cancer
detection, treatment and monitoring. However, the
interpretation of the large volume of data that is
generated by these advancements presents a host of
new challenges.
Recent advances in AI methodologies have made great
strides in automatically quantifying radiographic patterns
in medical imaging data. For example, in 2021, the US
Food and Drug Administration authorized the marketing
of software designed to assist pathologists in identifying
areas suspicious for cancer, supplementing the review of
digitally scanned slide images from prostate biopsies.
AI is also enhancing the processing of medical images,
such as mammograms. A recent study demonstrated
that AI imaging algorithms not only improve breast
cancer detection from mammograms but also predict
the long-term risk of invasive breast cancer.
More recently, a study found that AI-assisted
interpretation of brain scans may help improve care
for children with gliomas – tumors that are typically
treatable but vary widely in their risk of recurrence.
Investigators trained deep learning algorithms to
analyze sequential post-treatment brain scans and flag
patients at risk of cancer recurrence.
Uniquely, the researchers used a technique called
temporal learning, which trains the model to synthesize
information from multiple brain scans taken over
several months after surgery. This approach differs from
traditional models, which typically draw conclusions
from single imaging snapshots.
The temporal learning model predicted the recurrence
of either low- or high-grade glioma within one year post-treatment with an accuracy of 75–89%, substantially
higher than the ~50% accuracy associated with
predictions based on single scans. While providing AI
models with more post-treatment images improved
prediction accuracy, the benefit plateaued after four to
six images.
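The schematic below, assuming a PyTorch setup, illustrates the temporal learning idea: a shared encoder embeds each post-treatment scan, and a recurrent head summarizes the sequence of embeddings into a single recurrence-risk score. The layer sizes, the flattened "scans" and the use of a GRU are illustrative assumptions, not the published architecture.

```python
# Sketch of a temporal model over sequential post-treatment scans.

import torch
import torch.nn as nn

class TemporalRecurrenceModel(nn.Module):
    def __init__(self, scan_dim=256, embed_dim=64):
        super().__init__()
        # Stand-in for an image encoder; each scan arrives as a flat feature vector here.
        self.encoder = nn.Sequential(nn.Linear(scan_dim, embed_dim), nn.ReLU())
        self.temporal = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.head = nn.Linear(embed_dim, 1)            # recurrence-risk logit

    def forward(self, scans):                          # scans: (batch, n_scans, scan_dim)
        b, t, d = scans.shape
        z = self.encoder(scans.reshape(b * t, d)).reshape(b, t, -1)
        _, last = self.temporal(z)                     # summary of the whole scan history
        return self.head(last[-1]).squeeze(-1)

model = TemporalRecurrenceModel()
scans = torch.randn(2, 4, 256)                         # 2 patients, 4 sequential scans each
print(torch.sigmoid(model(scans)))                     # predicted recurrence probabilities
```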
Although further validation of AI models is needed
before they can be applied clinically, researchers hope to
launch clinical trials to determine whether AI-informed
"The integration of
AI tools that can
pinpoint individuals
at heightened risk
for certain cancers
represents a significant
advancement in
precision medicine."
TECHNOLOGYNETWORKS.COM
THE EXPANDING ROLE OF AI IN SCIENCE 10
risk predictions can improve patient care – ultimately
the driving goal behind developing such tools.
AI in cancer treatment
development and scheduling
Alongside aiding cancer detection and diagnosis,
AI is becoming increasingly vital in developing new
cancer treatments.
For instance, researchers have used AI to identify
activation patterns and predict T-cell behavior to
improve immunotherapy outcomes. Further, AI is also
helping scientists uncover the biological mechanisms
that underlie drug responses, with studies employing
deep learning models to map shared drug response
pathways, providing predictive insights that could
inform future therapeutic strategies.
Recent advancements also highlight the potential of
deep reinforcement learning (DRL) frameworks in
oncology. DRL frameworks have successfully tackled
a variety of drug scheduling challenges, including
managing immune responses after transplant surgery
and controlling bacterial drug resistance. For example,
a recent study applied a DRL network to optimize the
treatment schedule in metastatic prostate cancer.
“These ‘deep’ methods use artificial neural networks
with many intricately connected layers that enable them
to learn highly complex relationships between system
variables,” Kit Gallagher, first author and PhD student
at the University of Oxford and Moffitt Cancer Center,
told Technology Networks.
Gallagher explained how DRL frameworks operate:
“At each time step, a deep learning agent receives
information about the system’s state (e.g., tumor size)
and selects an action (e.g., treat or not treat) from a set
of options. The agent learns its strategy through trial
and error, aiming to maximize a reward function that
incentivizes positive outcomes, such as tumor shrinkage
or cure, and penalizes negative events like excessive
drug toxicity. DRL is especially well suited for these
tasks because it can account for the long-term effects of
actions, even when the relationship between actions and
outcomes is not fully understood.”
The DRL is then paired with a virtual patient, powered
by a mathematical tumor model that simulates treatment
responses. “This approach offers two major advantages:
The tumor model can quickly generate the vast amount
of data needed to train a deep learning system and it
allows us to explore novel treatment strategies that have
not yet been tested in clinical settings,” Gallagher said.
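The toy loop below illustrates the workflow Gallagher describes: a "virtual patient" (here, a deliberately simple tumor-growth rule) is simulated repeatedly while an agent learns, by trial and error, when to treat so as to balance tumor burden against drug toxicity. Tabular Q-learning stands in for the deep reinforcement learning agent, and the growth rule and reward weights are invented, so this is a sketch of the idea rather than the study's implementation.

```python
# Toy "virtual patient" plus reinforcement-learning treatment scheduler.

import random

random.seed(0)

def virtual_patient(tumor, treat):
    """One simulated time step: the tumor shrinks under therapy, grows otherwise."""
    return min(1.0, tumor * (0.85 if treat else 1.10))

def reward(tumor, treat):
    return -tumor - (0.05 if treat else 0.0)           # penalize tumor burden and toxicity

n_bins, actions = 10, (0, 1)                           # discretized tumor size; action 1 = treat
Q = {(s, a): 0.0 for s in range(n_bins) for a in actions}
bucket = lambda tumor: min(n_bins - 1, int(tumor * n_bins))

for episode in range(2000):                            # trial-and-error training
    tumor = 0.5
    for step in range(30):
        s = bucket(tumor)
        a = random.choice(actions) if random.random() < 0.1 else max(actions, key=lambda x: Q[(s, x)])
        tumor = virtual_patient(tumor, a)
        r, s2 = reward(tumor, a), bucket(tumor)
        Q[(s, a)] += 0.1 * (r + 0.95 * max(Q[(s2, x)] for x in actions) - Q[(s, a)])

print("treat? by tumor-size bin:", [max(actions, key=lambda a: Q[(s, a)]) for s in range(n_bins)])
```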
By combining deep learning with mathematical
modeling, researchers can design more adaptable and
individualized cancer therapies. “The deep learning
model allows us to generate personalized treatment
protocols that account for variations in cancer dynamics
between individual patients,” said Gallagher. “It can
also be used to explore the best treatment scheduling
approaches in more complex treatment settings where
traditional analytic approaches are not possible, such as
when multiple drugs may be scheduled simultaneously
or the drug dose may be varied,” he added.
"Incorporating patientreported outcomes
that track symptoms
and functional status
in real time, beyond
the clinic walls, may
enhance the accuracy
and clinical utility of
prognostic AI models."
TECHNOLOGYNETWORKS.COM
THE EXPANDING ROLE OF AI IN SCIENCE 11
Challenges and future directions
of AI in cancer care
AI holds immense potential to revolutionize cancer care
by enabling earlier diagnoses, generating more accurate
risk assessments, guiding effective treatment strategies
and freeing up clinicians to focus on patient-centered
care. However, several challenges must be addressed to
fully realize this potential.
“The clinical applicability of machine learning
approaches, such as deep reinforcement learning, can
be limited by their ‘black-box’ nature – if we cannot
rationalize treatment protocols recommended by the
AI algorithm, then we cannot expect clinicians to place
trust in these recommendations when treating their
patients,” said Gallagher.
Physician hesitancy in adopting AI tools often stems
from this limited explainability, as well as concerns
around medical liability, the financial consequences
of errors and a general lack of familiarity with these
technologies. To ensure the safe and effective use of AI
in clinical settings, randomized clinical trials are needed
to validate its applications.
Another critical issue is bias in AI models. If the data
used to train AI systems are not sufficiently diverse or
representative of the broader population, these tools
risk perpetuating existing medical biases. As such, the
development and adoption of AI and machine learning
models must be grounded in accepted standards that
prioritize bias mitigation and reproducibility.
It is also important to recognize that AI systems cannot
fully replicate the nuance of human clinical decision-making. Factors such as patient demeanor, cognitive
state and subtle clinical cues – while vital to assessing
risk – are not always well-captured in datasets.
Incorporating patient-reported outcomes that track
symptoms and functional status in real time, beyond
the clinic walls, may enhance the accuracy and clinical
utility of prognostic AI models.
Looking ahead, researchers are expanding the
scope of precision medicine beyond drug selection
to optimize treatment strategies. “The increasing
potential of precision medicine has primarily focused
on personalizing the choice of drug for each patient;
however, the increasing integration of mathematical
models into clinical care will also unlock the potential
of personalized treatment schedules, allowing us to
achieve more effective treatments with existing drugs,”
Gallagher concluded.
MEET THE INTERVIEWEE
Kit Gallagher, MSc, is a mathematics PhD student at the University
of Oxford. His current work focuses on treatment-resistant
prostate and ovarian cancers but develops themes applicable to a
range of disease and treatment settings.
How Is AI Shaping Proteomics
and Multiomics?
Molly Coddington
Artificial intelligence (AI) has emerged as a powerful
toolset that could create new opportunities and help
overcome hurdles in proteomics and wider omics
disciplines. Bolstered by AI, these fields of research
could have a profound impact on science and society.
At the Children's Medical Research Institute, the
University of Sydney, Associate Professor Qing
Zhong’s research interests span big data analysis,
machine learning and computational biology.
His work involves mining and managing large-scale
proteomics and multiomics datasets. He aims to
advance cancer research and implement big data-driven, evidence-based computational tools to enable
predictive, preventive and personalized medicine,
among other projects.
Zhong recently joined Technology Networks for a
conversation on AI’s progress in proteomics and
multiomics, barriers to its widespread implementation
and his vision for a “continuous, high-resolution lens
on biology.”
DIA-NN, DeeProM and AlphaFold
“AI applications in proteomics have gained significant
traction recently,” Zhong said. “Particularly with the
emergence of data-independent acquisition neural
networks (DIA-NN) for streamlined DIA analysis,
DeeProM for predicting cancer cell vulnerabilities and
AlphaFold for protein structure prediction.”
Pioneered by the laboratory of Professor Ruedi
Aebersold, DIA is considered a breakthrough technique
in mass spectrometry (MS)-based proteomics. Unlike
data-dependent acquisition (DDA), DIA offers unbiased
analysis with larger proteome coverage and higher
reproducibility, making it a useful method for discovery
proteomics. Discovery research is incredibly important
for interrogating the underpinning mechanisms of
biological states, such as health and disease. There’s one
drawback, however; DIA generates large amounts of
data, which creates a bottleneck.
“DIA-NN uses deep neural networks to handle large
volumes of DIA data, simplifying peptide identification
and quantitation,” Zhong explained. DIA-NN is also free
to use, contributing to its growing popularity in high-throughput proteomics.
In 2022, researchers – including Zhong – published a
pan-cancer proteomic map of 949 human cell lines. The
team developed a deep learning-based computational
pipeline, named Deep Proteomic Marker, or DeeProM.
“DeeProM enabled the full integration of proteomic
data with drug responses and CRISPR-Cas9 gene
essentiality screens to build a comprehensive map of
protein-specific biomarkers of cancer vulnerabilities
that are essential for cancer cell survival and growth,”
Zhong said.
A significant challenge in the study of proteins is their
versatility, which is also why they’re so useful in biology.
A protein’s function is closely related to its structure.
For decades, scientists have worked to develop methods
capable of deciphering protein structure. The issue,
however, is that the number of different configurations a
protein could adopt is enormous.
Enter AlphaFold, an AI program developed by Google’s
DeepMind that is trained on vast amounts of data from
the Protein Data Bank to predict protein structure.
“AlphaFold has revolutionized structural proteomics
by accurately predicting protein folding, offering vital
clues about protein function and interaction networks,”
Zhong said. AlphaFold can also design de novo proteins
– a longstanding challenge in the field – for a wide
variety of applications, including the development of
novel therapeutics, diagnostics and imaging reagents.
An estimated 2 million researchers across 190
countries are using AlphaFold to inform their research
across several applications, from accelerating
drug discovery and identifying protein structural
alterations associated with diseases such as Alzheimer’s,
to generating plastic-eating enzymes. The model’s
significant impact on science and society earned Google
DeepMind’s Demis Hassabis and John M. Jumper one-half of the 2024 Nobel Prize in Chemistry.
“Together, these AI-driven approaches accelerate
discoveries in disease mechanisms and therapeutic
development, pushing proteomics beyond traditional
experimental limits,” Zhong said.
AI hurdles that are yet to be
surmounted
Though AI’s transformative potential in proteomics is
being realized to some degree, its integration still faces
significant challenges.
"Collaborative
data-sharing
frameworks, uniform
standardization efforts
and privacy-preserving
technologies are
urgently needed to
accelerate AI-driven
breakthroughs in
proteomics and wider
biomedical fields."
TECHNOLOGYNETWORKS.COM
THE EXPANDING ROLE OF AI IN SCIENCE 14
Data volume, quality and privacy
A common misconception about AI’s position in
proteomics and multiomics research, according to
Zhong, is that there’s already enough data to drive
AI research at the same speed as fields such as
natural language processing or computer vision. “In
reality, although biomedical experiments generate
vast quantities of raw data, only a fraction of these
datasets are well annotated, standardized and of high
quality,” he said.
“Unlike the billions of labeled texts or images available
for training large language or vision models, biomedical
data often remain scattered and behind institutional
firewalls, limiting opportunities for building equally
powerful AI systems,” Zhong continued.
Collaborative data-sharing frameworks, uniform
standardization efforts and privacy-preserving
technologies are urgently needed to accelerate
AI-driven breakthroughs in proteomics and wider
biomedical fields.
Privacy-preserving technologies will be integral to AI’s
widespread adoption in healthcare research, where
patient confidentiality is paramount. At the Human
Proteome Organization 2024 World Congress, Zhong
presented his recent pre-print research that seeks to
address this challenge.*
Zhong and colleagues developed a federated deep
learning (FDL) approach, called ProCanFDL. FDL is a
technique used to train AI models without sending raw
data to the model itself – instead, the model is brought
to the data.
“Our system enables AI to learn from individual cancer
proteomic data securely, behind local firewalls. In this
system, each local computer trains its own AI model
on private data, and only the updated local model
parameters are aggregated to create a single, more
robust global model,” Zhong explained.
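A minimal federated-averaging sketch of the scheme Zhong describes follows: raw data never leaves a site, each site refines the current global parameters on its own private cohort, and only those parameters are pooled into the next global model. ProCanFDL itself trains deep networks on DIA-MS proteomes; the logistic-regression model, the simulated cohorts and plain unweighted averaging below are simplifying assumptions.

```python
# Toy federated averaging: only model parameters travel, never the data.

import numpy as np

rng = np.random.default_rng(1)

def local_training(X, y, global_w, epochs=50, lr=0.1):
    """Each site refines the current global parameters on its private data
    (logistic regression via gradient descent) and returns only the weights."""
    w = global_w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted class probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient step on the log-loss
    return w

# Three simulated private cohorts held behind separate "firewalls".
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 10))                # 200 samples, 10 protein features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(float)
    sites.append((X, y))

global_w = np.zeros(10)
for round_ in range(5):                           # several federated rounds
    local_ws = [local_training(X, y, global_w) for X, y in sites]
    global_w = np.mean(local_ws, axis=0)          # aggregate parameters, never raw data

print("global model weights:", np.round(global_w, 2))
```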
Local models were trained on simulated sites that
contained data from a pan-cancer cohort and 29
cohorts that were held behind firewalls, representing
8 countries and 19,930 DIA-MS runs. “This global
AI model demonstrated a significant improvement in
accuracy for cancer subtyping tasks, highlighting its
potential to uncover valuable insights into tumors and
inform potential treatments – all while maintaining data
security and privacy,” Zhong said.
The researchers predict their approach could enable
the development of large-scale, privacy-compliant
proteomics AI models across institutions globally,
advancing digital health.
Funding for AI in omics
Headlines are frequently dominated by sizable
AI investment announcements. Though
reports suggest venture capital deal activity in AI for
healthcare has flourished over the last five years, Zhong
believes there is a funding disparity.
“While massive investments – sometimes running
into the billions – fuel the development of large
language models (LLMs) in the tech sector, the same
level of financial backing remains scarce for omics
research,” he said.
The impact? There are fewer opportunities to build “Large
Omics Models” to a scale and size that compares to
contemporary LLMs. “Limited funding slows the creation
of foundational datasets, impedes the development of
cutting-edge analytic tools and ultimately restricts the
field’s growth,” Zhong emphasized, adding that there is an
urgent need for greater philanthropic, governmental and
industrial investment in omics-focused AI initiatives.
An omics version of ImageNet
Lastly, Zhong highlighted the pressing need for data
standards and reproducibility in this line of research,
especially as AI models in proteomics and wider omics
studies become “increasingly data-hungry”, he said.
“Much like ImageNet transformed the field of computer
vision – and large, standardized corpora such as
Wikipedia dumps did for language models – omics
studies need a well-curated, widely accessible reference
dataset,” Zhong continued.
The omics version of ImageNet, or “omics ImageNet”, as
he described it, would help to unify metadata protocols,
file formats and quality checks across different
labs. Subsequently, this would enable reproducible,
transparent benchmarking and foster collaboration.
“Establishing such a foundation could dramatically
accelerate AI-driven discoveries, making it easier for
teams around the world to contribute to – and build
upon – the same high-quality datasets,” Zhong said.
A future without limits
In a future without barriers, Zhong believes that AI
could transform proteomics and multiomics into a
“continuous, high-resolution lens on biology”, one that
operates at a massive scale, which “might even dwarf the
data used to train LLMs, like ChatGPT,” he said.
“Much as LLMs have reshaped the way we interact with
technology, ‘Large Omics Models’ would seamlessly
integrate proteomic, genomic and other molecular
data, revealing complex cellular processes and disease
pathways in real-time,” Zhong said. “By predicting how
proteins and other biomolecules evolve, interact and
respond under diverse conditions, these models would
drive breakthroughs from new diagnostics to highly
tailored therapies.”
In this world, scientists could be freed from laborious
data management issues. They could pursue creative
research projects at an unprecedented pace. “Meanwhile,
the broader public would reap the benefits of earlier
disease detection, more precise interventions and a
deeper comprehension of health that shapes public
policy and healthcare worldwide,” Zhong said.
Zhong paints a compelling picture of a future where AI,
powered by unified data standards and transformative
“Large Omics Models,” revolutionizes proteomics and
omics research, delivering profound benefits to science,
medicine and society.
Given the rapid pace of current advancements, it may
not be too long before we see whether this vision can
become a reality.
*This article is based on research findings that are yet to be
peer-reviewed. Results are therefore regarded as preliminary
and should be interpreted as such. Find out about the role
of the peer review process in research here. For further
information, please contact the cited source.
MEET THE INTERVIEWEE
Dr. Qing Zhong is an associate professor at Children's Medical
Research Institute (CMRI) at The University of Sydney. His research
interests encompass big cancer data analysis, machine learning
and computational biology.
How Will AI Change the Food and Drink Industry?
Leo Bear-McGuinness
AI is now inescapable. What once was an acronym
reserved for science fiction has become a buzzword in
almost every industry. From aerospace to agriculture,
banking to biotechnology, seemingly every sector is
now scrambling to find ways to maximize their use of
machine learning.
And the food and beverage industry is no exception.
Mondelez (the manufacturer of Oreos and many other
products) says it has developed an AI tool to optimize
the flavors of new products. Unilever is relying on AI
analysis of weather data to help the company adjust ice
cream sales forecasts to cut waste. Nestlé has announced
it will create AI-powered “digital twins” of products like
Nespresso coffee machines for future marketing materials.
Many of these initiatives have come from food
companies themselves, which have mountains of data
on their own processes and projects.
What the companies may lack, however, is AI expertise.
This, says Dr. Nicholas Watson, a professor of AI in food
at the University of Leeds, is where academia can help
guide this emerging technology.
AI and food production: greater
connectivity required
“They’ve [companies] got these large datasets that they
can use, and it is your data, it's my data and it's personal
data,” Watson told Technology Networks.
Once partnered with a food company, Watson and his
team focus on combining low-cost sensors with machine
learning models to monitor and optimize production
processes and predict food properties.
“Some of the companies we work with make millions of
their products a day,” said Watson. “That’s millions of
samples for our training data set. So, we need to work
with industry partners. They've got the data; we've got
ideas and know-how.”
In food factories, a lot of these data can be captured with
low-cost sensor tools like weight scales and cameras.
“You can imagine it's almost trivial to put a camera in
a pizza factory and record every pizza,” said Watson,
“but a lot of machine learning uses a technique called
supervised machine learning, so you have lots of data,
and then you label that data. So, for the pizza example,
we would have good quality, bad quality or we could
have a quality score from 0–10.”
Even this set-up, however, still requires some mundane
human-led work and the time that comes with it. For
now, at least.
“Capturing a million images of a million pizzas is just a
case of putting a camera there and leaving it,” Watson
said. “But someone has to look at every image and
say, ‘Good quality, bad quality, 0, 3, 4, 8, 10…’ and that
labeling of data is often what takes the time. We’ve
used methods such as transfer learning, active learning
and semi-supervised learning, where we label some of
those edge cases which are quite hard to determine, to
overcome that data labeling burden.”
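The snippet below sketches the active-learning idea Watson mentions for cutting that labeling burden: train on a small seed of human-labeled product images, then ask the annotator to label only the items the model is least certain about. The features, labels and classifier are invented stand-ins (a production system would work on image embeddings from a camera pipeline), so treat this as an illustration of the loop rather than a recipe.

```python
# Toy active-learning loop: label the edge cases the model is least sure about.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

X = rng.normal(size=(1000, 16))                            # pretend image embeddings
true_quality = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)   # hidden good/bad label

labeled = list(range(20))                                  # small seed of human labels
clf = LogisticRegression(max_iter=1000)
for round_ in range(5):
    clf.fit(X[labeled], true_quality[labeled])
    proba = clf.predict_proba(X)[:, 1]
    uncertainty = np.abs(proba - 0.5)                      # near 0.5 = least certain
    # Ask the annotator to label the ten most uncertain, not-yet-labeled images.
    queries = [int(i) for i in np.argsort(uncertainty) if i not in labeled][:10]
    labeled.extend(queries)

rest = np.setdiff1d(np.arange(len(X)), labeled)
print(f"labeled {len(labeled)} of {len(X)} images; "
      f"held-out accuracy: {clf.score(X[rest], true_quality[rest]):.2f}")
```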
AI and food safety
Aside from product quality, many companies are also
interested in using AI to improve product safety. One
microbiology startup, Spore.Bio, is planning to use AI-powered pathogen detection technology to reduce food
testing time from days to minutes.
However, caution in this area should be exercised, says
Watson. Boosting food safety is a noble aim but one
that can lead food producers – and AI researchers – to
fruitlessly chase unobtainable goals.
“One of my PhD students has been working all week to
get his model accuracy from 99.5% to 99.55%,” Watson
told Technology Networks, “and I’m like, ‘What does that
actually mean in terms of the problem you’re solving?’”
“Because if we’re looking at, say, allergens in food, 99%
accuracy – which is generally good in most training
models – means 1 in 100 is wrong. And if you’re making
hundreds of thousands, if not millions, of products a
day, that's a lot of potential risk you're making. So, I
think it's about how do we manage that? Maybe we have
this AI system as an early warning indicator, but we still
continue with the very robust food safety protocols we
have taking samples and sending them off to the lab.”
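Watson's point about accuracy is easy to quantify: even a small error rate becomes a large absolute number of wrong calls at factory throughput. The throughput figures below are hypothetical.

```python
# Back-of-the-envelope: misclassified products per day at a given model accuracy.

for accuracy in (0.99, 0.995, 0.9999):
    for products_per_day in (100_000, 1_000_000):
        wrong = products_per_day * (1 - accuracy)
        print(f"{accuracy:.2%} accuracy at {products_per_day:,} products/day "
              f"-> ~{wrong:,.0f} misclassified per day")
```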
A waste of wasted data
Another promising use of AI is applying it to tackle the
food and beverage industry’s gigantic waste issue.
In 2019, the US Environmental Protection Agency
estimated that 66 million tons of wasted food was
generated in the industry’s retail and service sectors. An
additional 40 million tons was thought to be generated in
the manufacturing and processing of food and beverages.
To cut down this secondary figure, many food
manufacturers have recently incorporated AI into
their workflow processes in order to identify areas
where food waste could be prevented. Nestlé,
among other companies, recently announced they
had joined Innovate UK’s BridgeAI initiative, an AI
consortium that aims to redistribute up to 700 tons of
quality surplus food – the equivalent of up to 1.5 million
meals – by the end of its project.
For their own part in this frugal goal, Watson and his
team at the University of Leeds are collaborating with
Australia’s Commonwealth Scientific and Industrial
Research Organisation on an AI project to turn food
waste into edible, sustainable proteins.
“Typically, if you want to reuse food waste, you can
turn it into a liquid media, then you could ferment that
with yeast,” Watson explained to Technology Networks.
“Then the yeast will procreate and you'll get a microbial
protein, which you can then process into food.”
“But there are lots and lots of different parameters.
How do you treat the waste initially? How do you
understand its varying composition? And then you go to
fermentation and you've got things like temperature, the
type of organisms you use, pH, how long you ferment it
for. This is just a big problem,” he continued.
“With AI, we can actually reduce the time and cost from
maybe 25 experiments to about 5 experiments. We use
some of our own data, but then we go to the literature and
use tools to extract all the information and data in that
literature to augment what we're doing,” Watson added.
From food waste to food taste
Most foods, of course, have been cooked and eaten for
centuries. Newer edible creations, however – such as
processed plant-based meat alternatives – haven’t had
the benefit of countless refinements over generations.
Fortunately, says Watson, machine learning can
condense these centuries of iterations into mere
minutes, if given the right prompts.
“One of the challenges you have with plant-based
proteins is the bitterness and the taste,” Watson said.
“They’re just not that enjoyable to eat. Some people
might like that, some maybe don't.”
“One of our research fellows has started a very
prestigious scholarship where he will look at how you
can select and process different proteins to reduce that
bitterness level. He will collect some data, try different
types of plant-based proteins and different processing
techniques, and measure their bitterness; then we can
build a model that will say, ‘OK, for this protein and
this processing route you'll get to this bitterness level.’
But what we actually want to do is try and run some
optimization on that, because we don't want to predict
bitterness; we want to reduce it. That's generally what
most people want.”
This battle against bitterness can come into conflict with
cultural tastes, however.
“I just came back from a few weeks in China, visiting a
university for a conference, and we had the pleasure of
eating something called stinky tofu. It's one of the most
interesting things I've ever eaten. I straight away spoke
to some of my Chinese friends and said, ‘How can you
possibly eat that?’ And they went, ‘What are you talking
about? It's lovely. That blue cheese stuff you guys eat,
what is that all about? That just tastes like sweaty socks.’
It’s just that cultural thing.”
Whether such regional palates can be programmed into
AI algorithms remains to be seen. In the meantime, it
seems a little human subjectivity can still go a long way
in food research.
MEET THE INTERVIEWEE
Dr. Nicholas Watson is a professor of artificial intelligence in food
at the University of Leeds. His research focuses on developing
digital solutions to address environmental sustainability, food
safety and health challenges in food production systems.
How Is AI Accelerating the
Discovery of New Materials?
Alexander Beadle
Batteries, solar panels, computer chips, carbon capture
systems. All these innovative technologies, and others
like them, are the result of serious breakthroughs
in materials science – driven by the discovery and
synthesis of novel inorganic materials.
For decades, the discovery of new inorganic materials
with more favorable properties was a daunting task
of trial and error, with scientists forced to conduct
hundreds upon hundreds of hours of painstaking
experimentation to identify and synthesize just a
handful of potential new materials.
Computational chemistry was a revolution in the world
of materials science when it was first introduced. With
the increased availability of supercomputers and the
combined efforts of physicists, chemists and computer
scientists, researchers could simulate the behavior of
molecules and materials at the atomic scale. This helped
scientists to accurately predict the properties of new
materials without the need for such repetitive physical
experimentation, shedding a significant amount of the
“trial and error” baggage.
Today, with the advent of machine learning (ML) and
artificial intelligence (AI), there is a sense that another
materials revolution could be on the way, with AI-guided materials discovery set to accelerate these
computational approaches even further.
Better materials for tackling the
climate crisis
“Let me just say that I am not a material scientist, I am
a theoretical physicist,” Dr. Eliu A. Huerta began our
conversation. Huerta is the lead for translational AI in
the Data Science and Learning Division at Argonne
National Laboratory, US, and has been working at the
intersection of AI and scientific research for a decade
already. Under his guidance, researchers at Argonne
have been applying AI and advanced computational
techniques to tackle grand challenges in astrophysics,
cosmology, materials science and biophysics.
“One thing I learned when I was being trained as a
theoretical physicist is the value of being an outsider.
You look at things in a completely different way and
this allows you to propose new ideas,” Huerta told
Technology Networks.
“I am really excited about solving problems that AI alone
cannot solve, that domain knowledge alone cannot
solve, that supercomputing alone cannot solve — but
where a mix of all of these can provide new approaches
and new opportunities to understand science in a way
you couldn’t do with these separate tools,” said Huerta.
Huerta recently helped lead a project that sought to
design new materials for carbon capture. Specifically,
the group was interested in a class of compounds known
as metal-organic frameworks (MOFs). MOFs are highly
porous materials, made up of metal ion clusters and
organic ligands which function as the network’s nodes
and linkers respectively.
Published in Communications Chemistry, the researchers
used a generative AI diffusion model to suggest unique
and chemically diverse linkers that could be used
to make novel MOFs. These were screened using a
modified neural network that would select the MOFs
with the best theoretical carbon capture performance.
The final structures were then validated using traditional
computational chemistry methods, including molecular
dynamics and grand canonical Monte Carlo simulations,
to compute more credible CO2 absorption capabilities
and make the final selection of best performers.
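The overall workflow can be summarized as a generate-screen-validate funnel: a generative model proposes a very large pool of candidates cheaply, a fast neural-network surrogate shortlists the most promising, and only that shortlist is passed to expensive physics-based simulation. The skeleton below mirrors that funnel (the candidate counts echo the study), but every function body is a trivial placeholder rather than the Argonne code.

```python
# Generate-screen-validate skeleton with placeholder stages.

import random

random.seed(0)

def generate_candidates(n):
    """Placeholder for the generative diffusion model."""
    return [f"MOF-candidate-{i}" for i in range(n)]

def screen_with_surrogate(candidates):
    """Placeholder for the fast neural-network screen: one cheap score per candidate."""
    return {c: random.random() for c in candidates}

def validate_with_simulation(candidate):
    """Placeholder for expensive MD/GCMC validation of CO2 uptake."""
    return random.gauss(2.0, 0.5)

candidates = generate_candidates(120_000)                 # cheap: propose very many
scores = screen_with_surrogate(candidates)
shortlist = sorted(scores, key=scores.get, reverse=True)[:364]
validated = {c: round(validate_with_simulation(c), 2) for c in shortlist[:5]}  # costly step, few only
print(validated)
```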
“Typically, when you do computational chemistry, you
know the structure that you want to validate. You know
that the molecule already exists and you want to go
and measure some properties. But we were interested
in doing this and borrowing ideas from drug design
and discovery,” Huerta explained. “So here we use the
diffusion model, which is a generative AI model, and we
expose that model to different molecular structures so
that the model would learn not only about metal-organic
frameworks, but about physics broadly speaking. I think
this was the key to allowing the model to propose some
chemical structures that were entirely novel."
The strength of this kind of approach is its speed,
Huerta continued. Creating new MOFs has been the
focus of researchers for several decades, yet the number
of exceptionally high-performing MOF materials is still
relatively low.
“Experimental science is needed, but it maybe is not the
optimal way to go and discover new materials,” Huerta
said. “Then again, if you are only using computational
chemistry methods, it is very challenging to create a new
material from the ground up! Now, with the method that
we are proposing, we are learning from experimental
chemistry and computational chemistry, but we are
allowing AI to go and explore this vast chemical design
"With the team’s tool
having found success
in suggesting novel,
high-performance
carbon capture
materials, Huerta
believes that similar
workflows with altered
parameters could help
with accelerating the
design of advanced
materials for other
applications."
TECHNOLOGYNETWORKS.COM
THE EXPANDING ROLE OF AI IN SCIENCE 21
space and find new things that we did not know about in
the past.”
In the study, Huerta’s team was able to generate
over 120,000 MOF candidates in 33 minutes using a
supercomputer at the Argonne Leadership Computing
Facility. This was whittled down by the modified neural
network to 364 AI-generated MOFs that were believed
to be high-performing. In total, this process took just
over five hours. Further computational analysis, which
took only a few days to complete, found 102 stable
MOFs in this dataset, of which 6 had a CO2 capacity
that ranked in the top 5% of materials in the popular
hMOF database.
With the team’s tool having found success in suggesting
novel, high-performance carbon capture materials,
Huerta believes that similar workflows with altered
parameters could help with accelerating the design of
advanced materials for other applications.
“When you have these tools, you want to develop the
ability to go and tackle similar problems that you can
apply MOFs in, for example, hydrogen storage. This is
just another parameter that you can use to fine-tune
your generative AI model. Going beyond this, there is
also methane capture, with methane being another gas
that is responsible for environmental pollution,” Huerta
said. “There are many applications where we can
use this software to go and explore the properties of
new materials.”
Improving batteries with AI-generated materials
Machine learning and generative AI are useful for more
than just MOFs. As tools, they can be applied in a similar
way to accelerate the design and discovery of other
classes of material for different applications.
“Each generation faces a defining technical challenge,
and for ours, that challenge is climate change. It’s urgent
and requires immediate action,” Dr. Austin Sendek, an
adjunct professor of materials science & engineering at
Stanford University, told Technology Networks.
“AI has the potential to accelerate fundamental scientific
processes. My focus on energy technologies is driven
by both this scientific potential and the broader global
impact of solving this issue,” Sendek said.
At Stanford, Sendek’s research focuses on harnessing
the power of machine learning and AI to design new
materials that can support the decarbonization of the
global economy. The main focus of this research is on
batteries and electrochemistry – using machine learning
to assist in the discovery of better electrolytes to
support high-performance batteries.
“Electrochemistry offers a new modality through which
we can achieve many of the same outcomes as burning
fuels, but by using clean, renewable electrons instead,”
Sendek explained. “While powering vehicles through
electricity via batteries is a major application, the scope
of next-generation electrochemistry extends to areas
like cement production, grid power backup and the
creation of various materials, for instance.”
The search for new and improved battery materials
and electrolytes is a core part of the decarbonization
effort. Higher-performance batteries can help store
more energy from the grid when renewable energy
production is high, meaning that more green energy can
still be released even when production is low. Battery
improvements could also lead to better electric vehicles
with larger ranges or faster charging times – again
helping to support the transition away from fossil fuels.
In a 2018 paper, published in the journal Chemistry of
Materials, Sendek and co-authors demonstrated the
use of a machine learning-based prediction model to
generate novel lithium ion conductors for use in all-solid-state batteries.
“Electrolyte components heavily impact the
performance and properties of these cells,” Sendek
noted. “There’s a vast chemical space to explore, with
over 10 billion commercially available molecules that
can be used to modify electrolytes in various ways.”
The team found that this machine learning-assisted
approach was 2.7 times more likely to identify fast
lithium conductors than a random search, in addition to
performing well in a head-to-head competition against
six PhD students with experience in the field.
In a review published in Advanced Energy Materials,
Sendek and his colleagues also highlight the ability
of machine learning-based approaches to assist in
other areas, such as process optimization, cell lifetime
prediction and battery modeling, in addition to
accelerating materials discovery.
To further explain why AI and machine learning are such
helpful tools, Sendek strips it back to a matter of depth
and breadth.
“On the depth side, AI and machine learning help us
identify patterns and relationships that would typically
require scientific intuition and principles to uncover. For
example, it can reveal how different electrolytes affect
battery performance or predict the properties of new
materials — tasks that, in the past, might have taken
years of experimentation,” he said.
“On the breadth side, once we develop predictive
models, machine learning allows us to apply them to a
much larger design space at incredible speeds. These
models can rapidly evaluate vast numbers of molecules,
often identifying potential candidates that would have
taken traditional scientific methods much longer to find.”
The future outlook for materials
and AI
AI and machine learning are quickly establishing
themselves as useful new tools in many areas
of science – including medicine, proteomics, drug
discovery and more. In materials science, the benefit
of these tools is their speed. With today’s computing
power, these tools can generate realistic novel materials
at a blistering pace, freeing scientists from extensive
loops of “trial and error” and allowing them to put their
expertise to better use further down the development
process. In the context of the global climate crisis,
the search for innovative new materials for carbon
capture and green energy storage cannot happen
quickly enough.
However, the application of AI and machine learning
to materials science is still a relatively nascent field,
with several limitations needing to be addressed as
the field develops. For example, much of AI’s use in
materials discovery and design relies on the use of pre-built datasets; it is therefore key that these datasets are
properly assessed to ensure their reliability and quality.
Materials scientists have also begun to discuss the
various ethical considerations that may come with
incorporating AI into their work. This includes the
importance of recognizing and mitigating any bias in an
AI model’s training data, as well as remaining vigilant in
recognizing AI-generated misinformation through the
application of rigorous validation practices.
MEET THE INTERVIEWEES
Dr. Austin Sendek is an adjunct professor of materials science
and engineering at Stanford University. His research and teaching
focuses on harnessing the power of machine learning and AI
to accelerate the design and discovery of new materials for
decarbonizing the global economy.
Dr. Eliu A. Huerta is a theoretical astrophysicist, mathematician
and computer scientist whose research lies at the interface of
physics, AI and computational science. His interdisciplinary work
focuses on applying advanced computational techniques and
AI methodologies to tackle grand challenges in astrophysics,
cosmology, observational astronomy and complex systems.
How AI Tools Are Shaping the
Future of Neuroscience
Rhianna-lily Smith
AI has become a focal point of scientific inquiry and
innovation, finding applications in fields as diverse as
medicine, engineering and environmental science.
In neuroscience, its potential is particularly intriguing.
The brain is often described as one of the most complex
systems in nature. Decoding its vast networks of
neurons and understanding how they produce thoughts,
emotions and behaviors requires interpreting immense
datasets and conducting intricate experiments.
AI is increasingly being used as a critical tool in
neuroscience, helping researchers tackle complex
challenges in understanding brain function. Large
language models (LLMs) can process vast amounts
of data while identifying patterns across scientific
literature. This enables researchers to generate new
hypotheses and explore potential outcomes in ways that
were previously unattainable.
Dr. Xiaoliang “Ken” Luo, a computational neuroscience
and machine learning researcher at the Foresight
Institute, has been at the forefront of this effort. In a
recent study published in Nature Human Behaviour,
Dr. Luo and his team demonstrated how LLMs could
surpass human experts in predicting neuroscience
research results. Their work introduced BrainBench, a
benchmarking tool, and BrainGPT, a specialized LLM
fine-tuned on neuroscience literature, which achieved
an 86% accuracy in predicting experimental outcomes.
These advancements highlight the potential of AI to
accelerate scientific discovery and refine research
methodologies.
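For context, BrainBench items pair a published abstract with an altered version in which the reported result has been changed, and the test-taker – human or model – must identify the original. One hedged way to sketch how an LLM can be scored on such a two-alternative item is to compare perplexities and prefer the version the model finds more predictable. The snippet below illustrates that general idea in Python with Hugging Face transformers, using a small placeholder model and invented sentences, not the actual BrainBench items or the BrainGPT model.

```python
# Hedged sketch of a BrainBench-style two-alternative test: given an original
# abstract and an altered version with a changed result, prefer whichever the
# model assigns lower perplexity. Model name and texts are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; BrainGPT itself builds on a larger base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average cross-entropy of the model over the tokenized text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

original = "Stimulation of the region increased recall accuracy in the task."
altered = "Stimulation of the region decreased recall accuracy in the task."

choice = original if perplexity(original) < perplexity(altered) else altered
print("Model-preferred version:", choice)
```

Accuracy on such a benchmark is then simply how often the preferred version matches the true original across many items.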
In this Q&A, Dr. Luo discusses the broader implications
of AI in neuroscience, ethical considerations and how
tools like BrainGPT may shape the future of research in
the field.
Q: What do you see as the most
promising applications of AI in
neuroscience?
A: I, personally, see two main applications of AI in
neuroscience that could offer great potential benefits.
The first is a data-driven approach, where AI models
like neural networks serve as powerful tools for building
mechanistic understandings of the brain. These models
can help explain the mechanisms behind neural activity
related to various cognitive functions.
The second, which is more relevant to the central focus
of our publication, is a more meta-level application.
Given the power of LLMs to synthesize vast amounts
of information, I believe there's huge potential in
leveraging generative AI to help neuroscientists more
efficiently digest the literature, understand trends in
understudied problems in the field and even inspire
future research directions.
Q: Neuroscience often inspires
advances in AI, and vice versa.
How do you think this interplay
between studying the brain and
building AI systems will shape the
future of both fields?
A: The relationship between neuroscience and AI has
evolved in fascinating ways. While neural networks
were initially inspired by the brain's architecture, recent
AI advances have largely been driven by engineering
breakthroughs in computing power and data processing
rather than biological insights. However, I believe we're
entering an exciting new phase of convergence between
these fields.
Current research comparing artificial and biological
systems has revealed intriguing similarities in
information processing and learning patterns. This
bidirectional exchange offers unique opportunities: AI
models can serve as testable mechanistic models of
brain function, while neuroscience principles could help
us develop more interpretable and robust AI systems.
That said, this bio-inspired approach is just one of many
valuable paths forward in AI development. The key is
finding the right balance between learning from biology
and pursuing purely engineering-based solutions.
Q: Your study demonstrates that
LLMs can outperform human
experts in predicting study results.
Does this suggest that AI might
develop a form of "scientific
intuition," and how might that
differ from human intuition?
A: That's an interesting perspective. While “scientific
intuition” is challenging to define precisely, my guess
is it stems from years of research experience and
synthesizing connections across literature.
The fact that LLMs trained on scientific papers can
outperform human experts at prediction tasks suggests
they may develop knowledge synthesis capabilities that
differ from human approaches. An interesting research
direction would be investigating how these models
integrate information across neuroscience subdomains,
which could reveal underlying patterns in the field and
inspire new scientific connections.
Q: How could tools like BrainGPT
help address fundamental
questions about brain function
and cognition, especially in areas
where direct experimentation is
challenging?
A: I should clarify a few things about BrainGPT. As we
show in the paper, you could further fine-tune pre-trained LLMs, at a relatively small cost, on neuroscience
publications to build a model – which we call BrainGPT
– that is better at predicting which study result is
more likely.
This success suggests the potential for LLMs to
synthesize scientific literature and we hope they
might eventually help identify novel connections and
suggest novel theories about the brain and cognition.
We are actively exploring how BrainGPT might serve
as a stepping stone toward developing systems that
could help scientists navigate unexplored theoretical
possibilities in neuroscience.
I would say that direct experimental evidence remains
irreplaceable and fundamental to scientific progress.
Generative models like BrainGPT must be grounded in
concrete experimental data.
However, these models could help scientists explore
potential research outcomes more efficiently. In
an ideal world, scientists would test every possible
hypothesis. But with limited resources and time,
testing all possibilities becomes impractical. We
envision that systems built upon BrainGPT could help
researchers explore alternative scenarios and outcomes
without conducting every conceivable experiment. By
suggesting which experiments might be most promising
and predicting possible results, such systems could help
optimize resource allocation in scientific research.
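As Dr. Luo notes above, BrainGPT was produced by fine-tuning a pre-trained LLM on neuroscience publications at relatively small cost. One common way to keep that cost low is parameter-efficient fine-tuning, in which small adapter layers are trained while the base model stays frozen. The sketch below shows that general recipe using the Hugging Face transformers and peft libraries; the base model, corpus and hyperparameters are placeholders and are not the configuration used for BrainGPT.

```python
# Illustrative sketch of low-cost domain fine-tuning with LoRA adapters
# (Hugging Face transformers + peft). Base model, dataset and hyperparameters
# are placeholders, not the BrainGPT configuration.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "gpt2"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the frozen base model with small trainable LoRA adapters.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Placeholder corpus standing in for neuroscience abstracts.
texts = ["Example neuroscience abstract text ..."] * 32
dataset = Dataset.from_dict({"text": texts}).map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=256), batched=True
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="braingpt-sketch", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # only the adapter weights are updated
```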
Q: Do you see parallels between
how LLMs process information and
how the brain does, or are these
fundamentally different systems?
A: Definitely. I think LLMs can provide inspiration for
how human cognition works, but I would be cautious in
interpreting the success of LLMs as direct evidence of
human-like processing mechanisms.
There is a growing body of research on whether LLMs
learn like humans and opinions are mixed.
We have a recent paper out (a follow-up work from
the Nature Human Behaviour paper) that shows LLMs
trained on both forward and backward text perform
equivalently on the BrainBench task. This is particularly
telling since no human language has evolved to run
backward (e.g., apple → elppa). The main takeaway
from that paper is that LLMs are more general pattern-learning machines than human brains. LLMs are
excellent at extracting predictive patterns in sufficiently
structured input – even reversed text – but it doesn’t
mean they employ human-like information processing.
Q: Do you think there are any
ethical concerns researchers
should be aware of, particularly
regarding the over-reliance on AI
predictions?
A: While LLMs have achieved remarkable capabilities,
they remain tools to enhance scientific work rather
than replace human judgment. These systems excel
at processing vast literature and potentially exploring
possibilities, helping scientists work more efficiently.
However, scientists must maintain their critical thinking
and decision-making autonomy, knowing when to
accept AI predictions and when to challenge them.
Given the inherent biases in AI training data, these tools
should augment rather than override human expertise.
Q: While your study focuses on
neuroscience, you mention that
the methodology could be applied
universally across sciences. What
fields do you think might benefit
most from this approach, and why?
A: This approach would be particularly valuable in
complex, interconnected fields like biology, where
discoveries often require synthesizing information
across multiple domains. As we show in the current
paper, LLMs excel at identifying patterns and
connections across diverse bodies of knowledge,
making them especially useful for researchers navigating
interdisciplinary challenges.
MEET THE INTERVIEWEE
Dr. Xiaoliang “Ken” Luo is a computational neuroscience and
machine learning researcher at the Foresight Institute, where his
work bridges neural networks and brain function.
CONTRIBUTORS
Blake Forman
Blake pens and edits breaking news, articles and
features on a broad range of scientific topics with a
focus on drug discovery and biopharma. Blake earned
an honors degree in chemistry from the University of
Surrey, which involved a placement year at the Medicines
and Healthcare products Regulatory Agency (MHRA)
laboratory, where he developed new pharmaceutical
testing methods. Blake also holds an MSc in chemistry
from the University of Southampton. His research project
focused on the synthesis of novel fluorescent dyes often
used as chemical/bio-sensors and as photosensitizers in
photodynamic therapy. Blake held several editorial-based
roles before joining Technology Networks as Senior Science
Writer in 2024.
Isabel Ely, PhD
Isabel is a Science Writer and Editor at Technology
Networks. She holds a BSc in exercise and sport science
from the University of Exeter, an MRes in medicine and
health and a PhD in medicine from the University of
Nottingham. Her doctoral research explored the role of
dietary protein and exercise in optimizing muscle health as
we age.
Molly Coddington
Molly is a Senior Writer and Newsroom Team Lead at
Technology Networks. Molly reports on various scientific
topics, covering the latest breaking news and writing long-form pieces. Before joining Technology Networks in 2019,
Molly worked as a clinical research associate in the NHS
and as a freelance science writer. She has a first-class
honors degree in neuroscience from the University of
Leeds and received a Partnership Award for her efforts in
science communication.
Leo Bear-McGuinness
Leo is a Science Writer at Technology Networks where he
focuses on environmental and food research. He holds a
bachelor's degree in biology from Newcastle University
and a master's degree in science communication from the
University of Edinburgh.
Alexander Beadle
Alexander is a Science Writer and Editor for Technology
Networks. He writes news and features for the Applied
Sciences section, leading the site's coverage of topics
relating to materials science and engineering. Before
joining Technology Networks in 2023, Alexander worked as
a freelance science writer, reporting on a broad range of
topics including cannabis science and policy, psychedelic
drug research and environmental science. He holds a
master's degree in materials chemistry from the University
of St Andrews, Scotland.
Rhianna-lily Smith
Rhianna-lily graduated from the University of East
Anglia with a BSc in biomedicine and completed her MSc
by Research in microbiology at the Quadram Institute
Bioscience in 2023. Her research primarily focused on the
gut microbiome in pregnant women throughout gestation.
During her MSc, she developed a passion for science
communication and later joined Technology Networks as an
Editorial Assistant.