Researchers Want To Embrace AI But Still Have Major Concerns, Survey Finds
Elsevier’s latest report reveals how researchers feel about AI’s potential and what is needed to increase trust in these tools.
Corporate researchers feel positive about adopting artificial intelligence (AI), but concerns around misinformation and inaccuracy still need to be addressed, according to a survey of 300 researchers published in Elsevier’s “Insights 2024: Attitudes toward AI” report.
“Based on the responses from corporate R&D professionals, there is a strong appetite for adopting AI tools across the life sciences industry, but not at the expense of ethics, transparency and accuracy,” Mirit Eldor, managing director of life sciences at Elsevier, told Technology Networks.
The new report highlights the current sentiments around AI, providing actionable insights into the steps needed to build confidence in AI tools in the research community.
Appetites for AI come with a side of concern
AI is expected to transform research and healthcare, creating both excitement and questions about its potential to improve productivity and safety. Partnerships have already emerged, such as that between pharmaceutical giant Moderna and OpenAI.
The “Insights 2024: Attitudes toward AI” survey was conducted to provide decision-makers with evidence-based insights into how researchers feel about AI’s potential, as well as its challenges.
“We initiated this work last year, after hearing from our customers that ChatGPT and other Generative AI [GAI] technologies were creating a lot of interest and excitement, but also some confusion and concern on how and where to best utilize these technologies,” said Eldor.
The survey found that 96% of participants think AI will accelerate knowledge discovery, while 71% say the impact of AI in their area will be transformative or significant. Most respondents also believe AI will realize cost savings for businesses (93%), increase work quality (87%) and free up time to focus on higher-value projects (85%).
However, despite this positive sentiment, participants are clear about specific concerns that need to be addressed before wider adoption. These include the beliefs that AI will be used for misinformation at least to some extent (96%), that it will cause critical errors (84%) and that it will weaken critical thinking (86%).
“Operating in highly regulated industries, pharmaceutical, biotech, medical devices and chemicals researchers cannot simply take at face value the results from a ‘black box’ AI tool,” explained Eldor. “A vital part of the trustworthiness of any AI model is that its outputs are explainable and can be validated, to avoid critical errors.”
“In conversations with our customers, trust in data quality and provenance have emerged as critical in how they view AI’s ability to augment their work. At Elsevier, we address these concerns by developing explainable AI built on a solid foundation of peer-reviewed content, extensive curated data sets and sophisticated analytics.”
Transparency and ethical AI are fundamental to researchers. The vast majority of survey participants (91%) expect the results from GAI-dependent tools to be based solely on high-quality trusted sources, and 60% say that ensuring the confidentiality of inputs would increase their trust in such tools.
“Elsevier ensures these requirements are met by following our Responsible AI Principles,” said Eldor. “These ensure that any AI use is fully compliant with – and often ahead of – regulations, and that it is explainable and impact-driven, all while avoiding the introduction of bias into data sets.”
Incorporating a human-in-the-loop approach that ensures the outputs of AI tools are overseen and validated by subject-matter experts can help uphold these principles. “It is also vital that the data included in AI models is trusted and is fully compliant with privacy regulations,” continued Eldor.
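To make the approach concrete, here is a minimal, purely illustrative sketch of a human-in-the-loop gate in Python. The `Draft` class, the reviewer callable and the rejection rule are all invented for this example and do not describe any Elsevier system.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Draft:
    """A hypothetical AI-generated answer awaiting expert sign-off."""
    question: str
    answer: str
    sources: list[str] = field(default_factory=list)
    approved: bool = False


def human_in_the_loop(draft: Draft, reviewer: Callable[[Draft], bool]) -> Draft:
    """Release an AI output only after a subject-matter expert approves it.

    `reviewer` is any callable that inspects the draft and returns True to
    approve; in practice this would be a review workflow, not a function.
    """
    draft.approved = bool(reviewer(draft))
    if not draft.approved:
        raise ValueError("Rejected: route back for revision, do not release.")
    return draft


# Usage: an expert rule that rejects any answer lacking a cited source.
vetted = human_in_the_loop(
    Draft("Does compound X inhibit kinase Y?",
          "Yes, according to assay Z.",
          sources=["doi:10.1000/example"]),
    reviewer=lambda d: len(d.sources) > 0,
)
print(vetted.approved)  # True
```

The point of the pattern is that nothing reaches downstream users without an explicit, recorded approval step.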
The risk of “shadow AI”
“Shadow AI”, the use of AI systems and tools within an organization without explicit approval or oversight from IT or data governance teams, poses a significant risk to heavily regulated industries. More than half of respondents (55%) are prohibited from uploading confidential information into public GAI platforms, and 29% are prohibited from using public GAI for certain purposes.
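As a rough illustration of how an organization might enforce such a prohibition, the Python sketch below screens prompts before they reach a public GAI endpoint. The patterns and the policy are invented for this example; a real deployment would rely on dedicated data-loss-prevention tooling rather than a hand-written list.

```python
import re

# Invented patterns a compliance team might treat as confidential.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b[A-Z]{2,4}-\d{4,}\b"),                     # internal compound IDs
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\bproject\s+nightingale\b", re.IGNORECASE),  # a made-up code name
]


def guard_prompt(prompt: str) -> str:
    """Block prompts containing flagged material from reaching a public GAI tool."""
    for pattern in CONFIDENTIAL_PATTERNS:
        if pattern.search(prompt):
            raise PermissionError(
                f"Blocked by AI-usage policy: matches {pattern.pattern!r}"
            )
    return prompt  # deemed safe to forward to the external API


guard_prompt("Summarize the public literature on GLP-1 agonists.")  # passes
# guard_prompt("Share assay results for compound ABC-12345.")       # raises PermissionError
```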
“One of the main ways that organizations can mitigate potential risk around AI is by seeking out domain-specific expertise and data. ‘Off the shelf’ public AI tools are powerful technologies, but they are not built for the nuance of scientific questions in disciplines such as drug discovery,” said Eldor.
“Answering research questions requires domain-specific AI fine-tuned on high-quality, verified data to enable precision and accuracy in research.”
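One way to realize this kind of grounding, shown here as a simple retrieval-and-refuse sketch rather than Elsevier’s actual method, is to answer only from a curated corpus and attach a citation to every answer. The corpus entries and the word-overlap scoring below are invented for illustration.

```python
# A tiny curated corpus standing in for verified, peer-reviewed content.
TRUSTED_CORPUS = {
    "doi:10.1000/abc": "Compound X showed selective inhibition of kinase Y in vitro.",
    "doi:10.1000/def": "Pathway Z activation was not observed in hepatic cell lines.",
}


def answer_from_corpus(question: str) -> tuple[str, str]:
    """Answer only from trusted sources, with a citation, or refuse outright."""
    q_words = set(question.lower().split())
    best_id, best_overlap = None, 0
    for doc_id, passage in TRUSTED_CORPUS.items():
        overlap = len(q_words & set(passage.lower().split()))
        if overlap > best_overlap:
            best_id, best_overlap = doc_id, overlap
    if best_id is None:  # nothing relevant: refuse rather than improvise
        return "This question is not covered by the trusted corpus.", ""
    return TRUSTED_CORPUS[best_id], best_id  # every answer carries its source


answer, citation = answer_from_corpus("Does compound X inhibit kinase Y?")
print(citation)  # doi:10.1000/abc
```

Refusing when nothing relevant is found is what separates a grounded tool from a general-purpose chatbot that will improvise an answer.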
Elsevier has been using and developing AI and machine learning technologies, powered by more than 100 years of peer-reviewed content and data, to create products that help the research community be more effective. It recently launched SciBite Chat, a large language model (LLM)-powered tool designed specifically to serve the unique needs of the research community.
“SciBite Chat is underpinned by deep domain expertise, and more than 20 million scientific terms and their synonyms that have been created by subject matter experts,” explained Eldor. “SciBite Chat can be used as a translational tool, with researchers able to ask questions and receive answers in their native language. All of this is designed to significantly reduce the technological entry barriers that have limited the more widespread use of LLMs in the research community.”
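As a toy illustration of how curated terminology helps: the three-entry synonym table below stands in for the kind of expert-curated vocabulary Eldor describes, which is vastly larger; the entries and the naive matching are purely illustrative.

```python
# A three-entry toy standing in for an expert-curated terminology resource.
SYNONYMS = {
    "acetylsalicylic acid": "aspirin",
    "asa": "aspirin",
    "tnf-alpha": "tumor necrosis factor",
}


def normalize_query(query: str) -> str:
    """Map known synonyms onto canonical terms so that differently phrased
    questions retrieve the same literature. Naive substring replacement is
    used purely for brevity; real term matching is far more careful."""
    normalized = query.lower()
    for synonym, canonical in SYNONYMS.items():
        normalized = normalized.replace(synonym, canonical)
    return normalized


print(normalize_query("Does ASA interact with TNF-alpha signalling?"))
# -> does aspirin interact with tumor necrosis factor signalling?
```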
The “Insights 2024: Attitudes toward AI” report highlights how researchers feel AI can enable them to increase scholarly output. However, caution over the integrity of information and a demand for transparency have slowed its adoption. Specialist AI applications that integrate reliable scientific data with secure computational ecosystems could help overcome these barriers and realize the full potential of AI in scientific research.
Mirit Eldor was speaking to Blake Forman, Senior Science Writer for Technology Networks.
About the interviewee:
Mirit Eldor is the managing director, life sciences at Elsevier. Mirit joined Elsevier in 2015 and previously served as SVP strategy for Elsevier’s Health & Commercial businesses. She holds an MBA from Imperial College London and an LL.B. from Tel Aviv University.