
Why Are the Life Sciences Lagging in AI Development?

Read time: 3 minutes

A recent survey conducted by not-for-profit group The Pistoia Alliance highlighted that nearly three-quarters of the life science professionals interviewed think their sector is lagging behind other industries in the development of AI. We discussed the survey results, what they mean for the life sciences, and how the field can speed up AI development with Dr Nick Lynch, a consultant at the Pistoia Alliance.

Ruairi Mackenzie (RM): Your recent survey into AI in life science has turned up some interesting results: 72% of life scientists think their sector is lagging in AI development – are they correct?

Dr Nick Lynch (NL): The suggestion that life scientists believe their sector is lagging in AI development is probably valid – partly because of the complexity and breadth of the life science industry. If we compare the use of AI in retail with its use in the life sciences, for example, the challenges that need solving are far more complicated than, say, personalizing the customer experience.

One of the most significant obstacles hindering innovation within the life science sector is the variety and quality of the data in use – which intensifies the problems at hand and impacts how successful AI outcomes can be. Only when we’re able to find a way to standardize data will we be able to catch up with other industries.

RM: 42% of your respondents who had AI projects on the go were unsure or unconvinced of AI’s usefulness – what’s wrong with these projects and how can we arrive at meaningful outcomes with AI?

NL: At the moment, the industry is still in its trial and error period with AI – we’re still trying to figure out exactly how the technology can be used to help R&D. There are some really clever examples of AI in other industries at the moment, but these have years of work behind them and are only now coming to fruition. We often see the high-profile cases in the news, but the industry must remember AI is like an iceberg, so much of what is being done isn’t seen by the public and takes time to become effective.

Additionally, many managers in the pharma industry seem to believe AI can solve problems immediately, while AI practitioners are more realistic that the technology can’t do everything right now. The hype around AI is there, but so is skepticism – partly due to several high-profile misses in the industry. Patience and perseverance will be vital to arriving at meaningful outcomes with AI.

The only way to achieve meaningful outcomes from AI as an industry is to ensure we are overcoming the two key barriers – the data and the people. Firstly, organizations need to ensure the quality and veracity of the data they use. Secondly, those in the life science industry need to ensure they have the right people with the right skills to implement and use AI approaches in collaboration with the existing workflows, in order to achieve intended outcomes.

RM: Another recent survey, by YouGov, highlighted that the general public lives in nothing less than nuclear-level fear of AI. How can science help change these viewpoints in the general public?

NL: It can often be difficult to differentiate between developments in robotics and the AI that is coupled with them. When the public see videos of robots acting in certain ‘creepy’ ways, they are fearful and wonder what this will mean for life as they know it. But often, they are less aware of the augmented, human aspect involved in AI. The initial deployment of AI in the life sciences will still involve a human element working alongside the automation to achieve the desired outcomes. Making the public more aware of exactly how AI will be used will be important in changing public perceptions.

Another key concern often raised by the public is the ethics of AI. While this is an incredibly broad topic, scientists already tend to work in an ethical manner – particularly those working in healthcare. This focus on ethics has already been seen in genomics, where there is public concern about what happens to biological samples and data, for example. Organizations like the Wellcome Sanger Institute have worked hard to allay negative sentiment and raise the public’s awareness of the genomics process and its potential. A similar approach may need to be adopted to change public perceptions of AI and machine learning (ML), and this is where collaboration between parties will be required to highlight the value of AI/ML for healthcare in general.

RM: What are the next steps to ensuring that life sciences lead the way in AI development?

NL: This really comes down to the quality and quantity of data. The nature of AI means it learns best from real data, so the early pilots will really help to bring value in the second or third round of use. While we are still in the augmented stage of AI and ML – where scientists are heavily involved – a key broadening use of AI will be in the early stages of drug discovery, but there will also be potential to support patients through earlier diagnosis and alerts to early signs of serious health issues.

However, it will be vital to overcome public fears about how their health data is used. The more data that can be captured from such sources, the more the life science industry will be able to continuously improve how AI/ML is used. A stream of good-quality, diverse data is likely to lead to much broader use of AI in the NHS and other health systems in the future.

Dr Nick Lynch was speaking to Ruairi Mackenzie, Science Writer for Technology Networks