
Scientific Storytelling: It's All About Context


Read time: 6 minutes

Scientists are human, and humans make mistakes. To perceive scientists as infallible and omniscient is to undermine the very foundation on which science is built: self-correction.

Science is a process, says Dr. Yotam Ophir from the Department of Communication, University at Buffalo. This process is not finite. It involves a community of individuals coming together to continuously critique each other's research to further enhance, develop and update knowledge.

After years spent studying the effects of misinformation, both in science and more broadly, Ophir developed an interest in how people perceive science as a field. Scientific research is largely communicated to the public via media outlets. How does the narrative used in this communication impact beliefs and support for science?

This was the question posed in Ophir's latest research study, conducted in collaboration with Professor Kathleen Hall Jamieson and published in Public Understanding of Science. In this interview, we ask Ophir to explain the different types of narratives that exist around science, their impact and why retractions should be interpreted as a positive step in scientific progress. Ophir also provides useful advice on navigating the world of scientific research and science communications.

Molly Campbell (MC): Please can you tell me a bit about how you started this specific research study?

Yotam Ophir (YO): 
For the last 10 years or so I have been studying misinformation, how it spreads, how it influences society and what can be done against it to ameliorate some of the damages. Some of this work was dedicated to scientific misinformation specifically, such as issues relating to tobacco misinformation, vaccine misinformation and so on. At some point, I started thinking more broadly about people's misperceptions about science itself.

There is a lot of research that suggests that teaching people facts, for example that the world is warming up, or that vaccines do not increase the chance of developing autism, does not seem to do much in the face of misinformation. People are not being persuaded to trust – or mistrust – science based on scientific facts. What does influence people's view on science is their understanding of what science is, how science works, the logic behind science and the values of science.

Over recent years, scientists have been raising an alarm about an inability to replicate important research findings. My collaborator in this project [Professor Kathleen Hall Jamieson] wrote a piece in the Proceedings of the National Academy of Sciences in 2018 about the problem she saw surrounding the narrative of this inability to replicate. She concluded that the fact that a study does not manage to replicate other research is being interpreted by scientists and journalists alike as a scientific "crisis" [the replication crisis], and that this is a problem.

MC: You randomly assigned research subjects to read stories that represented several different media narratives. Can you summarize these narratives?

YO:
We built our experiment based on the work that Professor Jamieson had done previously. She had identified multiple narratives that the media use when describing science. The first, and most popular, is the "quest discovery" narrative, which centers on an individual's achievements. That framing is by itself misleading, because it creates the impression that science progresses through individuals, alone in their labs, continuously having "eureka" moments. That is not how science works. Still, this narrative was not our main concern, as it typically does not reduce trust in science.

The second type of narrative often employed by journalists is the "counterfeit quest". This is the story of a researcher who rose to fame with a high-impact study before it was eventually discovered that the research was fake, wrong or unethical in some way. The paper is retracted, and the scientist is punished in one way or another. An easy example here is Andrew Wakefield's work on vaccines and autism: his infamous 1998 paper in The Lancet was eventually retracted, leading to Wakefield losing his license to practice medicine in the UK.

In recent years, we have seen a third narrative emerge, one that I think came from scientists rather than journalists. With the move towards open science and increased transparency – which are great developments in my opinion – came the realization that certain studies, including influential ones first in the hard sciences and later in the social sciences, could not be replicated. This led to retractions, and it was described in the media as a "crisis", meaning that science was "broken" and "should not be trusted".

What was missing in the media coverage was a contextualization that would frame scientific errors in light of the values of science. We proposed in our study a new narrative that acknowledges that, yes, studies are being retracted and some findings cannot be replicated, but the fact that we can identify and correct errors in published work should be interpreted as a sign that science is doing what it should be doing.

We decided to run an experiment and see if the stories that we kept seeing in the news media had a detrimental effect on people's trust in science and if our new narrative could fix it. We asked people to read news articles that were based on real coverage that represented these different narratives.

MC: What did you find?

YO:
What we found, in general, was that the "science is broken" narrative indeed led to distrust in science and scientists. Also, our suggested new narrative – one that is transparent about errors in science but contextualizes them as part of a healthy scientific process – ameliorated some of the detrimental effects of the crisis stories. These effects were stronger for people who trusted science to begin with, indicating that some will be more open than others to the idea that science is doing what it should be doing. In other words, the introduction of the new narrative could be more effective if we keep working in other ways to increase trust in science.

MC: Presenting hard facts does not help to ameliorate the damage caused by misinformation but contextualizing failures and providing an understanding of the scientific process does. What if the public does not understand the scientific process, for example, if they have not studied science, or are perhaps not working in a science-related career?

YO:
That is why, in our study, we chose not to look at scientific publications, but focused instead on news articles; this is how people generally learn about science. Most of us do not read scientific papers, and most of us cannot understand 99% of the work that is out there. Even scientists are experts in only one field, and thus learn about most scientific topics from the media. We need the media to mediate science to us. That is why we targeted journalistic practices in this research, not scientific practices.

MC: Can you talk more about the philosophy and values of science?

YO:
Science is always skeptical, always questioning. It never turns knowledge into dogma or faith. We continue to ask questions even after our work is published. In our view, the fact that high-impact research managed to pass peer review and get published, and was then found to be wrong, is not a sign of crisis. We see it as science doing what science should be doing. These are the values of science; they are what make it so epistemologically strong. Science is a reliable way of knowing, because even when something is considered a finding, you can still question it, you can still retest it.

MC: What about "controversial" retractions?

YO:
I do not think retractions should be seen as controversial. I think they are part of the game, for a couple of reasons, both of which result from our limitations as humans. First, we all make mistakes. Just like any other person working in any other profession, scientists will inevitably make mistakes at some point. Some research that we publish might not be accurate; that is a fact of life. Second, some of us will play the system and lie. There is only so much that anonymous peer review can do to catch blatant lies; if someone fabricates data, it can be very hard to identify the problem.

Mistakes will be made, and papers will be retracted, but I do not think that retraction itself should be considered controversial. Perhaps the background story is controversial – if a scientist lied to get a grant, that is controversial. But the mere fact that we found a mistake, and can pinpoint and correct it, is, again, in my view, a scientific achievement. It is a natural process that shows us that science is doing what it is set out to do. We need to get better at communicating that, because there is a thin line between healthy skepticism – whereby you question what you hear, look for evidence, and corroborate data and the sources of data – and toxic cynicism.

A cynical person might see a retracted paper and assume that means all of the research connected to the paper is incorrect. 'We knew all along that we shouldn't trust science', might be their narrative. Skepticism and cynicism are two very different things.

MC: If you could provide a key piece of advice to a scientist who is in the early stages of their career, what would you say?

YO:
Firstly, be as transparent as possible. Do not think about transparency as a weakness. This connects to the previous point: retractions are not controversial, because they are part of transparency.

Secondly, remember that a person's view of your work is less dependent on their understanding of the actual facts – the theories or evidence that you found – and more dependent on their trust in the scientific process. It is better to communicate why your work is reliable than to flood people with statistics or graphs.

My third piece of advice is to be modest and remember to practice healthy skepticism yourself. When I first published this research study, I told a journalist to trust the science, not scientists. My eight-year-old daughter heard and questioned me on this: 'But dad, you're a scientist. So, are you telling me not to trust you?' I told her that was absolutely right. Do not trust me because I am smart, because I have a degree from a prestigious institution or because a scientific journal published my work. Do not trust me, because I am just a person, and people make mistakes. Some of my thoughts might be brilliant; some of them might be completely wrong. Trust the scientific process instead. If you publish something, a year or two from now it might be found to be wrong. Do not get defensive about it. Accept it and say, 'Okay, that is how science works. I found something and I put it out there so other scientists can look at it and make sense of it. Whether it will be challenged or not, I contributed to the scientific endeavor.'

Yotam Ophir was speaking to Molly Campbell, Science Writer for Technology Networks.