Radically Rethinking Scientific Publication: The “Octopus” Model
Dr. Alexandra Freeman believes that the major problems associated with research culture in science can be traced back to one source: the current scientific publishing model.
This model has followed the same protocol for many decades. The infamous “publish or perish” paradigm can stifle both creativity and careers while contributing to major issues such as plagiarism and research paper mills.
Freeman, who is the Executive Director of the Winton Centre for Risk and Evidence Communication at the University of Cambridge, recalls a long-standing desire to work in a storytelling medium. After completing her initial studies in Biological Sciences, she pursued that ambition as a producer and director of natural history and science television programmes at the BBC.
Upon returning to academia, she observed parallels between her colleagues in the media and in the laboratory. This commonality was, as Freeman describes, “a desire to tell neat, easy-to-read and persuasive stories.” But she argues that good research isn’t about telling powerful stories; it is about evidence communication. Alarmed by her observations, Freeman sat down with a pen and paper, hoping to formulate a solution. In one evening, she devised Octopus, a “radical” platform for primary research publication.
Octopus is an entirely online, novel approach to scientific publication, which Freeman believes can more effectively recognize and value good scientific practice. There are different types of articles that can be uploaded to Octopus, including:
- A neatly defined scientific problem
- An original hypothesis or rationale
- A method or protocol
- Raw data or summarized results that are collected using an existing published method
- A statistical or thematic analysis of existing published data
- Interpretation – a discussion surrounding an already published analysis
- Translation or application – a discussion of “real world” applications from an already published interpretation
- A considered, detailed review of any other kinds of publication from the list above
Ahead of Octopus’s launch, Technology Networks interviewed Freeman to learn about her motivations behind developing the platform, discuss the deep-rooted issues in scientific publication and discover how – by taking it one step at a time – we can create a solution.
Molly Campbell (MC): Can you talk to us about your career thus far and perhaps any “milestone” moments that, looking back, you see as points in your life that inspired Octopus?
Alexandra Freeman (AF): Ah well, I’ve changed tack a few times in my life so far – but it seems that most of what I’ve been interested in is in some way about communication. I completed my PhD on communication in butterflies, and I always wanted to work in a storytelling medium. This was either writing or film-making, and then I swapped filmmaking for academic work on “evidence communication”, which is again a very different kind of communication.
It was that move that inspired Octopus. I realized that storytelling – however engaging – was (most of the time) about leading an audience by the hand to a conclusion, being persuasive. When I heard academics asking, “what story shall we tell in this paper?” I felt my stomach turn. It occurred to me that here lay the roots of so many of the problems that I knew about: the problems of HARKing, p-hacking, publication bias and many others. They were all driven by the desire to tell neat, short, easy-to-read and persuasive stories in academic papers. Stories that would convince their readers that “we came up with this idea, and the data suggest we were right” – and, if possible, “knowing this, you should now change the way you behave or think about the world” – that infamous “impact”. I knew this kind of story – I’d been writing them throughout my career in the media!
HARKing (Hypothesizing After the Results are Known) is defined as presenting a post hoc hypothesis – one that is based on, influenced or informed by results that have already been collected – as though it had been formulated in advance.
Also referred to as “selective reporting” and “inflation bias”, p-hacking is the practice of selectively running or reporting analyses until statistically significant results emerge, leading to misreported effect sizes in published studies.
Publication bias is the tendency for authors and journals to preferentially publish studies that obtained statistically significant results.
Good research, though, isn’t about telling powerful stories. It’s not even about impactful findings. Good research comes in many different forms, but it is never about persuasion – it’s about evidence communication.
All of this occurred to me in such a rush – I literally sat down and wrote out the full spec of Octopus in about two hours in one evening. I think several strands of my life – my experience and ways in which I was thinking – came together. My new understanding of the different forms of communication, my fresh experience of the academic research culture and my knowledge of the replication crisis and issues with publication bias – it all just “made sense” that what we needed to do was change the way we approached academic publication. We needed to reset the incentive structure to match the aims of evidence communication, not more journalistic-style communication.
MC: Can you talk about why the platform is called Octopus?
AF: I wanted a word that didn’t sound “academic” or elitist, one that was easy to pronounce whatever your native language and one that naturally led to a logo. It also seemed justified because octopuses have a kind of “distributed intelligence” – each of their arms has a highly developed and independent neural network, and there are eight types of publication within the Octopus publication model that match the eight arms of an octopus. Of course, they’re also really cool animals.
MC: In your opinion, what are the key issues associated with the current scientific publishing process? How do you hope Octopus could overcome them?
AF: I could easily use up your entire word count before I even hit my stride!
There are a few easy ones: it should be free to publish and free to read publications – anything else just creates inequalities of access. That means keeping costs to a minimum and running the system as a centrally funded service. That shouldn’t be difficult.
Secondly, we should use automatic language translation to make publication language-agnostic. We can just about do that now, and automatic translation gets better by the month, so in the future it will just be natural – but implementing it can reduce inequalities of access further. While we’re at it, being digital means we can use tools like automatic plagiarism detection, and we can design the platform to reduce sources of bias, such as using surnames and initials rather than first names of authors on publications, and not listing their institution on publications either.
There are the more fundamental issues (including their effects on the research culture). As I’ve said, many of these come down to the way we think of a “paper” as the unit of publication – and the way that we value “findings” and “impact” so highly. If we want to incentivise good research practice, we need to recognize what “good research” looks like first of all.
Research is something done by specialist professionals these days – including specialists in data collection, specialists in analysis, specialists in experimental design, specialists in implementation, etc.
To produce a “paper” including all these different aspects, we are either expecting a specialist in one aspect to “have a go” at another part of the process (like expecting someone who has collected a lot of field data to then analyze it) or expecting a group of specialists in different aspects to all find each other, work together and put their different skills together. The latter is, of course, a better approach than the first, but it is still really limiting the quality of the output. Not only are you, as a specialist researcher, limited in whom you can collaborate with by whom you know or have contact with, but also once a paper is published, it is very unlikely that anyone else will ever reapproach any parts of that research (such as someone else, with different skills, reanalyzing the data, or someone else collecting more data according to the same protocol). That is because they are not incentivised to do so by the publication process. So not only are individual researchers losing out in terms of whom they can “work with”, but the worldwide academic endeavor itself is losing out.
Instead, in Octopus, there are eight types of publication, each reflecting a part of the research process. This allows researchers to specialize in one or two of these parts of the process (with their very different skills) and to publish their work literally in collaboration with the rest of the research community – in time, as well as in space! Someone might publish a theoretical idea now, which someone else, 20 years in the future, will collect data to test. Then another researcher, 20 years further in the future, might again choose to analyze that data using new techniques.
Every publication needs to be linked to an existing one of the type that is “above it in the chain”, which means that it’s easy to find all publications relating to a particular research problem, and also that people can’t “miss out” parts of the process (such as not publishing at least an outline of their results or data).
It changes the way we think about research, and research careers, completely. Instead of trying to build stories ourselves – or with people we know – we can each concentrate on doing our specialist piece of work to the very best of our abilities. Each of us creates a single piece of the jigsaw and inserts it where it fits, rather than trying to create the whole picture ourselves.
By prioritizing the quality of each individual part of the research process – ensuring that each jigsaw piece is as correct as it can be – the findings and the impact that we all crave will emerge from this trustworthy research base.
That’s the philosophy behind Octopus – and there are various ways in which its design aims to incentivise “best practice” for each part of the research process. I hope, by doing this, it will also have a strong effect on improving the research culture. It is designed to recognize specialists and professionalisation, to encourage smaller author groups and subsequently accountability/meritocracy, to reduce biases and remove barriers to entry in addition to combatting hierarchies and the dangerous power imbalances that they bring. It is also designed to change the funding system radically… but I haven’t got space to talk about all of that as well!
MC: Can you expand on the key differences between using Octopus to publish, vs opting to publish through a journal?
AF: Well, they’re really quite different. Octopus is a bit like a “patent office” for your work.
By publishing your work in Octopus, you are registering it as “yours”. Once you’ve got it in Octopus, you can relax – you have established your priority and your ownership of that piece of work. So that, I hope, will encourage people to share their ideas and their work as quickly as possible. But you are also being judged on the quality of your work – so it’s not a place to rush out a quick draft. This is the official record, and you don’t want to stain your record with work that others then rate poorly.
It's also the place to publish your piece of work in full detail. Publications in Octopus are rated by their readers on pre-set criteria of quality – and these include aspects of reproducibility. For example, if the publication is describing a method, are the details clear and full enough to allow two different researchers to carry it out and do the same thing? If it’s a results or data publication, could anyone then analyze that data? So, forget word counts and brevity for the sake of the casual reader – Octopus needs you to tell posterity everything they need to know about your ideas and work.
More fundamentally though, forget trying to do every part of the research process – find the bit(s) that you’re good at, and stick to them. Let others do the bits they’re best at. Actually, I think that’s a really exciting prospect. Imagine publishing some data, or a hypothesis, and then getting notification that someone had taken that to the next step, and then seeing what they’ve done with it. And then getting another notification to discover that someone else has taken it in another direction again. It’s really exciting and satisfying working with other experts. Reimagine yourself as part of a huge, worldwide group of people bringing different skills to the same problems.
MC: Can a researcher publish work on Octopus and also in a journal, if they should choose to?
AF: Of course – in the short term I suspect a lot of people will do that. They will see Octopus as a kind of pre-print server, still work to write “papers” and just “cut them up” to publish the separate parts in Octopus. But I think … I hope … that quite quickly people will realise that this is like using the internet only to publish books. You can do it, but it’s not what it’s designed for. Not only that, but Octopus is much faster and easier to use (no need to write all that introductory material since, in Octopus, you link your new publication to a previous one to form a chain). So, if someone has already written up the “research problem”, you don’t need to cover all that background information again. You just link to theirs, and this gets you credit that your funders and institutions can see and assess more easily than they can if you publish in a journal.
MC: Can you discuss the status of Octopus? You plan to launch in the spring of this year, is that correct?
AF: There is currently a prototype of the whole system which you can play with. We have a new version built which we are now user-testing and working towards a full launch on 29th June. With that full launch, publications in Octopus will be ‘official’, with DOIs being minted, and all the information being properly registered and viewable against authors’ ORCiDs etc.
MC: On the Octopus prototype website, it reads “Anyone can read anything on Octopus. Those logged in with an ORCiD can write and rate publications.” Will this be moderated?
AF: No. There is no moderation in Octopus. It’s a system – a service – not a company with staff doing editorial work.
Everything that someone does on Octopus is viewable against their ORCiD. Everyone who publishes on Octopus has their own author page (which anyone can view) and, from that, readers can see all their activity on the platform (as well as their publications in journals). This means that a person’s funders and employers can see what they have published and who they have rated etc. I hope that this in itself will be a moderator of poor practice.
In addition to that there is a “red flagging” system which allows readers to raise concerns about a publication (for example, if they suspect plagiarism, or an ethical problem or unprofessional language). This allows an author to reversion their publication if they agree with a reader’s concern, as well as alerting other readers to the potential problem before it is resolved. Concerns that are not sorted out to mutual satisfaction this way will – eventually – be escalated to a relevant Research Integrity Office (RIO). We realize that, at the moment, there is not always a relevant RIO for all authors, internationally, but we hope that in the future there will be, and that this system will therefore allow suitable regulation of the research record.
MC: Have you received any “pushback” to the launch of Octopus that you are able to talk about?
AF: Not really. Of course, there are people who see it as rather too radical a change and think that it “won’t catch on”, and there are people who have genuine concerns that it won’t function the way it is designed to (usually concerns about bullying or gaming).
Octopus is very much not a social media platform – it is not somewhere that encourages hasty commenting. Publications on Octopus – including peer reviews – are an author’s contribution to the everlasting research record, registered and time stamped against their name and personal identifier.
We are designing the publication process to make this very clear to people – although we want to make it quick and easy to publish, we mean “quick and easy compared to publishing a paper in a journal”, which is quite a low bar to beat. We don’t mean it will be like writing a tweet. For every publication, authors will be entering quite a lot of meta-data, like a CoI statement, choosing a copyright designation – not the sort of thing that people are likely to go through if they are trying to post a short and abusive comment!
Others are worried about the rating system – thinking that ratings somehow reduce the complexity and depth of quality assessment. In fact, I think they do the opposite. Firstly, remember that publications in Octopus are much more specific – people will be rating a method, or a dataset or an analysis. That helps. Then they will be rating on three pre-set criteria. These criteria allow us, as a research community, to define what we see as “good” for each of these kinds of publication. What defines “good” data, or a “good” hypothesis/rationale? These will be things relating to clarity of description, reproducibility, or whether they express something that has not previously been published. I think that ratings like this will give a much more direct measure of quality than anything we currently have (and of course there are peer reviews in Octopus as well).
Finally, we have thought about gaming – by making everything transparent, and making it easy to visualise who is rating whom and what ratings people are giving, it should be easy to identify “mates’ rates” or circles of poor practice.
MC: In general, what has been the academic community’s response to the idea of Octopus?
AF: It’s difficult to get an unbiased view of this. The people I talk to are usually really very positive about it. In fact, I am so touched by some of the messages of support that I get. People who realize its aims and its potential are sometimes so passionate and effusive about what a difference this could make to their lives and careers.
On the other hand, of course I sometimes see frustrating amounts of defeatism. Rather than actually identifying a problem with the idea, some people just say “it won’t work” because they have no faith that anything will ever change the status quo. I have very little answer for them: nothing will ever change unless someone tries. When I first started this, people’s usual chorus was “the current situation is awful, but no one seems to have a solution”. The next barrier they see, now that there is a potential solution, is: “you will never change people’s behavior”. Well – it’s one step at a time. Come up with a solution. Make sure it makes people’s lives better and easier from day one. Make it available. Make sure people know about it. If those four steps don’t work, then I’ll dig deeper for more solutions. But so far, I’m just working my way through them!
MC: You have already received funding for Octopus from UK Research and Innovation. What support will this funding provide? Do you envision a need for further funding in the future?
AF: Yes, UK Research and Innovation (UKRI) have been great so far. They gave us everything we asked for and have committed to fund Octopus’s launch and first years – as well as an evaluation programme alongside it, which will feed into it and show us whether we’re having the effect that we hoped, and how we might improve.
Everyone asks about how it will be funded after those first few years. My plan is firstly to keep its running costs to an absolute minimum. I hope it will cost below £100k per year to run (because, as I say, it’s a service platform, not a company that employs lots of people). And if it does even half of what I hope it will do, then I’m sure the research community can together find £100k a year to keep the service running.
Dr. Alexandra Freeman was speaking to Molly Campbell, Senior Science Writer at Technology Networks.