Chris Chambers has a piece in the Guardian ("Are we finally getting serious about fixing science?") discussing a recent report about reproducibility from the UK Academy of Medical Sciences, based on a meeting held earlier this year in London. A main theme of the piece is that scientists need to focus more on doing good science and less on "storytelling":
Some time in 1999, as a 22 year-old fresh into an Australian PhD programme, I had my first academic paper rejected. “The results are only moderately interesting”, chided an anonymous reviewer. “The methods are solid but the findings are not very important”, said another. “We can only publish the most novel studies”, declared the editor as he frogmarched me and my boring paper to the door.
I immediately asked my supervisor where I’d gone wrong. Experiment conducted carefully? Tick. No major flaws? Tick. Filled a gap in the specialist literature? Tick. Surely it should be published even if the results were a bit dull? His answer taught me a lesson that is (sadly) important for all life scientists. “You have to build a narrative out of your results”, he said. “You’ve got to give them a story”. It was a bombshell. “But the results are the results!” I shouted over my coffee. “Shouldn’t we just let the data tell their own story?” A patient smile. “That’s just not how science works, Chris.”
He was right, of course, but perhaps it’s the way science should work.
None of us in the reproducibility community would dispute that the overselling of results in service of high-profile publications is problematic, and I doubt that Chambers really believes that our papers should just be data dumps presented without context or explanation. But by likening the creation of a compelling narrative about one's results to "selling cheap cars", this piece goes too far. Great science is not just about generating reproducible results and "letting the data tell their own story"; it should also give us deeper insights into how the world works, and those insights are fundamentally built around and expressed through narratives, because humans are story-telling animals. We have all had the experience of sitting through a research talk that involved lots of data and no story, and it's a painful experience; this speaks to the importance of solid narrative in our communication of scientific ideas.
Narrative becomes even more important when we think about conveying our science to the public. Non-scientists are not in a position to "let the data speak to them" because most of them don't speak the language of data; instead, they speak the language of human narrative. It is only by abstracting away from the data to come up with narratives such as "memory is not like a videotape recorder" or "self-control relies on the prefrontal cortex" that we can bring science to the public in a way that can actually have impact on behavior and policy.
I think it would be useful to stop conflating scientific storytelling with "embellishing and cherry-picking". Great storytelling (be it spoken or written) is just as important to the scientific enterprise as great methods, and we shouldn't let our zeal for the latter eclipse the importance of the former.
I think there is a fine line between 'selling' and 'telling'. To discern this line requires expertise, time and as few conflicts of interest as possible. If one of these is lacking, the task quickly becomes too difficult. At the top journals that decide careers, it often enough seems like none of these are present, as editors all too often pick time-constrained big-shots who stand to benefit from high-profile papers in their field.
So, while hopefully most of the time science is being told, in more and more cases science is sold. Since such publications make careers, these marketers teach their students how to get a job in science - by selling it.
Given the time constraints we're all under, telling the difference between a good and a bad story is easy, and telling the difference between good and bad methodology is difficult - so we all too often give the story-teller a free pass, while the less eloquent colleague faces an uphill battle. Over time and with enough such decisions, we will have weeded out all the bad marketers - and can only hope that those who remain might also be good scientists.
That previous comment was by me, Björn Brembs
Thanks for your comment Björn. My main goal here was to push back against the implication that I saw in Chris's piece that good storytelling is necessarily associated with bad science. I completely agree that good storytelling can sometimes obscure bad science lurking behind the story.
Great post. I totally agree that scientists should not be novelists. A paper should be easy to understand. Although, correct me if I'm wrong, you're not talking about the root of the problem. Why do writers try to embellish the story even when their methods and results are solid but, like you said, a little bit dull? Why do people, in extreme cases, resort to p-hacking (which is a form of embellishing a paper)? None of this would happen if journals accepted more negative/replication results instead of going for sensational positive results.
Thanks Rodolphe - I am not sure that there is a single root of this problem. I agree that there are problems that arise due to the perceived need for positive results, but I am trying to make a different point: As scientists, our ultimate goal is to come up with effective stories about how the world works, which must necessarily abstract away from the details of the data. If we are not testing and ultimately telling stories then we are not doing science, we are just collecting data. The stories need to reflect the data accurately, both where the data fit the story and where they do not. To me, this is the essence of good scientific expression.
@russ: Much as investigative reporters stitch together scant pieces of information to produce a news article, scientists stitch together pieces of information to create a narrative around their results. As such it is colored through the lens of the author (and perhaps some small set of reviewers). Scientific storytelling relies on expert knowledge of the domain and access to/awareness of information, and as such it carries with it many biases. Further, the human brain that comprehends these stories is not capable of thinking beyond a few dimensions. Therefore, reducing the mechanistic complexity of how the world works to a narrow low-dimensional projection only satisfies our inherent need for simplicity while abstracting away from reality. I don't believe that everything reduces to E=mc² (and even that equation has a more complex form). Fundamentally, what needs to be asked is which pillars of our knowledge foundation are shaky, and by reducing to storytelling we communicate the excitement but perhaps not our limitations in understanding.
Coming back to the original article, I know that you agree that only by making the many pieces of data available, whether significant or not, do you provide access to the richness of the story and the complexity of the characters.
I think that 'telling a story' is important. Before the trolls are at me, I don't mean 'overhype the data'; I mean 'explain what the data mean'.
ReplyDeleteA major advance in animal lesion work was the 'cross-lesion technique'. It enables us to study interactions between areas, not just the function of particular areas.
Both my thesis supervisor, George Ettlinger, and Mort Mishkin at NIH claimed to have been the first to devise the technique. Mort assures me that it was him, and I believe him. But that is not the point.
The point is that George, who published the first full paper using the method (in Brain in 1959), didn't know how to write. The paper is bald, dull, full of tables, with no diagrams and no proper explanation of why the method is helpful and why the results mattered.
Mort wrote up a paper much later in which he produced diagrams of the various stages, showed the data in histogram form and really explained why the method was so powerful.
'Telling a story'? Yes, but not overhyping things. Just taking the trouble to think how best to help readers understand why the method and results are significant.