I’ve been thinking a lot lately about “experts,” i.e. whom we consider credible and whom we don’t on any given subject,* and about the roles and responsibilities of those who are widely considered to be experts. When we as scientists conduct studies and publish the results, in general those results and our interpretations of them are complex. Often the conclusions sound like, “We have shown that the one particular scenario we considered does not disprove the model in question, and we encourage further study.” And that one particular scenario is usually just a stone on the path to some bigger understanding of a subset of a subset of a field that studies part of how the universe works.
The problem starts when we have to boil it down. Fillyjonk at Shapely Prose once commented that if you ask an expert for their “elevator pitch” that distills their very complex research down to a sentence or two for non-experts, the answer is usually “… it’s complicated.” I want to amend that, because by the end of grad school we have all had to learn to do that distillation. It’s actually not because communication with the lay public is important (I mean, it is, but that’s rarely structured into a graduate program), or because your advisor wants you to be able to impress friends at parties, but because that’s [part of] how you get funding. Some proposal panels are composed of other experts, and those proposals can be very detailed and complex, but if you can’t describe your work to a general scientific or just a general academic audience, you’ll miss out on a lot of other funding opportunities. We have to be able to make that elevator pitch. That pitch is the “big picture” view of our research — that thing so many graduate students struggle with the most and never remember to include in their presentations and talks. But until you learn to say it, on the spot, you can’t write a good abstract, good conclusions for a manuscript, or good grant proposals. (Of course, if you ask too many questions beyond the elevator pitch, that’s when you’ll usually still come up against the “it’s complicated” answer.)
And then there are press releases. Some journals have staff who write those releases, but in a lot of cases the authors of articles are required to write their own. Those big-picture, general, oversimplifying sound bites that get authors attention (= attract funding) by sometimes catching a science reporter’s eye — what so many people don’t realize is that many of those were written by the scientists themselves.
And when your press release catches a reporter’s eye, leading to further investigation into the study so they can write an article, the reporter usually doesn’t ask the questions that lead to “it’s complicated” answers, even though that’s actually the meat of the study. Who wants to write an article about something that’s complicated? That won’t sell! (i.e. “The public is too stupid and impatient to handle it!”) They want the pat answers and the sound bites, and the press releases and abstracts hand them over. Scientists simplify because they are required to do so to “communicate with the public” (and “get money from non-specialists”), but then those sell-your-research pitches that boil essentially complex systems down to a couple of oversimplified points (and if you’re lucky, advice or warnings about what people can or should do, because that will get you more attention) get turned into news stories promoting behaviors. And usually it isn’t actually based on much, in terms of statistically conclusive results.
The communication disconnect isn’t all on the end of the MSM, though they are to blame for trying to turn the simplified stories (meant, say, to convince a funding agency to fully fund what had been an incomplete pilot study) into stories that sell (meant to prey on the public’s need to stigmatize each other and learn new ways to be better than each other. Oh, and on their fear of death). It certainly isn’t the MSM’s fault that there is so much funding for “obesity research” (actually, on second thought… but that’s not the topic of this blog post, so let’s not go there today**), so scientists whose results are statistically mixed and mostly inconclusive will still try to spin them so they support socially appealing ideas about fatness and “health.” And those scientists might be bigots who want to smack down the fatties — but just as likely, they figure the public will never read their abstract (because most studies don’t get picked up), but that abstract or release will make it possible for their next study to get funded. Scientists are trying to use the media’s penchant for picking up sexy sound bites to get attention, without always thinking about the impact of promoting iffy (but sexy) conclusions.
When you work in a field where the outcomes of your research (even very preliminary, inconclusive research) can be spun to further stigmatize a severely marginalized group of people, you have an extra responsibility to stop and think. The ethics of spinning everything to get more funding are questionable anyway, although we all do it (hell, money is tight, and attention is good); but it’s particularly problematic when it can actually negatively impact people’s lives — often unnecessarily. That money might be the only thing keeping your research afloat, so yeah, it’s important… but it’s not more important than keeping teen girls from starving themselves literally to death (or just jumping the gun by committing suicide because they’re so severely depressed over being a size 12).
*Related to that, and also rolling around in my head, is the question: when is it healthy to be skeptical, and when is it hypocritical because I’ve been convinced by persuasive arguments on other subjects that are considered just as controversial? For example, the idea of treating fat people like human beings deserving dignity and respect, instead of stigma and discrimination, is treated by seemingly the majority of people in Western society as a wildly controversial position to stand by. On the other hand, I am skeptical of pretty much all suggestions that humans were visited by extraterrestrials in their past, despite what some consider to be very convincing arguments. Where do I draw the line? It’s clearly not only because of the academic credentials of the people involved, because some people in favor of the alien hypothesis have PhDs, and the most persuasive critical and statistical arguments I’ve heard in favor of fat acceptance have come from bloggers with at most a master’s in English (which is not a science field, though I don’t underestimate the critical thinking required to receive a degree in English). Really what’s going on, for the most part, is that I find some critical arguments persuasive because they’re logical, and generally supported by statistical data when I actually look at the numbers, and I find others less persuasive because they are laden with logical fallacies and misrepresent the data (or those data that exist are inconclusive). In the case of the alien hypothesis, I find the suggestion so outlandish that in the absence of any academic credentials or respect from academic institutions, I take every argument I hear with a massive grain of salt.
That skepticism is probably just as infuriating to people who believe in the hypothesis (particularly since I don’t really want to go out of my way to read those papers and books, so I even actively avoid becoming more well-read on the subject) as fatphobic bigots are to me when they refuse to consider actual statistics on the subject, because that would challenge their preconceptions. But I suppose there’s also the fact that I’m human.
** What I’m not getting into today: how this is a vicious cycle, because publicly popular topics, which are generally fueled by promotion in the media, catch on quickly in government, which runs the biggest funding agencies. Hot topics in the public quickly become the hottest research topics. So that’s where the money goes. It’s quite a spiral.
ETA: Man, Jorge Cham and I are on the same page lately.