I missed this earlier and, at the risk of getting myself into trouble, I’d like to say a few words. Ben Goldacre in The Guardian turned his eye toward a recent study about the quality of press releases from major American medical research centers. Having worked in at least one top research institute probably referenced in the study, I’m not terribly shocked.

Sometimes I think you are less likely to see an exaggeration in a corporate release about a clinical trial than in an academic press release. The corporate flack is beholden to a separate set of rules, much stricter than those seen in non-profit academic centers. (In general, however, they overcompensate for their bland, corporate releases by being complete PsITAs when it comes to pitching their stories. What isn’t generally well known is how hard they lean on academic flacks to do their dirty work for them. In my experience, at least.)

According to Goldacre, chief among the flack crimes is failing to correctly depict the size and quality of the research described. I know from experience that some press release editors frown on including such material, assuming that good journalists will follow up, actually read the study, and speak to the researchers. That might have been a safe assumption at one point, but no longer, since many press releases get picked up and used online (and often in print) verbatim.

Researchers at Dartmouth Medical School in New Hampshire took one year’s worth of press releases from 10 medical research centres {The Annals tipsheet, quoted below, mentions 20, hmmm… –Greg}, a mixture of the most eminent universities and the most humble, as measured by their US News & World Report ranking. These centres each put out around one press release a week, so 200 were selected at random and analysed in detail.

Half of them covered research done in humans, and as an early clue to their quality, 23% didn’t bother to mention the number of participants – it’s hard to imagine anything more basic – and 34% failed to quantify their results. But what kinds of study were covered? In medical research we talk about the “hierarchies of evidence”, ranked by quality and type. Systematic reviews of randomised trials are the most reliable: because they ensure that conclusions are based on all of the information, rather than just some of it; and because – when conducted properly – they are the least vulnerable to bias.

He is absolutely right, of course: depicting the quality of the study is every bit as important as spelling the lead researcher’s name correctly. (To be totally honest, I’ve probably failed on both counts in the course of the hundreds of clinical science releases I’ve written.) And I couldn’t imagine writing a release that didn’t report the number of people in a study. However, it is entirely appropriate for such information to be placed further down in the release. Not buried, mind you, along with the boilerplate and the acknowledgments-you-know-people-won’t-read-but-you-add-anyway-to-appease-the-scientist’s-collaborators. It is also very tricky to explain studies in terms lay audiences might understand without including a few extra paragraphs explaining what a P value means. Again, there is a middle ground, but it behooves flacks to mention the statistical significance of the study they’re promoting. Even a small study with few people can be statistically significant, a fact lost on most folks, flacks especially.
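To make that last point concrete, here is a minimal sketch of my own (the numbers are hypothetical and not from the Dartmouth study or the Goldacre piece): an exact two-sided binomial test on an imaginary 10-person study, using nothing but the Python standard library.

```python
from math import comb

# Hypothetical example: 9 of 10 participants respond to a treatment.
# Under a null hypothesis of a 50/50 chance of responding, what is the
# exact two-sided binomial p-value?
n, k = 10, 9
p_upper_tail = sum(comb(n, i) for i in range(k, n + 1)) / 2**n  # P(X >= 9)
p_two_sided = 2 * p_upper_tail  # null is symmetric (p = 0.5), so double one tail
print(f"p = {p_two_sided:.4f}")  # p = 0.0215 -- "significant" despite only 10 people
```

Nothing is special about these numbers; they are only there to show that significance and sample size are separate questions, which is exactly the sort of distinction a careful release can make in a sentence or two.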

Probably a bigger crime, one that Goldacre doesn’t address directly and that is probably not part of the study, is the failure to distinguish between animal and human studies. Many institutions shy away from mentioning animal models as a rule, since people often react angrily, even violently, to the shocking news that you may be working on lab rats. In the past, I’ve used the term “animal model” instead of specifying rat or mouse, which was usually the animal involved. If the study involved a primate, I would have to say so and risk the reaction.

I haven’t read the Dartmouth study myself, but it appears that sins of omission aren’t the only source of exaggeration noted in releases. Here is how the Annals of Internal Medicine’s press tipsheet summarized it:

The news media is often criticized for exaggerating science stories and deliberately sensationalizing the news. However, researchers argue that sensationalism may begin with the journalists’ sources. The researchers reviewed 200 press releases from 20 academic medical centers. They concluded that academic press releases often promote research with uncertain relevance to human health without acknowledging important cautions or limitations. However, since the researchers did not analyze news coverage stemming from the press releases, they could not directly link problems with press releases with exaggerated or sensational reporting. The study authors suggest that academic centers issue fewer releases about preliminary research, especially unpublished scientific meeting presentations. By issuing fewer press releases, academic centers could help reduce the chance that journalists and the public are misled about the importance or implications of medical research.

The problem is that sending out a press release at all risks exaggeration, because the whole point of a release is to call attention to something. Even if you are perfectly clear that the study is small and adds only an incremental bit of information to the larger scientific picture, the very act of writing a release elevates it. And, of course, you can write the least sensational press release in the world and still have it taken out of context by a reporter looking for lurid headlines.

I’d also like to know what the researchers consider cautions or limitations. According to the Goldacre piece, 58% of releases lacked them. That’s a fairly high number, though my gut says it might be partly a matter of perspective. Would an unread boilerplate disclaimer, in the “forward-looking statements” sense, count as proper caution? Were some releases thoroughly cautious while others were less complete in their cautioning?

So, should institutions send out fewer releases? Some should, perhaps, but that’s a superficial answer. I know some places that have instituted a quota system for public relations people and use press releases as a measure of productivity. That is a poor practice that practically guarantees shoddy releases, of course. Then again, I’ve worked in places where I would have sent out twice the number of news releases if I’d had the time, because the science there was just that plentiful and interesting. It isn’t all that cut and dried.

Press officers are always told to look for clinical relevance in basic science stories. They are told that journalists won’t write about them otherwise. There is a certain amount of truth to this, of course. The journalist you pitch must often, in turn, pitch an editor, who will generally ask about “the point of it all.” The horror.

The majority of biomedical press releases I have written have been about laboratory results. Basic science stuff, molecules bopping into each other, and all that. And here you must work hard not to exaggerate the potential clinical use of those findings. Releases like these are written with the trade press in mind as often as, if not more often than, the popular press.

Why? Because, when done well, it helps establish researchers and their institutions as productive and interesting. Because basic science does, in fact, lead to advanced medicine. Because the noise out there beats the signal, and someone must shepherd the good science through the din.

Still, it is up to the press officer to be an advocate for the institution as well as a responsible advocate for the science. That’s where it helps to find a useful story angle to pitch… which, when done thoughtlessly, inevitably leads to the words “holy grail” or, worse, a Star Trek reference. The trick is to pitch the story behind the science as well as the science itself in order to find the relevance, a feat that is far easier said than done.

With fewer science reporters out there, it has become, for better or worse, incumbent upon public affairs people (PIOs, flacks) to tell the story right the first time.