Puppies of Jenkintown Part VI: Ah, that’s where the camera was, edition

Yes, I admit, it has been a while since the last Puppies of Jenkintown entry, a full month in fact. I don’t want you, dear reader, to suspect that I haven’t been walking my daughter or I haven’t been allowing her to shoot puppies or, heaven forfend, we ran out of puppies. We haven’t, of course.

Why, this very evening I went for a walk with Benny — just a block or so — and saw two entirely new pups. I didn’t tell Julia for fear of launching her into a snit, as she was already in a fragile, post-rainy-day state of mind. We did manage to bring back the acorns freshly shaken from a tree up the street by the earlier thunder boomer. Julia places them strategically around the yard for squirrels.

Squirrel!

Um, where was I? Oh yes, more puppies of Jenkintown. My point was that three things must come together to get some proper puppy shots: 1) puppies, 2) camera, 3) fresh batteries. Those three things don’t always coincide. However, here are some from the latest batch, including Grover, the hardest working dog in Jenkintown.

All photos by Julia Rose Lester


Beware the Spinal Trap

I’m following the herd, but: Support Simon Singh.

On 29th July a number of magazines and websites are going to be publishing Simon Singh’s Guardian article on chiropractic from April 2008, with the part the BCA sued him for removed.

They are reprinting it, following the lead of Wilson da Silva at COSMOS magazine, because they think the public should have access to the evidence and the arguments in it that were lost when the Guardian withdrew the article after the British Chiropractic Association sued for libel.

We want as many people as possible around the world to print it or put it live on the internet at the same time to make an interesting story and prove that threatening libel or bringing a libel case against a science writer won’t necessarily shut down the debate.

You might be surprised to know that the founder of chiropractic therapy, Daniel David Palmer, wrote that “99% of all diseases are caused by displaced vertebrae”. In the 1860s, Palmer began to develop his theory that the spine was involved in almost every illness because the spinal cord connects the brain to the rest of the body. Therefore any misalignment could cause a problem in distant parts of the body.

In fact, Palmer’s first chiropractic intervention supposedly cured a man who had been profoundly deaf for 17 years. His second treatment was equally strange, because he claimed that he treated a patient with heart trouble by correcting a displaced vertebra.

You might think that modern chiropractors restrict themselves to treating back problems, but in fact some still possess quite wacky ideas. The fundamentalists argue that they can cure anything, including helping treat children with colic, sleeping and feeding problems, frequent ear infections, asthma and prolonged crying – even though there is not a jot of evidence.

I can confidently label these assertions as utter nonsense because I have co-authored a book about alternative medicine with the world’s first professor of complementary medicine, Edzard Ernst. He learned chiropractic techniques himself and used them as a doctor. This is when he began to see the need for some critical evaluation. Among other projects, he examined the evidence from 70 trials exploring the benefits of chiropractic therapy in conditions unrelated to the back. He found no evidence to suggest that chiropractors could treat any such conditions.

But what about chiropractic in the context of treating back problems? Manipulating the spine can cure some problems, but results are mixed. To be fair, conventional approaches, such as physiotherapy, also struggle to treat back problems with any consistency. Nevertheless, conventional therapy is still preferable because of the serious dangers associated with chiropractic.

In 2001, a systematic review of five studies revealed that roughly half of all chiropractic patients experience temporary adverse effects, such as pain, numbness, stiffness, dizziness and headaches. These are relatively minor effects, but the frequency is very high, and this has to be weighed against the limited benefit offered by chiropractors.

More worryingly, the hallmark technique of the chiropractor, known as high-velocity, low-amplitude thrust, carries much more significant risks. This involves pushing joints beyond their natural range of motion by applying a short, sharp force. Although this is a safe procedure for most patients, others can suffer dislocations and fractures.

Worse still, manipulation of the neck can damage the vertebral arteries, which supply blood to the brain. So-called vertebral dissection can ultimately cut off the blood supply, which in turn can lead to a stroke and even death. Because there is usually a delay between the vertebral dissection and the blockage of blood to the brain, the link between chiropractic and strokes went unnoticed for many years. Recently, however, it has been possible to identify cases where spinal manipulation has certainly been the cause of vertebral dissection.

Laurie Mathiason was a 20-year-old Canadian waitress who visited a chiropractor 21 times between 1997 and 1998 to relieve her low-back pain. On her penultimate visit she complained of stiffness in her neck. That evening she began dropping plates at the restaurant, so she returned to the chiropractor. As the chiropractor manipulated her neck, Mathiason began to cry, her eyes started to roll, she foamed at the mouth and her body began to convulse. She was rushed to hospital, slipped into a coma and died three days later. At the inquest, the coroner declared: “Laurie died of a ruptured vertebral artery, which occurred in association with a chiropractic manipulation of the neck.”

This case is not unique. In Canada alone there have been several other women who have died after receiving chiropractic therapy, and Edzard Ernst has identified about 700 cases of serious complications in the medical literature. This should be a major concern for health officials, particularly as under-reporting will mean that the actual number of cases is much higher.

If spinal manipulation were a drug with such serious adverse effects and so little demonstrable benefit, then it would almost certainly have been taken off the market.

Testing yet another blogging app

The beauty of my Dell netbook (bought refurbished, a steal at $179) is that I don’t mind blowing it up on occasion, at least now that I’ve made a USB boot disk for my new Linux OS of choice (Ubuntu 9.04 netbook remix), which is much better than the Dell-ed-up version of Linux that came pre-installed.

I’ve had to do this three times so far, since I can’t resist screwing around with things. I have but a wee solid-state drive, so I keep all my documents on an SD card that stays in the slot. That way, when my kernel panics or some other weirdness happens, I only have to download a new image from the NASA pic of the day site for my background, get rid of the games that came with the OS, and find a new blogging application, if I care to do so. This time around, I’m using Gnome Blog, which is fairly feature-free and simple to use, thus far.

I just have to remember to keep the USB boot drive in the office.

Anyway, Gnome Blog seems to be a keeper when I just want to add a quick note.

Flacks exaggerate importance of medical research

I missed this earlier and, at the risk of getting myself into trouble, I’d like to say a few words. Ben Goldacre in The Guardian turned his eye toward a recent study about the quality of press releases from major American medical research centers. Having worked in at least one top research institute probably referenced in the study, I’m not terribly shocked.

Sometimes I think you are less likely to see an exaggeration in a corporate release about a clinical trial than in an academic press release. The corporate flack is beholden to a separate, much stricter set of rules than those seen in non-profit academic centers. (In general, however, they overcompensate for their bland, corporate releases by being complete PITAs when it comes to pitching their stories. What isn’t generally well known is how hard they lean on academic flacks to do their dirty work for them. In my experience, at least.)

According to Goldacre, chief among the flack crimes is failing to correctly depict the size and quality of the research described. I know from experience that some press release editors frown on including such material, assuming that good journalists will follow up, actually read the study, and speak to the researchers. That might have been a safe assumption at one point, but no longer, since many press releases get picked up and used online (and often in print) verbatim.

Researchers at Dartmouth Medical School in New Hampshire took one year’s worth of press releases from 10 medical research centres {The Annals tipsheet, quoted below, mentions 20, hmmm… –Greg}, a mixture of the most eminent universities and the most humble, as measured by their US News & World Report ranking. These centres each put out around one press release a week, so 200 were selected at random and analysed in detail.

Half of them covered research done in humans, and as an early clue to their quality, 23% didn’t bother to mention the number of participants – it’s hard to imagine anything more basic – and 34% failed to quantify their results. But what kinds of study were covered? In medical research we talk about the “hierarchies of evidence”, ranked by quality and type. Systematic reviews of randomised trials are the most reliable: because they ensure that conclusions are based on all of the information, rather than just some of it; and because – when conducted properly – they are the least vulnerable to bias.

He is absolutely right, of course; depicting the quality of the study is every bit as important as spelling the lead researcher’s name correctly. (To be totally honest, I’ve probably failed on both counts in the course of the hundreds of clinical science releases I’ve written.) And I couldn’t imagine writing a release that didn’t report the number of people in a study. However, it is entirely appropriate for such information to be placed further down in the release. Not buried, mind you, along with the boilerplate and the acknowledgments-you-know-people-won’t-read-but-you-add-anyway-to-appease-the-scientist’s-collaborators. It is also very tricky to explain studies in terms lay audiences might understand without including a few extra paragraphs explaining what a P value means. Again, there is a middle ground, but it behooves flacks to mention the statistical significance of the study they’re promoting. Even a small study with few people can be significant, a fact lost on most folks, flacks especially.
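To make that last point concrete, here is a quick back-of-the-envelope sketch (mine, not Goldacre’s or the Dartmouth researchers’), using invented numbers for a hypothetical 20-person trial and a hand-rolled one-sided Fisher’s exact test. With a big enough effect, even a study that tiny clears the conventional p < 0.05 bar:

```python
from math import comb

# Hypothetical example only: 20 participants, 10 per arm.
# Treatment arm: 9 of 10 respond; control arm: 2 of 10 respond.
n_treat, k_treat = 10, 9   # treatment arm: size, responders
n_ctrl,  k_ctrl  = 10, 2   # control arm:   size, responders
total_resp = k_treat + k_ctrl
total_n = n_treat + n_ctrl

# One-sided Fisher's exact test: the probability of seeing 9 or more
# responders in the treatment arm by chance alone (hypergeometric tail).
p = sum(
    comb(total_resp, k) * comb(total_n - total_resp, n_treat - k)
    for k in range(k_treat, min(n_treat, total_resp) + 1)
) / comb(total_n, n_treat)

print(f"one-sided p = {p:.4f}")   # ~0.0027: "significant" despite only 20 people
```

The flip side, of course, is that a 20-person trial with a significant p value is still a 20-person trial, which is exactly the kind of context a press release ought to spell out.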

Probably a bigger crime, one that Goldacre doesn’t address directly and that is probably not part of the study, is the failure to distinguish between animal and human trials. Many institutions shy away from mentioning animal models as a rule, since people often react angrily — even violently — to the shocking news that you may be working on lab rats. In the past, I’ve used the term “animal model” instead of specifying rat or mouse, which were usually the animals involved. If the study involved a primate, I would have to say something and risk the reaction.

I haven’t read the Dartmouth study myself, but it appears that sins of omission aren’t the only source of exaggeration noted in releases. Here is how the Annals of Internal Medicine’s press tipsheet summarized it:

The news media is often criticized for exaggerating science stories and deliberately sensationalizing the news. However, researchers argue that sensationalism may begin with the journalists’ sources. The researchers reviewed 200 press releases from 20 academic medical centers. They concluded that academic press releases often promote research with uncertain relevance to human health without acknowledging important cautions or limitations. However, since the researchers did not analyze news coverage stemming from the press releases, they could not directly link problems with press releases with exaggerated or sensational reporting. The study authors suggest that academic centers issue fewer releases about preliminary research, especially unpublished scientific meeting presentations. By issuing fewer press releases, academic centers could help reduce the chance that journalists and the public are misled about the importance or implications of medical research.

The problem is that the act of sending out a press release fundamentally risks exaggeration by calling attention to something. Even if you are perfectly clear that the study is small and adds but an incremental bit of information to the larger scientific world, the very fact you are writing a release is calling attention to it. And, of course, you can write the least sensational press release in the world and still have it taken out of context by a reporter looking for lurid headlines.

I’d also like to know what the researchers consider cautions or limitations. According to the Goldacre piece, 58% of releases lack these sorts of things. That’s a fairly high number, though my gut says it might be a matter of perspective. Would an unread disclaimer — in the “forward-looking views” sense — count as proper caution? Were some releases judged entirely “cautious” while others were merely incomplete in their cautioning?

So, should institutions send out fewer releases? Some, perhaps, but that’s a superficial answer. I know some places that have instituted a quota system for public relations people and use press releases as a measure of productivity. I think that is a poor practice that practically guarantees shoddy releases, of course. Then again, I’ve worked in places where I would have sent out twice the number of news releases if I had the time, because the science there was just that plentiful and interesting. It isn’t all that cut and dried.

Press officers are always told to look for clinical relevance in basic science stories. They are told that journalists won’t write about it otherwise. This has a certain bit of truth to it, of course. The journalist you pitch must often, in turn, pitch an editor, who will generally ask about “the point of it all.” The horror.

The majority of biomedical press releases I have written have been about laboratory results. Basic science stuff, molecules bopping into each other, and all. And here you must work hard not to exaggerate the potential clinical use of those findings. Releases like these are written with the trade press in mind as often as — if not more often than — the popular press.

Why? Because, when done well, it helps establish researchers and their institutions as productive and interesting. Because basic science does, in fact, lead to advanced medicine. Because the noise beats the signal out there, and someone must shepherd the good science through the din.

Still, it is up to the press officer to be an advocate for their institution as well as responsibly advocate the science. That’s where it helps to find a useful story angle to pitch…which, when done thoughtlessly, inevitably leads to the use of the words “holy grail” or, worse, a reference to Star Trek. The trick is to pitch the story behind the science as well as the science itself in order to find the relevance, a feat that is far easier said than done.

With fewer science reporters out there, it has become — for better or worse — incumbent upon public affairs people (PIOs, flacks) to tell the story right the first time.