I find it an interesting exercise to look at the difference between scientific and popular coverage of news stories about science.
For example, a bunch of stories have popped up recently about an experiment using transcranial magnetic stimulation (TMS) to improve people’s memory. The technique sends an electromagnetic pulse into a targeted part of a person’s brain. In this experiment, researchers sent a pulse into a region of the cortex connected to the hippocampus, a structure central to associative memory that normally can’t be reached without surgery, to see whether those connections would transmit the pulse and have some effect on the hippocampus’s function. The result: after five 20-minute sessions (one per day), there was “increased functional connectivity among distributed cortical-hippocampal network regions and concomitantly improved associative memory performance”. In plain terms, participants were given a memory test consisting of a set of arbitrary associations between faces and words that they were asked to learn and remember; those who received the placebo showed no improvement, while those who had the real treatment did. These improvements, however, only lasted for about 24 hours after stimulation.

Various online news outlets have covered this in different ways. The most complete account I found was in Science magazine, which published the entire study. But it’s highly technical, and most people would probably zone out trying to read it. Research papers aren’t for everybody, of course. The coverage I found most accessible to the layman was from Popular Mechanics. They go into some possible concerns with the design of the study, including that there is a definite detectable difference between the placebo treatment and the real treatment. From Michael Fox, a neuroscientist at Harvard Medical School who was not involved in the study:
“The real [TMS] causes a contraction on the scalp muscles, almost like somebody is tapping on your head,” he says. “The sham [TMS] feels much, much weaker. After having both, I’m fairly sure you could tell which one you had.” Unfortunately, as Fox points out, the scientists did not ask their participants which procedure they thought they had, which leaves a bit more room for a placebo effect to account for some of their results.
I didn’t see any mention of the duration of the effects in this article, however. The Newsweek article is kind of ridiculous, mischaracterizing transcranial magnetic stimulation as “using a powerful electromagnet to shoot electricity into a person’s head”. They also don’t appear to understand the difference between an MRI, which investigates the brain’s structure, and an fMRI, which maps blood flow as a proxy for brain activity. They repeatedly make references to electroconvulsive therapy, which is totally irrelevant. No mention of the duration limitations, either. Even Medical Daily made the same false association in an article about a different TMS study:
What once was called “electroshock therapy” in the sordid halls of psychiatric institutions appears ready to make its return, though the newest incarnation of brain stimulation therapy is much more refined and much more targeted.
TMS is not electroconvulsive therapy. It’s a magnetic technique that induces small electrical currents in the brain’s neural circuitry. According to the National Institute of Mental Health, TMS
uses a magnet instead of an electrical current to activate the brain. An electromagnetic coil is held against the forehead and short electromagnetic pulses are administered through the coil. The magnetic pulse easily passes through the skull, and causes small electrical currents that stimulate nerve cells in the targeted brain region. Because this type of pulse generally does not reach further than two inches into the brain, scientists can select which parts of the brain will be affected and which will not be. The magnetic field is about the same strength as that of a magnetic resonance imaging (MRI) scan.
(The variant of TMS used in this new research, repetitive transcranial magnetic stimulation, has been tested as a treatment for all sorts of disorders, including migraine, stroke, Parkinson’s disease, dystonia, tinnitus, and depression.)

As I browsed through these various articles, I was struck by the different rhetorical techniques the authors used to snag readers’ interest. Some started off with gruesome bits of history, like Newsweek’s (misleading) mention of ECT. Others tried humor, making jokes about refrigerator magnets and metal plates in the head. Lots of articles had a basic summary of the experimental procedure, which was good, but few went into detail about how the study was blinded or how placebo controls were used.

As skeptics, I think it’s important for us to recognize that there’s often much more to any science story than a single source will tell us. It’s vital to sift through the information heap to find common threads and hard data, so that we can be sure we’re not just falling for one writer’s spin on things. Modern media is all about getting and retaining readers, and in a market crowded with sensationalist clickbait articles and pithy five-second sound bites, it’s getting harder and harder for an outlet that just wants to report the facts to stay relevant. A few tips for the intrepid skeptics who want to be sure they’re getting the real story:
- Seek out multiple sources. Don’t take any single source as authoritative. Some sites will play fast and loose with the facts and make connections that aren’t really appropriate (as seen in the Newsweek example above).
- Give more credence to the original study or the actual source of the science. For example, if NASA scientists say they’re investigating what appears to be a physically impossible propulsion system, try to get the real story from them. Don’t just look at what someone like Buzzfeed says about it; they’re paid to make things seem fantastic and bizarre, and might make leaps that they shouldn’t (e.g., saying that “scientists are baffled” when they’re really just intrigued).
- See if the article links to the abstract of the study. If a writer makes sure you know where to find the original information, they’re probably not trying to pull a fast one on you. Sure, it’s always possible that they’re doing it just to make it seem like they’re being transparent, but generally it’s safe to give someone the benefit of the doubt if they’re willing to cite the data. Just don’t make this rule absolute; your best bet will still be the original, in case the writer is trying to put a spin on it.
- Remember that an article about a single study is usually preliminary, and if the sample size is small, you should be all the more cautious about accepting the results as conclusive. Science is all about confirmation and replication of results, and no one study is ever enough to call a result definitive. There’s an unfortunate trend in science writing: journalists tend to make a big noise about amazing new studies with surprising results, but if and when those results are later contradicted, not much gets said about it.
- When the study talks about an effect without a known mechanism, be especially skeptical until the results are repeated several times. You might be dealing with a study where some unknown confounding factor is involved, like an unexpected influence that wasn’t being controlled for. This may actually be the case with the rTMS study I discussed: the Time article quotes study author Joel Voss as saying that the research “was more of a hunch than I’d like to admit.” The researchers clearly expected that ‘syncing up’ several regions of the brain that work together with the hippocampus to produce associative memories would improve their function, but it’s not necessarily immediately obvious why the induction of electrical activity would make them sync up. (Then again, I’m no neuroscientist; there might be some underlying concept here I’m just not familiar with.)
- On that note: be aware of your own limitations. If you read about a concept that you don’t understand particularly well, try to seek out the thoughts of someone with more expertise. Chances are that, unless you’re reading the study itself, the person writing the article probably isn’t an expert either, and taking the article at face value might just be compounding the author’s inexperience with your own to confuse things even more. Remember, good skeptics are always skeptical of their own thought processes as well; nobody can fool you quite as well as you can.
- If a study itself doesn’t seem to pass the sniff test, don’t be afraid to look into other work done by the researchers to see how it has been received by other experts in the field. For example, if a study about vaccine safety seems to be coming from way out in left field, and you see that the author is (for example) someone who believes that vaccines cause autism and that autism can be treated through chemical castration, you’re pretty safe being skeptical of the research.
- Watch for ideological red flags. If someone brings up an irrelevant factoid in a science article that seems like it’s designed to make you respond one way or the other to the content, they may be trying to spin you. An article about global warming that talks about Al Gore not being totally green-friendly, for instance, is probably not the most reliable source. This isn’t necessarily a disqualifier, because someone can still relate good science in the midst of an ideological screed, but it’s best to ignore the distractions and just stick to the meat of the subject.
- Keep an eye out for false balance. If a subject is new or controversial (politically, socially, or scientifically), writers might seek out an opposing viewpoint just for the sake of appearing unbiased – even if that opposing viewpoint is on the extreme fringe. I’m reminded of articles about new fossil discoveries that explicitly sought out commentary from Ken Ham. If you see this kind of thing happening, there’s a decent chance that the author isn’t familiar with the mainstream viewpoint or doesn’t really care about representing a view in proportion to how accepted it is.
- Look at the ads. (Unless you’ve got them blocked, of course.) Google ads can be especially useful, since they tend to be tailored to the content of the site. If you’re on a site you’re not familiar with, the ads might give you a hint about what sort of things are usually posted there. If you see things about fad diets, alternative medicine, esoteric and fringe science concepts, and so on, you may want to be extra-skeptical!
Now… go forth and read critically!