Tuesday, 29 November 2011

Cognitive Behavioural Therapy vs. Psychoanalysis

Clinical trials of cognitive behavioural psychotherapy (CBT) for depression are often of poor quality - and are no better than trials of the rival psychodynamic school.

So says a new American Journal of Psychiatry paper that could prove controversial.

CBT is widely perceived as having a better evidence base than other therapies. The "creation myth" of CBT (at least as I was taught it) is that it was invented by a psychoanalyst who got annoyed at the unscientific nature of psychodynamic i.e. Freudian-influenced therapy. CBT has always looked on clinical trials more favorably than the dynamic school.

However, the authors of this meta-analysis found that while there are certainly lots of published CBT trials for depression, they're actually no better quality than the psychodynamic trials.

"Surprisingly" (their word), they found no difference between the CBT for depression trials, and the psychodynamic trials, on a rating score of trial methodology.

Trials got better over time, but the two groups improved equally. The mean score was 25.5 for CBT and 25.1 for dynamic, on a scale that runs from 0 to 48. Anything over 24 points is deemed acceptable, but this is clearly an arbitrary cut-off.

The RCT-PQRS scale is relatively new and it was developed by the people who wrote this paper (albeit with input from other experts). There are 24 items and each gets a score from 0 (bad) to 2 (good). Items are things like "Adequate sample size", "Patients randomly assigned to group", etc.
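To make the scoring concrete, here's a toy sketch in Python (my own illustration - the item names and ratings are made up, not taken from the actual scale):

# Toy illustration of a 24-item quality scale, scored 0-2 per item.
# Item names and ratings here are hypothetical, not the real RCT-PQRS items.
items = {
    "adequate_sample_size": 2,         # 0 = bad, 1 = partial, 2 = good
    "patients_randomly_assigned": 2,
    "blinded_outcome_assessment": 1,
    # ...the real scale has 24 such items
}
score = sum(items.values())            # the full scale runs from 0 to 48
print(score, "acceptable" if score >= 24 else "below the (arbitrary) cut-off of 24")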

Worryingly, better CBT trials tended to find smaller benefits of CBT over the comparison treatment. The overall results showed that while CBT was clearly better than doing nothing, it was pretty much the same as antidepressants, and other psychotherapies, in adults with depression.


The article follows one from the same group, Gerber et al, who reviewed the evidence for psychodynamic therapy in more detail. And last year, another team reported evidence of publication bias in psychotherapy trials. In this study, the authors report possible publication bias, but they don't go into detail.

Overall this is interesting stuff, and a reminder that while CBT has the most evidence of any psychotherapy, this is not the same thing as saying that it has the best evidence...

Nathan C. Thoma et al (2011). A Quality-Based Review of Randomized Controlled Trials of Cognitive-Behavioral Therapy for Depression: An Assessment and Metaregression. American Journal of Psychiatry

Saturday, 26 November 2011

Beware Dead Fish Statistics

An editorial in the Journal of Physiology offers some important notes on statistics.


But even more importantly, it refers to a certain blog in the process:
The Student’s t-test merely quantifies the ‘Lack of support’ for no effect. It is left to the user of the test to decide how convincing this lack might be. A further difficulty is evident in the repeated samples we show in Figure 2: one of those samples was quite improbable because the P-value was 0.03, which suggests a substantial lack of support, but that’s chance for you! A parody of this effect of multiple sampling, taken to extremes, can be found at http://neuroskeptic.blogspot.com/2009/09/fmri-gets-slap-in-face-with-dead-fish.html
This makes it the second academic paper to refer to this blog so far. Although I feel rather bad about this one, since the citation ought to have been to the original dead salmon brain scanning study by Craig Bennett, which I just wrote about.

Actually, though, this editorial was published in five separate journals: The Journal of Physiology, Experimental Physiology, the British Journal of Pharmacology, Advances in Physiology Education, and Clinical and Experimental Pharmacology and Physiology. Phew.

In fact, you could say that this makes not two but six citations for Neuroskeptic now. Yes. Let's go with that.

Anyway, after discussing the history of the ubiquitous Student's t-test - which was invented in a brewery - it reminds us that the p value you get from such a t-test doesn't tell you how likely it is that your results are "real".

Rather, it tells you how often you'd get a result like yours if there were no real effect, just random chance. That's a big difference. A p value of 0.01 doesn't mean your results are 99% likely to be real. It means that, if there were no effect, you'd get a result like yours only 1% of the time. But if you did, say, 100 experiments - or, more likely, 100 statistical tests on the same data - then you'd expect to get at least one result with a p value of 0.01 or less purely by chance.

In that case it would be silly to think that the finding was only 1% likely to be a fluke. Of course it could be true. But we'd have no particular reason to think so until we get some more data.

This is what the dead salmon study was all about. This multiple comparisons issue is very old, but very important. Arguably the biggest problem in science today is that we're doing too many comparisons and only reporting the significant ones.
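If you want to see this in action, here's a quick simulation (mine, not the editorial's): run 100 t-tests on pure noise and watch "significant" results appear out of nowhere.

# Simulate the multiple comparisons problem: 100 t-tests on pure noise.
# With no real effects anywhere, ~5 tests still come out at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, n_per_group = 100, 20

p_values = []
for _ in range(n_tests):
    a = rng.normal(size=n_per_group)   # "treatment" group: just noise
    b = rng.normal(size=n_per_group)   # "control" group: just noise
    p_values.append(stats.ttest_ind(a, b).pvalue)

print("Tests with p < 0.05:", sum(p < 0.05 for p in p_values))  # expect about 5
print("Smallest p value: %.4f" % min(p_values))  # often below 0.01, by chance alone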

Drummond GB, & Tom BD (2011). Statistics, probability, significance, likelihood: words mean what we define them to mean. British Journal of Pharmacology, 164 (6), 1573-6 PMID: 22022804

Friday, 25 November 2011

A Dangerous Truth about Antidepressants

An opinion piece by veteran psychiatrist and antidepressant drug researcher Sheldon Preskorn contains a remarkable historical note -
“A dangerous idea!” That was the response after a presentation I gave to a small group of academic leaders with an interest in psychopharmacology [over 15 years ago].
What evoked such a response? The acknowledgment that most currently available antidepressants specifically treat only one out of four patients with major depression based on the bulk of clinical trials data.
There was no argument about the accuracy of this statement, but...some claim it is “dangerous” to admit that the specific response rate to most antidepressants is 20%–30% because such an acknowledgment might undermine the value of antidepressant treatment.
By the "specific" response rate Preskorn means the number of depressed people who'll get better on antidepressants and who wouldn't have done so well on placebo. This rate is fairly low because, while most people get better on antidepressants, most of those improve on placebo as well.

Preskorn rejects the view that it's dangerous to acknowledge this:
...there are several problems with this reaction. First, it is hard to deny reality. The “placebo” response rate in antidepressant trials is arguably the most reproducible finding in psychiatry. Moreover, if available antidepressants were magic bullets, then polypharmacy would not be so common. Second, this reaction ignores the fact that antidepressants are tremendously valuable to the patients who specifically benefit from them...
Every treatment in every area of medicine has limitations. Acknowledging that fact should galvanize us to action. Denial on the other hand perpetuates the status quo.
Unfortunately, we're not told who these academic leaders were. I wonder if they included amongst their ranks some of the "key opinion leaders" in the field whose leadership proved rather less than ideal. The column is actually adapted from a 1996 article by Preskorn.

Preskorn is right, of course, that it's wrong to deny that antidepressants are only substantially better than placebo in a fraction of the people who get diagnosed with "depression". It also misses the point: the definition of "depression" is so loose that hundreds of millions of people worldwide qualify for the diagnosis, so even if antidepressants only helped 1% of them, they'd still help over a million people.

But he doesn't mention that this approach was ultimately self-defeating. As a result of the failure to acknowledge that antidepressants are only helpful in some cases of depression (namely "severe" depression), these drugs became very widely used and - oh dear - people started saying that the drugs are being overused, and don't work in most people who take them.

Whoever could have seen that coming.

This has "devalued" antidepressants - and psychiatry itself - more than anything else has.

Preskorn SH (2011). What Do the Terms "Drug-Specific Response/Remission Rate" and "Placebo" Really Mean? Journal of Psychiatric Practice, 17 (6), 420-424 PMID: 22108399

Wednesday, 23 November 2011

The Gene That's "For" Nothing

Scientists like to warn you not to talk about "the gene for" a particular disease or trait.

I've done so in previous posts e.g. this one or this one.

But such scolding is not always very effective. We like simple explanations, so we like to find simple connections between genes and phenotypes.

Which is why a new paper is important. The authors, a large Turkish-American collaboration, found that mutations in a gene, WDR62, are associated with severe brain malformations in 9 patients. But what's interesting is that it doesn't cause any particular malformation.

If you have two faulty copies of this gene, your brain won't be normal, but what goes wrong varies widely amongst different people. Although the 9 cases had some features in common, such as microcephaly (small head and brain), in other respects they differed greatly.

As the authors put it, mutations in WDR62 cause
a wide spectrum of severe cerebral cortical malformations including microcephaly, pachygyria with cortical thickening as well as hypoplasia of the corpus callosum. Some patients... had evidence of additional abnormalities including lissencephaly, schizencephaly, polymicrogyria and, in one instance, cerebellar hypoplasia, all traits traditionally regarded as distinct entities.
These are distinct entities, in the sense that you can have any one of them, without having the others. And they are different brain changes. What the authors mean is that everyone assumed that, because they're different, they must have different genetic causes. They've just shown that this is wrong.

So what is WDR62 "for"? Experiments in mice showed it to be involved in the migration of new neurons from their origin to their final location in the brain. So it's "for" correct neuronal placement, although how it works remains unclear.

WDR62 ought to remind us that there's a long and winding road from gene to phenotype, and that the same gene can, when mutated, cause very different symptoms. This is especially interesting in the light of recent evidence showing that the same mutations can cause a range of behavioural disorders from autism to ADHD to schizophrenia.

Bilgüvar K, et al (2010). Whole-exome sequencing identifies recessive WDR62 mutations in severe brain malformations. Nature, 467 (7312), 207-10 PMID: 20729831

Tuesday, 22 November 2011

Was Evita Lobotomized?

Eva Peron, or Evita, is perhaps the most famous woman in Latin American history. As the wife of Argentinian leader Juan Peron she was immensely popular. But she died at the age of just 33 from cervical cancer, after a two year struggle with the disease.


A new paper makes the startling claim that Eva Peron may have received a prefrontal lobotomy in the months before her death. The lobotomy is best known as a treatment for mental disorders such as schizophrenia, but according to Nijensohn et al, Peron was given the operation as a kind of pain relief.

The claim was first made in 2005 by Dr George Udvarhelyi, who worked as a neurosurgeon in Argentina before moving to Johns Hopkins in Baltimore. After his retirement, Udvarhelyi told the Baltimore Sun that he'd performed the operation.

The authors of this paper checked out the claims against his unpublished memoirs. It turns out that they've just written Udvarhelyi's biography, and managed to slip in a plug for their book. Indeed, this paper could be seen as a plug. But anyway.

The early 1950s were the golden age of lobotomy and it does seem plausible that if she had one, it would have been kept secret. But it seems that the only direct evidence is Udvarhelyi's testimony. The authors point to various facts that could be seen as consistent with it, like this memoir by a close friend:
“The illness continued to advance. I visited her one afternoon and was shown a notebook belonging to her brother Juancito. There was a drawing of Evita with her head criss-crossed by scissors. The sinister image suggested that she was either crazy or brain damaged. I found her very thin, quiet, and deeply introverted”
But to be honest this is pretty weak. The authors also admit that in interviews with scholarly experts on Peron's illness, they were all surprised by the idea.

They then point to postmortem X-rays of Peron's skull which were made public in 1955 to prove that her corpse hadn't been burned (long story). These, they suggest, show evidence of the kind of burr holes that were used to insert the lobotomy tools.

And they say that a photo of her shortly before her death shows an "indentation at the coronal level".


Hmm. Not sure what to make of those. Ultimately though, the authors admit that the only way to know for sure would be to exhume Evita and study her skull, but this is unlikely to happen any time soon.

Nijensohn DE, Savastano LE, Kaplan AD, & Laws ER Jr (2011). New Evidence of Prefrontal Lobotomy in the Last Months of the Illness of Eva Perón. World Neurosurgery PMID: 22079825

Saturday, 19 November 2011

Potential Personal Genomics

A while ago I wrote about how new findings in genetics could herald a new kind of "eugenics", based not around selective breeding to ensure that "bad" genes aren't passed on, but rather based on using fetal genetic testing to choose which variants enter the gene pool in the first place.

I said-
In the near future, we might be able to routinely sequence the genome of any unborn child shortly after conception
But I didn't realize that this may be really very near indeed. Two recent reports have shown that it's possible to sequence fetal DNA from a maternal blood sample. In one case it was used to diagnose a 35-week fetus with a genetic deletion on chromosome 12, seemingly associated with autism, developmental delay and short stature.

In this case it was inherited from the father (which is why they decided to test for it), but this approach could equally be used to screen for the de novo mutations that account for much disease, as I discussed in the last post.

This is big. Currently, the main way to get fetal DNA is through amniocentesis, i.e. inserting a needle into the womb. It's a substantial and not entirely safe medical procedure. A blood sample would be an order of magnitude cheaper and safer, but most of all it would be something you could do at home.

No longer would you need to go to a hospital and discuss everything with a doctor. You could take some blood, send it off anonymously to a sequencing company, and get the results in an email. It would take it out of the hands of professionals and open up a space for individual choice.

The cost of whole-genome sequencing has been falling exponentially and many think it will fall below the $1000 mark within a few years. Combine that with fetal DNA testing and we might see moderately well-off parents able to sequence fetal DNA within the next decade.


When this happens I think the personal genomics industry will suddenly become extremely "hot". At the moment you can sequence your own DNA for a few thousand dollars if you want. The results may be interesting but they're of little obvious use. Whatever your genes are, you're stuck with them.

But as soon as we're talking about potential human genomes, it'll kick things up a notch. Media interest and political controversy are sure to follow. Personally I think the debate will begin in earnest when we start seeing selective abortions on the basis of genes for "normal" variants rather than "disease" genes.

It's one thing to not want a child with blindness, or a high risk of leukaemia. But as a society I don't think we're ready for not wanting a child because they're predicted to be a B student rather than an A student, or brunette rather than blonde. At some point soon, though, we'll have to decide what we think about that.

Peters D, Chu T, Yatsenko SA, Hendrix N, Hogge WA, Surti U, Bunce K, Dunkel M, Shaw P & Rajkovic A (2011). Noninvasive prenatal diagnosis of a fetal microdeletion syndrome. The New England Journal of Medicine, 365 (19), 1847-8 PMID: 22070496

Srebniak M, Boter M, Oudesluijs G, Joosten M, Govaerts L, Van Opstal D, & Galjaard RJ (2011). Application of SNP array for rapid prenatal diagnosis: implementation, genetic counselling and diagnostic flow. European journal of human genetics : EJHG, 19 (12), 1230-7 PMID: 21694736

Friday, 18 November 2011

Does MRI Make You Happy?


A startling new paper from Tehran claims Antidepressant effects of magnetic resonance imaging-based stimulation on major depressive disorder.

Yes, this study says that having an MRI scan has a powerful antidepressant effect.

They took 51 depressed patients, and gave them all either an MRI scan or a placebo sham scan. The sham was a "scan" in a decommissioned scanner. The magnet was off but they played recorded scannerish sounds to make it believable. Patients were blinded to group.

They found that people in the scanner group improved much more than those in the sham group over two weeks. Actually there were two different kinds of scans, T1 structural MRI and EPI functional MRI, but both had much the same effect.
Now, if this is true, it's huge. Obviously. For one thing, it would undermine the whole premise of functional MRI, which is that it's a method of recording brain activity. If it's also stimulating the brain in some way at the same time, then it would make it hard to interpret those activations. In particular it would cast all the studies using fMRI in depression into doubt.

So is it true? I can't see any obvious flaws in the design. Assuming that the authors are right when they say that "patients could not distinguish the difference between the actual and sham MRI scan", i.e. assuming that the blind was truly blind, then the methodology was sound.

But let's look at the statistics. The paper is full of very impressive p values less than 0.001, but those turn out to all be referring to the changes within each group, and those changes are fairly meaningless. What matters are the differences between the groups:
Changes in BDI scores (between baseline and day 14) were significantly different among the three studied groups (F=5.48, p=0.007 overall) using ANOVA, and between the DWI group vs. Sham and T1 vs. Sham (p<0.05) using post hoc tests. Changes in HAMD24 scores (between baseline and day 14) were also compared among the 3 groups using ANOVA but the level of significance was slightly above the significance threshold (F=2.89, p=0.06).
Which is rather less convincing. There was a close-to-significant group difference on the HAMD24, and a significant (but only just) effect on the BDI. Remember that there were only 17 people in each group.
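Here's a toy simulation of the general problem (hypothetical numbers, not theirs): every group can "improve" from baseline with a spectacular p value, while the groups barely differ from each other - and only the between-group test answers the question we actually care about.

# Within-group vs between-group statistics, with made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical improvement scores (baseline minus day 14), 17 per group
t1, dwi, sham = (rng.normal(loc=m, scale=5, size=17) for m in (8, 8, 6))

for name, group in [("T1", t1), ("DWI", dwi), ("Sham", sham)]:
    p = stats.ttest_1samp(group, 0).pvalue        # did this group improve at all?
    print("%s within-group p = %.5f" % (name, p)) # typically very small

f, p = stats.f_oneway(t1, dwi, sham)              # do the groups differ?
print("Between-group ANOVA: F = %.2f, p = %.3f" % (f, p))  # much less impressive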

I'm inclined to think that this is one of the 5% of experiments which will produce a nominally significant result even assuming everything goes to plan and there are no confounds. My suspicion is that everyone in the trial got better (they were all on antidepressants, plus there's the placebo effect and the effect of time) - except a small number of people who didn't improve. And by chance they were all in the sham group.

The reason I'm skeptical is that I just can't see a plausible mechanism. The authors suggest that MRI scans might stimulate the brain in a similar way to TMS and that this could have antidepressant effects.

But there are a lot of problems with this: 1) it's questionable whether TMS even works for depression; 2) the magnetic stimulation of the brain generated during MRI is much weaker than in the case of TMS; and 3) if MRI really stimulated the brain like TMS then, like TMS, it would carry a risk of triggering seizures in people with epilepsy. But it doesn't.

Vaziri-Bozorg SM, et al (2011). Antidepressant effects of magnetic resonance imaging-based stimulation on major depressive disorder: a double-blind randomized clinical trial. Brain Imaging and Behavior PMID: 22069111

Wednesday, 16 November 2011

One in Four Revisited

In a recent Telegraph article, professional contrarian Brendan O'Neill argues against the idea that one in four people experience mental illness - and indeed against the idea that one in four people are bullied, abused or whatever else:
Can it really be true that a quarter of Brits are bullied or beaten up at home or are mentally ill, or is this simply a case of social campaigners exaggerating how bad life is in order that they can continue to make headlines, make an impact, and get funding? I reckon it's the latter. Next time you see the "one in four" figure, be very sceptical – it's probably Dickensian-style doom-mongering disguised as social research, where the aim is to convince us, against the evidence of our own eyes and ears, that loads of the people we encounter everyday are basket cases in need of rescue.
I say "argues against", but he doesn't actually provide any arguments. He just links to the claims and says they're silly.

As Neuroskeptic readers know, I am myself skeptical of the idea that one in four people are mentally ill, but I'm skeptical of it because I've looked at the evidence and it doesn't support that figure. Actually, if you take the available evidence at face value, it says that the true figure for the lifetime prevalence is much higher than one in four. I don't think those figures are very useful however because of various methodological issues.

So in my view we just don't know how many people are mentally ill, largely because we don't have any clear definition of what "mentally ill" means. But that doesn't mean we can just assume that it can't possibly be one in four just because "our own eyes and ears" tell us that most people are not "basket cases".

Much mental illness goes undiagnosed and unnoticed, and I'd imagine also that Brendan O'Neill and the kind of people who read him don't tend to "encounter everyday" people from groups such as the unemployed, the elderly and so forth, in whom the rates are higher.

But even beyond that, it's a silly argument because of selection bias. If you as a healthy person encounter someone everyday, chances are they're not severely ill - mentally or physically - because if they were, they'd be less likely to be around in places for you to encounter. Unless you're a doctor or whatever, you live your life in the world of healthy people.

It's like saying that you don't believe children or the elderly exist, because in your life as a working age adult, you never meet any of them.


Monday, 14 November 2011

Modern War-fMRI: Graphics Cards for Science

Videogames and neuroscience have a rocky relationship.

On the one hand you have Susan Greenfield and her games-hurt-the-brain theory. But she's not representative of neuroscientists as a whole: games have also helped neuroscience, for example, in this study of the neural correlates of "flow" experiences.

Now neuroscientists have another reason to be thankful for games, according to a new paper. It turns out that modern 3D graphics cards - which mostly exist in order to render videogame visuals - can be used to do fMRI data analysis.

According to Sweden's Eklund et al, a graphics card can perform intensive fMRI analysis hundreds of times faster than a regular processor, because graphics chips are massively parallel devices optimized for handling 3D images - and that's ultimately what all brain scans are.

They developed a way to run non-parametric statistical analyses of brain imaging data. Proponents say that non-parametric stats have many advantages over conventional parametric ones - and they're certainly becoming increasingly popular. But they involve doing far more calculations. Thousands of times more, in some cases.
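For the uninitiated, here's the basic idea of a permutation test in a minimal Python sketch (my illustration of the general technique, not the authors' code):

# Minimal two-sample permutation test: shuffle the group labels many times
# and ask how often the shuffled difference beats the observed one.
import numpy as np

rng = np.random.default_rng(1)

def permutation_test(x, y, n_perm=10_000):
    observed = x.mean() - y.mean()
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                   # randomly relabel the data
        diff = pooled[:len(x)].mean() - pooled[len(x):].mean()
        count += abs(diff) >= abs(observed)
    return count / n_perm                     # p = fraction of null >= observed

x = rng.normal(1.0, 1.0, 30)                  # hypothetical "active" values
y = rng.normal(0.0, 1.0, 30)                  # hypothetical baseline values
print("p = %.4f" % permutation_test(x, y))

Now imagine doing that with 100,000 permutations at each of tens of thousands of voxels - that's the scale of computation a whole-brain analysis requires, and it's embarrassingly parallel, which is exactly what GPUs are built for.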

It turns out though that armed with a 2.5 GHz CPU and three NVidia GTX 480s, and making use of NVidia's CUDA programming language, they were able to cut the time to analyse one person's brain with 100,000 permutations from 24 hrs to just 9 minutes. The whole setup cost $4000, so it's not cheap, but they say it's "a fraction of the price for a PC cluster with equivalent computational performance" i.e. one relying on lots of general purpose processors, rather than graphics cards. Even one GTX 480 did the job very well.

Best of all, this gives neuroscientists an excuse to spend their grant money on awesome gaming rigs. Why do I want the latest GeForce on my work computer? To do non-parametric data analysis, obviously. Sure, it would also allow me to run Modern Warfare 3 at the highest settings... but that's not why I want it.

Eklund A, Andersson M, & Knutsson H (2011). Fast random permutation tests enable objective evaluation of methods for single-subject FMRI analysis. International Journal of Biomedical Imaging, 2011 PMID: 22046176

Saturday, 12 November 2011

Autism: What A Big Prefrontal Cortex You Have

A new paper has caused a lot of excitement: it reports large increases in the number of neurons in children with autism. It comes to you from veteran autism researcher Eric Courchesne.


Courchesne et al counted the number of cells in the prefrontal cortex of 7 boys with autism and 6 non-autistic control boys, aged 2-16. The analysis was performed by a neuropathologist who was blind to the theory behind the study and to which brains were from which group. That's good.

They found that total brain weight was increased in the autistic boys, by about 17% on average. But the number of neurons in the prefrontal cortex was increased by an even bigger margin - about 67%. The difference was specific to neurons - glial cell counts were normal. Of the 7 autistic boys, 4 also had intellectual disability - an IQ less than 70. However, the 3 without showed broadly similar results.

As well as having more prefrontal neurons, there were also some other issues in some but not all of the autism brains. Two had prefrontal cortical abnormalities - dysplasia in one case and abnormal cell orientation in another. And no fewer than 4 had flocculonodular lobe dysplasia in the cerebellum.

None of the nonautistic brains had any abnormalities reported but they don't seem to have looked very closely in the controls because that was based on "coroner's report only", rather than a detailed neuropathological exam...

It's a nice piece of work, but very small. These postmortem neuropathology studies always are because postmortem brain samples are in short supply, especially for disorders like autism.

In fact, it's so small that doing statistics on these data is not really meaningful. The authors do some stats and get some impressive p values, but we should take those with a pinch of salt and just look at the individual data points.

Now, prefrontal cortical neurons are generated while you're still in the womb. New ones can't be created after you're born - numbers can only decrease. So the increased neuron count in autism must have a very early origin, either genetic or caused by pre-natal environmental factors. Unless the timeline for cell genesis is totally different in autism.

Still, it casts doubt on the idea that, in the brain, bigger is always "better". Assuming that we consider autism to be "bad" - which I'm not saying is necessarily right, but it's fair to say most people do assume that - then the common practice of equating volume increases with all kinds of good things seems rather silly.

Courchesne E, Mouton PR, Calhoun ME, Semendeferi K, Ahrens-Barbeau C, Hallet MJ, Barnes CC, & Pierce K (2011). Neuron number and size in prefrontal cortex of children with autism. JAMA, 306 (18), 2001-10 PMID: 22068992

Friday, 11 November 2011

Another Antidepressant Bites The Dust

Yet another up-and-coming antidepressant has flopped.

A paper just out reveals that the snappily-named GSK372475 doesn't work and has lots of side effects. It's a report of two clinical trials in which Glaxo's contender was pitched against placebo and against older antidepressants in the treatment of depression.

GSK372475 failed to improve depression any better than placebo, even though the trials were large (393 and 504 patients respectively) and twice as long as most antidepressant trials (10 weeks, whereas 4 or 6 is more usual), which ought to have given it plenty of room to shine.

The comparison drugs, the widely used venlafaxine and paroxetine, did work. A bit.

One of the trials even used the Bech "Melancholia Subscale" as an outcome measure, which Neuroskeptic readers may remember, as I've praised it before. Venlafaxine worked on that; GSK's new pill didn't. If anything, the new drug was worse than placebo, in that patients improved more slowly.

In terms of side effects it caused dry mouth, insomnia, and nausea serious enough to make many people quit the study early. But even worse, it raised heart rate by almost 10 beats per minute on average, which is really never a good sign.

So, overall, it was an utter flop. In one sense this is not surprising. New "antidepressants" that don't work in trials have been all too common recently. Just last week we learned about the failure of "Serdaxin" in a Phase II trial. Actually Serdaxin isn't a new drug but an old antibiotic called clavulanic acid that a company was trying to rebrand as a mood lifter.

However the failure of GSK372475 is a bit of a mystery. The drug is a potent triple reuptake inhibitor (TRI) which acts on the neurotransmitters serotonin, noradrenaline and dopamine. By contrast, venlafaxine is a double reuptake inhibitor which doesn't hit dopamine, and paroxetine only targets serotonin. I've written about other TRIs before.

Now it seems surprising that venlafaxine worked, but a TRI didn't, in the same trial. That would imply that blocking the reuptake of dopamine makes you more depressed, enough to cancel out the other actions which are shared with venlafaxine. Which is not what I'd have predicted.

There are other differences between the drugs though. Venlafaxine has a very short half-life - it's broken down in the body in a matter of hours. But GSK372475 has a half-life of 8-10 days. Could this be the problem?
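Some back-of-envelope pharmacokinetics suggest why it might be (my numbers, and I'm assuming once-daily dosing, which the paper doesn't specify):

# Rough accumulation arithmetic for a drug with a very long half-life.
# Assumes once-daily dosing and simple first-order elimination.
import math

half_life_days = 9        # GSK372475's reported half-life of 8-10 days
dose_interval_days = 1    # assumed once-daily dosing

k = math.log(2) / half_life_days   # elimination rate constant
accumulation = 1 / (1 - math.exp(-k * dose_interval_days))
print("Accumulation factor: %.1fx" % accumulation)                    # about 13x
print("Time to near steady state: ~%d days" % (5 * half_life_days))  # ~5 half-lives

On those assumptions the drug would pile up to roughly 13 times the single-dose level, take over a month to reach steady state, and take just as long to wash out if the side effects became intolerable.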

Learned S, Graff O, Roychowdhury S, Moate R, Krishnan KR, Archer G, Modell JG, Alexander R, Zamuner S, Evoniuk G, & Ratti E (2011). Efficacy, safety, and tolerability of a triple reuptake inhibitor GSK372475 in the treatment of patients with major depressive disorder: two randomized, placebo- and active-controlled clinical trials. Journal of Psychopharmacology PMID: 22048884

Wednesday, 9 November 2011

The Transsexual Brain

According to a new paper, the brains of male-to-female transsexuals are no more "female" than those of men.

The authors write that "The present data do not support the notion that brains of male-to-female transsexuals are feminized" and conclude "The present study does not support the dogma that male-to-female transsexuals have atypical sex dimorphism in the brain".

That last sentence has gained quite a bit of coverage, including a quote on the Wikipedia page for "transgender".

But is it so simple?

Structural MRI scans were used to compare the size of various brain structures between three groups of volunteers: heterosexual men, heterosexual women and the transsexuals (or "MtFs", as I will call them for short), who were diagnosed with gender dysphoria and were "genetically and phenotypically males".

There were 24 in each group, which makes it a decent sized study. None of the MtFs had started hormone treatment yet, so that wasn't a factor, and none of the women were on hormonal contraception.

The scans showed that the non-transsexual male and female brains differed in various ways. Male brains were larger overall but women had increases in the relative volumes of various areas. Male brains were also more asymmetrical.

The key finding was that on average, the MtF brains were not like the female ones. There were some significant differences from the male brains, but they weren't the same differences that distinguished the females from the males.



This is a fairly crude approach. It looks at the groups on average. It's a finding, but there's more you could do with these data. It would be better perhaps to look at the male and female groups, and then try to work out which group each individual MtF is most similar to. You could do that using a Support Vector Machine such as was previously used to detect autism.

This would also have the advantage that it would integrate the results across different brain areas: maybe the important thing is not just the size of individual areas but the relative size of one area to another area.
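The idea, in code, might look something like this (the data here are entirely made up - it's just to show the shape of the analysis):

# Hypothetical sketch: train an SVM to separate male from female brains on
# regional volumes, then ask which side of the boundary each MtF brain falls.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_regions = 10   # e.g. relative volumes of 10 brain regions (invented here)

male = rng.normal(0.0, 1.0, (24, n_regions))     # 24 fake "male" brains
female = rng.normal(0.5, 1.0, (24, n_regions))   # 24 fake "female" brains
mtf = rng.normal(0.1, 1.0, (24, n_regions))      # 24 fake "MtF" brains

X = np.vstack([male, female])
y = np.array([0] * 24 + [1] * 24)                # 0 = male, 1 = female

clf = SVC(kernel="linear").fit(X, y)             # learn the male/female boundary
n_female_like = int(clf.predict(mtf).sum())      # classify each MtF brain
print("MtF brains classified as 'female':", n_female_like, "/ 24")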

My real problem though is with the language used to discuss the data. The authors say that the study doesn't support "atypical sex dimorphism in the brain" yet this wasn't a study of "the brain". It was a study of one specific aspect of the brain, namely the volume of different regions. There could be all kinds of chemical and microstructural differences that don't show up on these scans.

There are lots of people with severe epilepsy, for example, whose brains clearly differ in some major way from people without epilepsy, yet they look completely normal on MRI. Only using other methods, like EEG, reveals the difference. Because the difference is chemical, not structural.

I have no idea how, or if, the brains of MtF transsexuals are "feminized", but this study doesn't rule it out. Now I'm sure the authors know all this. And in fact they themselves recently published a paper showing atypical neural responses to smelling "odorous steroids" in transsexual people. But while neuroscientists will know what they meant, I worry that studies like this could be misconstrued by other people (like Wikipedia readers) as a result of overenthusiastic language in papers.

Link: Also blogged at BPS Research Digest.

Savic I, & Arver S (2011). Sex dimorphism of the brain in male-to-female transsexuals. Cerebral Cortex, 21 (11), 2525-33 PMID: 21467211

Sunday, 6 November 2011

Susan Greenfield's Dopamine Disaster

It's Susan Greenfield again.

Continuing her campaign warning of the dangers of modern technology and its effects on the vulnerable brains of the young, the British neuroscientist and Baroness has written another article. This is the latest of many. None of them have been in peer-reviewed academic journals.

This one's behind the Great Times Paywall so I can't link to it, but it's called Are video games taking away our identities?

The first part of the article is hard to argue against. Either you'll agree with it or you won't. Personally, videogames as Greenfield describes them bear little resemblance to any games that I've played recently. Similarly for her account of the Internet. But maybe this rings true for some:

Screen images do not depend for their impact on seeing one thing in terms of anything else. Their premium lies invariably in their raw sensory content... we are perhaps heading towards a much weaker sense of identity by engaging in a world where we are the passive recipient of senses and where there is no fixed narrative of past and future but an atomised thrill of the moment. One could even suggest that the constant self-centred readout on Twitter belies a more childlike insecurity, an existential crisis.

Greenfield then moves into discussing the brain, and this is where the science comes in. This is her "home turf" - she's a Professor of Pharmacology at Oxford. Yet it's a shambles.
There is one alarm bell ringing, which suggests that increasing 2D screen existence may be having undesirable effects: it is the threefold increase over the past decade in prescriptions for drugs for attention deficit hyperactivity disorder.
While this could be due to changes in doctors’ prescribing procedures, or indeed to a greater recognition and medicalisation of attentional problems, a third possibility could indeed be that if the young brain is exposed from the outset to a world of fast action-reaction, of instant new screen images flashing up with each press of a key, then such rapid interchange might lead to a shorter attention span.

The human condition can be basically divided into two alternating modes, first described by Euripides... the rational “bread force”, characterised by a strong cognitive take on the world — a personalised past, present and future, in turn related to an active prefrontal cortex and lower levels of the brain chemical dopamine; and the “wine force”, more the state of young children or those adults indulging in “letting themselves go”, in situations perhaps involving wine, women and song, where a strong sensory environment demands less reflection, more passive reaction.
...An increase in physiological arousal can be linked to excessive release of dopamine. Could the screen experience be tilting this ancient balance in favour of the more infantile, senses-driven brain state?
Greenfield says that high dopamine and low prefrontal cortex activity are associated with irrationality and a deficit in attention: video games cause a flood of dopamine, and the dopamine causes ADHD. That would make sense if ADHD were caused by too much dopamine, and if drugs for ADHD reduced dopamine release.

The problem is that it's the exact opposite. Drugs for ADHD increase dopamine release and ADHD is widely believed (although it's controversial) to be caused by a dopamine deficit.

Greenfield then says "We know too that dopamine suppresses the activity of neurons in the prefrontal cortex", but this is a serious oversimplification. Dopamine has complex effects on target neurons. It can inhibit firing, but it can also excite it. It all depends on the conditions. Here's what the authors of an influential scientific review said in 2004: "It is agreed by most researchers that dopamine is a neuromodulator and is clearly not an excitatory or inhibitory neurotransmitter".

Some say that dopamine helps to "tune" the prefrontal cortex by increasing the signal-to-noise ratio - more signal, less noise. Here's one of the most cited papers about dopamine and the PFC: Cognitive deficit caused by regional depletion of dopamine in prefrontal cortex of rhesus monkey.

Remember that drugs for ADHD like Ritalin, which are sometimes used illicitly by students without that disorder to help them focus and concentrate, cause dopamine release. If Greenfield were right, they should do the exact opposite.

...[other] people characterised by an underactive prefrontal cortex are those with schizophrenia, this time not due to physical damage but rather a chemical imbalance, in particular an excessive amount of the transmitter dopamine. In schizophrenia, like children, the patient is easily distracted, cannot interpret proverbs, is not strong on metaphor but takes the world literally; it is a vibrant world that can implode on, and overwhelm, the fragile firewall of the schizophrenic mindset.
This again is a serious simplification. Actually, you don't need to be a neuroscientist to work that out. Just recall the earlier bit: Greenfield has said that ADHD is caused by too much dopamine leading to an underactive prefrontal cortex. Now she says that schizophrenia is the same. So why are the symptoms of ADHD completely different from schizophrenia?

Why is it, in fact, that Ritalin and similar dopamine releasing drugs help with ADHD, but can make schizophrenia worse?

As a neuroscientist, I can tell you that we don't really know what's going on with dopamine in ADHD or schizophrenia. There's decent evidence that dopamine is involved in schizophrenia, but not in any straightforward sense. Schizophrenia is now believed to be linked to reduced dopamine in the prefrontal cortex, and too much in other areas.

As for ADHD, remember: the leading theory is that it's about too little dopamine. Not too much.

The only disease that we know for certain is associated with too little dopamine is Parkinson's. Contrary to Greenfield's theory, people with Parkinson's often have cognitive and mood problems as well as the better known difficulties with movement. They're not super intelligent, prefrontal-cortex-wielding geniuses.

I appreciate that an opinion piece in the Times is never going to be a rigorously argued scientific paper, but the fact that Greenfield's article contains several claims which are the exact opposite of the truth (or at least of current scientific thinking) calls her credibility into serious question.

Friday, 4 November 2011

Dream Action, Real Brain Activation

A neat little study has brought Inception one step closer to reality. The authors used fMRI to show that dreaming about doing something causes similar brain activation to actually doing it.

The authors took four guys who were all experienced lucid dreamers - able to become aware that they're dreaming, in the middle of a dream. They got them to go to sleep in an fMRI scanner. Their mission was to enter a lucid dream and move their hands in it - first their left, then their right, and so on. They also moved their eyes to signal when they were about to move their hands.

Unfortunately, only one of the intrepid dream-o-nauts succeeded, even though each was scanned more than once. Lucid dreaming isn't easy you know. Two didn't manage to enter a lucid dream. One thought he'd managed it, but the data suggested he might have actually been awake.

But one guy made it, and the headline result was that, during the lucid dream, his sensorimotor cortex was activated in a similar way to when he made the same movements in real life - although less strongly. Depending on which hand he was moving in the dream, the corresponding side of the brain lit up.


EEG confirmed that he was in REM sleep and electromyography confirmed that his muscles were not in fact being activated. (During REM sleep, an inhibitory mechanism in the brain prevents muscle movement. If the EMG shows activity this is a sign that you're actually partially awake).

They also repeated the experiment with another way of measuring brain activation, NIRS. Out of five dudes, one made it. Interestingly, this showed the same pattern of results - weak sensorimotor cortex activation during dreamed movement - but it also showed stronger than normal activation of the supplementary motor area, which is responsible for planning movements.


This is rather cool but in many ways not surprising. After all, if you think about it, dreaming presumably involves all of the neural structures that are involved in really perceiving or doing whatever it is you're dreaming about. Otherwise, why would we experience it so clearly as being a dream about that thing?

It may be, however, that lucid dreaming is different, and that the motor cortex isn't activated in this way in normal dreams. I suppose it depends what the dream was about.

That raises the interesting question of what someone with brain damage would dream about. On the theory that dream experiences come from the same structures as normal experiences, you shouldn't be able to dream about something that you couldn't do in real life... I wonder if there's any data on that?

Dresler M, Koch SP, Wehrle R, Spoormaker VI, Holsboer F, Steiger A, Sämann PG, Obrig H, & Czisch M (2011). Dreamed Movement Elicits Activation in the Sensorimotor Cortex. Current Biology PMID: 22036177

Wednesday, 2 November 2011

Who Should Catch Fraud?

Whose job is it to detect scientific fraud?


You've probably heard of Diederik Stapel, a Dutch psychologist who's just admitted to scientific fraud on a grand scale, with dozens and maybe over 100 papers published based on made-up data. This comes just months after Harvard's Marc Hauser resigned over unspecified data-meddling activities.

What disturbs me is not just that this fraud happened, but the way it was detected. Both Stapel and Hauser were busted by their own junior lab members. Browsing Retraction Watch and reading over other fraud cases reveals that fraud is almost always detected either 1) by readers of published papers who notice oddities in the data, or 2) by internal whistleblowers, almost always junior lab members.

But these are both ad hoc methods. They rely heavily on individual vigilance and courage in speaking out (especially in the latter case). It seems to me that there's no working mechanism for catching fraud. If there were, such acts of individual heroism wouldn't be needed.

So whose job is it to catch fraud? At the moment, it's all the work of private investigators. Where are the police?

First off, is it the job of journals? That seems plausible. Journals publish scientific papers and by doing so they are saying, implicitly, that the papers are good quality. The way this works is meant to be through peer review.

But peer review is failing to catch many cases of fraud. I guess we don't know how many fraudulent papers are caught at the peer review stage and never published. But one would hope that such cases would come to light anyway, because reviewers who suspect fraud ought to alert the relevant authorities. I can't think of any recent cases in which fraud investigations were started by peer reviewers, or at least not that we know about.

Maybe it's up to the institution that employs the fraudster? It's the institution that carries out investigations, "convicts" the fraudster and enacts the punishment. Clearly it's in their interests to do this because they don't want to be seen as soft on misconduct. But rarely do they go out and proactively try and catch or prevent fraud. It's not in their interests to do that.

Undetected fraud does no harm to anyone's reputation. On the contrary fraudsters are often the "stars" of their faculty until they get caught. Hauser and Stapel were. Plus, a department that got a reputation for hard-hitting anti-fraud measures might struggle to recruit people, even perfectly innocent ones who just found it annoying.

So what we see is departments who perform (fairly) good investigations into fraud, but only when someone else tells them to.

Maybe it's the funding bodies? They're paying for the research, so they clearly have an interest in making sure their money is well spent. At present, though, they lack the mechanisms to investigate it.

So those are the three possibilities as I see them - journals, institutions and grant awarders. While all of these organizations have policies for investigating and punishing fraud when it comes to light, they rarely (if ever) actually catch it, leaving this hazardous and stressful job to individuals.

Is there a better way?