Case Study: Antidepressants
Antidepressants represent an attractive market for pharmaceutical marketers for two reasons. First, antidepressants are most often taken long term, often for life. By contrast, even though there is a pressing need for new antibiotics to replace older drugs to which bacteria have developed resistance, very few new antibiotics have been introduced in recent years. Though the scientific research needed to gain approval of an antibiotic costs the company at least as much as, if not more than, research for a new antidepressant, patients typically take an antibiotic for only two to three weeks. The potential profit from a longer-term drug makes it much more attractive for companies to focus their research efforts in that direction, regardless of clinical need.
The second reason that antidepressants constitute a more lucrative potential market is that there is no definitive blood test or other organic marker for the diagnosis of depression. That makes the diagnosis a more elastic category and provides more of an avenue for drug marketing to expand and extend the diagnosis. (We will see below, in discussing type 2 diabetes, that having a blood test does not necessarily pose an insurmountable barrier, however.)
Even with $57 billion to spend, it is unlikely that the pharmaceutical industry would manage to convince us that something is black when we are quite sure that it is white. So most successful marketing campaigns start out by finding ways to reinforce something the target audience already wants to believe. In the case of antidepressants, this worked well for both psychiatrists and patients. During the last decades of the twentieth century, psychiatrists wanted to emerge from the perceived shadows of the domination of Freudian psychoanalysis, which incidentally was dismissive of most drug therapy. In the minds of both physicians and the general public, psychoanalysis stood for arcane mumbo jumbo akin to witch doctoring. Psychiatrists wanted above all else to prove that they were every bit as respectable as other medical specialists—and in the language of twentieth-century medicine, this meant what came to be known as biological psychiatry. Mental illness had to be just like any other disease of any other organ system—it had to be based on some purported organic lesion that was detectable by the right sorts of chemical tests and imaging studies. Mental illnesses had to be diseases of the brain. If a drug that changed the chemistry in the brain in a targeted and known manner made a mental illness better, that fact helped to convince everyone that mental illness was “real” according to the coin of the medical realm and therefore that psychiatrists were “real” doctors.
Patients for their part also wanted to hear that mental illness was caused by a chemical imbalance in the brain and could be fixed by the right pill. First, many people in society still regarded mental illnesses, especially those that used to be called neuroses, as a simple failure of will. Presumably, if only the individual tried hard enough, he’d snap out of it. A disease model that absolved the individual of personal responsibility for mental illness, analogous to the way we typically regard the victim of pneumonia as not having done anything to bring on the disease, was fervently sought. Second, Americans are generally impatient and hardly want to hear, as psychoanalysis often seemed to say, that years of patient probing into one’s life history and feelings would be needed for a meaningful response. A pill that promised to reverse the condition in just a few weeks was far preferable.
It would perhaps be reassuring to say that the specialty of psychiatry was simply caught napping and was blindsided by a clever industry ploy; but the truth is more complicated. Robert Whitaker, whose history of recent psychiatry offers a scathing critique, names a particular date when psychiatry’s leaders made an explicit decision to get into bed with the pharmaceutical industry. The American Psychiatric Association (APA) formed a task force in 1974 to explore common interests with the pharmaceutical industry, including joint public relations efforts. In 1980, the APA adopted a policy of accepting drug company funds for specific educational presentations at its convention (Whitaker 2010: 268-282). This created the near-circus atmosphere that a senior psychiatrist deplored in 2002, reporting on an international psychiatric congress in Berlin that featured, among a plethora of drug displays, a picturesque babbling brook and a 40-foot rotating tower. Presentations about drug treatment for mental disorders were thrust into the attendees’ faces; sessions describing alternatives to drugs were hidden away in back rooms (Torrey 2002). Following this sort of negative publicity, the APA announced in 2009 that it would no longer accept drug funding for its annual conferences (Tanne 2009).
As Applbaum argued, once the industry has succeeded in controlling the channels, it hardly matters what drug is put into the pipeline. The selective serotonin reuptake inhibitor (SSRI) class of antidepressants is a good illustration. The ideal of biological psychiatry is that a particular psychiatric condition is caused by a single, unique, and discrete form of chemical malfunction in the brain. The ideal psychoactive drug, in turn, targets that single chemical pathway and leaves all other brain functions untouched—the proverbial magic bullet. The problem from the start was that the SSRIs refused to conform to this pattern. As David Healy, psychiatrist, drug industry critic, and experienced investigator of these drugs, has reported, SSRIs actually appeared to affect a wide array of symptoms (Healy 1997). An early serotonin-active drug, buspirone, was marketed in the late 1980s as an antianxiety drug, and that effort flopped miserably. The company claimed that it was an excellent tranquilizer but was non-habit forming. But physicians would have none of it. They had just been through a phase of chastisement over the too liberal use of supposedly safer tranquilizers like Valium and Librium, only to find out how addictive these drugs were in practice. Everyone now knew that tranquilizers were habit forming.
Therefore, when the Eli Lilly Company set out to market its new SSRI drug, fluoxetine (Prozac), it knew exactly what to do and what to avoid. The company promoted the drug as an antidepressant, but one that was safer than the older class of tricyclic antidepressants, which were known among other things for occasional side effects involving heart rhythm and so could be fatal in overdose. Everyone knew that antidepressants were nonaddictive, so that part of the marketing campaign worked well. Then, over time, all that was needed was to convince physicians that what had previously been termed anxiety was, in reality, a disguised form of depression.
Two inconvenient facts created potential problems for this marketing campaign—fluoxetine did not work very well against depression, and it had its own nasty set of side effects. (In fact, it’s reported that when Prozac was first submitted for marketing approval in Germany, the drug agency there turned it down, unimpressed with any of the evidence of efficacy; Healy 2003: 204.) One of the stunning stories about pharmaceutical marketing during this era is how long the industry was able to conceal these inconvenient truths. Research studies that indicated high success rates with SSRIs were promptly published, often in multiple journals, while equally well-done studies showing disappointing results were quietly buried (Turner et al. 2008). Study design was carefully manipulated to avoid recording serious adverse reactions. The most worrisome though fortunately rare reaction attributed to SSRIs was akathisia, an agitated state, often occurring within the first few weeks of therapy, in which patients might become homicidal or suicidal. Some of the questionnaires used to measure adverse reactions in trials of SSRIs seemed specifically designed to avoid revealing any signs of akathisia. For years, companies insisted that any patient who committed suicide shortly after starting an SSRI did so as a result of the underlying depression and not because of the drug—an explanation that ceased to hold water when non-depressed patients, started on SSRIs for other conditions, also occasionally became suicidal.
Another secret that the drug industry managed to keep for many years is that patients who attempted to discontinue taking SSRIs often experienced a nasty withdrawal reaction. This reaction happened to be good for drug sales, as psychiatrists swayed by company marketing routinely attributed these symptoms to worsening depression and used them as evidence that the patient needed lifelong drug therapy. Even better, the psychiatrist might decide that stopping the SSRI had unmasked a coexisting psychiatric illness such as bipolar disorder, and that henceforth this patient needed to be placed on two or three psychiatric drugs, not just one. Whitaker, in his book, reviewed both the extensive clinical evidence supporting such a withdrawal syndrome and the biochemical mechanisms that rendered such a syndrome a logical outcome of drug treatment (Whitaker 2010). But as long as the dominant narrative circulating in the medical community was the one generated by the industry marketers, voices such as Whitaker’s were easily drowned out.
Perhaps the single most successful aspect of the marketing of the SSRIs was selling the general public, as well as the medical community, on the serotonin theory of depression. Many lay people can explain to you today precisely how antidepressants work—that the depressed person has too little serotonin in the synapses between nerve cells, due to too rapid reuptake of the chemical by the cells; and SSRI drugs slow the reuptake process and so restore serotonin to its proper levels. It sometimes seems as if “How’s your serotonin level?” is as likely to occur in casual conversation as “How’s everything?” Like much of the rest of the dominant, industry-promoted narrative, the serotonin theory of depression turns out scientifically to be mostly a mirage. At best it’s a serious oversimplification; at worst it’s a plain falsehood (Healy 2003; Leo and Lacasse 2008). But the theory serves so many useful purposes for both physicians and the general public that everyone is loath to let go of it, regardless of what the scientific evidence shows.