The Science of Alcohol Addiction
by Carrie Sha
It’s a Friday night. House parties. Drinking games. Red solo cups. It’s a common sight. But where does college drinking culture come from, and where do we draw the line between being in control of alcohol and having alcohol control you? Approximately one out of five college students meets the National Institute on Alcohol Abuse and Alcoholism’s criteria for alcohol dependence (1). Even students who don’t drink can find themselves among the roughly 599,000 students unintentionally injured each year in alcohol-related incidents (1). One cause behind these alarming statistics is simply the biology of the adolescent brain. College usually coincides with the last stage of brain development, the maturation of the prefrontal cortex, a region key to self-control and decision-making. Coupled with academic stress and the pressure to succeed, especially at the nation’s top universities, it is no wonder that drinking can get out of control quickly. What is the science behind the addictive nature of the simple ethanol molecule, the key ingredient in drinking alcohol, and what are researchers doing to tame its effects? Professor Gutlerner, a lecturer in Biological Chemistry and Molecular Pharmacology at Harvard Medical School, explains.
Science of Addiction
Why is the ethanol molecule so addictive? Ethanol (C2H5OH) first acts on the GABA-A receptor, the receptor for gamma-aminobutyric acid (GABA). GABA inhibits neuron activity by increasing the flow of chloride ions into neurons (2,3). This influx of negative charge drives the membrane potential of the neuron further negative, making it difficult for the neuron to cross the threshold membrane potential required for activation. To a similar effect, alcohol inhibits glutamate, GABA’s counterpart molecule that excites neuron activity. The combined consequences of GABA stimulation and glutamate inhibition produce the calming effect typically associated with an “optimal buzz.” However, Professor Gutlerner notes that prolonged alcohol use makes the GABA-A receptor less sensitive to activation, which is partially responsible for many of the effects of alcohol withdrawal, such as anxiety and panic attacks, the combined result of a hyper-activated central nervous system. As a result, many addicts find themselves drinking simply to feel “normal.”
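For readers who want to see the electrical side of this, the following is a minimal sketch in Python of a parallel-conductance neuron model. It is my own illustration rather than anything from the article, and the conductances (arbitrary units) and reversal potentials (mV) are rough, textbook-style assumptions: the point is only that opening chloride channels pulls the membrane potential toward the chloride reversal potential and further from the firing threshold.

```python
# Minimal sketch (illustrative, not from the article): membrane potential as the
# conductance-weighted average of ionic reversal potentials. Opening chloride
# channels (as GABA-A receptor activation does) hyperpolarizes the neuron.

def membrane_potential(g_k, g_na, g_cl, e_k=-90.0, e_na=60.0, e_cl=-75.0):
    """Steady-state potential (mV): conductance-weighted average of reversal potentials."""
    return (g_k * e_k + g_na * e_na + g_cl * e_cl) / (g_k + g_na + g_cl)

THRESHOLD = -55.0  # approximate firing threshold, mV (assumed)

baseline = membrane_potential(g_k=1.0, g_na=0.2, g_cl=0.05)
with_gaba = membrane_potential(g_k=1.0, g_na=0.2, g_cl=1.0)  # chloride conductance increased

print(f"baseline:           {baseline:.1f} mV ({THRESHOLD - baseline:.1f} mV below threshold)")
print(f"with GABA-A active: {with_gaba:.1f} mV ({THRESHOLD - with_gaba:.1f} mV below threshold)")
```

With these assumed values the resting potential sits near -65 mV and drops toward -70 mV once the chloride conductance rises, leaving the cell several millivolts farther from its firing threshold.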
Reward Pathway
Although the damage to the GABA pathway is important, a more significant consequence of alcohol is its interference with the reward pathway. The reward pathway is essentially nature’s way of reinforcing beneficial behaviors and discouraging harmful ones by generating the neurotransmitter dopamine in the ventral tegmental area (VTA), a group of neurons located in the midbrain. Alcohol’s major interaction with the reward pathway comes through its stimulation of beta-endorphin release; beta-endorphins are opioid peptides, chains of amino acids that modify the activity of nearby neurons (4) and control feelings of euphoria. Alcohol also increases the concentration of dopamine, which stimulates desire in the body’s reward center, the nucleus accumbens, an area not far from the VTA. Simultaneously, alcohol binds to acetylcholine and serotonin (responsible for inhibition) receptors and alters their respective pathways. After prolonged use, more and more alcohol is needed to achieve the same level of euphoria as before. The changed neurochemistry of the addict’s brain can be seen in the following figure, which shows increased positive reinforcement in the nucleus accumbens of non-dependent drinkers and increased negative reinforcement in the amygdala of dependent drinkers.
Vulnerability of the teenage brain
Although there has been conflicting evidence over the degree to which age affects decision-making and, in turn, the vulnerability of young adults to various addictions, many researchers subscribe to a top-down impulse control model of alcohol addiction (5). Physiologically, the teenage brain is in its last stage of development, the maturation of the prefrontal cortex. This developmental stage makes decision-making and emotional control especially difficult, supporting the top-down impulse control model and the fact that, statistically, younger people are more “likely to try and become addicted to alcohol” (Gutlerner).
Stress
Although “stress” is now a common word to describe all aspects of college life, it has deep physiological roots. The stress response is seen in the activation of the hypothalamic-pituitary-adrenal (HPA) axis, which increases the production of corticotropin releasing factor (CRF), the molecule that generates the fight-or-flight response in all animals (6). The opioid pathway is highly integrated with the control of stress responses in the body. Because of alcohol’s alterations of the opioid pathway, alcohol addicts are constantly hypersensitized to stress during withdrawal, meaning that they are more aware of and more affected by their stress levels. In turn, this stress makes them more likely to drink. In other words, it’s a vicious cycle.
Current strategies
One group of drug therapies targets GABA receptors and the dopamine and serotonin pathways. For example, Baclofen, an approved GABA agonist, has been shown to decrease craving and anxiety in alcohol addicts (7). Similarly, a low dose of topiramate, an anticonvulsant, can be used to dampen excitability and maintain abstinence by reducing the amount of dopamine produced in the reward pathway during alcohol consumption (8).
Another series of perhaps more effective drugs directly targets the reward pathway. For example, Naltrexone is an opioid antagonist that blocks opioid receptors. Its interference with the dopamine pathway was reported in 1997 (9), and a series of subsequent clinical trials has shown a high degree of efficacy (10).
Perhaps the most effective drug so far is Antabuse, the first drug approved by the U.S. Food and Drug Administration to treat alcohol addiction. The goal of Antabuse is to simulate alcohol intolerance in addicts by acting as an acetaldehyde dehydrogenase inhibitor. Usually, alcohol in the body is metabolized first to acetaldehyde and then to acetic acid by an enzyme called acetaldehyde dehydrogenase. A large database study found that East Asian populations have a low tolerance to alcohol because of a polymorphism producing an inactive form of this dehydrogenase. Their intolerance to alcohol, expressed as facial flushing and digestive problems, also gave them control over their drinking. Thus Antabuse, by inhibiting acetaldehyde dehydrogenase, attempts to achieve the same intolerance to alcohol.
Professor Gutlerner notes that not all of these drugs have been effective. She stresses the importance of psychotherapy in combination with medical approaches: “When improvement is shown in alcohol programs, it frequently comes with psychotherapy combined with medicine. Psychotherapy helps addicts learn to better manage their decision making, and only with this self-control can we see a significant drop in relapse.”
References
1. National Institute on Alcohol Abuse and Alcoholism. 2013. College Drinking Fact Sheet.
2. Yamashita, AI. 1998. Neurobiological Mechanisms for Alcoholism. Bryn Mawr College.
3. Bardi JS. 2002. Tragedy of Alcohol Abuse Drives TSRI Researcher’s Work. The Scripps Research Institute News and Views 2(6).
4. Froehlich JC. 1997. Opioid peptides. Neurotransmitter Review 21: 132-135.
5. Winters KC. 2008. Adolescent Brain Development and Drug Abuse.
6. Smith SM, Vale WW. 2006. The role of the hypothalamic-pituitary-adrenal axis in neuroendocrine responses to stress. Dialogues Clin. Neurosci. 8: 383-95.
7. Garbutt JC, Kampov-Polevoy AB, Gallop R, Kalka-Juhl L, Flannery BA. 2010. Efficacy and safety of baclofen for alcohol dependence: a randomized, double-blind, placebo-controlled trial. Alcohol Clin. Exp. Res. 34: 1849-57.
8. Del Re AC, Gordon AJ, Lembke A, Harris AH. 2013. Prescription of topiramate to treat alcohol use disorders in the Veterans Health Administration. Addiction Science and Clinical Practice 8.
9. Spanagel R, Zieglgansberger W. 1997. Anti-craving compounds for ethanol: new pharmacological tools to study addictive processes. Trends in Pharmacological Sciences 18: 54-59.
10. Bouza C, Maagro A, Muñoz A, Amate JM. 2004. Efficacy and safety of naltrexone and acamprosate in the treatment of alcohol dependence: a systematic review. Addiction 99: 811-828.
The Science of Swearing: A look into the human MIND and other less socially acceptable four-letter words
by Michelle Drews
Disclaimer: This article covers the psychological, neurobiological, linguistic, and legal aspects of the use of profanity. Readers are advised that it contains words that some individuals may find offensive or inappropriate for young children.
What’s in a word? Would that which I call my pen write any less well if I call it a banana? Would it taste any better? A core tenet of linguistics is the idea that words are merely collections of syllables associated with ideas, yet most words are more than just their literal meanings—they also carry an emotional connotation as a result of how they are used within the language (Pinker, 2007). For some words, this emotional connotation is so intense that, even in a country like the United States, where freedom of speech is a fundamental tenet, their use can be officially (or unofficially) banned. The consequences of using them in an “inappropriate” context can range from censorship and fines to ostracism and the loss of your cooking show. In spite of this, swear words, taboo phrases, and other forms of curses persist across societies and throughout history—a product of culture, language, and the brain itself.
Becoming Taboo
When asked to define profanity in 1964, former Supreme Court Justice Potter Stewart famously stated that he could not describe it, adding, “But I know it when I see it” (Jacobellis v. Ohio, 1964). Though the material in question was pornography, the difficulty of finding a universal definition extends to profane language as well. While some qualities are shared by all swear words, the magnitude of “offensiveness” varies greatly, making a precise, literal definition challenging. Most swear words and taboo phrases deal with material that is offensive in some manner. Studies of swearing have shown that the most common swear words can be categorized as deistic, visceral, or social (Jay, 2009a) [Fig 1]. Sex-related insults in particular appear to be common across cultures (Flynn, 1976). However, simply referring to sex or genitalia is not sufficient to make a word or phrase taboo. Our reaction to the word “fuck” is much different from our reaction to “coitus,” “make love,” or even “have sex.” There is also nothing special about the sounds or syllables in the word “fuck.” Close-sounding words—such as “duck,” “truck,” and “buck”—are not prohibited and in some cases can serve as a more socially appropriate substitution for what everyone understands was meant to be a curse word, for example “mothertrucker!” (Pinker, 2007).
How then does a word become taboo? Since taboos are cultural concepts, the answer must be through society. The word taboo is defined as “a social or religious custom prohibiting or forbidding discussion of a particular practice or forbidding association with a particular person, place, or thing” (Taboo). First, taboos must be internalized by an individual, usually in childhood, along with many other social norms and customs (Jay, 2009a). This early acquisition of taboos is evident in studies of individuals who acquired a second language later in life. These individuals react much more strongly to swear words in their first language than in their second (Harris et al, 2006). As children, we are punished by caregivers such as parents when we swear, and through aversive conditioning we learn that certain phrases are to be avoided (Jay, 2009a). Later, when we mature, we learn the complex social features and characteristics that underlie certain taboos; thus, a more nuanced understanding of where and when to avoid taboo phrases develops (Jay & Janschewitz, 2008).
Furthermore, as culture changes, so does what is taboo (Pinker, 2007). The words “gay” and “nigger” both provide excellent examples. While the word “nigger” used to be considered socially acceptable in many circles, now it is considered a highly offensive term thanks to more modern thinking and the civil rights movement. The word “gay,” originally meaning “extremely happy,” is now associated with homosexuality and can carry a number of different connotations depending on who is using it, and in what context.
Why Swear?
So, if taboo phrases are cultural “no-no”s, why do they persist? The simplest answer is that in certain situations swear words and taboo phrases have their uses: mainly to evoke a strong negative reaction from someone. Speech perception is nearly automatic in mature individuals (Pinker, 2007). Try this: don’t think of an apple. Did you think of an apple anyway when you read the word “apple”? With swear words, your mind immediately drags up whatever offensive combination of denotations and connotations is associated with the word in question the moment you hear it. These automatic associations make swear words powerful insults and forceful descriptors of the nastier aspects of things we may not want to think about.
Swear words are also useful and effective ways of conveying that you feel very strongly about something, or of inciting strong feelings in someone else, even when used outside of their traditional definitions (Jay, 2009a; Pinker, 2007). Saying that something is “bloody amazing” does not mean that that thing was literally bloody, but adding “bloody” to the phrase gives it extra emotional emphasis. Another good illustration of this is the Stroop test [See Figure 2]. Try to name the color of each word as fast as you can. The attention-grabbing qualities of the swear words used in this task make it especially difficult (Pinker, 2007). In a similar experiment, the use of taboo phrases in a word-location task increased subjects’ ability to correctly remember the location of the word (MacKay, 2005). Swear words effectively stir up strong emotions and grab our attention.
However, swearing is not always about evoking negative emotions; swearing itself can also be a cultural phenomenon. The willingness to break a cultural taboo in front of others creates an atmosphere of informality and sense of community. If taboos are defined by the greater society, an environment where subverting those taboos is acceptable creates a smaller, more intimate society inside of the greater society (Pinker, 2007). Another interesting use of taboo language is as a cathartic experience, a way of expressing and alleviating pain, frustration, stress, or regret (Jay et al. 2006). A classic example of this would be shouting “damn it” after hitting yourself with a hammer while trying to nail something down. Interestingly, studies have shown that, when compared with people who do not swear frequently, frequent swearers also tend to have lower pain tolerance (Stephens, 2011). Swearing was also shown to increase the ability of subjects to tolerate pain (Stephens, 2011). All of these uses contribute to the propagation of swear words and taboo phrases in language, despite their inappropriateness in certain contexts.
On Your Mind: Swearing in the Brain
In an effort to understand how swearing provokes a strong response in individuals, neuroscientists looked to the brain for answers. Using neuroimaging techniques such as PET (positron emission tomography) scans, they demonstrated that a small part of the brain called the amygdala is highly active when people are exposed to threatening words (Isenberg, 1999). The amygdala is part of the limbic system, one of the primitive parts of the brain responsible for processing emotion and memory. In particular, amygdala activity is correlated with negative emotional associations; stimulating the amygdala can cause panic attacks and aggressive behaviors, while destroying it causes unusual placidness or fearlessness (Zald, 2003; Davis, 2001). It therefore makes sense that the amygdala would be activated in association with unpleasant words such as swear words. The amygdala also makes several connections to memory and association centers in the brain, which could also be responsible for subjects’ improved memory when presented with swear words (Davis, 2001).
Swearing in the Clinic
Beyond simply determining what part of the brain is activated, neuroscientists have also sought insight into how swear words are produced in the brain by looking to the clinic. Pathological swearing is found in many neurolinguistic disorders, the most famous being Gilles de la Tourette syndrome (GTS). GTS, first identified by Itard and Gilles de la Tourette in the 1800s, is a hyperkinetic motor speech disorder characterized by frequent involuntary “tics,” sudden patterned movements or sounds (Van Lancker, 1999; NINDS, 2012). In most pop-culture portrayals of Tourette’s, coprolalia, or involuntary swearing, features very prominently. In GTS individuals with coprolalia, swearing is a tic. However, despite its prevalence in media depictions, only about 10-25 percent of individuals with Tourette syndrome exhibit coprolalia (Van Lancker, 1999; Pinker, 2007).
Though it is less well known than Tourette syndrome, aphasia can also heavily feature swearing. Aphasia is a clinical language impairment resulting from damage to the language centers of the brain [See Figure 3], usually following a stroke. The exact specifics of a particular aphasia depend on the location and severity of the damage; in general, though, aphasic individuals have problems with speech, listening, reading, and writing (Van Lancker, 1999; NINDS, 2012). In the most severe case—global aphasia—speech is almost nonexistent. Yet in numerous cases these individuals are still able to swear normally (Van Lancker, 1999). Even individuals with less extensive aphasias, whose speech is possible but difficult, limited, and often incorrectly pronounced, have been known to use swear words easily and with the proper pronunciation (Van Lancker, 1999). For example, R.N., a patient with global aphasia resulting from a stroke involving his left frontal, temporal, and parietal lobes, could say only “well,” “yeah,” “yes,” “no,” “goddammit,” and “shit” (Van Lancker, 1999). R.N. was able to produce these words properly and in the proper context; however, when asked to say the word “shit” out of conversational context by reading it from a written card, he was unable to do so (Van Lancker, 1999).
The use of swearing in both aphasia and GTS gives us a real insight into how swearing works in the brain. Individuals with aphasia have damage to the normal parts of the brain that produce formal language, such as Broca’s area or Wernicke’s area, found in the left hemisphere of the brain. The fact that they are able to swear suggests that swearing is localized outside of these damaged areas and is handled differently in the brain than other parts of language. Psychologist Chris Code, who studied individuals who had their left hemispheres removed, proposed that swear words and several other types of speech preserved in aphasic individuals fall into a category of “lexical automatisms” or automatic speech, which are localized to the right hemisphere instead of the left one (Code, 1996; Van Lancker 1999).
Pathological and neuroimaging studies of individuals with Tourette syndrome implicate the basal ganglia and the limbic system as key players in GTS and coprolalia [See Figure 3]. The basal ganglia have several main roles in the brain, including the regulation of actions, and use dopamine as their main neurotransmitter. Parkinson’s disease and Huntington’s disease are two classic examples of basal ganglia dysfunction. In Parkinson’s disease, the basal ganglia are damaged in such a way that they inhibit motor signals coming from the cortex, and thus movement is very difficult. In Huntington’s disease, the basal ganglia are damaged in just the opposite fashion – they do not inhibit motor signals as they normally would, and patients move unintentionally and uncontrollably (Kandel et al., 2000). If we consider speech as just another type of movement that can either be suppressed or released by the basal ganglia, it makes sense that they would be involved in swearing, keeping taboo ideas that cross our thoughts from being expressed more fully. This is a useful tool for the brain because, to quote Harvard psychologist Steven Pinker, “you have to think the unthinkable to know what you’re not supposed to be thinking” (Pinker, 2007).
Though studies of GTS individuals show a high level of variability in the brain areas they implicate, the basal ganglia and dopamine system in particular have been shown to be dysfunctional in many studies (Van Lancker, 1999). Dopamine antagonists, drugs that block or lower the effects of dopamine receptor signaling, have also proven effective in alleviating some GTS symptoms, further supporting the idea that the basal ganglia are involved in GTS (Regeur, 1986).
The limbic system, which includes the amygdala, also has a variety of other roles, most of which involve emotion (Van Lancker, 1999). Important to the topic of swearing, the limbic system has been shown to be important in the production of emotional language (Pinker, 2007). Therefore, one theory is that dysfunction in the limbic system and basal ganglia can produce coprolalia, which stems from a loss of inhibitory ability coupled with high emotional reactivity. These two areas are also usually intact after an aphasic stroke, meaning that the ability to swear should also be preserved. Still, we do not have all the answers yet—there are exceptions and inconsistencies in every case. Nevertheless, these findings may give us the beginnings of an understanding of how swearing works in the brain.
Sticks and Stones: Free Speech and Words that Hurt
Though understanding how swearing works in the brain is a puzzle that scientists will keep working on, the far more controversial question about swear words is how we should deal with them legislatively. Freedom of speech, protected by the First Amendment in the Bill of Rights, is seen as one of the founding tenets of a democratic society. However, there are cases of what the Supreme Court calls “unprotected speech,” where speech can be restricted. Slander, libel, and “fighting words” are all examples of unprotected speech. In each of these cases, the speech has been deemed harmful to others and is therefore illegal (Cohen, 2009).
Obscenity is also considered a type of unprotected speech, under the argument that offensive words constitute a form of harm, particularly for the vulnerable and the young (Jay, 2009b). This idea has been the basis of many of the rules enforced by the Federal Communications Commission (FCC), which has fined TV stations and radio networks for everything from broadcasting George Carlin’s “Seven Words You Can Never Say on Television” to Bono’s fleeting use of “fucking brilliant” at the Golden Globe Awards (Pinker, 2007; Jay, 2009b).
Yet are offensive words actually harmful? Psychological studies have shown that context is essential when it comes to harmful speech. On one hand, a study of child victims of obscene telephone calls showed that the children suffered severe psychological consequences from these calls (Larsen et al., 2000). Verbal harassment and aggression have also been shown to have clear negative psychological effects (Vissing et al., 1991). On the other hand, the evidence against swearing alone is much less compelling (Jay, 2009b). As discussed above, many psychological studies suggest that swear words, in the appropriate context, can be beneficial for group unity, coherence, and general expressiveness (Jay, 2009b; Jay, 2006; Heins, 2007).
This is not to say that the use of swear words and taboo phrases is totally without potentially harmful consequences; just ask Paula Deen. In most instances, these words are taboo for a reason. Usually, they are considered offensive in one way or the other and evoke strong emotions (or strong amygdala reactivity), which can be harmful to relationships and other social constructs. However, the question of whether these social harms are sufficient punishment for the use of offensive language or if legislative action must be taken as well remains within the courts and legislators’ discretion (although hopefully informed by linguists, psychologists, and neuroscientists).
Taboo language is defined by culture and is created in the brain through a complex interaction of our speech, emotion, and motivation centers. It has a variety of uses, and from a legal standpoint the context of use is everything when determining what is or is not appropriate. While we may not have all the answers about the science behind swearing just yet, swear words have been a unique feature of language across cultures and time, and they show no signs of leaving anytime soon.
References
Code, C., Wallesch, C.W., Joanette, Y., and Lecours, A.R. (1996). Classic Cases in Neuropsychology, Lawrence Erlbaum, Hove.
Cohen, H. (2009). Freedom of Speech and Press: Exceptions to the First Amendment. Congressional Research Service.
Davis, M. and Whalen, P.J. (2001). The amygdala: vigilance and emotion. Molecular Psychiatry. 6: 13-34.
Flynn, C.P. (1976). Sexuality and Insult Behavior. J. Sex Research. 12(1): 1-13.
Harris, C.L., Gleason, J.B., and Aycicegi, A. (2006). When is a first language more emotional? Psychophysiological evidence from bilingual speakers. In A. Pavlenko (Ed.), Bilingual minds: Emotional experience, expression, and representation. Clevedon, U.K.: Multilingual Matters.
Heins, M. (2007). Not in front of the children: Indecency, censorship, and the innocence of youth. New Brunswick, NJ: Rutgers University Press.
Isenberg, N., Silbersweig, D., Engelien, A., Emmerich, K., Malavade, K., Beati, B., et al. (1999). Linguistic threat activates the human amygdala. Proc. Nat. Acad. Sci. 96: 10456-10459.
Jacobellis v. Ohio, 378 U.S. 184 (1964).
Jay, T., King, K., and Duncan, D. (2006). Memories of punishment for cursing. Sex Roles. 32: 123-133.
Jay, T., and Janschewitz, K. (2008). The pragmatics of swearing. Journal of Politeness Research. 4: 267-288.
Jay, T. (2009a). The Utility and Ubiquity of Taboo Words. Perspectives on Psychological Science. March 2009, 4, 153-161.
Jay, T. (2009b). Do Offensive Words Harm People? Psychology, Public Policy, and Law. 15(2): 81-101.
Kandel, E. R., Schwartz, J. H., & Jessell, T. M. (2000). Principles of neural science. (4 ed.). McGraw-Hill/Appleton & Lange.
Larsen, H.B., Leth, I., and Maher, B.A. (2000). Obscene Telephone Calls to Children: A Retrospective Field Study. J. Clinical Child Psych. 29(4): 626-632.
MacKay, D.G., and Ahmetzanov, M.V. (2005). Emotion, Memory, and Attention in the Taboo Stroop Paradigm: An Experimental Analogue of Flashbulb Memories. Psychological Science. 16(1): 25-32.
National Institute of Neurological Disorders and Stroke (2012). Aphasia. Retrieved September 31, 2013 from http://www.ninds.nih.gov/disorders/aphasia/aphasia.htm
Pinker, S. (2007). The Stuff of Thought: Language as a Window into Human Nature. New York: Viking.
Regeur, L., Pakkenberg, B., Fog, R., and Pakkenberg, H. (1986). Clinical features and long-term treatment with pimozide in 65 patients with Gilles de la Tourette’s syndrome. Journal of Neurology, Neurosurgery and Psychiatry. 49: 791-795.
Severens, E., Kuhn, S., Hartsuiker, R.J., and Brass, M. (2012). Functional mechanisms involved in the internal inhibition of taboo words. Soc Cogn Affect Neurosci. 7(4): 431-435.
Stephens, R. and Umland, C. (2011). Swearing as a response to pain – effect of daily swearing frequency. J Pain. 12(12): 1274-81.
Taboo (n). In Oxford Dictionaries. Retrieved on September 31, 2013 from http://www.oxforddictionaries.com/us/.
Van Lancker, D., and Cummings, J.L. (1999). Expletives: Neurolinguistic and Neurobehavioral Perspectives on Swearing. Brain Research Reviews. 31(1): 83-104.
Vissing, Y.M., Straus, M.A., Gelles, R.J., and Harrop, J.W. (1991). Verbal Aggression by Parents and Psychosocial Problems of Children. Child Abuse & Neglect. 15: 223-238.
Zald, D.H. (2003). The human amygdala and the emotional evaluation of sensory stimuli. Brain Research Reviews. 41(1): 88-123.
Alternative Medicine and Patient Self-Care
by Lauren Claus
Although the words “health care” typically evoke images of doctors and drugs, many people nowadays see yoga teachers and acupuncture specialists, as well as physicians, to meet their health needs. In the United States, 38% of adults and 12% of children use some form of complementary and alternative medicine, defined as any health-related product or service not provided by medical doctors and other conventional health professionals (1). There is currently a large debate about alternative medicine, focusing on how complementary and alternative treatments—such as yoga, acupuncture, and herbal remedies—could be harmful rather than helpful to patients (2). Critics often disparage both the government, for not regulating these treatments, and alternative medicine providers, for using unscientific approaches to medicine. Although forms of alternative medicine are increasingly undergoing rigorous studies, funded by organizations such as the National Center for Complementary and Alternative Medicine, the debate has not reached a clear consensus (3). Through it all, one issue still demands attention: how dangerous is it for consumers to choose and use alternative medicine as a means of self-care?
In 2007, the National Center for Health Statistics estimated that $22 billion of the total $33.9 billion that United States citizens spend on complementary and alternative medicine each year is directed toward “self-care” (4). Self-care costs are defined as spending on complementary and alternative products and classes that consumers may choose without consulting a physician. Although it may not initially seem dangerous to give consumers free rein to purchase herbal medicines or yoga classes, problems can arise when patients neglect to make appointments about medical problems, believing that they can treat themselves with natural remedies. When patients use alternative medicines, physicians may also be denied full knowledge of their patients’ medical conditions; for instance, a patient who takes an herbal supplement every day may never mention it at a routine check-up.
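For context, here is a quick back-of-the-envelope check of what those dollar figures imply, a minimal sketch using only the two numbers cited above:

```python
# Share of out-of-pocket complementary and alternative medicine (CAM) spending
# that goes to self-care, using the figures cited in the text (4).
total_cam_spending = 33.9e9   # total annual US out-of-pocket CAM spending, dollars
self_care_spending = 22.0e9   # portion spent on self-care products and classes

print(f"self-care share: {self_care_spending / total_cam_spending:.0%}")  # about 65%
```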
In fact, patients routinely fail to inform their physicians that they use alternative medicines—and sometimes deliberately (5). According to a national survey from 2001, approximately 70% of patients who see both a medical doctor and a complementary or alternative medicine provider do not mention one or more alternative treatments to the doctor. The survey respondents cited many reasons, including the fact that their doctor had never asked about such treatments during the appointment. Alarmingly, however, 31% of the participants claimed that they deliberately withheld information because “it was none of the doctor’s business,” and others said they believed the doctor would disapprove of such treatments. This certainly compromises the doctor-patient relationship and may even undermine the physician’s ability to prescribe medication suited to the patient’s needs.
Alternative medicine could also pose a danger if patients attempt to diagnose and treat themselves using the Internet, where detailed instructions and information about many alternative medicines can be found. This possibility is made all the more likely because, according to a recent study by the Pew Research Center, 15% of social media users receive medical information from social media sites, which are not regulated for accuracy (6). Although the Internet can provide ample information about mild illnesses, such as the common cold, people may misdiagnose or overlook serious conditions if they rely too heavily on the Internet instead of visiting their doctor.
Because of the potential risks of alternative medicines, perhaps efforts should be made to analyze whether patients use them as a means of supplementary self-care or in place of conventional medical treatment. Despite the unique benefits that alternative treatments may offer patients looking for self-care options, these treatments can become dangerous if they interfere with the physician-patient relationship and the physician’s ability to prescribe proper care. Although coming to an eventual consensus on the debate over alternative medicine is important, so too is it critical for patients to find the right balance between treatments they seek out for themselves and treatments their doctors prescribe.
1: (2008). The Use of Complementary and Alternative Medicine in the United States. National Center for Complementary and Alternative Medicine. Retrieved from: http://nccam.nih.gov/news/camstats/2007/camsurvey_fs1.htm#use
2: Offit, P. (2013). Do You Believe in Magic?: The Sense and Nonsense of Alternative Medicine. New York: Harper Collins.
3: Research Results. National Center for Complementary and Alternative Medicine. Retrieved from http://nccam.nih.gov/research/results
4: Nahin et al. (2009). Costs of Complementary and Alternative Medicine (CAM) and Frequency of Visits to CAM Practitioners: United States, 2007. National Health Statistic Reports.
5: Eisenberg et al. (2001). Perceptions about complementary therapies relative to conventional therapies among adults who use both: results from a national survey. Ann Intern Med, 135, 344-51.
6: Bartz, A. & Ehrlich, B. (2012). Be careful when diagnosing your ailments online. CNN. Retrieved from http://www.cnn.com/2012/08/08/tech/social-media/netiquette-online-diagnoses/index.html
Stuck in Bereavement – Complicated Grief
by Lauren Stone
The experience of losing a loved one is something we can all relate to, and for some, this may be especially relevant in light of the recent Boston Marathon bombings. In the United States, almost 2.5 million people die every year (1). Individuals grieve in different ways in response to the common and universally stressful experience of death. Some are overcome with shock, while others develop intense longing or sadness (2). These painful emotions are an entirely natural response and eventually lessen.
Recovery for most people occurs around 6-12 months after a loss, beginning the transition from acute to integrated grief (3). Acute grief is intense and occurs in the short term after a loss. Integrated grief involves the transition from acute grief to an acceptance of the loss; in this phase, although feelings of sadness endure, one is able to resume normal activities and experience joy. However, a small percentage of bereaved individuals are unable to transition to the stage of integrated grief. They remain stuck in acute grief and experience a protracted or even halted healing process. This group, approximately 7% of bereaved individuals, suffers from complicated grief, a recently recognized syndrome characterized by significant distress and functional impairment (2). Currently, researchers at Massachusetts General Hospital are collaborating with three other partner sites, including the Columbia University School of Social Work, on the Healing Emotions After Loss (HEAL) Study. The study pilots the use of an FDA-approved antidepressant medication and a targeted psychotherapy for the treatment of complicated grief. According to Dr. Naomi Simon, the director of the complicated grief program at Mass General, the studies under way are working toward an optimal treatment for the illness (2).
Complicated Grief’s Path to the DSM
Grief can lead to other psychiatric disorders following a loss, such as major depressive disorder or posttraumatic stress disorder (PTSD), both of which can certainly overlap with the symptoms of complicated grief. However, complicated grief is a distinct entity. For instance, while PTSD is characterized by fear, complicated grief is dominated by an intense sadness (2). Those afflicted with major depressive disorder experience great sadness as well, but this sentiment is more general, compared with death-related sadness and longing in complicated grief (2).
For these reasons, physicians and researchers in the field of grief have been advocating for the establishment of complicated grief as a distinct disorder (4, 5). In May of this year, the illness was finally included in the Diagnostic and Statistical Manual of Mental Disorders (DSM) Fifth Edition as Persistent Complex Bereavement Disorder (6), and will be titled Prolonged Grief Disorder in the International Classification of Diseases (ICD-11) that the World Health Organization will introduce in 2015 (7). The clinical criteria for the illness as presented in the DSM include symptoms such as intrusive thoughts about the death, persistent yearning for or preoccupation with the deceased, self-blame, and social, occupational, and functional impairment (6) – factors that “complicate” normal grief. Risk factors for the illness include female sex, life trauma, type of death (namely violent, sudden, or due to suicide), and prior loss (2). However, despite the much-anticipated recognition of complicated grief as a clinical entity, there is much unexplored territory in the illness.
Preliminary Research on Complicated Grief
In 2005, Dr. Katherine Shear, the director of the HEAL Study branch at the Columbia University School of Social Work, conducted an important randomized controlled trial of a targeted psychotherapeutic intervention called Complicated Grief Therapy (CGT). Participants who received CGT demonstrated greater improvement than those who received interpersonal psychotherapy (8). Pharmacological data are scant, though preliminary studies suggest that selective serotonin reuptake inhibitor (SSRI) antidepressant medications may improve grief symptoms (9, 10). Given that many clinicians are uninformed about how to recognize when a patient is at risk or how to treat the illness (2), the need for evidence-based treatment standards is pressing.
HEAL Study Design and Aims
These are the gaps that the multisite HEAL Study, funded by the National Institute of Mental Health, seeks to fill. Its researchers are testing the efficacy of the SSRI antidepressant citalopram and of the targeted psychotherapy that Dr. Shear helped to develop, in an effort to optimize the treatment of complicated grief.
To be eligible for the study, individuals must be 18 years of age or older and must have been suffering from grief for 6 or more months following a loss. Interested participants begin with an initial screening visit, in which a clinician confirms whether or not the participant suffers from complicated grief. If eligible, based on confirmation of diagnosis and overall good health, the participant is randomly assigned to one of the following four groups for 16 weeks of treatment (a simple randomization sketch follows the list):
- Complicated Grief Therapy (CGT) and the medication (citalopram)
- CGT and a placebo (inactive pill)
- Citalopram without CGT
- Placebo without CGT
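For illustration only, here is a minimal sketch of that four-arm allocation as a 2x2 design, psychotherapy (CGT) yes or no crossed with citalopram versus placebo. It is not the HEAL Study's actual randomization procedure, and the participant IDs are hypothetical.

```python
# Hypothetical sketch of assigning eligible participants to the four treatment arms.
import random

ARMS = [
    ("CGT", "citalopram"),
    ("CGT", "placebo"),
    ("no CGT", "citalopram"),
    ("no CGT", "placebo"),
]

def assign(participant_ids, seed=42):
    """Randomly assign each eligible participant to one of the four arms."""
    rng = random.Random(seed)
    return {pid: rng.choice(ARMS) for pid in participant_ids}

if __name__ == "__main__":
    for pid, (therapy, pill) in assign(["P001", "P002", "P003", "P004"]).items():
        print(f"{pid}: {therapy} + {pill} for 16 weeks")
```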
Other Angles and Future Research
In addition to the HEAL study, other researchers are working to better understand complicated grief. In 2008, a research team led by Mary-Frances O’Connor, a professor of psychology at the University of Arizona, examined a particular area of the brain, the nucleus accumbens, in the context of grief. The nucleus accumbens is a component of the mesolimbic dopamine system, which uses the neurotransmitter dopamine to control sensations of pleasure and reward (11). In O’Connor’s study, bereaved participants with complicated grief, compared to bereaved participants without complicated grief, demonstrated reward activation in the nucleus accumbens in response to being shown cues of the deceased (12). This finding suggests ties between attachment and reward function, a connection that is supported by animal models. Addiction and social interactions have been found to use the same neural circuitry in rats, for instance (13).
Another important advance in the field of grief came from the Harvard University Department of Psychology. In an article published this year, graduate student Don Robinaugh and professor Richard McNally present their results from a study in which they had recruited conjugally bereaved adults to examine autobiographical memory specificity (14). Robinaugh and McNally found that participants with complicated grief had difficulty recalling past and imagining future events without the deceased. However, these same participants had relatively little difficulty imagining future events with the deceased. McNally views the results as seeming to “narrow in” on potential cognitive bases for some of the clinical symptoms of complicated grief. For instance, a difficulty moving beyond the grief and imagining the future without the deceased, yet ability to imagine a “counterfactual future” with the deceased, may be the basis for the tension behind painful yearning for the lost loved one. McNally believes there could be ways of saying “aha” – that this is a problem or deficit that comes with complicated grief. With this, he explains, comes the potential to rectify these deficits. This is an advantage of experimental psychopathology approaches, in which we can “illuminate patterns of deficits” that highlight possibilities for therapeutic interventions (15).
McNally points out that grief is painful enough, but if it goes untreated, there is the risk for other non-psychiatric medical consequences such as heart attacks and a taxed immune system. The “real problem,” he summarizes, is those who are suffering and not being helped. He views the most important future research directions as figuring out why some individuals develop complicated grief and others do not, as well as better understanding the psychopathology of the illness and documenting the efficacy of existing interventions (15). The results of studies targeting neuroanatomical or cognitive correlates of complicated grief may have implications for future treatment of the disorder.
Conclusion
It is evident that DSM recognition of complicated grief is only the beginning. In light of the paucity of evidence-based treatment standards, as well as the potentially grave consequences of developing complicated grief, there is clearly a pressing need for further grief research. Hopefully, the studies under way and those that take place in the future will facilitate a better understanding of the illness and its correlates, and will continue to optimize the treatment of complicated grief.
References
1. D.L. Hoyert, J. Xu, Deaths: preliminary data for 2011. Washington, DC: US Dept of Health and Human Services, Centers for Disease Control and Prevention (2012).
2. N.M. Simon, Treating complicated grief. JAMA. 310, 416-423 (2013).
3. G.A. Bonanno, C.B. Wortman, D.R. Lehman, R.G. Tweed, M. Haring et al., Resilience to loss and chronic grief: a prospective study from preloss to 18-months postloss. J. Pers. Soc. Psychol. 83, 1150-1164 (2002).
4. H.G. Prigerson, E. Frank, S.V. Kasl, C.F. Reynolds, B. Anderson, et al., Complicated grief and bereavement-related depression as distinct disorders: preliminary empirical validation in elderly bereaved spouses. Am. J. Psychiatry. 152, 22-30 (1995).
5. N.M. Simon, M.M. Wall, A. Keshaviah, M.T. Dryman, N.J. LeBlanc, et al., Informing the symptom profile of complicated grief. Depress. Anxiety. 28, 118-126 (2011).
6. Diagnostic and statistical manual of mental disorders: DSM-5 (5th ed.) (American Psychiatric Association, Washington, D.C., 2013).
7. A. Maercker, C.R. Brewin, R.A. Bryant, M. Cloitre, G.M. Reed et al., Proposals for mental disorders specifically associated with stress in the International Classification of Diseases-11. Lancet. 381, 1683-1685 (2013).
8. K. Shear, E. Frank, P.R. Houck, C.F. Reynolds, Treatment of complicated grief: a randomized controlled trial. JAMA. 293, 2601-2608 (2005).
9. N.M. Simon, E.H. Thompson, M.H. Pollack, M.K. Shear, Complicated grief: a case series using escitalopram. Am. J. Psychiatry. 164, 1760-1761 (2007).
10. M. Zygmont, H.G. Prigerson, P.R. Houck, M.D. Miller, M.K. Shear, et al., A post hoc comparison of paroxetine and nortriptyline for symptoms of traumatic grief. J. Clin. Psychiatry. 59, 241-245 (1998).
11. K.C. Berridge, T.E. Robinson, What is the role of dopamine in reward: hedonic impact, reward learning, or incentive salience? Brain. Res. Rev. 28, 309-369 (1998).
12. M.F. O’Connor, D.K. Wellisch, A.L. Stanton, N.I. Eisenberger, M.R. Irwin, et al., Craving love? Enduring grief activates brain’s reward center. Neuroimage. 42, 969-972 (2008).
13. J. Panksepp, B. Knutson, J. Burgdorf, The role of brain emotional systems in addictions: a neuro-evolutionary perspective and new ‘self-report’ animal model. Addiction. 97, 459-469 (2002).
14. D.J. Robinaugh, R.J. McNally, Remembering the past and envisioning the future in bereaved adults with and without complicated grief. Clin. Psych. Science. 1, 290-300 (2013).
15. R. McNally, personal communication.
Should we use genetically modified foods to increase our food reservoir?
by Serena Blacklow
Over 80% of all processed foods in the U.S. contain genetically modified ingredients (1). Yet, the use of genetically modified organisms (GMOs) as sources of food remains intensely controversial, with economists, politicians, and farmers as well as scientists taking conflicting stances on the issue. While GMOs have the potential to increase our food supply, their use carries health hazards, environmental risks, and implications for small farmers’ welfare. Safe incorporation of GMOs into our diets thus requires policies ensuring their safety for human, ecological, and economic wellbeing.
A GMO is an organism whose DNA has been modified artificially. In the case of crops, modifications may result in pesticide resistance, enhanced color, or increased size. Grocery store aisles house a plethora of such modified foods, from soybeans to tomatoes to sweet corn. Yet the U.S. government does not require these items to be labeled as GMOs. This lax regulation deprives consumers of the right not only to choose what they are eating but also to know the origins of their food (2).
In addition to being inadequately monitored, GMOs pose significant health risks that are unknown to much of the public. Rats fed genetically modified potatoes, for instance, displayed smaller organs and impaired immune systems relative to those fed unmodified potatoes (1). Recent genetic manipulations hold similar potential for harm. One such trait, glyphosate resistance, allows crops to be sprayed with the glyphosate-containing herbicide RoundUp. Although RoundUp is not fatally toxic, the glyphosate it contains still enters the plant and destroys key minerals within it, increasing consumers’ risk of micronutrient deficiencies (3). Though genetic modification may expand the global food supply, its health hazards must be considered: what is the advantage of providing a greater quantity of food whose quality is so poor that it jeopardizes our own health?
In addition to endangering our own health, the production of genetically modified foods affects the health of ecosystems. RoundUp, for example, diminishes the micronutrient supply not only to human consumers but also to the soil, hurting both (3). Further, GMOs represent irreversible threats to natural biodiversity: once modifications are introduced into farming, they cannot be rescinded, and they can spread unwittingly to nearby farms. Unintended cross-pollination between GM-free and GM crops, for example, disrupts local ecosystems and results in the destruction of wild-type plants (5). The potential for genetically engineered plants to contaminate local populations can interfere with the natural ecological system.
The widespread use of genetically modified crops affects not only animal and plant populations, but also our economy. Large biotech companies such as Monsanto and DuPont dominate the market for genetically modified seeds and endanger the livelihoods of traditional farmers. Farmers are constantly threatened with contamination of their crops by genetically modified crops that have been grown from seeds issued by these companies. The Monsanto Company has threatened to sue for patent infringement when, in reality, the farmers have had no control over the spread of these genetic modifications (6). Through such actions, corporate giants impede small farmers from producing their own, non-contaminated crops, creating an unsustainably competitive agricultural marketplace.
Unawareness of the health risks and environmental consequences of GM foods has diminished our appreciation for natural, unmodified crops. As a result, big businesses control the markets for seeds and crops, while small farmers lose their livelihoods. Before we incorporate even more genetically engineered foods into our diet, we must realize and address the implications that come with their production and consumption. As the Earth’s current inhabitants, we need to recognize the significance of our actions to Earth’s future. GMOs possess significant drawbacks, which must be properly addressed if we plan to use these items to ensure a sustainable future. For now, GMOs are not the answer.
References
1. S. Lendman, Potential Health Hazards of Genetically Engineered Foods. Global Research, (February 22, 2008).
2. Center for Food Safety, Tell Congress to Oppose Preemption of State GE Labeling Laws (2013).
3. D. M. Huber, The woes of GMOs — Glyphosate and GMO impact on crops, soils, animals and man. GMWatch, (September 2012).
4. E. Clair et al., Effects of Roundup (R) and Glyphosate on Three Food Microorganisms: Geotrichum candidum, Lactococcus lactis subsp. cremoris and Lactobacillus delbrueckii subsp. bulgaricus. Curr Microbiol 64, 486 (May, 2012).
5. Terrascope (MIT class), Genetically Modified Crops. Mission 2014: Feeding the World, Massachusetts Institute of Technology (2010).
6. D. B. Ravicher, “Farmers and Seed Producers Launch Preemptive Strike against Monsanto,” Cornucopia News, 2011.
7. Wikimedia Commons. File: GMO corn label RoundUp Liberty Link Herculex I Cruiser Mid Rate.jpg, (September, 2013).
DNA glue facilitates self-assembly of hydrogel bricks
by Serena Blacklow
Here at Harvard’s own Wyss Institute for Biologically Inspired Engineering, researchers have introduced a new method of self-assembly using DNA glue. Self-assembly, the ability of objects to spontaneously come together, lets scientists and engineers join objects too small to be manipulated by hand. Although there has been much progress with self-assembly at the nanoscale, self-assembly at the mesoscale (30 microns to 1 millimeter in diameter) had been elusive until now (1). In this new method, “giant” DNA strands, made up of repeats of a shorter DNA sequence, coat a hydrogel (water-filled gel) “brick.” The attraction between complementary DNA strands holds their associated bricks together, enabling the bricks to assemble with each other (2). Since DNA is “programmable,” it can be engineered into a variety of sequences. By coating bricks with unique combinations of these sequences, scientists can create diverse structures with high binding specificity. Small, DNA-coated connector cubes that hold together multiple hydrogels enable further complexity (3).
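To make the “programmable glue” idea concrete, here is a minimal sketch of Watson-Crick complementarity in Python. The sequences are made up for illustration (the actual Wyss coatings are long repeats of short motifs), and real hybridization tolerates mismatches more gracefully than this exact-match check.

```python
# Illustrative sketch: two bricks "stick" only if one carries the reverse
# complement of the other's coating sequence.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def bricks_bind(coating_a: str, coating_b: str) -> bool:
    """True if brick B's coating is the Watson-Crick partner of brick A's coating."""
    return coating_b == reverse_complement(coating_a)

brick_1 = "ATCGGTA"                   # hypothetical coating sequence
brick_2 = reverse_complement(brick_1)  # designed to pair with brick_1
brick_3 = "ATCGGAA"                   # one base off: should not pair

print(bricks_bind(brick_1, brick_2))  # True  -> these bricks self-assemble
print(bricks_bind(brick_1, brick_3))  # False -> high binding specificity
```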
The capacity of these medium-sized hydrogels to self-assemble has great therapeutic potential. As biocompatible and biodegradable substances, hydrogels, unlike many pharmaceutical or surgical treatments, pose few health risks. Scientists envision injecting cell-carrying hydrogels into the body, where they would self-assemble and then degrade, leaving patients with new cells (3). In addition to enabling easier tissue or organ reconstruction, such innovations could eliminate the need for some surgeries.
References
1. S. Sonaliy, “Wyss Researchers Use DNA as Smart Glue,” The Harvard Crimson, 2013.
2. M. G. Hao Qi, Yanan Du, Casey Grun, Hojae Bae, Peng Yin & Ali Khademhosseini, DNA-directed self-assembly of shape-controlled hydrogels. Nature Communications (September 9, 2013).
3. D. Ferber, “DNA glue directs tiny gel ‘bricks’ to self-assemble: New method could help reconnect injured organs or build functional human tissues,” Harvard Gazette, September 9, 2013.
4. D. Ferber, “Programmable glue made of DNA directs tiny gel bricks to self-assemble,” September 9, 2013.
DATA: The bigger the better? A survey of analytical traps and tricks
by Elizabeth Beam
Introduction
In the way of gas-guzzling vehicles and the great American gut, data these days are big and getting bigger. And why shouldn’t they be? By contrast to the toll that other excesses take on the environment and our bodies, the physical burden of a large-scale dataset is nearly negligible, and decreasing. In June 2013, researchers discovered a new technique for optical recording that will make it possible to store a petabyte of data on a DVD-sized disc (1). This means that you could soon hold in the palm of your hand the genome of every US citizen—with enough room left over for two clones each (2). Conversely, using a clever method developed by a Harvard bioengineer last year, you can now take DNA as your storage medium and encode 5.5 petabits of binary data per droplet (3).
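A back-of-the-envelope sketch of that every-citizen-plus-two-clones claim, under my own assumptions (a 2013 US population of roughly 316 million and a 1-petabyte disc), shows that the per-person budget only works out if each genome is stored compactly, for example as differences from a reference sequence rather than as billions of raw bases:

```python
# Rough storage budget per genome copy; figures are assumptions, not from the article.
PETABYTE_BYTES = 1e15
US_POPULATION = 316e6          # approximate 2013 US population
COPIES_PER_PERSON = 3          # each person plus two "clones"

bytes_per_genome = PETABYTE_BYTES / (US_POPULATION * COPIES_PER_PERSON)
print(f"storage budget per genome copy: {bytes_per_genome / 1e6:.2f} MB")  # roughly 1 MB each
```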
Big is beautiful when it comes to data because quick and dirty analyses run on a large enough dataset can turn up remarkably subtle effects. Borrowing data from Google Trends, Preis et al. made just such a discovery (4). Interested in the relationship between prospective thought and economic success, they compared future-oriented search histories—how often users searched for the year to come (e.g., “2014”) versus the previous year (e.g., “2012”)—and the GDP of countries where those searches originated. Combining their well-directed hypothesis with the statistical power[1] lent by the size of their dataset, Preis et al. uncovered a strong correlation (r=0.78) between GDP and an interest in the future.
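The analysis itself is conceptually simple. The sketch below shows the general shape of such a correlation, with hypothetical placeholder values standing in for the real Google Trends ratios and GDP figures:

```python
# Illustrative only: correlate a per-country "future orientation" ratio with GDP.
# Requires Python 3.10+ for statistics.correlation (Pearson's r).
from statistics import correlation

# ratio of searches for the coming year to searches for the previous year, one value per country
future_orientation = [0.7, 0.9, 1.1, 1.3, 1.4, 1.6]            # hypothetical values
gdp_per_capita     = [9000, 18000, 30000, 42000, 46000, 55000]  # hypothetical values, USD

print(f"Pearson r = {correlation(future_orientation, gdp_per_capita):.2f}")
```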
So, does this mean we can supersize our data without worrying we will feel sick to our stomachs later? Despite promising advances in data storage, some argue we have already bitten off more than we can chew. A report by the International Data Corporation shows that in 2007, the estimated data created that year exceeded the capacity of storage devices on the market (5, Figure 1). We must also keep in mind that as our Google searches and emails continue to pile up, they become increasingly difficult to keep to ourselves. Thanks to Edward Snowden, we now know that there may be a pasty-skinned, bespectacled NSA agent combing through our private emails from the comfort of his poorly lit office. If that thought gives you the heebie-jeebies, then you have come down with at least one symptom of the modern day data glut.

Furthermore, it is critical to understand that analyzing a big dataset is not the same as analyzing any old dataset, scaled up. With more data come more analytical challenges. First, because big data are often curated from observational studies that yield correlative effects[2], we must address the alluring fallacy that correlation implies causation. Second, it is necessary to note that correlative effects may be due to the structure of a dataset rather than the variables under study. When genomic studies consider the population as a homogeneous whole, they may find that a mutation is associated with a disease—only to realize that the mutation is not pathogenic, but rather, is frequent in an ethnic or geographic subgroup that is commonly affected by the disease. Finally, exploratory analyses of big data are prone to the discovery of false effects. With each test run on a dataset to search for an effect, you run the risk of happening upon a false positive—an effect that was observed by chance, and that with continued observation would fade away. Together, these analytical traps prevent the size of a dataset from guaranteeing the validity of results. The formulation of robust hypotheses remains important for guiding analyses towards logically defensible conclusions.
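The third trap, false discoveries from repeated testing, is easy to demonstrate. The simulation below is my own illustration: it runs one hundred crude significance tests on pure noise and typically finds a handful of "significant" effects anyway.

```python
# Run many tests on random noise and count spurious "significant" results.
import random
import statistics

random.seed(0)
N_TESTS, N_PER_GROUP = 100, 30

def crude_z_test(a, b):
    """Crude two-sample z-style test; True if |z| > 1.96 (roughly a 5% false-positive rate)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.pvariance(a) / len(a) + statistics.pvariance(b) / len(b)) ** 0.5
    return abs(diff / se) > 1.96

false_positives = 0
for _ in range(N_TESTS):
    group_a = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    group_b = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]  # same distribution: no real effect
    if crude_z_test(group_a, group_b):
        false_positives += 1

print(f"{false_positives} of {N_TESTS} tests look 'significant' with no real effect present")
# A Bonferroni-style correction (dividing the threshold by N_TESTS) would remove nearly all of these.
```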
My intention here is not to pop the big data bubble. Rather, by pointing out and patching a few common analytical holes, I hope to keep the data from being deflated by the disappointment of false conclusions. This intellectual check-up is essential now more than ever as scientists amass data, data everywhere with not enough brains to think. Increasingly, the burden of interpreting data is being shifted from experts wielding supercomputers to curious data consumers squinting at hastily written articles on their smartphones. Fortunately, even if you cringe at the sight of a spreadsheet, there are a few basic logical principles that you can quickly and easily apply before you buy into the latest trend reports.
Correlations Ahead: Proceed with Caution and Hope
Consider the finding that a part of the brain called the amygdala becomes especially active when people lying in a functional magnetic resonance imaging (fMRI) scanner view images of Mitt Romney (6, 7). An exciting find, no? Well, not necessarily—as pretty as pictures of the brain in action may appear (Figure 2), if you had never heard of the amygdala, then this association of a task condition with a neural activation does not mean much. If you are indeed familiar with the neuroscience literature on the amygdala, then this result means marginally more. In the context of several studies that show heightened amygdala responses when people view stimuli that make them anxious, you may suspect that Romney evokes feelings of anxiety.

Yet, if you take the time to survey the literature in toto, you will be left scratching your head. Based on other studies linking the amygdala to emotions that are strongly negative as well as strongly positive, you could instead conclude that Romney is an emotionally polarizing candidate—just as plausible, especially if the subjects were of both political parties. Or, pointing to studies that show the amygdala is sensitive to novel stimuli, you may theorize that Romney is the new candidate on the block—another likely theory, considering that the study was conducted well before Romney became a household name during the 2012 presidential election. You may even theorize, based on studies that show the amygdala is more active for faces than inanimate stimuli, that Romney possesses a face—who knew!
Because any one brain region can be involved in many different tasks, to draw a conclusion about the mind from activations observed in the brain is to make an unsubstantiated “reverse” inference. Unfortunately, the authors who conducted this neuroimaging study of politics and the brain made no analytical reservations when presenting their results to the public. In a now infamous New York Times article (7), they explain that the “voter anxiety” indicated by amygdala activation in response to images of Romney was attenuated when subjects viewed and listened to videos of the candidate. They reason, “Perhaps voters will become more comfortable with Mr. Romney as they see more of him.” Huh? Three days after the article hit the stands, a team of neuroscientists co-authored an exacting letter to the editor that criticizes the use of “flawed reasoning to draw unfounded conclusions about topics as important as the presidential election” (8). The truth hurts.
Should neuroimaging researchers, doomed to the logical limbo of correlative results, turn in their fancy fMRI machines? Fortunately, big data offer promise for more meaningful interpretations of correlative findings. The beauty of big data is that, rather than reducing a system to a few parts that are amenable to study, it enables scientists to consider a system as a whole. Neuroscientists should find that a holistic approach to the brain opens the door to remarkable new findings. This is because the brain is a hierarchical system, built up from molecules, to neurons, to regions that share a common function, to our everyday experiences and behaviors. Mysteriously, the properties that emerge on higher levels—our capacities for writing poetry, for feeling moral outrage, for falling in love, for cracking jokes, for being aware of our thoughts—cannot be predicted from the way that the lower levels function. This is why the brain continues to baffle neuroscientists who have heretofore focused their efforts on a few cells or groups of cells at a time.
Thus, a central goal of the government’s $100 million BRAIN initiative is to develop tools for recording simultaneously from many neurons throughout the entire brain (9). The hope is that, when we are able to see how the networks for motion and sensation and language and emotion and logic and memory interact in real time, we may gain a mechanistic view of the way that poetry and morality and love and humor and consciousness and more happen in the brain. Someday, when our understanding of the brain leaves less to the imagination, we may even be able to crack the problem of reverse inference and make predictions about voting results from patterns of neural activity.
Vanishing Acts and Magically Materializing Effects
Common sense is a mighty tool. If wielded properly, it may defeat the biggest of data and the most mystical of mental powers. To put our common sense to the test, let us go back in time to 1940, when two psychologists published the first meta-analysis of all studies within a research domain (10). This comprehensive analysis sought to settle a cut-and-dried controversy in the field—some studies reported the existence of an effect, while others did not.
When the psychologists gathered together all 145 studies published on the effect between 1880 and 1939, including nearly 80,000 subjects and 5 million trials, the results were overwhelmingly positive. Joseph Rhine and his student Joseph Pratt had found solid evidence in favor of extrasensory perception (ESP)—the ability to use something other than logic and the known senses to predict events. No fewer than 106 studies yielded results in support of ESP. When the trials were grouped by the probability of successfully predicting an event by chance, subjects beat the odds at every level (Figure 3A).

Yet, Rhine and Pratt noticed a curious trend in the data. When they pulled out the most recent studies published between 1934 and 1939, many of their positive results diminished (Figure 3C). Taking a step back, we should take note that most of the studies included in the meta-analysis had invited subjects to guess an attribute of the next card in a series. Researchers in the earlier years tended to use a standard deck of playing cards, asking subjects to predict the color, suit, rank, or exact identity of the card to come. Thus, the most well tested probabilities in the first time period are 1/2, 1/4, 1/13, and 1/52 (Figure 3B). Intriguingly, it is those same probabilities that experienced a decline in their critical ratio—a measure of just how unusual it was for correct predictions to occur—when compared to studies from the later time period. Meanwhile, in the later group, the 1/5 probability spiked in both popularity and the positive value of its critical ratio. This dramatic increase tracks with the rising use of Zener cards to test ESP. There are five Zener card designs—a circle, a plus sign, a square, a star, and a set of three wavy lines—and as before, subjects guessed the design on the next card in the deck.
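The “critical ratio” Rhine and Pratt report is, in modern terms, essentially a z-score: the excess of hits over the chance expectation, divided by the binomial standard deviation. Here is a minimal sketch of the calculation; the trial and hit counts are invented for illustration.

```python
# Critical ratio for a card-guessing experiment: (hits - expected) / sd,
# where the expectation and standard deviation come from the binomial
# distribution under chance guessing. Counts below are invented.
import math

def critical_ratio(hits, trials, p_chance):
    expected = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (hits - expected) / sd

# e.g., 2,150 hits in 10,000 guesses on Zener cards (chance p = 1/5)
print(round(critical_ratio(2_150, 10_000, 1 / 5), 2))   # 3.75 standard deviations
```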
Why is it that the most popular testing procedure for a given time period produced the most positive results? Unsurprisingly, there is reason to suspect that the data were prone to bias. Even though scientists at the turn of the twentieth century were making conscious efforts to study ESP by empirical methods, the fact that these scientists were interested in the questionable phenomenon at all suggests that they believed it to be true. Deliberately or not, when they found a way to test ESP that produced positive results, they ran with it. What Rhine and Pratt do not report, and what would be most useful to know, are the details of how the card-guessing studies were conducted and the data were collected. One possibility is that researchers discontinued testing after a subject hit a lucky streak with the cards, preventing the data from regressing to the mean. However, this fails to account for the switch to Zener cards in 1934. To explain that, we need look no further than the cards themselves: large, simple designs printed on thin, white paper, bold enough, critics later noted, to be glimpsed through the back of the card.
This example from the archives of science exemplifies two common themes in big data analysis. The first is the importance of stratification. When Rhine and Pratt considered all the data together, they saw positive effects for every probability tested. When they separated the data by time period, only a select few probabilities retained their strongly positive effects—revealing the difference in testing methods and leading us to our suspicion of bias. Similarly, in modern studies that seek to tie inherited diseases to DNA mutations, genes may be spuriously identified as pathogenic unless the population is divided by the appropriate factors.
Second, the Rhine and Pratt study points to the problem of multiple comparisons. The more cards they asked each subject to guess, the more likely that the subject would hit a lucky streak. Big datasets are especially vulnerable to the problem of multiple comparisons because they tempt researchers to test and test again until an attractive correlation catches their eyes. Likewise, genome-wide and whole-brain analyses that treat each locus or voxel as an independent test can yield false positives unless the p-value[3] is sufficiently low. In the sections to come, we inspect each of these deep yet frequently overlooked statistical pitfalls.
The Sly Effects of Data Structure
When Preis et al. cracked open the Google Trends data containing trillions of searches logged across the globe, they discovered a strong effect in part because they brought the right tools with them to dig (4). Having done their homework in economics and sociology, the authors had good reason to select GDP and an index for future thinking as two promising variables to include in their cross-country analysis. However, in many cutting-edge analyses of big data that are more complex or that lack a foundation in previous studies, it is not possible to formulate hypotheses ahead of the results. In those cases, the inherent structure of the data may compete with the loose analytical structure to determine the observed “effects.”
Before genes that cause an inherited disease have been identified, geneticists often begin their search with a genome-wide association study (GWAS). This type of exploratory analysis takes DNA from people with a disease and compares it to that of people without the disease, seeking point mutations that are correlated with the disease trait. The strength of GWAS is that it does not rely on assumptions about the data—any correlation between a mutation and the disease will turn up as a candidate for the genetic cause, to be investigated further by more targeted approaches.
Yet, the strength of GWAS is also a weakness. The unstructured form of the analysis leaves the demographic structure of the population free to exert undesired effects. If a certain mutation occurs in only a subset of people with a disease—as is more often than not the case for genetic conditions—then any mutation that is common in that subset will be associated with the disease (11). For instance, in an early associational study, Blum et al. reported a link between alcoholism and an allele of a particular dopamine receptor (12). The allele was present in 77% of alcoholics and absent in 72% of non-alcoholics in their unstratified experimental and control samples—rather impressive statistics for such a straightforward study. In the years since, however, attempts to replicate the findings have shown that the relationship is not so simple, pointing to ethnic heterogeneity as a confounding factor (13). While certain ethnic subgroups such as Mayans and Colombians have the dopamine receptor allele in high frequency, others such as Jews and Pygmies possess it rarely (14).
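To see how population structure alone can manufacture such an association, consider this minimal simulation (all frequencies invented, with no relation to the Blum et al. data). Within each subgroup the allele has nothing to do with the disease, yet pooling the subgroups produces a “significant” association that stratification dissolves.

```python
# Simulate a spurious allele-disease association created purely by
# population structure. All frequencies are invented for illustration.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

def sample_group(n, allele_freq, disease_rate):
    # Allele and disease are independent *within* the group.
    allele = rng.random(n) < allele_freq
    disease = rng.random(n) < disease_rate
    return allele, disease

def contingency(allele, disease):
    return np.array([[np.sum(allele & disease), np.sum(allele & ~disease)],
                     [np.sum(~allele & disease), np.sum(~allele & ~disease)]])

# Subgroup A: allele common, disease common. Subgroup B: both rare.
allele_a, disease_a = sample_group(5_000, allele_freq=0.7, disease_rate=0.30)
allele_b, disease_b = sample_group(5_000, allele_freq=0.1, disease_rate=0.05)

# Pooled analysis: a strong "association" appears.
pooled = contingency(np.concatenate([allele_a, allele_b]),
                     np.concatenate([disease_a, disease_b]))
print(f"pooled p = {chi2_contingency(pooled)[1]:.1e}")

# Stratified analysis: within each subgroup, allele and disease are
# independent, so any remaining "effect" is just noise.
for name, (a, d) in {"A": (allele_a, disease_a), "B": (allele_b, disease_b)}.items():
    print(f"subgroup {name}: p = {chi2_contingency(contingency(a, d))[1]:.2f}")
```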
Looking ahead, what will there be for a biologist to hold onto in the vast and rapidly expanding seas of profuse genetic data and profound genetic complexity? Perhaps most important will be the collection of meta-data—that is, data about the data. Divisions in the population can be drawn along not only basic demographic lines like ethnicity and gender, but also along differences in lifestyle, personality, substance abuse, family history, medications, and health conditions other than the disease of interest. Vilhjálmsson and Nordborg recently made the case that typical methods of population stratification are insufficient, as demographics can be traced back one step further to differences in environment and genetic background (15). In order to control for the confounding variables we know of as well as those not yet identified, researchers must keep track of as many potentially relevant factors as possible.
Unfortunately, as Howe et al. note in their report on the future of biocuration, “not much of the research community is rolling up its sleeves to annotate” (16). One solution that would not only encourage thorough surveying, but also ensure the quality and consistency of data, is to create an exclusive data repository. Biologists could be permitted elite access to this repository only if they agree to submit their data along with a common battery of meta-measures. This method is now proving successful for the Brain Genomics Superstruct Project, a large-scale collection of neuroimaging, genetic, and survey data collected under a common protocol and made available to those who contribute (17).
The Promising (Yet False) Positive Effect
For scientists afloat in big data and hungry for results, it can be tempting to embark on fishing expeditions. These unguided analyses undermine one of the strengths of big data—the statistical power that comes with a large number of data points. As more and more tests are run on a dataset, there is an increasing probability that a test will come up positive by chance alone. This is the problem of multiple comparisons. Although not inherent to big data, the problem of multiple comparisons often arises when independent tests are run on sets of many data points. This is a common way of analyzing genome-wide and whole-brain data (e.g., testing every locus or every voxel for an association with a disease or a task condition). Likewise, overeager researchers may run excessive tests on a dataset, expressing a strong sense of determination but a lack of predetermined hypotheses (e.g., if Preis et al. had tested whether GDP is associated with Google searches for various celebrities, ice cream brands, musical instruments, butterfly species, and office furniture stores).
The strange effects that can turn up after running many tests can be rather unbelievable, challenging the crude probabilistic expectations that our brains use to anticipate how the world should work. For this reason, we need to be the most skeptical of the most exciting results. Looking back to the data from Rhine and Pratt (10), we see that the longest odds they tested were 1 in 100. While none of the subjects in their meta-analysis succeeded in beating those odds, someone out there most certainly could. Think about it. If tested on cards numbered one through 100, a subject guessing 33 every time would, on average, guess correctly once per hundred cards. If you shuffled the deck and tested subject after subject, it would become increasingly likely that you would find a “clairvoyant” who guesses correctly on the first try.
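The arithmetic behind that intuition is simple: with a one-in-100 chance of a correct first guess, the probability that at least one of n subjects succeeds on the first try is 1 − 0.99^n. A minimal sketch:

```python
# Probability that at least one subject, purely by chance, "predicts" the
# first card in a 100-card deck, as more and more subjects are tested.
p_single = 1 / 100

for n_subjects in (1, 10, 50, 100, 300):
    p_at_least_one = 1 - (1 - p_single) ** n_subjects
    print(f"{n_subjects:>3} subjects: {p_at_least_one:.0%} chance of a 'clairvoyant'")
# 100 subjects already give roughly a 63% chance; 300 give about 95%.
```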
To one group of neuroscientists, the term “fishing expedition” is more than a metaphor. The team of Bennett et al. inserted a post-mortem Atlantic salmon into an fMRI scanner to serve as biological filler material while they tested their protocol, a series of images of social interactions (18). Remarkably, when they analyzed the data for kicks and giggles, the researchers found evidence of activity in several voxels of the dead fish’s brain. Bennett’s response? “[I]f I were a ridiculous researcher, I’d say, ‘A dead salmon perceiving humans can tell their emotional state’” (19). Of course, Bennett is not ridiculous—despite his dry sense of humor and his proclivity for scanning unusual objects—and so he points to the number of voxels in the human brain as the problem. With 130,000 voxels and independent tests run on all of them, the problem of multiple comparisons is not one that can be overlooked.
Unfortunately, when the subject is a human instead of a dead fish, it is trickier to tell whether an activation corresponds to neural activity evoked by a task or to a statistical fluke. A quick fix for this problem is to run a correction for multiple comparisons. The popular Bonferroni correction adjusts the significance threshold, dividing it by the number of voxels in the brain that are checked for activation. Of course, as the threshold shrinks under this correction, an even larger sample may be required to obtain significant effects. For a neuroimaging study, this means that the large number of voxels tested in the brain demands a larger number of subjects.
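As a rough sketch of that arithmetic, using the 130,000-voxel figure from the salmon study and a handful of invented p-values:

```python
# Bonferroni correction: divide the significance threshold by the number of
# independent tests. With 130,000 voxels, a nominal alpha of 0.05 becomes a
# very stringent per-voxel threshold. The raw p-values below are invented.
n_voxels = 130_000
alpha = 0.05

alpha_per_voxel = alpha / n_voxels
print(f"per-voxel threshold: {alpha_per_voxel:.2e}")     # about 3.8e-07

# Equivalently, multiply each raw p-value by the number of tests and
# compare it to the original alpha.
for raw_p in (1e-8, 2e-6, 0.001):
    corrected = min(raw_p * n_voxels, 1.0)
    print(f"p = {raw_p:.0e} -> corrected p = {corrected:.3g}, "
          f"significant: {corrected < alpha}")
```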
It is also critical to note that the Bonferroni correction does not address a deeper problem with running many tests on a big dataset. Exploratory analyses that seek correlative results are not true applications of the scientific method—of manipulating one variable to measure the effect in another. As such, they cannot inform causal models for how the world works. Neuroimaging studies can turn up interesting results—as you may recall, the finding that our amygdalae go wild for Romney (6, 7)—but there is no straightforward way to turn isolated correlations between brain activity and task conditions into reliable models of brain systems.
In order to use big data wisely, researchers must follow the lead of Preis et al. and do their homework before beginning an analysis (4). When it is finally time to dive into the data, it is important to have an idea of what to expect and how to interpret the results. As quoted in a Nature Methods report, Sean Carroll argues, “hypotheses aren’t simply useful tools in some potentially outmoded vision of science; they are the whole point. Theory is understanding, and understanding our world is what science is all about” (20). Amidst the mesmerizing quantities of data, scientists must not lose sight of this mission.
The Next Big Step
Humans are fascinated by size. Indeed, scientists often go to extraordinary lengths to unearth the heaviest, the tallest, the longest, the largest creatures of their kind. Perhaps that is the only explanation for the excitement in the news when the world’s biggest organism was discovered to be a fungus living in the Blue Mountains of eastern Oregon. Yet, after I had processed the image of a continuous network of spindly roots extending through 2,200 acres of topsoil, I found this humungous fungus to be of limited interest. It is the same everywhere, and it is going nowhere.
Like a fungus feeding off the forest, big data is no more than a big burden if it sits unanalyzed on our hard drives. Although businesspeople pitch data as if it were the latest hot commodity—the “new oil,” according to data scientist and company founder Clive Humby—data is not a resource with intrinsic value. Like soil, data acquires value only after it has been cultivated into knowledge. And it is this potential that makes big data so thrilling. With a few statistical filters and a reasonable hypothesis, new insights can be ours for the taking. The possibilities are nearly endless and growing, as big data begets bigger data. As noted earlier, the solution to several big data problems is even more data—specifically, meta-data for annotating genetic information and larger sample sizes for preserving statistical power after corrections for multiple comparisons.
Going forward, scientists can maintain a high level of rigor in their big data analyses by investing in data of high quality and of anticipated utility for informing future hypotheses. Witness the Human Genome Project, a massive effort to uncover the raw information and experimental methods that now guide the modern generation of experiments in biology. The ability to decode an individual’s genome is the basis for GWAS, the first step in studying the genetic basis of a disease. After candidate genes are identified, researchers can then begin the classically empirical work of manipulating genes and proteins to deduce the biological mechanisms of disease. As Massachusetts General Hospital now embarks on the Human Connectome Project, a similar endeavor that will map the connections in the human brain using advanced diffusion imaging techniques, it is time to start thinking about how this map can guide future studies of the neurobiology of human experience.
Awe-inspiring though the data may be, scientists must not let their eyes glaze over at its size. As big data continues to grow, now is the time to run with it—to run logically sound analyses, that is. With big data come the big responsibilities to create knowledge that is as close to the truth as can be managed and to wield that knowledge wisely.
References
- Z. Gan, A. Cao, R.A. Evans, M. Gu, Three-dimensional deep sub-diffraction optical beam lithography with 9 nm feature size. Nature Communications 4, 1-7 (2013).
- B. McKenna, What does a petabyte look like? Computer Weekly (2013).
- G.M. Church, Y. Gao, S. Kosuri, Next-generation digital information storage in DNA. Science 337, 1628 (2012).
- T. Preis et al., Quantifying the advantage of looking forward. Scientific Reports 2, 350 (2012).
- J.F. Gantz et al., The diverse and exploding digital universe: an updated forecast of worldwide information growth through 2011. IDC White Paper (2008).
- J.T. Kaplan, J. Freedman, M. Iacoboni, Us versus them: Political attitudes and party affiliation influence neural response to faces of presidential candidates. Neuropsychologia 45, 55-64 (2007).
- M. Iacoboni et al., This is your brain on politics. The New York Times (2007).
- A. Aron et al., Politics and the brain. The New York Times (2007).
- C. Bargmann et al., Interim report: Brain research through advancing innovative neurotechnologies (BRAIN) working group. Advisory Committee to the NIH Director (2013).
- J.B. Rhine, J.G. Pratt, Extra-Sensory Perception After Sixty Years (Bruce Humphries Publishers, Boston, 1940).
- J. McClellan, M.C. King, Genetic heterogeneity in human disease. Cell 141, 210-217 (2012).
- K. Blum et al., Allelic association of human dopamine D2 receptor gene in alcoholism. Journal of the American Medical Association 15, 2055-2060 (1990).
- C.N. Pato et al., Review of the putative association of dopamine D2 receptor and alcoholism: a meta-analysis. American Journal of Medical Genetics 48, 78-82 (1993).
- C.L. Barr, K.K. Kidd, Population frequencies of the A1 allele at the dopamine D2 receptor locus. Biological Psychiatry 34, 204-209 (1996).
- B.J. Vilhjálmsson, M. Nordborg, The nature of confounding in genome-wide association studies. Nature Reviews Genetics 14, 1-2 (2013).
- D. Howe et al., The future of biocuration. Nature 455, 47-50 (2008).
- R.L. Buckner et al., The brain genomics superstruct project. Society for Neuroscience 510.15 (2011).
- C.M. Bennett et al., Neural correlates of interspecies perspective taking in the post-mortem atlantic salmon: An argument for proper multiple comparisons correction. Journal of Serendipitous and Unexpected Results 1, 1-5 (2010).
- A. Madrigal, Scanning dead salmon in fMRI machine highlights risk of red herrings. Wired (2009).
- Defining the scientific method. Nature Methods 6, 237 (2009).
[1] In statistics, “power” is the probability of correctly rejecting the null hypothesis (i.e., the hypothesis that an effect does not exist) when a true effect is present. Power depends on the criteria for statistical significance, the size of the effect, and the size of the sample. Big data lend large sample sizes but, as we will see, require more stringent criteria for significance in order to avoid the discovery of false positives.
[2] The type of “effect” I refer to is the relationship between variables (e.g., A and B). This relationship may be due to one variable causing a change in another (e.g., A changing B or B changing A), a third variable causing a change in both (e.g., C changing A and B), or more complex interactions and chains of causality.
[3] The “p-value” is the probability of obtaining a result at least as extreme as the one observed, assuming that the null hypothesis is true. The null hypothesis can be rejected only if the p-value is smaller than a chosen threshold for significance. For example, at a significance level of 0.05, if the null hypothesis is that Harvard students have average intelligence, then we can reject it only if their mean IQ is high enough to fall within the top 5% of sample means expected under that hypothesis.
Human Cloning: Unmasking the Controversy
by Francisco Galdos
Suppose you have a year-old laptop that has been working well for you. One day you begin to notice that the computer freezes more frequently, and the problems continue. When you take the computer to the engineers, they discover that a few of the small components of the motherboard are faulty, so they decide to replace it. Sounds simple, doesn’t it? If we compare the act of replacing a computer part with the feat of replacing a faulty organ in our bodies, we can greatly appreciate the idea of interchangeable parts. Imagine, for example, that someone is born with a defective heart and has had so many surgeries that all that is left is a stiff and scarred heart. If we equate the body to a laptop, we could ask, “Why not replace the organ with a new one?” Why not produce healthy clones of our organs so that we can simply replace them when they are defective?
Although the idea is simple, scientists and physicians have struggled for more than 50 years to understand how we can manipulate our cells in order to replace or regenerate our bodies. As scientists continue to advance cloning techniques, we have seen an increase in the number of ethical debates on the future of cloning. Cloning, albeit a straightforward solution to generating new organs, has become a taboo in itself, making the path to duplication less linear and more complicated for the scientist. If we would like to see medicine reach an era of curative intervention rather than palliative treatment, it is more necessary than ever to fully understand the fundamental scientific questions that cloning has sought to answer. As we look into the history and science of cloning, we find that it reveals a remarkable flexibility in our biology that could allow us to repair many of the problems that often lead to death.
Cloning: A Background
To find the origins of cloning, we need to go back to the 1950s. Embryologists had long been grappling with understanding how the vast diversity of cells in the body could be derived from a single fertilized egg cell. Scientists were puzzled by the concept of cellular differentiation, the ability of a fertilized egg cell to become a unique cell type within the body [1]. Up to this point in history, all that scientists knew was that within the nucleus of a cell there was genetic information, and this nucleus was bathed within the surrounding fluid in the cell, known as the cytoplasm. In order to investigate cellular differentiation, two scientists, Robert Briggs and Thomas King, sought to answer whether there was some type of irreversible change that occurred in the nucleus of the cell, which caused cells early in development to differentiate into the vast array of specialized cells in our tissues and bodies. They pioneered a laboratory technique known as somatic cell nuclear transfer (SCNT)[1]. SCNT involved taking the nucleus of a frog cell that they deemed to be further along in differentiation, and transferring the nucleus into a frog egg cell that had had its nucleus removed. Briggs and King hypothesized that a differentiated cell nucleus that has undergone irreversible genetic changes should have a decreased potential to develop into other cell types, since it would be lacking the genetic information needed to differentiate into all the cells of the body of an animal. Many scientists shared this hypothesis, as well as the idea that some factors within the cytoplasm cause irreversible changes to the genetic material in the nuclei of cells. In 1962, however, a graduate student by the name of John Gurdon conducted SCNT experiments in which he took differentiated frog intestinal cells and transferred their nuclei into enucleated egg cells. Gurdon modified his experimental procedure to conduct serial nuclear transplantations in which he took the already transplanted nuclei, and transplanted them again. In 1966, Gurdon demonstrated that he could effectively obtain adult frog clones with this method [2]. He proposed that if a differentiated cell nucleus could be used to form all the tissues of an entire animal, then the nucleus of a differentiated cell must not have undergone irreversible changes during cellular differentiation [2].
Gurdon’s frog experiment represents the first time in history that anyone had effectively cloned an animal, and also the first time that anyone had experimentally shown that cells do not undergo irreversible genetic changes during differentiation. At a time when the secrets of molecular genetics were still being experimentally uncovered, Gurdon proposed that genes are turned on and off through some type of mechanism, rather than lost, as the embryo begins to differentiate [2]. The idea that you could take a fully mature cell and reprogram it into a cell capable of becoming any cell in the body led to the coining of the term pluripotency, the capacity of a cell to give rise to any cell type in the body. His idea that all cells in the body maintain the same genome and have the potential to be reprogrammed into pluripotent cells inspired efforts to discover new ways to reprogram cells. To start, scientists began by studying the naturally pluripotent cells—embryonic stem cells. In 1998, James Thomson derived the first human embryonic stem cells [3]. By deriving these cells, scientists such as Thomson were interested in discovering the various genetic factors responsible for maintaining the pluripotent state. Theoretically, if pluripotent stem cells such as embryonic stem cells could be derived from a patient’s cells, such as skin cells, scientists and physicians could use them to make replacement tissues derived from the patient’s own body.
In 1996, Ian Wilmut and his colleagues used SCNT to clone the first mammal—Dolly the sheep [4]. This led to widespread fear and resistance: fear that SCNT could be used to clone human beings, and that embryonic stem cells destroyed human life in the process of their derivation. Such fears led to an eight-year ban on the use of federal funds for the creation of new embryonic stem cell lines during the Bush administration, which caused massive funding problems for the field of regenerative medicine [5]. Because human embryonic stem cells are both expensive to derive and ethically controversial to use, they became impractical for therapeutic purposes. Moreover, as a clinical application, reprogramming was in its infancy, since at the time the derivation of human pluripotent stem cells through SCNT had not been accomplished. It was not until 2006 that Shinya Yamanaka and Kazutoshi Takahashi demonstrated the derivation of mouse pluripotent stem cells using the over-expression of four transcription factors [6]. This paper revolutionized the field of cellular reprogramming because it provided an effective alternative for deriving cells equivalent to embryonic stem cells. Their approach largely took the place of SCNT as a reprogramming technology, as it was accepted as a less controversial means of deriving pluripotent stem cells. SCNT was deemed controversial because it generates embryos whose embryonic stem cells are then harvested to produce pluripotent cells in a dish; Yamanaka’s method bypassed the need to generate an embryo and produced pluripotent, embryonic-like stem cells directly from a differentiated cell in the body. Indeed, Yamanaka’s induced pluripotent stem cells (iPSCs) have since been derived from human cells and used for a variety of purposes. Despite the shift to Yamanaka’s technology, this year a group of US researchers in Oregon successfully derived the first human embryonic stem cell lines using SCNT, reviving both the scientific discussion of reprogramming and the controversy over human cloning [7]. Looking back on more than a half-century of research, reprogramming experiments have demonstrated the remarkable flexibility of our cells to be converted into different cell types that can serve as the basis for regenerative therapies.
Therapeutic Hope: The Promise of Cloning
As we saw with the engineers replacing a laptop’s motherboard, we can now see how cloning technologies could be used to achieve such “replacements” in our bodies. Cells, it turns out, can be thought of as computers. The DNA of a cell is its motherboard: just as the motherboard controls the entire computer’s functions depending on how it is programmed, a cell’s behavior depends on which genes its DNA is programmed to express. Knowing this, we can try to drive stem cells, such as pluripotent stem cells, to differentiate into a cell type that we are interested in obtaining. Take, for example, a heart attack patient. During a heart attack, the heart muscle often dies off, causing irreparable damage that can put patients on heart transplant lists [8]. Making heart cells from pluripotent stem cells would allow us to regenerate the damaged heart.
Regenerative technologies do not solely depend upon the generation of pluripotent stem cells. Scientists such as Doug Melton have sought to explore the possibility of bypassing the pluripotent state altogether and directly reprogramming one cell type into another. Melton and coworkers showed that a type of cell known as an exocrine cell, located in the pancreas, could be directly reprogrammed into an insulin-producing β-cell by expressing transcription factors that are only present in β-cells. If Melton’s group can one day make fully mature and functional β-cells, these cells could be engineered in such a way that they can be transplanted into the pancreas of a patient with type 1 diabetes, which could in theory cure the patient’s diabetes.
The fundamental promise of cloning is that scientists can take a person’s own cells and manipulate their biology to regenerate injured or diseased tissues. Using Yamanaka’s induced pluripotent stem cell (iPS) technology, it is even possible to take cells that carry genetic defects, such as disease-causing mutations, and genetically engineer the iPS cells derived from a patient so that the defective gene is replaced with a correct copy [8]. For example, consider a patient with muscular dystrophy who has a mutation in the gene called dystrophin. Using iPS technology, we could theoretically take skin cells, make iPS cells, replace the defective dystrophin gene with a correct copy, and make muscle tissue that could be transplanted into the patient to effectively cure his muscular dystrophy [9]. In addition to fixing genetic defects, scientists and physicians such as Harald Ott at the Harvard Stem Cell Institute are pioneering new technologies in what is known as whole-organ assembly [10]. The idea of whole-organ assembly is to use iPS cells to seed tissue scaffolds that can be assembled to create on-demand replacement organs for patients [10]. Such technology could one day provide patients with fully functional replacement organs made from their own cells.
Breaking Down the Controversy
Despite the incredible promise of these technologies, they continue to meet opposition from groups that argue that the use of embryonic stem cells, and the cloning of human cells to produce embryonic stem cells, devalues human life and could open the door to the cloning of human beings [11]. The controversy is fueled by questions of the right to life and individual determinism [12]. The fact that embryonic stem cells (ESCs) have the potential to give rise to all the cells in the body, and theoretically to a human being, creates vast opposition based on fears that human lives are essentially being destroyed through the use or creation of these cells [14].
When John Gurdon cloned the first animal, the scientific question he sought to answer was whether cells undergo some irreversible change in their nuclei as they differentiate. Today, scientists are taking this question a step further, toward understanding the molecular and cellular biology of how pluripotent cells undergo cellular differentiation. The Oregon study, which achieved SCNT reprogramming of human cells, will serve as a vital guide for refining iPS technologies, making reprogramming more effective and removing the inefficiencies of genetic reprogramming often seen with iPS cells compared to SCNT [7].
Induced pluripotent stem cells were hailed as ethically acceptable because they bypass the need to use human eggs and human embryos. Although the goal of iPSCs is to replace embryonic stem cells as a way to avoid using human embryos, iPSCs contain many genetic differences that currently make them unsuitable for therapeutic purposes [8]. The gold standard for pluripotent cells remains the embryonic stem cell derived from an embryo made by the fertilization of an egg. If we are to work out the kinks in the iPS system, the use of embryonic stem cells will be key to making iPSCs suitable for clinical use. Thus, if the controversy arises from the creation of embryonic stem cells, a question follows: if we are to perfect iPS technology to derive pluripotent cells that are truly equivalent to ES cells, should iPS cells be banned as well, since our gold standard of comparison must be derived from human blastocysts that have the potential to become a human individual?
If a human life is defined from the moment that a cell has the potential to become a human being (i.e., conception), we find ourselves in an ethical conundrum when thinking about our genome as a whole. We know that all differentiated cells are equivalent in their genomes’ potential to become any cell in our bodies, and even to generate an entirely new adult, as we saw with Gurdon’s frogs and Wilmut’s sheep. Does this mean that all cells in the body have the potential to form a life and should therefore be treated as such? The beauty of John Gurdon’s, Ian Wilmut’s, the Oregon group’s, and Yamanaka’s experiments is not that they delivered a Brave New World style of technology for human cloning, but rather that they reveal the inherent flexibility of our biology. If we define life from the moment we make a cell that has the potential to produce an entire individual, then we must begin to categorize everything in our bodies by its inherent potential to form an individual, and such categorizations become incredibly difficult to make. The biology of our cells cannot be so cleanly confined by these strict definitions; we are constantly learning that cells are dynamic systems. One way to think about this flexibility is that engineering cells for medicine capitalizes on it. When developing an ethical position, we should remember both the incredible life-saving potential of these cloning technologies and the historical scientific questions that they have answered.
Towards the Future
Cloning technologies have the potential to drive medicine into an era of regeneration. Yet if we define human life as beginning when a cell has the potential to become a full human being, we run into difficulties, because essentially any cell in our bodies has that potential. Many ethical arguments against cloning technologies and embryonic stem cell research hold that such research inherently destroys human life. We cannot dismiss these arguments, as they pose a valid question: how do we define a human life? Ideally, for the benefit of both scientists and society, we would set ethical boundaries that allow cloning technologies to benefit humanity in the best possible way.
Acknowledgements:
I’d like to thank my editor Jennifer Guidera for all of her help and feedback during the writing process.
References
1. Briggs, R. and T. King, Transplantation of Living Nuclei From Blastula Cells into Enucleated Frogs’ Eggs. Proceedings of the National Academy of Sciences of the United States of America, 1952. 38(5): p. 455-463.
2. Gurdon, J. and V. Uehlinger, “Fertile” intestine nuclei. Nature, 1966. 210(5042): p. 1240-1241.
3. Thomson, J., et al., Embryonic stem cell lines derived from human blastocysts. Science (New York, N.Y.), 1998. 282(5391): p. 1145-1147.
4. Campbell, K., et al., Sheep cloned by nuclear transfer from a cultured cell line. Nature, 1996. 380(6569): p. 64-66.
5. Stolberg, S.G., Bush Vetoes Measure on Stem Cell Research, in The New York Times. 2007. p. A21.
6. Takahashi, K. and S. Yamanaka, Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell, 2006. 126(4): p. 663-676.
7. Tachibana, M., et al., Human embryonic stem cells derived by somatic cell nuclear transfer. Cell, 2013. 153(6): p. 1228-1238.
8. Takahashi, K. and S. Yamanaka, Induced pluripotent stem cells in medicine and biology. Development (Cambridge, England), 2013. 140(12): p. 2457-2461.
9. O’Connor, T. and R. Crystal, Genetic medicines: treatment strategies for hereditary disorders. Nature reviews. Genetics, 2006. 7(4): p. 261-276.
10. Soto-Gutierrez, A., et al., Perspectives on whole-organ assembly: moving toward transplantation on demand. The Journal of clinical investigation, 2012. 122(11): p. 3817-3823.
11. Pollack, A., Cloning Is Used to Create Embryonic Stem Cells, in The New York Times. 2013.
12. Franklin, S., Stem Cells R US: Emergent Life Forms and the Global Biological, in Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems. 2005, Blackwell Publishing Ltd.
The Secret Life of Plants
by Tristan Wang
On a night in 1966, interrogation specialist Cleve Backster was teaching policemen how to perform lie detection. On a whim, Backster attached the electrodes of a galvanometer to a nearby dracaena plant. A galvanometer is an instrument that detects minute electric currents, and it is often used as part of the polygraph lie detector. When Backster began to water the plant, the galvanometer did not show the rise in electrical conductivity he expected. Instead, the needle of the galvanometer started to move downward, a response often seen only with surges of human emotion. Caught completely by surprise, Backster started formulating ideas about plant consciousness. Because he knew that some of the strongest emotional stimuli come from life-threatening situations, Backster thought about burning the very leaf the electrodes were attached to. Before he could reach for a match, the tracing pattern on the graph swept upwards, as if in response to the thought of threat. Those 10 short minutes changed Backster’s life and gave him the idea of plant sentience—an idea so grand that it was later given its own name: “the Backster effect.” (1)
There is no argument that humans use their senses to feel, think, and act. Even the smallest baby cries when it falls and laughs when played with, but that’s not the whole story.
What about organisms that lack the complex nervous system that we share? Would animal-rights activists protect the cockroach that was stepped on, or even the callous sponge and coral? Take this a step further and let us wonder about the life of plants. Plant biologists have long spurned ideas of botanical feelings and consciousness, such as those explored by Backster, but does that mean these topics are not worth studying?
In the 1973 book The Secret Life of Plants, Peter Tompkins and Christopher Bird describe several brazen souls who decided to explore a topic that is today considered pseudoscience: plant sentience. Some went to great lengths to see if plants could detect, understand, and pinpoint pain. While this research and the subsequent book made a great splash in the media, they effectively turned scientists away from studying the senses of plants, rendering the subject a sort of taboo. No one was able to reproduce Backster’s experiments, reproducibility being a basic requirement of sound science, and plant sentience became a joke in the field of plant biology.
However, researchers have recently explored something that seems to belong in Tompkins’ book: plant communication. Incredible as it sounds, there is hard evidence that plants have been talking to each other all along through chemical signaling, and that is not the only way plants interact with the world. Senses like sight and smell, long argued to belong only to animals, also apply to the plant world and attest to the significance of plant senses.
Only by learning how plants see the world through their senses can we understand how to use these interactions in the context of agriculture, ecology and human life.
Plant Sight
Every spring, narcissus flowers, better known as daffodils, bloom as the sun’s rays arch over the meadows of the north, and it is no accident that these flowers bloom in spring. Although the idea that daffodils time their flowers for our pleasure may be enchanting, plants have developed ways of knowing their world through sight that are not too different from ours. The concept of photoperiodism describes how the length of the night (or day, as many call it) dictates seasonal responses that include flowering. (2) Plants achieve this feat through the expression of certain genes and plant hormones such as florigen (a biological flowering signal), all of which change according to the amount of light the plant receives. (2)
Thus, a daffodil can “sense” the changing seasons simply by measuring how long the days and nights are. As Daniel Chamovitz puts it in his book What a Plant Knows, sight is “the physical sense by which light stimuli received by the eye are interpreted by the brain and constructed into a representation.” (3) Take out the “eye” and “brain” in this definition, and plants can see, just like us, though of course it is difficult to project an image of the environment based solely on how much light is received.
Get this, though: plants can also perceive color. Whereas humans largely rely on two types of photoreceptors (rods and cones), plants dwarf that number with at least 11 different kinds of photoreceptors that help them react to sunlight. (3) This shouldn’t come as a surprise, given how important light is to a plant’s survival.
Plant Smell
Even fragrance can prove useful to plants. One study in 2006 explored the significance of plant volatiles as a method that parasitic plants use to detect host plants. (4) In this experiment, Cuscuta pentagona (dodder) seedlings, parasitic plants known to sap the life out of their hosts, were germinated without contact with their natural host (in this case, the tomato plant). (4) The vast majority of the seedlings were able to find the nearby tomato plant, indicating the possibility of volatile cues emitted by the tomato; and when a wheat plant was placed into the mix, the dodder was able to distinguish between the volatiles of the two plants and preferred the tomato. (4) Even in animal interactions, plants seem to react to insect scents. A 2012 study provided evidence that plants mount defense responses in reaction to incoming herbivores. (5) Scientists exposed goldenrod plants to the sex attractants of Eurosta solidaginis flies, and in the field the female flies appeared to prefer not to lay eggs on the exposed plants. (5)
Plant Communication
All of these senses are interesting, but what of it? Surely plants can use this information to react to the world, but the real curiosity arises when one asks whether plants can organize themselves to do something more than turn slightly toward the sun or open a flower. With all of this information coming in, it is not surprising that recent research has looked into the ways that plants interact with one another.
Last May, several researchers looked into just how plants can react so quickly to oncoming herbivore invasions even when they are not in direct contact with the herbivores. Just as some plants can pick up the scents of their neighbors or notice when another plant casts a shadow, these plants were communicating. In this study, bean plants were set up in close proximity to one another, and when aphids attacked one bean plant, nearby plants picked up the warning and produced defensive chemicals, such as methyl salicylate, that repel the herbivores and attract the predators of aphids. (6)
The researchers set up five bean seedlings so that a central ‘donor’ plant was surrounded by four ‘receiver’ plants. (6) The central plant would be in direct contact with the aphids and, in theory, would send signals to the other four plants. (6) Two of the receiver plants were connected to the donor plant via mycorrhizal fungi, while the other two lacked any physical connection to the other plants. (6) Mycorrhizal fungi form a symbiotic association between fungal hyphae and roots and are quite prominent in most natural soils. (7) Fungal hyphae invade roots and grow extensively, often connecting more than one plant host. (7) Meshes were used in several of the setups to restrict plant-to-plant interaction through roots and mycorrhizal growth. (6)
After a couple of days, volatile gases were collected from each of the plants, and the plants connected to the central plant via underground fungal networks were shown to produce the protective chemicals, while the two unconnected bean plants did not. (6) The implications of plant communication could be significant. Would it be possible to use a sentinel plant, set apart from the main commercial crop, as a sort of early warning for the others?
How plants react to their environment and to each other ultimately comes back to our own interactions with plants. Humans are constantly changing the environment, whether by deforestation or agriculture, and this has forced plants to adapt in different ways to cope with the stress of change. Thus, there is even more of a need to study how plants see this constantly changing world, not through conscious thought, but through their senses.
Backster wasn’t too far off when he thought that plants were talking, despite the ridicule his work received from the scientific world. Maybe he was right that there is a world of plants talking to and understanding each other that we simply don’t know much about yet; perhaps learning how plants see the world may help us appreciate our own. Who knows—after all, none of us are plants.
Works cited
1. Tompkins, Peter, and Christopher Bird. The Secret Life of Plants. New York: Harper & Row, 1973.
2. Tsuji, Hiroyuki, Ken-ichiro Taoka, and Ko Shimamoto. “Regulation of Flowering in Rice: Two Florigen Genes, a Complex Gene Network, and Natural Variation.” Current Opinion in Plant Biology 14.1 (2010): 45-52.
3. Chamovitz, Daniel. What a Plant Knows: A Field Guide to the Senses. New York: Scientific American/Farrar, Straus and Giroux, 2012.
4. Runyon, J. B. “Volatile Chemical Cues Guide Host Location and Host Selection by Parasitic Plants.” Science 313.5795 (2006): 1964-967.
5. Helms, Anjel M., Consuelo M. De Moraes, John F. Tooker, and Mark C. Mescher. “Exposure of Solidago Altissima Plants to Volatile Emissions of an Insect Antagonist (Eurosta Solidaginis) Deters Subsequent Herbivory.” PNAS 110.1 (2013): 199-204.
6. Babikova, Zdenka, Lucy Gilbert, Toby J. Bruce, Michael Birkett, John C. Caulfield, Christine Woodcock, John A. Pickett, and David Johnson. “Underground Signals Carried through Common Mycelial Networks Warn Neighbouring Plants of Aphid Attack.” Ecology Letters 16.7 (2013): 835-43.
7. Bolan, N. S. “A Critical Review on the Role of Mycorrhizal Fungi in the Uptake of Phosphorus by Plants.” Plant and Soil 134.2 (1991): 189-207.
Fighting Infections with Feces: The Promise of Fecal Microbiota Transplantation
by Brendan Pease
The phrase “cutting-edge medical treatment” often conjures up images of complex technologies derived from neural mapping or stem cell research. However, one of the newest medical treatments may not originate from neurons or progenitor cells, but from fecal matter.
Every year, the bacterium Clostridium difficile kills 14,000 Americans and infects many more, causing severe diarrhea and inflammation of the colon. Statistics point to a growing problem: incidence has increased dramatically in recent years, with over 500,000 cases in 2012 (2). Though most infections have communal or zoonotic origins, roughly 20% of infections are spread through hospital settings (3). The bacterium is particularly dangerous for patients with weakened immune systems, such as the elderly and those with autoimmune disorders, as well as those with Inflammatory Bowel Disease. While standard antibiotics such as vancomycin and metronidazole are often used to treat C. difficile infection, they are ineffective for up to 26% of patients due to drug resistance by the bacterium. A significant portion of patients will also have recurrent infections, for which there are no effective antibiotics (1). Yet, recent reports point to a solution. Though unsavory to the squeamish, the experimental medical procedure known as fecal microbiota transplantation (FMT) holds enormous therapeutic promise for both C. difficile infection and a variety of other diseases.
While there is no standard procedure for an FMT, most follow a common framework. Prior to the actual procedure, a healthy donor, usually related to the recipient, is identified, and a sample of their stool is collected (4). Once a sample is obtained, the contents of the recipient’s colon – including the C. difficile bacteria – are flushed out. The stool sample is then transferred into the recipient’s gastrointestinal tract via an enema, a colonoscopy, or a nasogastric tube. This restores a healthy bacterial flora in the patient’s gut, because the bacteria contained in the donor’s stool now inhabit the recipient’s intestines (5).
Although rumors of similar procedures go back decades, the first well-documented FMT was performed in 2006 by Max Nieuwdorp, a young Amsterdam-based researcher and physician. Frustrated by the lack of effective treatments for recurrent C. difficile, Nieuwdorp developed the idea for a transplant that would flush out a patient’s gut microbiota and replace it with a healthy donor’s. The treatment proved exceptionally successful. Yet the procedure’s novelty, indecorous basis, and youthful inventor caused it to be doubted and even ridiculed by some of Nieuwdorp’s colleagues (5).
However, there is much less skepticism about FMT today than there was six years ago. In January 2013, the first randomized controlled clinical trial of FMT was published in the New England Journal of Medicine. The study focused on patients who had recurrent C. difficile despite successful treatment of the initial infection with antibiotics. Of the 16 patients given FMT, 13 were cured after one transplant, while two of the remaining three were cured by a second; thus, FMT cured 94% of subjects overall. By comparison, only 31% of those given the antibiotic vancomycin were cured (1). Similar rates were observed in another study performed in the same year, with patients returning to normal bowel patterns within one week of the procedure (6).
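For the statistically curious, here is a minimal sketch of how stark that difference is. The FMT numbers (15 of 16 cured overall) come from the trial as described above; the size of the vancomycin arm is my assumption for illustration, chosen so that 4 of 13 patients, about 31%, are cured.

```python
# Compare overall cure rates: FMT (15 of 16, counting second transplants)
# versus vancomycin. The vancomycin arm size is assumed for illustration;
# the article reports only the 31% cure rate.
from scipy.stats import fisher_exact

fmt_cured, fmt_total = 15, 16
vanco_cured, vanco_total = 4, 13          # assumed arm size; 4/13 is ~31%

table = [[fmt_cured, fmt_total - fmt_cured],
         [vanco_cured, vanco_total - vanco_cured]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```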
The merits of FMT, however, extend beyond its efficacy in combatting recurrent infections. The procedure is extremely cost-efficient, requiring nothing more than basic nasogastric tubes or enemas (4). Patients are also attracted to the procedure, despite its nature, once they weigh the results of FMT. In a recent survey on patient perceptions of FMT, 85% of respondents said they would choose to have the procedure. Though respondents generally rated the aesthetics of FMT “somewhat unappealing,” they were ultimately won over by its end results (7). Further, FMT may be able to halt initial as well as recurrent C. difficile infection. Initial infections are treated with broad-spectrum antibiotics that eliminate not only C. difficile but also many beneficial members of the gut microbiome. As a result, patients are more susceptible to subsequent infections, leading to a high recurrence rate. Using FMT to replace, rather than diminish, individuals’ microbial communities could break this cycle of infection (4).
Despite growing evidence for its benefits, FMT also has a few drawbacks. Though patients are willing to receive the procedure, its unpleasant aesthetics mean that it is used as a “last resort” treatment, reserved for extreme cases (7). In addition, the procedure carries the risk of spreading infectious diseases, including HIV, hepatitis B and C, cytomegalovirus, Epstein-Barr virus, and Campylobacter jejuni. Because donors are usually relatives of the recipient, suitable donors can be hard to find for patients whose family members carry an infection that could be spread by the procedure or lack a healthy gut microbiota (5).
The greatest danger of FMT, however, arises when it is taken from hospital to home. Because FMT is not yet a standardized, widely offered procedure, demand for transplants vastly outstrips supply, leading some desperate patients, jaded by waves of ineffective antibiotics, to look to the internet, where they can find dubious instructions for “do-it-yourself” home transplant procedures. In one such video on YouTube, the instructor tells viewers to store the stool sample in Tupperware kept in the refrigerator and to use a kitchen blender to change the consistency of the sample. The only methods of sanitation are rubbing alcohol wipes and basic hand soap (8). Besides jeopardizing FMT’s legitimacy as a medical procedure, these videos can result in their performers suffering complications as severe as norovirus gastroenteritis (5).
These “do-it-yourself” videos reflect the obstacles to the development of FMT as a common medical procedure. Though a growing body of literature testifies to its success, the transplant procedure remains unstandardized; thus, further clinical trials examining a standardized method are needed to establish a general protocol and verify its efficacy. These measures would promote its adoption among the medical community, improving health outcomes for C. difficile patients.
As FMT itself continues to be developed, researchers at the University of Calgary have begun preliminary work on an alternative delivery method. Instead of using an enema, a nasogastric tube, or another conventional means of transferring the stool sample, the researchers created pills containing donor bacteria. These pills proved similarly effective, curing 30 out of 31 patients with C. difficile infections; however, as a therapy still in development, their higher cost hinders mainstream adoption. Thus, the pills may instead be used to treat “niche” cases, including patients who cannot tolerate enemas or nasogastric tubes (9).
Though most FMT-related research focuses on C. difficile, FMT has the potential to treat other diseases as well. The most apparent of these additional applications are other bacterial infections of the gut. In addition, some researchers are beginning to look into whether FMT could be used on patients with autoimmune diseases such as Inflammatory Bowel Disease, in which the body’s immune system attacks elements of its own digestive system (5). Researchers at Harvard Medical School (HMS) believe that because modern humans live in overly hygienic environments and eat processed foods, their gut microbiota composition is different from that of their ancestors. This change, according to Dr. Dennis Kasper, William Ellery Channing Professor of Medicine and an HMS professor of Microbiology and Immunobiology, may explain the rising incidence of autoimmune disease, as modern humans now lack microorganisms that had been properly balancing our immune systems for millennia (10).
Recent research has also indicated that the microorganisms of the gut microbiota are metabolically significant. Intestinal microbes can generate short-chain fatty acids (SCFAs), which stimulate the secretion of peptide YY, a hormone that reduces appetite and is thought to play a key role in obesity; thus, an individual’s body weight reflects the metabolism of their microbes as well as their own. While fecal transplants are hardly a conventional anti-obesity therapy, a growing body of scientific evidence supports the procedure’s efficacy for treating disorders of energy balance (11). The influential metabolic role of the gut microbiota has also led some physicians to propose the procedure as a treatment for diseases such as diabetes (11). While further investigation is required, FMT may prove to have broad clinical relevance.
While its efficacy in treating C. difficile infections is increasingly well-documented, FMT is arguably in the infancy of its therapeutic development. Though procedural standardization and more clinical validation are required for widespread adoption, transplants are becoming better accepted among both the medical community and general public. With the potential to treat pathologies ranging from known bacterial infections to the politically significant obesity epidemic, FMT possesses far too many benefits to be ignored – despite its unsightly nature.
References
1. Van Nood, E. et al. Duodenal Infusion of Donor Feces for Recurrent Clostridium difficile. The New England Journal of Medicine 368, 407-415 (Jan, 2013).
2. “Fecal microbiota transplants effective treatment for C. difficile, inflammatory bowel disease, research finds.” American College of Gastroenterology (Dec, 2011).
3. Gallagher, J. “Most C. diff infections are ‘not hospital spread.’” BBC (Sep, 2013).
4. McKenna, M. Swapping Germs: Should Fecal Transplants Become Routine for Debilitating Diarrhea? Scientific American (2011).
5. De Vrieze, J. The Promise of Poop. Science 341, 954-957 (Aug, 2013).
6. Petrof, E. et al. “Stool Substitute Transplant Therapy for the Eradication of Clostridium difficile Infection: ‘RePOOPulating’ the Gut.” Microbiome 1, 3 (Jan, 2013).
7. Zipursky, J. et al. Patient Attitudes Toward the Use of Fecal Microbiota Transplantation in the Treatment of Recurrent Clostridium difficile Infection. Clinical Infectious Diseases 55, 1652-1658 (Dec, 2012).
8. Hurst, M. “Fecal Transplants: How to Do it Yourself Video.” (June, 2013; http://fecaltransplant.org/fecal-transplants-how-to-do-it-yourself-video/).
9. Zhang, S. Feces-Filled Pill Stops Gut Infection. Nature (Oct, 2013).
10. Karcz, S. “Cottage Industry.” (Oct, 2013; http://hms.harvard.edu/news/harvard-medicine/harvard-medicine/how-bugs-are-built/cottage-industry?utm_source=Silverpop Mailing&utm_medium=email&utm_campaign=10.03.daily%20(1)).
11. Nieuwdorp, M. Metabolic Function of Microbiota and their Produced Short Chain Fatty Acids: Animal and Human Data. International Scientific Association for Probiotics and Prebiotics (2013).
Facing the Fats: Should You Be Scared of Saturated Fat?
by Emily Groopman and Jen Guidera
Butter. Bacon. Heavy cream. While considered wholesome staples in 19th and early 20th century America, these foods are now seen as “greasy killers” rather than good nourishment (1, 2). The fear and even shame surrounding consumption of such items reflects a powerful nutritional paradigm. Associated with a plethora of unpalatable conditions, including obesity, cardiovascular disease (CVD), and Type II diabetes, saturated fat is the ultimate dietary taboo. Yet, despite favoring lard, butter, and beef tallow, populations such as the French are both leaner and less prone to CVD (3). The simultaneous rise in saturated fat phobia and obesity rates casts further doubt on the nutrient’s supposed dangers. Faced with such contradictions, consumers may begin to question mainstream beliefs. Though treated as a virtually fatal substance, is saturated fat truly so bad?
Fat 101: Saturation, Hydrogenation, and More
While well informed of the potential dangers of saturated fat, the average consumer likely lacks similar knowledge of its chemical structure or biological functions. Dietary fats are largely triacylglycerols (TAGs), which consist of three fatty acids attached to a glycerol backbone (4). TAGs can be classified by three biochemical properties: saturation, conformation, and chain length. Saturation refers to the state in which carbons are bonded to the maximum number of hydrogen atoms (i.e., are “saturated” with hydrogen) (5). Thus, saturated fats have no carbon-carbon double (C=C) bonds, while unsaturated fats have one (monounsaturated) or more (polyunsaturated). This results in differing molecular shape as well as packing: the double bonds create “kinks” in the fatty acid chains, increasing their surface area and preventing them from packing tightly together (5). Thus, at room temperature, unsaturated fats, derived from nuts, seeds, and fish, are fluid, liquid “oils,” whereas saturated fats, derived from animal products and coconut, are dense, solid “fats” (6).
Conformation is similarly important for TAG structure. Within unsaturated fatty acids, hydrogen atoms about the C=C bond can be on either the same (cis) or opposite (trans) sides. The latter conformation decreases the “kink” of the C=C bond, causing trans isomers to be structurally – and behaviorally – similar to saturated fatty acids (5). Finally, TAGs vary in chain length, the number of carbon atoms in the fatty acid backbone. Dietary fat largely consists of long-chain (>14 carbon) fatty acids (LCFAs); however, some saturated fats, such as milk fat and coconut oil, contain significant amounts of short-chain (<8 carbon; SCFAs) and medium-chain (8-14 carbon; MCFAs) fatty acids (4, 7). Shorter chain length increases solubility in the blood, enabling SCFAs and MCFAs to flow directly to the liver, where they are oxidized to generate energy (8, 9). As a result, they, unlike LCFAs, bypass adipose tissue – and therefore are not stored as body fat. MCFAs have also been shown to actively oppose body fat synthesis, by down-regulating the genes involved and stimulating fat oxidation (9). Thus, substances high in these shorter chain fatty acids likely promote weight loss and/or maintenance – regardless of their degree of saturation.
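To see how these properties sort real fats, here is a minimal sketch that classifies a fatty acid by chain length and saturation using the cutoffs given above (fewer than 8 carbons short, 8-14 medium, more than 14 long; zero C=C bonds saturated, one monounsaturated, more polyunsaturated). The example molecules and their carbon and double-bond counts are standard textbook values, included only for illustration.

# Classify a fatty acid by chain length and saturation, using the
# cutoffs described in the text.

def classify(carbons: int, double_bonds: int) -> str:
    if carbons < 8:
        length = "short-chain"
    elif carbons <= 14:
        length = "medium-chain"
    else:
        length = "long-chain"

    if double_bonds == 0:
        saturation = "saturated"
    elif double_bonds == 1:
        saturation = "monounsaturated"
    else:
        saturation = "polyunsaturated"

    return f"{length} {saturation}"

# Textbook examples: (carbon count, number of C=C double bonds)
examples = {
    "butyric acid (milk fat)":    (4, 0),
    "lauric acid (coconut oil)":  (12, 0),
    "palmitic acid (animal fat)": (16, 0),
    "oleic acid (olive oil)":     (18, 1),
    "linoleic acid (seed oils)":  (18, 2),
}

for name, (carbons, double_bonds) in examples.items():
    print(f"{name}: {classify(carbons, double_bonds)} fatty acid")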
But Is It Fattening? Assessing the Relationship Between Saturated Fat and Obesity Risk
It seems intuitive that fat ingested is stored as fat in the body. However, fat, including saturated fat, has many roles in the body beyond adding padding. Saturated fat regulates gene expression by acting on transcription factors, triggers the release of hormones coordinating immunity and metabolic function, and is a vital component of cell membranes (5). Still, we might wonder: even given these other functions, does fat lead to weight gain more than other macronutrients, like carbohydrates, do? We have good reason to ask; in today’s nutritional discourse, fat is singled out as having a special role in weight gain. Foods we typically think of as “fatty”–french fries, potato chips, M&Ms–have all been implicated in campaigns against obesity (10). The USDA, for example, states in its most recent Dietary Guidelines Consumer Brochure, “Make major sources of saturated fat–such as cakes, cookies, ice cream, pizza, cheese, sausages, and hot dogs–occasional choices, not everyday foods” (10). Notably, many of these foods, including french fries, potato chips, M&Ms, cakes, cookies, ice cream, and pizza, are also rich in highly processed carbohydrates.
The fact that many foods we normally consider high in saturated fat are often also high in refined carbohydrates poses a challenge to those looking to study the effect of saturated fat on weight gain. Further, foods high in saturated fat are often “calorie dense,” packing more energy per unit weight and per unit volume. You would have to eat far more broccoli by weight, for example, to take in the number of calories found in a piece of cheddar cheese. With all of these factors coming together in the same group of foods, it is challenging to separate the variables and draw conclusions about which are actually linked to weight gain.
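To make the energy-density point concrete, here is a rough back-of-the-envelope comparison. The calorie figures per 100 grams are approximate reference values assumed for illustration (they vary with variety and preparation) and are not taken from the studies cited in this article.

# Back-of-the-envelope energy-density comparison.
# The kcal-per-100-g values are approximate assumptions for illustration only.

KCAL_PER_100G = {
    "cheddar cheese": 400,   # approximate
    "raw broccoli": 35,      # approximate
}

target_kcal = KCAL_PER_100G["cheddar cheese"]   # calories in 100 g of cheddar
grams_broccoli = 100 * target_kcal / KCAL_PER_100G["raw broccoli"]

print(f"Matching the ~{target_kcal} kcal in 100 g of cheddar takes "
      f"about {grams_broccoli:.0f} g of raw broccoli "
      f"(~{grams_broccoli / 100:.0f} times the weight).")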
Nevertheless, several studies have made an effort to separate these variables. For example, a study examining juvenile obesity found that caloric intake, not fat intake, correlated most closely with weight gain (11). Another study approached the question from the weight-loss angle and also found that total caloric intake, not fat intake, correlated most closely with weight: subjects who were put on a low-fat diet lost the same amount of weight as subjects put on low-carbohydrate diets (12). Based on these findings, it seems that total caloric intake is more closely tied to weight than is fat intake.
Interestingly, some studies have found that certain fat-rich foods are associated with weight loss. For example, a New England Journal of Medicine study examining long-term changes in weight found that peanuts were correlated with weight loss (13). Other studies show that fat reduces food consumption after a meal (14). But are certain fats more satiating than others? A recent study compared the satiety effects of foods with different levels of saturated and unsaturated fats. It found that chocolate, a food high in saturated fat, had satiety effects similar to those of peanuts, which contain primarily unsaturated fat (15). These findings suggest that saturated and unsaturated fat may have similar effects on satiety.
LDL and HDL: Assessing the Impact of Saturated Fat on CVD Risk
Though important, body mass index is not the sole measure of health: while obesity may strongly increase chronic disease risk, it is the diseases themselves that kill. Thus, the effects of saturated fat on CVD risk are as important as – if not more important than – its impact on body weight. For most Americans, exposed to the prevalent nutritional messages of the past 60 years, the association seems obvious: as a rich source of cholesterol, saturated fat raises levels of serum cholesterol, which “clogs” vessels, blocking blood flow to the heart and thereby resulting in CVD (16). However, evidence for this “Fat-Heart Disease” hypothesis was hardly conclusive. Supporting reports, such as Ancel Keys’ famous “Seven Countries Study,” found a positive relationship between national saturated fat intake and per-capita CVD incidence, with countries with high intakes, like the United States, exhibiting higher rates than those with lower intakes, like Japan (17). Yet such national-level analysis provided no insight into individual risk, as those consuming the most saturated fat may not have been the same persons developing CVD. Such broad analysis also fails to account for a plethora of potential confounds, including differences in intake of other nutrients, activity level, and relative genetic risk (18).
Assume, as did the US Department of Agriculture and American Heart Association in the 1980s, that these investigations provide sufficient evidence. Nevertheless, the “Fat-Heart Disease” hypothesis remains unproven, for, despite the statements of such health authorities, total cholesterol is an unreliable marker of heart disease risk. Serum cholesterol is transported by two major carriers: low-density (LDL) and high-density (HDL) lipoproteins (18). While LDL accumulates in arterial walls, narrowing vessels and increasing CVD risk, HDL has the opposite effect, returning cholesterol from cells to the liver, where it can be metabolized and eliminated (5). Thus, the LDL:HDL ratio, rather than the absolute serum concentration of either, predicts CVD risk (19, 20). Recent randomized clinical trials (RCTs) – the “gold standard” of dietary research – report that saturated fat has a neutral effect on the LDL:HDL ratio, as it raises levels of both (21, 22). Further, saturated fat intake has even been negatively associated with CVD risk in certain groups: for instance, Mozaffarian, Rimm, and Herrington found that postmenopausal women consuming higher-saturated-fat diets displayed less arterial narrowing and cholesterol blockage (23). Strikingly, intake of “good” polyunsaturated fat significantly increased CVD risk (23). Though such findings require further validation by RCTs, they imply that saturated fat does not harm health and, in some cases, may even promote it.
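To illustrate why the ratio, rather than the total, is the more informative number, the toy comparison below contrasts two hypothetical lipid panels with identical totals but very different LDL:HDL ratios. The values are invented purely for illustration; they are not clinical data or diagnostic cutoffs.

# Two hypothetical lipid panels (mg/dL) with the same total cholesterol
# but very different LDL:HDL ratios. Values are invented for illustration
# only; they are not clinical data or diagnostic cutoffs.

patients = {
    "Patient A": {"LDL": 130, "HDL": 70},
    "Patient B": {"LDL": 170, "HDL": 30},
}

for name, panel in patients.items():
    total = panel["LDL"] + panel["HDL"]   # ignoring other lipoprotein fractions
    ratio = panel["LDL"] / panel["HDL"]
    print(f"{name}: total = {total} mg/dL, LDL:HDL = {ratio:.1f}")

# Both totals are 200 mg/dL, yet Patient B's LDL:HDL ratio is roughly three
# times higher -- the quantity the text argues actually tracks CVD risk.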
The Final Say on Saturated Fat
Contrary to what diet gurus and health magazines may claim, fat is more than fattening. Saturated fat has many key roles in the body, acting on transcription factors, regulating hormone release, and serving as a building block of cell membranes. And, although calorie-dense, saturated fat does not necessarily lead to weight gain; total caloric intake seems to be the better predictor. In fact, recent research suggests that some fat-rich foods may be satiating, promoting weight control.
Given the mixed and constantly fluctuating messages we receive from the nutritional mainstream, what should we do as consumers? While most dietary messages revolve around weight, it is important to remember that health should be the focus. Further, we should keep in mind that health is more than a number on the scale, and that, likewise, food is more than the sum of its macronutrients. Ultimately, when making decisions, it is often wise to be wary of absolutes. Using moderation as a guide, we should make choices that both align with our current lifestyle and support a healthy and happy future.
References
1. G. Taubes, “What if It’s All Been a Big Fat Lie?,” New York Times, July 7, 2002.
2. H. Levenstein, Revolution at the Table: The Transformation of the American Diet. (University of California Press, California, USA, 2003).
3. M. de Lorgeril et al., Mediterranean diet and the French paradox: two distinct biogeographic concepts for one consolidated scientific theory on the role of nutrition in coronary heart disease. Cardiovascular research 54, 503 (Jun, 2002).
4. H. Mu, C. E. Hoy, The digestion of dietary triacylglycerols. Prog Lipid Res 43, 105 (Mar, 2004).
5. B. A. Griffin, S. C. Cunnane, in Introduction to Human Nutrition, M. J. Gibney, S. A. Lanham-New, A. Cassidy, H. H. Vorster, Eds. (Wiley-Blackwell, United Kingdom, 2009), pp. 86-117.
6. K. Sato, Crystallization behaviour of fats and lipids – a review. Chem Eng Sci 56, 2255 (Apr, 2001).
7. S. Gallier et al., In vivo digestion of bovine milk fat globules: effect of processing and interfacial structural changes. I. Gastric digestion. Food Chem 141, 3273 (Dec 1, 2013).
8. M. P. St-Onge, Dietary fats, teas, dairy, and nuts: potential functional foods for weight control? The American journal of clinical nutrition 81, 7 (Jan, 2005).
9. B. Marten, M. Pfeuffer, J. Schrezenmeir, Medium-chain triglycerides. Int Dairy J 16, 1374 (Nov, 2006).
10. U.S. Department of Agriculture, Dietary Guidelines Consumer Brochure (2011).
11. L. J. Gillis, L. C. Kennedy, A. M. Gillis, O. Bar-Or, Relationship between juvenile obesity, dietary energy and fat intake and physical activity. International journal of obesity and related metabolic disorders : journal of the International Association for the Study of Obesity 26, 458 (Apr, 2002).
12. I. Shai et al., Weight loss with a low-carbohydrate, Mediterranean, or low-fat diet. The New England journal of medicine 359, 229 (Jul 17, 2008).
13. D. Mozaffarian, T. Hao, E. B. Rimm, W. C. Willett, F. B. Hu, Changes in diet and lifestyle and long-term weight gain in women and men. The New England journal of medicine 364, 2392 (Jun 23, 2011).
14. V. Van Wymelbeke, A. Himaya, J. Louis-Sylvestre, M. Fantino, Influence of medium-chain and long-chain triacylglycerols on the control of food intake in men. The American journal of clinical nutrition 68, 226 (Aug, 1998).
15. S. V. Kirkmeyer, R. D. Mattes, Effects of food attributes on hunger and food intake. International journal of obesity and related metabolic disorders : journal of the International Association for the Study of Obesity 24, 1167 (Sep, 2000).
16. R. M. Krauss et al., Dietary guidelines for healthy American adults – A statement for health professionals from the Nutrition Committee, American Heart Association. Circulation 94, 1795 (Oct 1, 1996).
17. A. Keys, Coronary heart disease in seven countries. Circulation 41, 186 (1970).
18. W. C. Willett, M. J. Stampfer, Rebuilding the food pyramid. Scientific American 288, 64 (Jan, 2003).
19. W. C. Willett, The dietary pyramid: does the foundation need repair? The American journal of clinical nutrition 68, 218 (Aug, 1998).
20. L. Berglund et al., Comparison of monounsaturated fat with carbohydrates as a replacement for saturated fat in subjects with a high metabolic risk profile: studies in the fasting and postprandial states. The American journal of clinical nutrition 86, 1611 (Dec, 2007).
21. R. P. Mensink, M. B. Katan, Effect of dietary fatty acids on serum lipids and lipoproteins. A meta-analysis of 27 trials. Arteriosclerosis and thrombosis : a journal of vascular biology / American Heart Association 12, 911 (Aug, 1992).
22. A. A. Rivellese et al., Effects of dietary saturated, monounsaturated and n-3 fatty acids on fasting lipoproteins, LDL size and post-prandial lipid metabolism in healthy subjects. Atherosclerosis 167, 149 (Mar, 2003).
23. D. Mozaffarian, E. B. Rimm, D. M. Herrington, Dietary fats, carbohydrate, and progression of coronary atherosclerosis in postmenopausal women. The American journal of clinical nutrition 80, 1175 (Nov, 2004).
How Leukemia Cells Remodel Bone Marrow
by Lauren Claus
Approximately 254,000 United States citizens currently have leukemia, a type of cancer in which abnormal blood cells, such as white blood cells, crowd out healthy cells and interfere with normal bone marrow function (1). Despite a range of treatments, including chemotherapy and bone marrow transplants, an estimated 7.1 out of every 100,000 adults die each year from this disease (2). However, a recent study published in Cell Stem Cell points to new areas of investigation for treatment. Written in part by Amy Wagers of Harvard’s Department of Stem Cell and Regenerative Biology, the research focused on the question of why the bone marrow of leukemia patients can produce only cancerous cells, and not healthy cells as well (3).
Although it was previously believed that the cancerous growth of leukemia simply crowded out and overwhelmed healthy cells, Wagers and her colleagues discovered that leukemia cells actually remodel bone marrow. By signaling maintenance cells in bone marrow to emit collagen and inflammatory proteins—which contribute to the buildup of scar tissue in the bone cavity—leukemia cells exploit bone marrow to suit their own needs. This altered microenvironment in the bone marrow is much more hospitable to the production of leukemic stem cells than that of hematopoietic stem cells, those that normally occur in the bone marrow and develop into mature blood cells.
This understanding of how cancerous growth impacts the bone marrow microenvironment has implications for novel leukemia treatments. Emmanuelle Passegué, one of the researchers of this study, has demonstrated that eradicating malignant bone marrow cells can cause the bone marrow environment to begin to return to its normal state; Passegué is now researching ways to regenerate normal bone marrow microenvironments in people suffering from leukemia. The results of this research could open up a wide variety of possibilities to the researchers still searching for cures and the patients who are waiting for them.
1–(2013). Statistics. Leukemia Research Foundation. Retrieved from http://www.allbloodcancers.org/statistics
2–Seer Stat Fact Sheets. National Cancer Institute. Retrieved from: http://seer.cancer.gov/statfacts/html/leuks.html#incidence-mortality
3–Schepers, et al. (2013). Myeloproliferative Neoplasia Remodels the Endosteal Bone Marrow Niche into a Self-Reinforcing Leukemic Niche. Cell Stem Cell, 13(3), 285-99.
4–(2013). Cross-Country Collaboration Leads to New Leukemia Model. Science Daily. Retrieved from http://www.sciencedaily.com/releases/2013/07/130731122948.htm
Extinct Today, Alive Tomorrow: The Science And Ethics of De-extinction
by Caitlin Andrews
As humans, we have a constant curiosity to know what life on an earlier Earth might have looked like. We use fossils, skeletons, and our own imaginings to reconstruct images of dinosaurs, woolly mammoths, and other prehistoric creatures. We visit museums to come “face-to-face” with these animals and pay tribute to species that were driven to extinction by our ancestors’ actions. All the while, we wonder what it would be like if these animals still roamed the planet.
Scientists estimate that 99% of the roughly 4 billion species to ever live on Earth have gone extinct (1). But, although we often hear extinction being associated with human action, most of these events have been a natural part of Earth’s evolutionary history. Since the dawn of life, extinction has represented the constant ebb and flow as species have fought and dominated or struggled and fizzled out. At five points in time, this process has been drastically accelerated by geological disasters or other events, leading to periods of “mass extinction” in which over 75% of Earth’s biodiversity has been lost. As deforestation, poaching, and climate change impact nearly every ecosystem, there is strong evidence to suggest that we are now in another of those periods—a sixth mass extinction (1). Instead of a more natural pace of one to five species per year, we are losing dozens of species every day. Until recently, these losses were assumed to be permanent (2). But, what if we could somehow reverse the clocks and bring these animals back to life? Would we want to do it? Should we do it? And what would our decision mean for our planet?
THE LAST BUCARDO: The Technology of De-Extinction
While de-extinction, or the process of reviving extinct species, might sound like something out of a science fiction movie, it has already been attempted—and nearly achieved. In the late 1980s, the fate of the Pyrenean ibex, a subspecies of European goat also known as the “bucardo,” looked grim. Only three females were left in the wild; attempts to hybridize them with a related subspecies failed, and, by 1999, two of the three had died. With one aging individual left, most would have considered the species doomed. But scientists had other ideas. Just months before the last bucardo died and the species was declared extinct, they collected a small skin sample from the remaining individual and preserved it for future use (3).
With this small sample, scientists ensured that the bucardo’s genetic code lived on even as the species did not. Like all cells in the body, skin cells contain an individual’s entire genome; by isolating the DNA-containing nucleus from one of these cells, scientists can obtain nearly all of the raw material needed to clone a new animal. In 2003, three years after the death of the last bucardo, scientists did just that. They inserted the nucleus into the egg of a domestic goat whose own nucleus had been removed, and implanted the resulting bucardo embryos into surrogate females, including Spanish ibex-goat hybrids. Of the seven surrogates that successfully carried a bucardo embryo, only one, a hybrid female, carried it to term. But, tragically, only ten minutes after birth, the infant female bucardo died from respiratory failure—a fairly common occurrence in cloned animals (3).
Would we consider this infant’s short life to be a true de-extinction event? Although most call it the greatest attempt at de-extinction ever made, there was likely little chance that the species could have been successfully revived even if the bucardo had survived. The Long Now Foundation, an organization determined to bring extinct species back from the dead, uses several criteria to identify viable de-extinction candidates. From how practical it would be to generate enough genetic variation to sustain a healthy population to how much the species’ original habitat has been changed since its extinction, these criteria highlight the main goal of de-extinction: to actually restore a wild population, and not just an individual or two (4). While the bucardo seemed a viable candidate based on these criteria, the technology may have been too new to successfully reintroduce the animals to the European mountainsides.
THE “WOOLLY ELEPHANT”: Alternate Revival Technologies and the Ethics of De-Extinction
The case of the bucardo showed that de-extinction could give recently-extinct species a second chance at life. But, what about species that have been extinct for longer periods of time? Fossils and museum specimens can hold viable DNA for hundreds of thousands of years, so, while dinosaurs are too far gone to be revived, bringing back dodo birds, woolly mammoths, and saber-toothed tigers is not out of the question. However, older specimens are unlikely to yield the pure, intact DNA samples that scientists need to carry out successful cloning. Instead, we would need to use other technologies in order to bring these species back (5).
The Long Now Foundation considers the woolly mammoth to be one of the prime candidates for de-extinction; not only does it meet the criteria for being “iconic” and “beloved,” but there are many well-preserved specimens available for DNA extraction (4). Unfortunately, because these specimens provide only a partially complete picture of the woolly mammoth’s genome, cloning is not a practical option. Instead, scientists have considered hybridizing woolly mammoth DNA with DNA from the mammoth’s closest living relative—the elephant. Alternatively, they could reverse-engineer elephants; through careful and tedious selective breeding and genetic modification, this process could eventually yield animals which, though technically derived from elephants, resemble and even act like woolly mammoths (6).
The woolly mammoth highlights the ethical debate at the heart of de-extinction—why bring back extinct species in the first place? The woolly mammoth was hunted to extinction 4,000 years ago, and the species that were left behind on the tundra have since adapted to the loss (7). However, proponents of de-extinction say that it is our duty to revive species that were driven to extinction by humans. Not only would having real, live woolly mammoths teach us more about their behavior and ecology, but who knows what other knowledge could be gained from these creatures? By learning about how they went extinct, we might be able to learn how to most effectively conserve their nearest relatives and other species. Perhaps most importantly, bringing back mammoths could help reverse the trend of biodiversity loss from which our planet is so greatly suffering. With so many species going extinct, why wouldn’t we bring one back if we could? (8)
While there are many who see the positives of de-extinction, there are those who think that the complications would outweigh any potential benefits. In the case of animals like the woolly mammoth, which have been extinct for thousands of years, there is no doubt that their habitat has changed dramatically since their extinction; in some cases, extinct animals might not even have a habitat to return to, as land development and deforestation have leveled countless ecosystems. Could scientists simply place woolly mammoths back on the tundra and expect them to survive, or would the ecosystem have to be reconstructed exactly as it was thousands of years ago in order to support the mammoth again? Considering the realities of global warming, these animals, if brought back, would have to keep up with the pace of climate change. Additionally, the factors that caused them to go extinct in the first place may be waiting for them when they return. Woolly mammoths were hunted to extinction in the first place, so we might not be able to guarantee that they wouldn’t go down that exact same road again (9).
In response to these criticisms, some proponents claim that bringing back extinct species could actually help reverse climate change. Woolly mammoths, as just one example, could help the tundra by eating dead grass and promoting new growth, as well as by stomping through the snow and letting cold air refreeze the soil (7). Despite these possible benefits, some people still question the motivations behind de-extinction, wondering if it is fair to bring back extinct animals who may have as much of a chance of going extinct again as they did originally. Additionally, selectively breeding mammoth-like traits into the elephant species doesn’t actually recreate woolly mammoths but creates a mere mammoth look-alike—a “woolly elephant” of sorts (9). So, we are left asking ourselves: does de-extinction even achieve its intended goal?
THE TASMANIAN DEVIL: Implications for Conservation
Given the possibility to make extinction a reversible phenomenon, one might think that conservationists would be among the greatest proponents of de-extinction. And, in some cases, they are. The same cloning techniques that are used in de-extinction could also be used to help species that are teetering on the brink of extinction, and there is perhaps no better example of cloning’s potential than the Tasmanian devil. These incredible Australian marsupials are severely endangered due to a deadly, transmissible cancer which has devastated the wild population. While veterinarians and zoologists have struggled to find a cure, cloning could prove invaluable in halting the cancer’s spread. Just as in de-extinction, Tasmanian devil cells could be cloned following genetic modification to remove the single gene that causes the deadly cancer; once these cancer-free individuals are introduced into the wild, they could spread the improved genes throughout the population until all individuals are immune. Considering how much trouble the Tasmanian devil cancer has caused scientists, cloning seems like a simple—and perhaps necessary—solution (8).
De-extinction could also bring extra funding to conservation efforts. Imagine how easily zoos—which are often great financial supporters of conservation—could draw in the public if they were the only places on Earth where you could see living passenger pigeons, giant sloths, and, perhaps someday, Tasmanian devils. This would not only provide zoos with a way of educating the public about conservation issues, but it would also bring in money that could then be used to help conserve other species (8).
But, in other cases, conservationists fear that cloning and de-extinction could spell disaster for some endangered species, even as it saves others. If all of our resources are spent bringing back extinct species—or even saving species that many would consider destined for extinction, like the Tasmanian devil—there are those who fear that currently-threatened species will be forgotten. And, if it were that easy to revive an extinct species, people might be less motivated to protect the planet and prevent extinctions in the first place (10). De-extinction may help increase biodiversity, but it will not prevent a sixth mass extinction unless we address the underlying problems that our planet faces (11).
THE MOA: De-Extinction and the Future of Science
In 1839, Richard Owen, a British professor, purchased a fragment of a femur bone that had been recovered from a river in New Zealand. Despite the seller’s claims that it was from an eagle, Owen had his doubts. As a professor of comparative anatomy and physiology, he knew bones, and he was convinced that this could not have been the bone of a winged animal. But, after tirelessly comparing the bone’s structure and composition with countless museum specimens, Owen had no choice but to admit his error. The femur was, indeed, from a bird, but it was a bird nearly beyond imagination—at least the size of an ostrich. Three years later, Owen stood with one hand grasping the original femur bone and the other resting on the back of a looming, towering figure: the completed skeleton of the bird (12, 13).
Owen would come to be known as one of Darwin’s harshest critics, but the study of the New Zealand moa was his pet project. Reconstructing the creature, bone by bone, he discovered that the moa, standing at an astounding two meters tall, was the only wingless bird to ever live. Unfortunately for Owen, the moa died off around the year 1400, after the Maori people arrived in New Zealand and began hunting the birds for food. So, the most Owen could have done was admire his carefully-reconstructed skeletons—and do his best to imagine what these amazing creatures might have been like (14).
Nearly two centuries after Owen’s work, the New Zealand moa is one of the top candidates for de-extinction, given the extensive collections of skeletons which are well-preserved and available for DNA extraction, largely thanks to his efforts. From a single bone to a completed skeleton, what Owen did in his time was a significant step in preserving these animals for future generations to enjoy, just as de-extinction could become a vital conservation tool in the near future (4). With our planet facing its gravest crisis yet, now is the time to decide whether de-extinction is something dangerous, extreme, and beyond the rights of human beings—or merely the next step in bringing extinct species back to life.
Sources
1: Barnosky, Anthony, et al. “Has the Earth’s Sixth Mass Extinction Already Arrived?” Nature 471 (2011): n. pag. Web.
2: “The Extinction Crisis.” Center for Biological Diversity. N.p., n.d. Web. 14 Oct. 2013.
3: Folch, J., et al. “First Birth of an Animal from an Extinct Subspecies (Capra Pyrenaica Pyrenaica) by Cloning.” Theriogenology 71.6 (2009): 1026-034. Web.
4: “Revive & Restore.” Long Now Foundation. N.p., n.d. Web. 14 Oct. 2013.
5: Zimmer, Carl. “Your De-Extinction Questions Answered.” National Geographic. N.p., 19 Mar. 2013. Web. 14 Oct. 2013.
6: Switek, Brian. “How to Resurrect Lost Species.” National Geographic. N.p., 10 Mar. 2013. Web. 14 Oct. 2013.
7: Church, George. “De-Extinction Is a Good Idea.” Scientific American. N.p., 26 Aug. 2013. Web. 14 Oct. 2013.
8: Brand, Stewart. “Opinion: The Case for Reviving Extinct Species.” National Geographic. N.p., 11 Mar. 2013. Web. 14 Oct. 2013.
9: Switek, Brian. “Reinventing the Mammoth.” National Geographic. N.p., 19 Mar. 2013. Web. 14 Oct. 2013.
10: Pimm, Stuart. “Opinion: The Case Against Species Revival.” National Geographic. N.p., 12 Mar. 2013. Web. 14 Oct. 2013.
11: Switek, Brian. “The Promise and Pitfalls of Resurrection Ecology.” National Geographic. N.p., 12 Mar. 2013. Web. 14 Oct. 2013.
12: Dawson, Gowan. “On Richard Owen’s Discovery, in 1839, of the Extinct New Zealand Moa from Just a Single Bone.” BRANCH: Britain, Representation and Nineteenth-Century History (n.d.): n. pag. Web. 14 Oct. 2013.
13: Campbell, Hamish. “Fossils – Bird Fossils.” The Encyclopedia of New Zealand. N.p., n.d. Web. 14 Oct. 2013.
14: Roach, John. “Extinct Giant Bird Doomed by Slow Growth, Study Says.” National Geographic. N.p., 15 June 2005. Web. 14 Oct. 2013.