The Simple Science of a Grandiose Mind

by Kristina Madjoska

If you were asked what Adolf Hitler and the modern selfie-taker have in common, how would you respond? Certainly, only one of them is responsible for an aggressive regime that claimed the lives of millions. Yet while they seem worlds apart, these two personalities do share something: an obsession with themselves. The term ‘narcissist’ has been loosely used and overused in pop culture to describe those who care only about themselves, who are overly and overtly concerned with their own appearance, wealth and social status. Especially in individualistic Western cultures, being a narcissist is frequently treated as a lifestyle choice, one adorned with vibrant images of assertiveness, self-confidence and success. Perhaps this social tendency has, in the eyes of many people, framed narcissism more as a cultural phenomenon than as a pathology. Still, the psychiatric community today recognizes narcissistic personality disorder, a psychopathological state that manifests itself in many ways beyond the obvious display of grandiose self-love. In response, researchers are becoming increasingly interested in mapping the neurobiological and genetic sources of this disorder. Although empirical evidence on narcissistic personality disorder is still rather scarce, some significant findings are beginning to help us grasp the science behind this curious illness.

FROM MYTH TO MEDICAL FACT

The history of the diagnosis of narcissistic personality disorder (NPD) reaches back to ancient Greece. The ancient Greeks used the term hubris to describe excessive haughtiness, arrogance and pride. Interestingly, the concept of hubris was vital to many narratives of Greek mythology, usually to capture the misfortune that befalls the vain hero. Many centuries later, the famous psychoanalyst Freud described narcissism as a magnified and extreme display of otherwise normal feelings of “self-love” and “libidinal self-nourishment”.1 Subsequent psychoanalysts built on Freud’s ideas about narcissism and the ego and recognized the degree of abnormality and the nature of the symptoms that characterize pathological narcissists. In 1980, the third edition of the Diagnostic and Statistical Manual of Mental Disorders established narcissism as a distinct type of personality disorder and outlined criteria for its diagnosis. These criteria have since been revised as part of the newly published DSM-V.2 Accordingly, the disorder is placed in the Cluster B category, a group of personality disorders generally identified by problems with emotional regulation, lack of impulse control and decreased ability for social bonding. In fact, all four personality disorders that belong to this group—borderline, histrionic, antisocial and narcissistic—are moderately to highly comorbid;3 that is, they tend to occur and be diagnosed simultaneously.

BECOMING A NARCISSIST

There are certain symptoms of NPD that distinguish it from the other Cluster B subtypes. A diagnosed narcissist would pervasively feel grandiose, unique and chosen. Time and again, he would fantasize about unlimited success, power, beauty and influence. In pursuit of those goals, a narcissist would exploit and manipulate others without regard for their well-being. Yet a narcissist is far from dismissive of others when it comes to appraising his own self-worth—it is absolutely essential to him that others enthusiastically affirm his highly idealized self-image.

In those displeasing moments when a narcissist does not get the affirmation he needs from others, he is bound to experience a severe drop in self-esteem. Even though the character and intensity of symptoms vary among diagnosed patients, there is general consensus as to what qualities the personality disorder entails.

In discussing the possible biological bases for these symptoms, it is important to note that the disorder is not due to the presence of a foreign substance in the body. Rather, the biological basis of its symptoms lies in imbalances of neurochemistry and brain anatomy that manifest as extreme versions of thoughts and behaviors which are otherwise normal among the general population. Because there is no clear-cut line separating NPD from non-NPD, psychologists and psychiatrists have recently tried to portray it as a spectrum or range rather than a strictly delineated category. Researchers have also begun to produce evidence for the neurobiological foundations of the disorder, particularly in the realm of social interaction, and there is now significant insight into the diminished brain structures responsible for some of the symptoms that NPD patients experience.

SOME MEN DO FEEL LIKE ISLANDS

In a normal human brain, special neural circuits are responsible for empathy, the ability to understand and share the feelings of another.4 Two parallel brain systems underlie two different kinds of empathy. The first type involves feeling others’ feelings as if they were one’s own. It arises from the simulation system, located in the insular cortex of the brain.5 According to research,5 the brains of people with a normally functioning insular cortex fire the same neural circuits both when they themselves feel pain and when someone close to them feels it. A healthy insular cortical region enables people to respond effectively to other people’s emotional experiences. The second type of empathy is often referred to as cognitive empathy, or ‘theory of mind’—the abstract understanding that other people have their own feelings and thoughts and experience the world differently.5 Several other cortical regions of the brain are responsible for this type of empathy.

In one study conducted at Humboldt University of Berlin,6 patients diagnosed with NPD were assessed through self-report and experimental methods for their ability to experience empathy. The findings supported the theory that narcissists exhibit little to no responsiveness to other people’s feelings, which is to say that they have a dysfunctional emotional simulation system. However, they do show a great capacity to cognitively recognize other people’s emotions, which may be why narcissists are so successful at exploiting and manipulating those around them for personal gain. In another study by the same researchers,6 the same patients underwent brain imaging. The results showed that, in comparison to non-NPD participants, those diagnosed with the disorder had a statistically significant decrease in gray-matter volume in the left anterior insula, the part of the brain related to the simulation system. Additional results from the same study showed reduced gray-matter volume in two other significant regions: the cingulate-insular cortex system and parts of the prefrontal cortex. The former is related to decision making in a social context (that is, thinking about how one’s decisions affect others) as well as to the pain simulation system. The latter is related to the capacity for self-reflection on emotional experience, which correlates positively with the ability to empathize with others. Lacking empathy, a vital capacity for social bonding, a narcissist tends to form shallow and unfulfilling relationships. Ultimately, a narcissist who cannot bond with those around him forms an instrumental relationship with them: other people become tools on the path to glory, and their compliments and praise sustain his grandiose self-image.

BEHIND THE MASK OF CONFIDENCE

Although on the surface a narcissist seems invincibly confident, feelings of deep shame and low self-esteem in response to social disapproval are at the core of NPD. One study compared the shame-proneness of NPD patients with that of healthy participants using a standardized procedure measuring the strength of reactions to shame-provoking stimuli.7 Compared with the healthy controls, the NPD patients were significantly more shame-prone and reacted more strongly to the shame-provoking stimuli. In settings where shame is invoked, narcissists tend to turn intensely angry, likely because they cannot reconcile the highly idealized sense of self they try to maintain with the diminished sense of self that follows negative social feedback. Narcissistic anger has been correlated with diminished levels of the neurotransmitter serotonin.8 A neurotransmitter is a molecule that transmits messages between neuronal cells. In particular, serotonin has been linked to the capacity for emotional regulation, which NPD patients lack. Because they lack this ability, narcissists tend to react more intensely and impulsively when they perceive a person or an event as a threat.8 Thus, instead of inviting sympathy, an ashamed narcissist further alienates others through seemingly self-preserving aggression.

THE CONFUSING GAME GENES AND ENVIRONMENT PLAY

In light of these experimental findings, what seems to befuddle psychiatrists is the relationship between genetics and environment in the development of a narcissistic personality. A research institute in Norway has recently generated three important findings that help expand our understanding of genes and their relation to personality disorders.9 These studies examined differences in the expression of symptoms between monozygotic and dizygotic twins. One study found that narcissistic personality disorder has a heritability of 24%, a moderate genetic contribution. A more detailed study looked at specific personality traits characteristic of different categories of personality disorders; it showed that affective lability (or emotional dysregulation) had an overall heritability of 45%, whereas narcissism had a heritability of 53%, suggesting that these traits are in fact substantially heritable. Perhaps most importantly, the third study has brought to light the complexity of gene-environment interaction by incorporating epigenetics, the study of modifications in gene expression rather than changes in the genetic sequence itself. What epigenetics has highlighted in the study of personality disorders is that inheritance is non-linear; that is, it is not solely a specific gene or the environment that determines a personality, but rather the environment can fundamentally change how a gene is expressed. To illustrate, the novelty-seeking characteristic linked to narcissistic personality disorder is correlated with the expression of a specific gene only if the child carrying that gene was also raised in a hostile environment in which the parents were emotionally uninvolved with the child or punished the child for expressing emotions. Such a developmental environment can leave a child ill-equipped to regulate his or her own emotional experiences, respond to those of others, and pursue a stable sense of self. Although the field of epigenetics is still quite new, it may be able to capture precisely the intricate connection between genes and the environment in the development of a narcissistic personality.
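
The heritability figures quoted above come from comparisons of identical (monozygotic) and fraternal (dizygotic) twins. A standard back-of-the-envelope estimator for such studies is Falconer's formula; the short sketch below is purely illustrative, and the correlation values in it are hypothetical rather than taken from the Norwegian data.

```python
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Estimate heritability from twin correlations using Falconer's
    formula: h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are the trait
    correlations for monozygotic and dizygotic twin pairs."""
    return 2 * (r_mz - r_dz)

# Hypothetical correlations chosen only so the output matches the ~53%
# heritability reported for narcissism; they are not the study's data.
print(falconer_heritability(r_mz=0.60, r_dz=0.335))  # 0.53
```

Modern twin studies fit more sophisticated biometric models, but the underlying logic is the same: the more strongly identical twins resemble each other relative to fraternal twins, the larger the estimated genetic contribution.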

HOW AND WHY DO WE THINK ABOUT NPD?

Now that we have glanced at the biology and genetics of a narcissist, it is important to think about the social consequences of the disorder. In reality, people tend to have little empathy for a narcissist, precisely because narcissists themselves display arrogant and emotionally indifferent behavior. However, this isolation renders them even less capable of dealing with their pervasive and intense emotions in a social environment, which can lead to destructive and antisocial behavior. And although it may seem as if the greatest threat a narcissist poses to our society is a figure like Kanye West, a growing body of evidence suggests that today’s extremist religious leaders and lone terrorist attackers could also suffer from the maladaptive sense of grandiosity and lack of empathy characteristic of a narcissistic personality. By all means, understanding this illness can help us better respond to the needs of both sufferers and those close to them, as well as tackle the threats it may pose to society at large.

Kristina Madjoska ‘19 is a freshman in Hollis Hall.

Works Cited

  1. Freud, S. On Narcissism: An Introduction. Read Books Ltd. 2013.
  2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. Fifth Edition. APA, 2013.
  3. “Comorbidity.” TheFreeDictionary.com. Web. 19 Oct. 2015.
  4. “Empathy.” TheFreeDictionary.com.
  5. Blaney, Paul H., Robert F. Krueger, and Theodore Millon. Oxford Textbook of Psychopathology. Third Edition. Oxford UP: UK, 2014. 791-805.
  6. Ritter et al. The Narcissistic Personality Disorder: Empirical Studies. Humboldt U: Berlin, 2013.
  7. Ritter et al. Shame in Patients with Narcissistic Personality Disorder. Psychiatry Research 215.2, 2014, 429–437.
  8. Adshead et al. Personality Disorders as Disorders of Attachment and Affect Regulation. Advances in Psychiatric Treatment. 12 (2006): 297-305.
  9. Nurnberger Jr., J.I. et al. The Genetics of Personality Disorders. Principles of Psychiatric Genetics. Cambridge UP: UK, 2012, 316-323.


Invading the Human Heart

by Hanson Tam

Pathogenic viruses and bacteria routinely invade the human body. But so do curative treatments ranging from drugs to surgery. In a society in which invasion connotes violence and injury, many people avoid acknowledging the intrusive nature of medicine. Awareness is important; it encourages the development of less invasive yet equally effective procedures. Such awareness has characterized cardiac surgery’s rapid advances in treating coronary artery disease (CAD), the leading cause of death worldwide.1 As cardiologists and cardiac surgeons explore new options, they seek the right balance between invasiveness and effectiveness.

CAD is the narrowing of the coronary arteries most often due to cholesterol-rich plaque accumulating along artery walls. The narrowing limits the amount of blood and nutrients the heart receives. A heart attack occurs when a coronary artery is completely blocked and the dependent heart muscle dies. While administering drugs such as nitroglycerin (which dilates blood vessels) and beta-blockers (which reduce blood pressure) constitutes the least intrusive treatment, drugs are not sufficient in many cases of CAD.2

Percutaneous coronary intervention (PCI), which includes stenting, represents the next level of invasiveness. It was first performed in 1977.3 Not technically surgery, PCI involves threading a long, thin catheter through blood vessels to reach the afflicted, plaque-laden location. The cardiologist inflates a balloon to widen the artery and then reinforces the artery with a tubular stent. PCI remains an appealing, minimally invasive solution when only one or two blood vessels need attention. Beyond that, however, it is not nearly as effective as the open-heart procedure developed over a decade earlier.4

The year 1962 marked the invention of coronary artery bypass graft (CABG) surgery, still the gold standard for CAD treatment today.1 In this most invasive procedure, the surgeon harvests a nonessential artery or vein from the patient and joins one end of that vessel to the aorta and the other end to the afflicted coronary artery just past the blockage. CABG requires the doctor to saw open the sternum, temporarily stop the heart, and pump and oxygenate blood through an external machine. This process is risky and traumatic to the body; however, for severe cases of the disease, graft surgery leads to higher long-term survival rates than does regular stenting.4

Improvements in CAD treatment have tended to be variations on PCI and CABG. To address the reclogging of stented arteries in up to 30% of PCI cases, scientists created a variety of drug-eluting stents (DES), which release drugs that discourage cells from growing over the stent.5 Yet DES often still require reintervention. On the other hand, advances in surgical technique have yielded less invasive forms of CABG that can be performed on beating hearts, require only small incisions, and can be executed by robotic arms. Such techniques are significantly less invasive and promote faster recovery than traditional CABG while retaining the safety and efficacy of the approach.1

At the current frontier of CAD surgery lies hybrid coronary revascularization (HCR). Although this procedure was first performed in 1996, its use has increased only recently. HCR is novel in that both PCI and CABG are done as a single treatment. The rationale is to combine the benefits from both procedures, namely CABG’s superiority when treating the main coronary artery and PCI’s effectiveness at treating smaller arteries that branch off from the main artery.1 HCR has preliminarily been shown to be at least as safe and effective as conventional CABG—perhaps even leading to a faster recovery. To make conclusive evaluations, however, more studies comparing HCR with existing procedures are required.1,6,7

The surgical treatment of CAD is especially exciting because it is in a stage of refinement. Physicians are striving to retain the benefits of invasive bypass surgery while minimizing the side effects of large incisions and cardiac arrest. Physical invasion of the body remains necessary for the foreseeable future. However, as researchers weigh the effectiveness and invasiveness of different treatment options, heart doctors will be able to personalize care based on the unique aspects of each individual.

Hanson Tam ’19 is a freshman in Matthews Hall.

Works Cited

  1. Ejiofor, J. et al. Prog. Cardiovasc. Dis. [Online] 2015, doi:10.1016/j.pcad.2015.08.012 (accessed Oct. 4, 2015).
  2. Parmet, S. et al. JAMA 2004, 292, 2540.
  3. Hessel, E. In Cardiac Anesthesia: Principles and Practice, 2nd ed.; Estafanous, F. et al., Eds.; Lippincott Williams & Wilkins: Philadelphia, PA, 2001, 3-36.
  4. Rosengart, T. et al. In Surgery: Basic Science and Clinical Evidence, 2nd ed.; Norton, J. et al., Eds.; Springer: New York, 2008, 1627-1635.
  5. Rinfret, S. et al. J. Am. Coll. Cardiol. 2015, 65, 2508-2510.
  6. Harskamp, R. et al. J. Am. Coll. Surgeons 2015, 221, 326-334.
  7. Zhu, P. et al. J. Cardiothorac Surg. [Online] 2015, 10, doi:10.1186/s13019-015-0262-5 (accessed Oct. 4, 2015).


You vs. Your Grocery

by Jeongmin Lee

NO CHOLESTEROL! ZERO TRANS FAT! ALL NATURAL!

Hundreds of labels bombard consumers in the grocery store, vying for their wallets and claiming to offer health benefits. A quick glance reveals recurring slogans, many of which use terminology unfamiliar to the general public. A “Gluten-free!” sign may suggest that the food is healthy, but how exactly does a gluten-free product help you? Hopefully, these short explanations can help you defend yourself against the myriad of advertising labels.

EITHER OR-GANIC

“100% organic” and “all natural” labels flood the grocery store to the point where buying anything non-organic feels like a guilty act. “Organic” usually refers to environmentally friendly methods of growing food, such as reducing water use, land use, and pollution emissions. Smaller companies have slightly more liberty in using the “organic” banner, but for most products certified by the U.S. Department of Agriculture (USDA), an organic label implies the use of no preservatives, no artificial flavoring, and no artificial coloring.1 Although “100% organic” is true to its name, the labels “organic” and “made with organic ingredients” indicate that only 95% and 70% of the product, respectively, is organic.2 “All natural” is an even looser term, as it only deals with how the food was processed; there are almost no formal requirements for using it. Overall, buying organic means supporting eco-friendly practices, not procuring health benefits.

CHO-LESS-TEROL

First of all, every cell in your body contains and uses cholesterol, a waxy substance that animals produce and use to make hormones. A possible reason for cholesterol’s negative reputation is that humans already produce all the cholesterol they need; additional intake is a surplus, which is known to lead to clogged blood vessels. But not all cholesterol leads to high blood pressure and strokes. A more detailed classification of cholesterol involves lipoproteins, globules in the bloodstream that carry fats and come in two variations: low-density lipoproteins (LDL) and high-density lipoproteins (HDL).3 These two types are respectively deemed “bad” and “good” cholesterol by the media, because LDL cholesterol can get caught in arteries, leading to a buildup that restricts blood flow, while HDL cholesterol effectively delivers unused cholesterol to be properly disposed of by the liver.4

So how are these products misleading? One example is carrot packages with large “no cholesterol” signs. It is true that carrots contain no cholesterol, but only animals produce cholesterol in the first place; unless those carrots are made out of meat, any plant product can be assumed to have no cholesterol. Additionally, HDL cholesterol actually aids the body. In a diet, levels of LDL and HDL cholesterol depend on the type of fat consumed: trans fat and saturated fat raise LDL cholesterol, while unsaturated and polyunsaturated fats help HDL cholesterol function more effectively.

ANTIOXIDO’S OR ANTIOXIDON’TS?

Among the various claims that “superfoods” flaunt, one usually touts an abundance of antioxidants. Our bodies naturally produce a harmful set of byproducts called reactive oxygen species (ROS); however, the body detoxifies ROS through antioxidant enzymes. ROS severely damage proteins and cell membranes, and without enough antioxidants, the concentration of ROS would be high enough to cause health problems such as heart disease.5 There are many types of antioxidants because once an antioxidant detoxifies an ROS, another harmful byproduct, a free radical, is formed, which another kind of antioxidant molecule must then neutralize.6 Thus, a single type of antioxidant does not help as much as a variety of them working in tandem, and that variety can be achieved with a regular healthy diet.7

Antioxidants are certainly worth displaying commercially, and buying food with antioxidants will benefit your health. Just remember not to rely on blueberries alone to supply all the antioxidants you need. Rather, a nutritious variety of fruits and vegetables is best, even if most are not considered “superfoods.”8 Even proclaimed “superfoods” require continuous consumption to maintain their benefits, because no matter how nutrient-dense they are, researchers believe their effects are short-lived.

VITAMIN, SEE?

Multivitamins and dietary supplements: what could be easier than taking a pill with all the nutrients you need? The nutrition labels proudly state that the daily value of various vitamins and minerals is met or even exceeded in one serving. While vitamins all have different functions, most have to do with boosting your immune system.9 On a multivitamin label, you can usually see a nearly complete set of vitamins, but not all vitamins are absorbed by the body in the same way. Specifically, vitamins A, D, E, and K are fat-soluble, which means they need fat or oil for the body to absorb them. Without being consumed alongside some fat, these vitamins can simply pass through the body without delivering a significant health benefit, even though fat-soluble vitamins can otherwise be stored in the body’s tissues. The other vitamins, which are water-soluble, are utilized easily by the body, but any surplus of water-soluble vitamins is excreted.10 This means that the portion of a supplement’s 100% daily value that cannot be processed in one take will not be as beneficial as taking that amount gradually over the course of the day. Interestingly, no studies have proven that vitamin supplements, especially those for vitamins A, C, and E, are effective at lowering cholesterol or reducing blood pressure.11

Taking multivitamins may not be as beneficial as it appears. Most of the vitamins in a single tablet cannot be retained by your body. While taking supplements can certainly help, especially when your usual diet does not reach recommended daily values, a healthy variety of food in your meals will do the most to ensure your health.

AS A MATTER OF FAT

Trans, saturated, unsaturated, and polyunsaturated: these categories of fats can be found on nutrition fact labels. Saturated fat refers to natural but harmful fat that can increase your risk of heart disease. Trans fat is even more detrimental to your body, as it is usually made artificially with partially hydrogenated oils. Unsaturated and polyunsaturated fats have slightly different molecular structures from saturated fat, with small bends that make them effective at lowering heart disease risk.3

Popular media are becoming better at distinguishing between these types, but note a few caveats. For example, avoiding saturated fat entirely is very difficult, especially with meat, because most red meat naturally contains saturated fat. Be wary of your saturated fat intake, and when you see unsaturated or polyunsaturated fats, remember that they actually help you. Also, always read thoroughly through the nutrition facts of products that advertise low saturated fat or high unsaturated fat, just to make sure the labels are not concealing an unhealthier aspect of the product.

GUT INSTINCT

Probiotics, by definition, increase the population of certain beneficial microorganisms in order to suppress others. These bacteria produce acids that lower the pH of your digestive system, eliminating many harmful microorganisms and pathogens. In this way, probiotics can help decrease the risk of diseases caused by pathogens, and perhaps even the risk of cancer.5

When a product claims to contain probiotic bacteria, the stated benefits are usually true. In fact, foods such as yogurt have been made since ancient times and have long been known to have health benefits. As with the previous advice, even when you find a product with probiotics, check the nutrition facts; in some cases, a high amount of sugar or sodium might cause other health issues.

GLUTEN FEES

Gluten is a protein found in several types of grain, and gluten-free products are made for people who cannot tolerate gluten. Those who react severely to it have celiac disease, in which gluten blocks the body from absorbing nutrients properly, and some people without the condition still exhibit minor symptoms from gluten. Unfortunately, gluten is present in a wide variety of products, as it can be found in “frozen vegetables in sauces, soy sauce, some foods made with ‘natural flavorings,’ vitamin and mineral supplements, some medications, and even toothpaste.”12 In recent years, even more people have bought gluten-free food as part of a diet plan.

A gluten-free diet does exist, but it is primarily meant for people with celiac disease, who have no choice but to follow it. For those who voluntarily go gluten-free, no studies have demonstrated health benefits from avoiding gluten. As a matter of fact, most gluten-free products end up being more expensive. As with many popular diets, a voluntary gluten-free diet does not by itself give you a healthier body; rather, it may give you an emptier wallet.

SHORT AND SWEET

Of course, sugar is essential to your diet; however, too much sugar is not a good thing, and you can easily exceed the recommended amount. Even nutritious foods such as fruits contain a significant amount of sugar, specifically fructose, the same sugar used in most processed foods under the name “high fructose corn syrup.” According to the Childhood Obesity Research Center at the University of Southern California, there is a “growing body of evidence suggesting fructose is a riskier substance than glucose.”13 In light of these studies, apple juice can end up being as unhealthy as soda, at least with respect to its fructose content. The problem with reading nutrition labels is that the grams of sugar listed do not distinguish between types of sugar, so aim to lower your overall sugar intake.

SODIUM A-SALT

Like sugar, sodium is part of a nutritious diet but should be limited. High amounts of sodium can lead to “high blood pressure, heart attack, stroke [and] can also lead to heart failure.”14 When a product claims to have a low amount of sodium, the claim usually has no tricks behind it. To find food with low sodium content, always check the labels, and if you are buying produce or meat, a fresh variety will most likely have a lower sodium concentration.15


A balanced meal is the safest way to maintain a healthy diet, and all grocery advertisers try to appear to be part of that nutritious routine. The consumer, on the other hand, must wade through unfamiliar vocabulary and the traps set by corporate advertising teams. Even knowing these definitions, you must remember to be attentive. For example, “zero fat” may refer only to trans fat, and “all natural” may not be “natural” at all. In fact, many healthy-sounding labels might be concealing a high concentration of sugar or saturated fat revealed only by a thin line in the nutrition facts. Certain terms such as “all natural” or “vitamins” are popular and have an appealing connotation, but do not be deceived: some of these claims might not be as beneficial as they sound. Food companies use as many marketing strategies as they can to attract consumers with extraneous labels. As a consumer, you need to navigate the aisles of advertisements. Keep your eyes open and be smart in your choices, because the right steps can lead you to a healthy lifestyle.

Jeongmin Lee ‘19 is a freshman in Hollis Hall.

Works Cited

  1. The Cornucopia Institute. [Online] http://www.cornucopia.org/natural-versus-organic/ (accessed Sep. 28, 2015).
  2. Mayo Clinic Staff. Mayo Clinic. [Online] Jun. 09, 2014, http://www.mayoclinic.org/healthy-lifestyle/nutrition-and-healthy-eating/in-depth/organic-food/art-20043880 (accessed Sep. 28, 2015).
  3. Harvard T.H. Chan. [Online] http://www.hsph.harvard.edu/nutritionsource/what-should-you-eat/fats-and-cholesterol/ (accessed Oct. 3, 2015).
  4. National Heart, Lung, and Blood Institute. [Online] Sep. 19, 2012 http://www.nhlbi.nih.gov/health/health-topics/topics/hbc (accessed Sep. 28, 2015).
  5. Hunter, Beatrice Trum. “Foods as Medicines. (Food for Thought). (Brief Article).” Consumers’ Research Magazine 2002, 8.
  6. Zampelas, Antonis, and Micha, Eirini. Antioxidants in Health and Disease. 2015; pp. 28-36.
  7. European Food Information Council. [Online] Nov. 2012, http://www.eufic.org/article/en/artid/The-science-behind-superfoods/ (accessed Oct. 3, 2015).
  8. The George Mateljan Foundation. [Online] http://whfoods.org/genpage.php?tname=george&dbid=143 (accessed Oct. 3, 2015).
  9. Harvard Health Publications. [Online] http://www.health.harvard.edu/staying-healthy/how-to-boost-yourimmune-system (accessed Sep. 28, 2015).
  10. Medline Plus. [Online] Feb. 18, 2013, https://www.nlm.nih.gov/medlineplus/ency/article/002399.htm (accessed Oct. 3, 2015).
  11. American Heart Association. [Online] Jun. 12, 2015, http://www.heart.org/HEARTORG/Conditions/Vitamin-Supplements-Healthy-or-Hoax_UCM_432104_Article.jsp (accessed Oct. 3, 2015).
  12. Strawbridge, Holly. Harvard Health Publications. [Online] Feb. 20, 2015, http://www.health.harvard.edu/blog/going-gluten-free-justbecause-heres-what-you-need-toknow-201302205916 (accessed Oct. 3, 2015).
  13. Barclay, Eliza. NPR. [Online] Jun. 9, 2014, http://www.npr.org/sections/thesalt/2014/06/09/319230865/fruitjuice-vs-soda-both-beverages-packin-sugar-and-health-risk (accessed Oct. 3, 2015).
  14. Harvard T.H. Chan. [Online] http://www.hsph.harvard.edu/nutritionsource/salt-and-sodium/sodiumhealth-risks-and-disease/ (accessed Oct. 3, 2015).
  15. Healthfinder. [Online] http://healthfinder.gov/HealthTopics/Category/health-conditions-and-diseases/heart-health/low-sodium-foodsshopping-list (accessed Oct. 3, 2015).


Bio-Inspired Slippery Surface Technology Repels Fouling Agents

by Serena Blacklow

A start-up launched late in 2014 from our own Wyss Institute for Biologically Inspired Engineering is working to commercialize ‘SLIPS’ technology. SLIPS Technologies’ mission is to customize super-repellent surfaces for whatever application is in demand.

Slippery lubricant-infused porous surfaces (SLIPS) can be formulated to repel water, bacteria, and oil, among other “fouling agents”. They can prevent biofilm formation, reduce ice accumulation, and increase flow rates through tubing.1 Thus SLIPS can be applied to medical devices, such as catheters, outdoor heating installations subject to cold weather environments, and pipelines that transport fluids such as mud, cement, and oil. SLIPS Technologies works on translating a laboratory engineering development into a method for commercial use.

First inspired by the lotus leaf, Harvard Professor Joanna Aizenberg and her team sought to emulate the leaf’s superhydrophobic (water-repellent) surface to create a novel liquid- and ice-repellent surface. The idea behind these surfaces is to maintain the largest possible contact angle between a droplet and the surface it touches: the better the droplet keeps its shape after it hits the surface, the more easily it will slide off.
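
For an ideally smooth surface, the contact angle described here is set by Young's equation, a standard relation in surface chemistry (not spelled out in the article) that balances the three interfacial tensions where the droplet's edge meets the surface:

\[
\cos\theta = \frac{\gamma_{SV} - \gamma_{SL}}{\gamma_{LV}}
\]

Here γ_SV, γ_SL, and γ_LV are the solid-vapor, solid-liquid, and liquid-vapor interfacial tensions. Superhydrophobic designs such as the lotus leaf add micro- and nano-texture on top of this intrinsic chemistry to push the effective contact angle well above 150°, so droplets bead up and roll off.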

The first surfaces created in the lab were micro- and nano-structured, so they relied on these small structures to prevent water droplets from breaking their shape and spreading over the surface upon contact. In high humidity conditions, however, water and ice crystals would accumulate in the air pockets between the micro- and nano-structures, and the surface would lose its superhydrophobic ability.

Currently, fabrication of SLIPS uses an additional liquid lubricant that is immiscible with (does not mix with) the fluid it repels. This addition to the original structured surfaces eliminates the problems associated with high humidity: as long as lubricant remains infused in the surface, there are no air pockets in which water or ice can accumulate. More specifically, there are two main approaches to manufacturing SLIPS:

  1. Lubricant-coated nanostructured surface
  2. Lubricant-infused polymer surface

The lubricant-coated nanostructured surfaces have been created with aluminum and a perfluorinated lubricant and have shown anti-icing promise.2 This system can be applied to many surfaces, but one drawback is that depletion of the lubricant again leads to failure under high humidity. The second approach aims to solve this problem by incorporating a vascular system that can replenish lubricant at the surface as it is depleted.3 This system, however, is based on a silicone oil-infused polymer, so it requires applying an additional polymer layer to the surface of interest. Some combination of these approaches could be the next step toward creating a more robust superhydrophobic surface.

A superhydrophobic surface is just one example of a bio-inspired material. Hopefully, innovations around this technology will extend beyond nature’s own abilities, so that a durable superhydrophobic surface can revolutionize the anti-fouling and anti-icing industries.

Serena Blacklow ’17 is a junior in Leverett House, concentrating in Engineering Sciences.

Works Cited

  1. SLIPS Technologies. http://www.slipstechnologies.com/solutions.php (accessed Oct. 31, 2015).
  2. Kim, P., et al. ACS Nano. 2012, 6 (8), pp 6569–6577.
  3. MacCallum, N., et al. ACS Biomaterials. 2015, 1, pp 43−51.


Tuberculosis Declines in the US but Remains a Global Health Threat

by Jacqueline Epstein

By the beginning of the 19th century, tuberculosis (TB) had killed one in seven people who had ever lived.1 The disease is caused by a bacterium called Mycobacterium tuberculosis, which spreads through the air from one person to another. While not everyone infected by the bacterium contracts the disease, people with weakened immune systems have a significantly heightened risk. In those infected, TB mainly attacks the lungs, causing fatigue, weight loss, the coughing up of blood and, ultimately, death if left untreated.2 Once a common killer in the United States with no known cure, the disease is now thought by many Americans to be completely eradicated.

The first formal description of TB appeared in 1768 in the Encyclopedia Britannica, which characterized it as a disease that consumes the lungs; as a result, the infection came to be commonly referred to as “consumption.” Throughout the 19th and early 20th centuries, TB was responsible for one quarter of the deaths in Europe.3 The first successful vaccine emerged in 1921, when French bacteriologists Albert Calmette and Camille Guérin used a live, attenuated mycobacterial strain to develop the Bacillus Calmette–Guérin (BCG) vaccine.3 A second breakthrough came in 1943, when microbiologist Selman A. Waksman discovered streptomycin, an antibiotic produced by a soil bacterium that destroys the TB bacterium.1 The vaccine and the antibiotic were administered widely in the mid-20th century, significantly abating the epidemic. Combined with other novel antibiotics, these treatments proved so effective that by 1968 the number of TB cases had halved relative to 1953, 15 years earlier. Health officials in the US declared TB to be on the verge of complete eradication.1

So why does TB remain a global threat in the 21st century? Despite previous success in treating the disease, the World Health Organization (WHO) declared TB a global emergency in 1993, with 8 to 10 million cases being reported each year.4 Particularly in Africa, a large contributor to the resurgence of TB was the emergence of the human immunodeficiency virus (HIV), which leads to the progressive failure of the immune system. A weakened immune system greatly increases an individual’s susceptibility to M. tuberculosis infection. Poverty and lack of access to basic resources also promote low immunity in populations, which explains the prevalence of TB in developing nations.4 While successful vaccines have been propagated worldwide, the course of treatment is long and costly, and vaccines such as BCG only assure protection for 10-20 years, creating the possibility of reinfection.4 Further, the rise of drug-resistant strains of TB has diminished the efficacy of existing treatments. Random mutations in the M. tuberculosis genome have allowed the bacterium to develop increased virulence and the ability to withstand antituberculosis drugs. In particular, lineages derived from the Beijing genotype of M. tuberculosis, which is characterized by 53 different mutations largely traced to the regulatory region of the genome, have been associated with multi-drug-resistant TB throughout Asia, Europe, and Africa.5

Declining rates of TB in the US over the past few decades have decreased national awareness of the global epidemic.6 Current efforts to eradicate the disease focus on optimizing screening and treatment in high-risk groups, and on investing in new research and tools, particularly to target multi-drug resistant TB strains. Future efforts to eradicate TB must also address the causes behind higher susceptibility in certain populations, such as the prevalence of HIV.

Jacqueline Epstein ’18 is a sophomore in Leverett House.

Works Cited

  1. “Timeline: Tuberculosis in America.” PBS. WGBH Educational Foundation. http://www.pbs.org/wgbh/americanexperience/features/timeline/plague-timeline/ (accessed 15 Oct. 2015).
  2. “Basic TB Facts.” CDC. Centers for Disease Control and Prevention. http://www.cdc.gov/tb/topic/basics/default.htm (accessed 15 Oct. 2015).
  3. “Tuberculosis (TB) Fast Facts.” CNN. http://www.cnn.com/2013/07/02/health/tuberculosis-tb-fast-facts/ (accessed 15 Oct. 2015).
  4. Sohail, M. J Mol Genet Med 2006 2(1), 87-88.
  5. Borgdorff, M.W. et al. Clinical Microbiology and Infection. 2013, 19(10), 889-901.
  6. Bernstein, Lenny. “Vast Majority of U.S. Tuberculosis Cases Come from Abroad, but Rate Still down.” The Washington Post. 19 Mar. 2015. https://www.washingtonpost.com/news/to-your-health/wp/2015/03/19/tb-still-declining-in-the-u-s-most-cases-brought-in-fromabroad/ (accessed 15 Oct. 2015)


Treatment as Prevention: Updates on Efforts to Combat the HIV/AIDS Pandemic

by Elliot Eton

The target is 2030. The Joint United Nations Programme on HIV/AIDS (UNAIDS) has ambitiously set 2030 as the year by which we should achieve the end of the HIV/AIDS epidemic, which has claimed the lives of 39 million people globally since the first cases were reported in 1981.1 This past year, to help drive united progress and accountability towards the goal, UNAIDS articulated the 90-90-90 targets. If these goals are reached by 2020, UNAIDS predicts, then the AIDS epidemic could come to an end by 2030:

  • Goal 1: “By 2020, 90% of all people living with HIV will know their HIV status.”
  • Goal 2: “By 2020, 90% of all people with diagnosed HIV infection will receive sustained antiretroviral therapy.”
  • Goal 3: “By 2020, 90% of all people receiving antiretroviral therapy will have viral suppression.”2

These new aims directly emphasize the importance of maintaining the HIV treatment cascade while scaling up treatment programs. In the past, programmatic success was measured by the number of those on treatment; the 2011 High-Level Meeting on AIDS, for example, set a goal of reaching 15 million people on treatment by 2015, which has since been reached.1,2 While the numbers certainly provide an essential indicator of progress, the new UNAIDS goals add an extra dimension: focusing on maintaining quality of treatment while expanding programs. Indeed, infectious disease specialist Dr. Edward Gardner and collaborators observed in 2011 that “to fully benefit from potent combination antiretroviral therapy, [infected individuals] need to know that they are HIV infected, be engaged in regular HIV care, and receive and adhere to effective antiretroviral therapy.”3

Approximately 36.9 million people are living with HIV, of which 25.8 million (~70%) live in sub-Saharan Africa.1 Though some countries are nearing 90-90-90 (e.g. Rwanda, Botswana), most of the region is lagging behind: 45% of those with HIV in sub-Saharan Africa know their status, 39% with diagnosed infection are receiving antiretroviral therapy (ART), and 29% of those on ART have suppressed viral load.2 A variety of obstacles hinder maximal treatment engagement (e.g. insufficient infrastructure). Furthermore, certain key populations, burdened by the persistence of stigma and discrimination, often institutionalized in national laws (e.g. criminalization of same-sex relations, sex work, and drug use), still experience inadequate access to care. The UNAIDS targets emphasize equity and speed: all communities must have equal access to comprehensive treatment, and infection must be recognized and treated early if the goals are to be met by 2030.3
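
A quick calculation shows why these cascade percentages matter. Treating each figure as conditional on the previous stage (a simplifying assumption, not an official UNAIDS computation), the overall fraction of people living with HIV who are virally suppressed is just the product of the three stages:

```python
def fraction_suppressed(diagnosed: float, on_art: float, suppressed: float) -> float:
    """Multiply the three conditional stages of the treatment cascade to get
    the overall share of people living with HIV who are virally suppressed."""
    return diagnosed * on_art * suppressed

# Hitting all three 90% targets would suppress roughly 73% of all infections...
print(fraction_suppressed(0.90, 0.90, 0.90))   # 0.729

# ...whereas the sub-Saharan figures quoted above (45%, 39%, 29%) imply about 5%.
print(fraction_suppressed(0.45, 0.39, 0.29))   # ~0.051
```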

To achieve 90-90-90, the HIV/AIDS community has rallied around the strategy of “treatment as prevention” (TasP). In 2011, the 052 clinical trial conducted by the HIV Prevention Trials Network (HPTN) published breakthrough results, revealing that putting an HIV-infected individual on ART early reduced the risk of heterosexual transmission of the virus to the individual’s uninfected partner by 96%.5 Treatment kills two birds with one stone: it suppresses viral replication to “undetectable” levels, which thwarts disease progression and reduces the probability of viral transmission to a new host. As Professor Max Essex, Mary Woodward Lasker Professor of Health Sciences and head of the Harvard T.H. Chan School of Public Health AIDS Initiative, says, “People without high viral load don’t transmit.”4 The idea that treatment could stop heterosexual transmission was originally disparaged; a 2008 statement to that effect submitted by the Swiss Federal Commission for HIV/AIDS was described as “appalling,” “inconclusive,” and “dangerous.”5 The results of HPTN 052 are indeed revolutionary. As UNAIDS Executive Director Michel Sidibé commented in 2011, “This breakthrough is a serious game changer and will drive the prevention revolution forward. It makes HIV treatment a new priority prevention option.”6

To test the TasP strategy at the national level, Essex is working with partners from the Harvard T.H. Chan School of Public Health, the Botswana Ministry of Health, the Botswana-Harvard AIDS Institute Partnership, and the U.S. Centers for Disease Control and Prevention to lead an enormous trial in Botswana called the Botswana Combination Prevention Project (BCPP). Appropriately, in Setswana, the language of Botswana, the trial is called Ya Tsie, which means “Teamwork bears more fruit than individual efforts.” The study tests the effects of combining and strengthening traditional treatment strategies, such as in-home counseling and testing, mother-to-child interventions, and voluntary male circumcision.

One major innovation in this trial is targeting interventions to individuals with high viral load but with CD4+ T-helper cell counts above the level for treatment eligibility. HIV destructively weakens the immune system by targeting and destroying the CD4+ cells, which are a type of white blood cell that coordinates the immune system response to foreign invaders. Thus, the CD4 count (cells per volume) is a strong indicator of disease stage, and the World Health Organization has long based its treatment eligibility criteria around this count. Yet there is a drawback: CD4 count does not directly indicate an individual’s infectivity. Indeed, individuals with relatively high CD4 counts may actually harbor high levels of virus.

Since decreasing infectivity is a key goal of TasP, and a high viral load may portend a more rapid progression to AIDS, BCPP is testing the use of viral load—not necessarily CD4 count—as the primary indicator of when to initiate treatment. Early initiation of treatment based on viral load levels will benefit the infected individual and prevent transmission.
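
As a rough illustration of the shift being tested, the sketch below contrasts a CD4-based eligibility rule with a viral-load-based one. The threshold values are placeholders chosen purely for illustration; they are not the actual BCPP or WHO criteria.

```python
def eligible_by_cd4(cd4_cells_per_ul: int, threshold: int = 350) -> bool:
    """Traditional rule of thumb: start treatment once the CD4 count has
    fallen below a cutoff, i.e., once the immune system is already weakened."""
    return cd4_cells_per_ul < threshold

def eligible_by_viral_load(copies_per_ml: int, threshold: int = 10_000) -> bool:
    """TasP-style rule: start treatment when the viral load is high,
    i.e., when the person is most likely to transmit the virus."""
    return copies_per_ml >= threshold

# Someone with a still-healthy CD4 count but a high viral load is missed by
# the first rule and caught by the second.
print(eligible_by_cd4(600), eligible_by_viral_load(50_000))  # False True
```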

Another central innovation is genetically evaluating transmission networks. HIV’s strongest, and most dangerous, asset is its ability to copy itself extraordinarily quickly. If replication is not controlled, the virus mutates regularly, producing daughter viral progenies that vary slightly from their parents. As a result, the more closely two viral sequences match each other, the more likely it is that they share a recent common host.5 In addition, unrestricted replication means high viral load, which, in turn, increases the probability of transmission per sexual act. Hence, it is possible to track the spread of the virus, and to use this information to design appropriate and effective strategies to block it, through a technique called phylogenetic analysis.5

Phylogenetic analysis is centered on the concept of a “transmission cluster,” defined as a difference of less than 1.5% between HIV genomic sequences found in two or more individuals.7 While this does not necessarily prove that one person infected the other, it does show that the sequences are closely linked evolutionarily.5 Furthermore, researchers can estimate the time of transmission because of HIV’s roughly constant mutation rate. Researchers can then create transmission maps that depict not only the direction of spread but also its speed.5
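
The 1.5% clustering criterion can be made concrete with a small sketch: compute the pairwise genetic distance between aligned sequences and link any pair that falls below the threshold. This is a deliberately simplified illustration (real analyses align sequences and build phylogenetic trees with specialized software), and the toy sequences below are made up.

```python
from itertools import combinations

def genetic_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of positions that differ between two aligned sequences of
    equal length (a crude stand-in for a phylogenetic distance)."""
    assert len(seq_a) == len(seq_b)
    mismatches = sum(a != b for a, b in zip(seq_a, seq_b))
    return mismatches / len(seq_a)

def transmission_clusters(samples: dict, threshold: float = 0.015) -> list:
    """Return pairs of individuals whose sequences differ by less than the
    threshold (1.5%), i.e., candidate members of a transmission cluster."""
    return [(x, y) for x, y in combinations(samples, 2)
            if genetic_distance(samples[x], samples[y]) < threshold]

# Toy data: A and B differ at 1 of 100 positions (1%); C is unrelated.
toy = {
    "A": "ACGT" * 25,
    "B": "ACGT" * 24 + "ACGA",
    "C": "TGCA" * 25,
}
print(transmission_clusters(toy))  # [('A', 'B')]
```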

The Ya Tsie trial includes fifteen pairs of neighboring villages: one village in each pair receives the TasP intervention, while the other continues receiving the standard of care (with improvements in medical logistics and equipment). Researchers hope to genetically track the virus to determine the transmission network: whether certain viral strains are circulating within communities or being transported across villages. Determining the direction and extent of transmission can hence serve as a key marker of whether the TasP intervention was successful.4

This comprehensive trial is the result of years of planning and refining statistical algorithms to aid in analysis. There are still challenges to overcome—especially in ensuring the sampling density (number of samples per community) is high enough to provide a representative depiction of the HIV transmission network. Indeed, certain key populations could be disproportionately represented.4

Results of the Ya Tsie trial will provide further evidence either in support of or against TasP as an effective treatment strategy to combat the HIV/AIDS pandemic. Approximately 2 million people were newly infected with HIV in 2014.1 We must end transmission and close the gaps in care by expanding treatment programs to reach key populations. As Michel Sidibé says, “Never has it been more important to focus on location and population—to be at the right place for the right people.”8 If TasP proves a valuable and cost-effective tool to combat the spread of HIV, and becomes included in national HIV/AIDS treatment programs, we may be one step closer to achieving the bold UNAIDS goal of ending the epidemic by 2030.

Elliot Eton ‘19 is a freshman in Apley Court.

Works Cited

  1. World Health Organization. HIV/AIDS Fact Sheet, 2015. http://www.who.int/mediacentre/factsheets/fs360/en/ (accessed Sep. 28, 2015).
  2. UNAIDS. 90-90-90: An ambitious treatment target to help end the AIDS epidemic. UNAIDS, 2014. http://www.unaids.org/sites/default/files/media_asset/90-90-90_en_0.pdf (accessed Sep. 28, 2015).
  3. Gardner, E. et al. Clin Infect Dis, 2011, 52, 793-800.
  4. Powell, A. Viral load as an anti-AIDS hammer. Harvard Gazette, Aug. 1, 2014. http://news.harvard.edu/gazette/story/2014/08/viral-load-as-an-antiaids-hammer/ (accessed Sep. 28, 2015).
  5. Cohen, J. Breakthrough of the Year: HIV Treatment as prevention, Dec. 23, 2011. http://www.sciencemag.org/content/334/6063/1628.full (accessed Oct. 10, 2015).
  6. UNAIDS. Groundbreaking trial results confirm HIV treatment prevents transmission of HIV. UNAIDS, 2011. http://www.unaids.org/en/resources/presscentre/pressreleaseandstatementarchive/2011/may/20110512pstrialresults (accessed Oct. 10, 2015).
  7. Cohen, J. HIV family trees reveal viral spread: New studies could aid public health efforts, Jun. 12, 2015. https://www.sciencemag.org/content/348/6240/1188.short (accessed Sep. 28, 2015).
  8. UNAIDS. The Gap Report. UNAIDS, 2014. http://www.unaids.org/sites/default/files/media_asset/UNAIDS_Gap_report_en.pdf (accessed Oct. 10, 2015).


Citizen Science and Sudden Oak Death

by Sophia Emmons-Bell

Driving down California’s Highway 101, hugging the coast and cutting through the state’s most famous nature reserves, you will pass by hundreds of diseased tanoaks, bay laurels, and California black oaks. These trees, sick with Sudden Oak Death (SOD), are bruised with red and black splotches and bleed sap from cankers on their trunks. The disease has invaded their trunks, leaves, and sap. However, you don’t need to make the coastal drive to see the disease.

In an effort to anticipate and track the spread of SOD, thousands of trees have been tested and GPS-tagged by teams of scientists and grassroots volunteers. The data is collected during intense periods called “Blitzes” that rely heavily on the manpower of the community. In the Blitzes, scientists at the University of California at Berkeley both disseminate information to those who live in at-risk areas and ask volunteers to sample and tag trees in their neighborhoods.1 The maps the volunteers produce are eerily detailed, showing the reach of the disease tree by tree.

Upwards of 2,000 trees are tagged in the redwood forests of Humboldt County; over 3,000 in Marin County, just a few miles from San Francisco; and over 1,500 more in forests just south of the San Francisco Bay.1 There are even a few scattered throughout my hometown, Berkeley, where the Blitzes are organized. On the interactive map, I can see trees tagged frighteningly close to my old middle school.

These trees are both victims and models. Plagued by the plant pathogen Phytophthora ramorum, these sick trees represent the toll that invasive species and infectious plant diseases take on our ecosystems, often as a result of trade and globalization.2 However, the unique response, dependent on volunteer involvement, is a novel approach that allows the masses (a huge group with unfulfilled potential) to contribute to science. Against an invisible invader, the community promises a substantive defense.

BIOLOGICAL INVASIONS

A medical epidemic isn’t complete without the obsession over “patient zero”. Zombie movies and epidemiological journals alike track infections back to a single source, a foreign presence that made a community sick. However, when diseases are limited to plants and epidemiologic spreads are nearly impossible to track, fear and blame do not necessarily go away.

Most modern plant ailments (pathogens, invasive insects and pests, etc.) can be traced back not to a “patient zero” but to trade.3 Earlier in evolutionary history, species migrated slowly, constrained by large geographical barriers; oceans and mountains were not navigable until humans learned to travel.4 That isolation protected millions of species, allowing them to blossom and differentiate. However, as humans explored the world, trade interests and scientific curiosity brought novel plants and animals across the globe. Often by accident, many pests, seeds, and spores were transported as well. Most recently, in our globalized world where trade spans time zones and continents, the rate of invasion has exploded. Agricultural and commercial plants now circulate almost freely, bringing with them profound ecological implications.4

The United States alone is home to more than 50,000 non-native species.3 Much of the Midwest is plagued by the emerald ash borer, an insect that burrows lethally into trees. Asian cogongrass is outcompeting native plants in the Southeast, starving native animals because of its low nutritional value.3 The West Coast is plagued by SOD. These invasions are increasingly common and have the potential to destroy entire food webs and ecosystems. Additionally, significant changes in climate and habitat have driven plant and animal species alike into migration.4 In this way, humans are responsible for much of the biological invasion they now fear.

SUDDEN OAK DEATH

First noticed in the nature-conscious counties around the Bay, Sudden Oak Death has frightened California’s scientists and citizens for more than a decade. Its causal agent, Phytophthora ramorum, was brought to the West Coast on infected nursery plants and has exploded along the coastline in the last quarter century. Currently reaching from Oregon to a few miles south of California’s Bay Area, the disease attacks more than 100 types of trees, not only the eponymous oak. In most regards, the pathogen resembles a fungus: P. ramorum produces sexual and asexual spores, parasitizes many plants, and feeds on decaying plant matter.2 However, it belongs to the class Oomycetes, biologically distinct because its cells are diploid (whereas fungal cells are haploid).2 P. ramorum releases large spores that float from host to host within an ecosystem, driving the epidemic spread around coastal zones.

Because P. ramorum affects many types of plants, from ferns to shrubs to oaks, symptoms can vary widely. Classic afflictions are trunk ailments, such as sores and cankers, and leaf diseases like tissue death and spots.2 However, these symptoms are not unique: pests and other diseases often appear clinically similar, so a certain diagnosis requires further molecular tests.2 Only within the last eight years have scientists at the University of California at Berkeley taken on the responsibility of organizing groups to visually screen and molecularly test plants for SOD.

THE BLITZ

Each spring, before what’s left of California’s rainfall begins to spread SOD around the coast, a group of volunteers gathers in a classroom on UC Berkeley’s own tree-heavy campus. They are caught up on a quarter century’s worth of SOD research and almost a decade’s worth of Blitzes, receiving colorful collection materials and free coffee. Meetings like these occur throughout the greater Bay Area—Marin to Berkeley to Sebastopol—in order to prepare for a quick, concentrated sampling effort.5

“We’re using you as free labor,” Matteo Garbelotto, an adjunct professor at Berkeley and head scientist of the Blitz project, explains to volunteers in this year’s spring meetings.6 He is an animated presenter, receiving laughs throughout his hour-long meeting, but is frank about the importance of volunteers in the Blitz’s success. “We’re trying to engage you,” he says, “to determine the distribution of the disease in your county.”6 Indeed, he cannot overstate the importance of the citizens he is talking to; since the spring of 2008, more than 500 volunteers have sampled over 6,000 trees along California’s coastline.5

Blitz meetings begin with a crash course on Sudden Oak Death. Addressing veterans of the project, fellow UC Berkeley scientists and professors, and new members of the community, Blitz leaders explain why SOD has been so devastating to the ecosystem. Here, the volunteers learn about SOD’s introduction to California in the late ’70s through infected ornamental stock, and its escape in the late ’80s from nurseries to larger parts of the state’s ecosystem. They learn that it is a moisture-dependent disease, requiring consistent rainfall to proliferate. They also learn that, while many trees are susceptible to SOD, only tanoaks and bay laurels are infectious hosts. At this Berkeley meeting, volunteers laugh as Garbelotto describes the giant SOD spores as “elephants”, gliding relatively short distances because of their size.6 However, this background information is not just to entertain—it translates directly into Blitz procedures.

Because the disease is mostly transmitted via infectious tanoaks and bay laurels, Blitzes focus on tagging and testing these trees. This leads to a fairly ironic dearth of oaks tested for Sudden Oak Death. As the name “Blitz” implies, the symptoms are distilled and streamlined to fit on a 4”x8” placard placed inside every Blitz envelope. To these volunteers, the leaves are critical. Bay laurels, for example, show classic SOD symptoms of blackened, dead leaf tissue where water often pools. These are usually at the tips of the leaves, with a dark, uneven line marking the barrier between healthy and dead tissue. Depending on leaf position (and where water has pooled on specific trees), the dead tissue can also appear on the underside of the blade or as tiny spots along the leaf. Luckily for volunteers, diseased leaves are often found in the lower canopy, where tree climbing and scrambling are not required for retrieval. In tanoak trees, volunteers look to the mid-veins of the leaves instead of the tips. Infected trees will reveal blackened tissue along the middle of their leaves, as well as strange drying patterns: leaves are browned, dried, and distorted. When they head into California’s forests, volunteers refer back to these descriptions.5

Blitzes happen quickly, often spanning only one weekend. Volunteers are given agency to choose their location and timing. Their instructions are to collect samples, track location through GPS, and clean their clothes and boots upon their return so as not to spread the disease any further. The process is made cheery by the colorful contents of the collection packets: infected samples are put into pink envelopes, healthy trees into white envelopes, and all trees are marked by blue ribbons or aluminum tags. However, before the leaves can be taken back to UC Berkeley for further testing, a careful record has to be made of the locations of tagged trees. This is made easy because smartphones can act as GPS devices. Ideally, volunteers will “pin” their location on the phone app that has been custom-made for the Blitzes. If they lack smartphones, volunteers can note their location in relation to nearby landmarks and estimate their GPS coordinates upon return.5 It is this freedom that allows volunteers to sample either their neighborhoods or locations they feel could use more illumination.

When they return, UC Berkeley tests all samples for SOD, piecing together a definitive record of the disease across the map. This data can be used to identify new infestations and determine the intensity of known ones. It can illuminate areas the disease is at risk of reaching, and show whether attempts to fight it have been fruitful. It can even show you how close infected trees have come to your favorite coffee shop.
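
As a rough illustration of how such records might be summarized (a minimal sketch over invented data, not the lab’s actual database or analysis pipeline), each sample can be reduced to a location and a test result, and the infection rate tallied per area:

    # Minimal sketch of tallying SOD-Blitz-style records (hypothetical data
    # and schema; not the UC Berkeley lab's actual pipeline).
    from collections import defaultdict

    # Each record: (area, latitude, longitude, tested_positive)
    samples = [
        ("Marin",      38.07, -122.72, True),
        ("Marin",      38.05, -122.70, False),
        ("Berkeley",   37.87, -122.26, False),
        ("Sebastopol", 38.40, -122.82, True),
    ]

    totals = defaultdict(lambda: [0, 0])  # area -> [positives, total samples]
    for area, _lat, _lon, positive in samples:
        totals[area][0] += int(positive)
        totals[area][1] += 1

    for area, (pos, total) in sorted(totals.items()):
        print(f"{area}: {pos}/{total} positive ({100 * pos / total:.1f}%)")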

This year, 504 volunteers surveyed over 10,000 trees in a total of 19 weekend Blitzes. While the infection rate has reached an all-time low (3.7%), Garbelotto’s team attributes this to the severity of California’s drought.1 The data collected shows that the disease continues to spread, breaking out of its coastal confines and inching east. One new diseased tree was found on UC Berkeley’s campus, now closely monitored by those closest—physically and academically—to the disease. However, a better understanding of the topographical spread of SOD has allowed Garbelotto’s team to visualize the disease and release pointed recommendations to limit its spread.

By harnessing the power of the community, Garbelotto’s team at UC Berkeley has mapped out both SOD and a new model for large scientific undertakings. Against an invader as invisible and mysterious as P. ramorum, the community becomes a tangible defender.

Sophia Emmons-Bell ’18 is a sophomore in Eliot House.

Works Cited

  1. UC Berkeley Forest Pathology and Mycology Lab. http://nature.berkeley.edu/garbelottowp/?page_id=148 (accessed Oct. 2, 2015).
  2. Kliejunas, J. T. Sudden Oak Death and Phytophthora ramorum: A Summary of the Literature. 2010. Gen. Tech. Rep. PSW-GTR-234. Albany, CA: U.S. Department of Agriculture, Forest Service, Pacific Southwest Research Station. 181 pages.
  3. Walsh, B. Time. 2014, 184(4), 20-27.
  4. Van Driesche, J. Nature out of Place: Biological Invasions in the Global Age; Island Press: Washington, DC, 2000.
  5. UC Berkeley Forest Pathology and Mycology Lab. http://nature.berkeley.edu/garbelottowp/?page_id=1275 (accessed Oct. 2, 2015).
  6. Garbelotto, M. “SOD Blitzes 2015”. UC Berkeley Forest Pathology and Mycology Laboratory. May 2015. Lecture.

Skin Regeneration in Wound Repair

by Madeline Bradley

Unlike some lower vertebrates, such as fish and amphibians, which can regenerate all the skin layers and appendages (epidermis, dermis, hair follicles, sebaceous glands, etc.) perfectly, human skin often forms thin scar tissue lacking in appendages.1 The deformed appearance alone can take a serious toll on the quality of life of burn patients and amputees, but scar formation also compromises the skin’s other functions as a sensory and thermoregulatory organ.2 Researchers, recognizing the enormous value in improving wound recovery, are actively investigating how normal skin growth differs from wound repair.

One lab devoted to this research exists here on Harvard’s campus. The Hsu Lab, led by principal investigator Ya-Chieh Hsu, aims to examine fundamental questions, such as how skin cells know when to start and stop proliferation after wound damage, as well as more complicated questions like why hair follicles cannot regenerate themselves after skin damage. By examining factors in normal and wound regeneration, they hope to learn more about the similarities and differences between the two types, which would lead to a better understanding of why skin does not grow back properly after injury. Ultimately, a better understanding could lead to advances in regenerating more functional skin.

The Hsu Lab primarily uses mouse models to study regeneration. Using ultrasound-guided probes, the researchers can directly inject genes of interest that act as skin-damaging factors into the developing embryos of a pregnant mouse. As a result, the mouse gives birth to pups with compromised skin, which the researchers can then study as it repairs itself. The researchers also sometimes graft human skin onto the mice to study it more directly.

The researchers have gleaned multiple insights from the experiments thus far, one of them highlighting the critical role hair follicles play in skin growth. Hsu and her team found that when hair follicles grow in normal skin, they grow downward, pushing deeper into the skin, a development that in itself acts as the driving force of skin thickening.1 Thus, when hair follicles get damaged and cannot regenerate, the skin no longer has that force pushing it to thicken. Hsu’s team believes this finding shows that hair follicle regeneration failure is why scar tissue grows thinner than normal skin.

Furthermore, hair follicles function as an integral node in a complex communication network between the plethora of different cell types within skin, and examining how these cells communicate with each other is another thriving aspect of current research.3 Hsu explained that in normal skin, hair follicles act as a sort of anchor for nerve cells, allowing them to target the hair follicles to receive information such as sensory signals. In scar tissue, however, nerve endings have no hair follicles to target, leading to an impaired sense of touch in the injured region. The critical role of hair follicles in normal skin function means that skin research is necessarily intertwined with that of hair follicle regeneration.3

Keeping this research duality in mind, Hsu and her colleagues also found that overexpression of SHH, a gene that makes a protein called Sonic hedgehog, stimulates hair follicle stem cell proliferation.3 Sonic hedgehog functions as a chemical signal essential to patterning the dorsal-ventral axis during embryonic development, and it has other functions in cell growth and cell specialization.4 The discovery of its additional role in hair follicle stem cell proliferation marks an exciting development in the potential to improve treatment, as future research could home in on whether SHH could be used to recover hair follicle growth in damaged skin.

The ability to improve skin functionality and appearance could greatly increase the quality of life of burn victims and other patients suffering major skin damage. Researchers will likely keep uncovering factors and pathways in normal regeneration that are missing or damaged in wound regeneration, allowing them to target areas that need improvement and come closer than ever to completely functional skin recovery.

Madeline Bradley ’18 is a sophomore in Eliot House.

Works Cited

  1. Takeo, M. et al. Perspectives in Medicine. 2015, 5(1).
  2. Hsu, Y.-C. et al. Nature Medicine. 2014, 20(8), 847–856.
  3. Hsu, Ya-Chieh. (2015 September 30). Skin Regeneration Research. Personal interview.
  4. SHH gene. Genetics Home Reference. U.S. National Library of Medicine.

G(ut)enetics: The Genetic Influence of Our Internal Symbionts

by Austin Valido

The human body is crowded. From the surface of our skin to the depths of our intestines, we are inundated with microscopic bacteria that aid with everything from defense to digestion. Large-scale scientific endeavors, headlined by the Human Microbiome Project, have catalogued vast numbers of bacterial species and are just beginning to uncover the full impact of our most intimate neighbors. Within the human body is a diverse biological industry containing ten microbes for each human cell and 100 microbial genes for each unique human gene.1 The study of the microbiome has exploded in recent years and has started to cross disciplinary lines, creating a field of research that combines aspects of ecology, epidemiology, cellular biology, chemistry, and, recently, human genetics. These interdisciplinary collaborations are working to answer the numerous exciting questions about the microbiome that remain unanswered. Recently, researchers have used population genomics to explore human genetics and the microbiome, searching for an answer to a central question: what, if anything, really controls our internal menagerie?

WHO CONTROLS WHOM: EMERGING RESEARCH ON MICROBIOME HERITABILITY

Researchers have now found a connection from the human genome to the gut microbiome: the Ley Lab at Cornell University recently conducted a twin heritability study that found a strong correlation between the abundance of a single bacterium, Christensenella minuta, and a heritable tendency toward a low body mass index (BMI).2 The variability of the microbiome, shaped by everything from diet to a changing household environment, makes it difficult to differentiate between these influences and prove causality for a genetic relationship. However, using the largest adult twin registry in the United Kingdom as a test population, the researchers were able to collect an enormous amount of data to analyze—78,938,079 quality-filtered DNA sequences that mapped to particular species of bacteria.2 Incorporating the effects of both shared and unique environments in the analysis, the study illustrated that there was a correlation between host genetics and the content of the microbiome. Ley concludes, “Our results represent strong evidence that the abundances of specific members of the gut microbiota are influenced in part by the genetic makeup of the host.”2 The evidence was in the excrement—human genetics shape the gut microbiome.
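
The study’s statistical models are considerably more involved, but the core logic of a twin design can be sketched with Falconer’s classic estimate, which compares how strongly a trait (here, the abundance of a bacterial taxon) correlates in identical versus fraternal twin pairs. The correlations below are invented purely for illustration:

    # Falconer's estimate of heritability from twin correlations:
    #   h^2 ~= 2 * (r_MZ - r_DZ)
    # r_MZ and r_DZ are trait correlations in identical (monozygotic) and
    # fraternal (dizygotic) twin pairs. The values here are made up.
    r_mz = 0.62   # hypothetical correlation of taxon abundance in identical twins
    r_dz = 0.41   # hypothetical correlation in fraternal twins

    heritability = 2 * (r_mz - r_dz)        # h^2
    shared_environment = 2 * r_dz - r_mz    # c^2 under the simple ACE logic
    print(f"estimated heritability h^2 = {heritability:.2f}")
    print(f"estimated shared-environment c^2 = {shared_environment:.2f}")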

This study points to a pathway in which phenotype is modified not by our genes acting alone but through the symbionts within our gastrointestinal tract. Ley and her colleagues have uncovered a detour in the central dogma of biology: it seems our genome has evolved a way to influence phenotype—in this case metabolism—through the recruitment of a third party.

Further research has pushed this beyond the gut microbiome to a full-body analysis. Mining data from the Human Microbiome Project, Ran Blekhman and colleagues have traced associations between host genetic variation and the microbiomes at 10 sites on the human body.3 Immunity-related pathways, including signaling by the hormone leptin, which modulates immune cells and is implicated in appetite and weight gain, seem to play an important role in microbiome control across the body.3 This research has exciting translational potential, providing a possible new route to treat metabolic disorders through analysis of the microbiome.

AN UNEXPECTED GIFT: TRANSMISSION OF MICROBIOTA

Beyond its importance for translational research, this developing understanding of our genetic control of the microbiome is remarkable when placed in the context of research being done on the human fetal microbiome. Important questions remain about how exactly the microbiome develops. The classic idea of a sterile birth seems to be under attack: collective observations raise the possibility that the infant may first be seeded in utero by a source such as the placenta.4 The early microbiome may begin long before an infant is introduced to the environment. Research into the relationship between genetics and the microbiome may be the first step in creating a fuller understanding of how the microbiome develops, such as determining which bacteria from the mother and the environment are left to colonize and which are purged by our immune system.

Together these studies are working to paint an image of the processes that coalesce to select, nurture, and utilize our internal bacterial symbionts. As we learn more about the biological and chemical controls our body uses to cultivate favorable symbiotic populations, it is important to think about the impact on medicine and human health. Can interventions be designed around these genetic and maternal pathways to alleviate the burden of diseases such as diabetes and metabolic disorders? Only time will tell in a field boldly exploring the uncharted territory of human biology.

Austin Valido ’18 is a sophomore in Eliot House.

Works Cited

  1. Smillie, C. S. et al. Nature 2011, 480(7376): 241-244.
  2. Ley, R. E. et al. Cell 2014, 159(4): 789-799.
  3. Blekhman, R. et al. Genome Biology 2015, 16: 191.
  4. Aagaard, K. et al. Sci. Transl. Med. 2014, 6(237): 1-10.

The Virus that Came In From the Cold

by Alissa Zhang

Are you a fan of post-apocalyptic movies? Have you watched The Day After Tomorrow? I Am Legend? Or Contagion? Sometimes real life is almost as strange as fiction: global warming has recently led to the discovery of a giant virus that had been frozen in the Siberian permafrost for the past 30,000 years. While the virus is unlikely to cause a zombie apocalypse, it does have the potential to harm modern organisms, and it demonstrates yet another detrimental consequence of global warming that few are aware of. However, it could also be a key element in understanding the history of viruses and their biological processes.

The discovery of Mollivirus sibericum was announced in September 2015 by a team of French scientists from the Laboratoire Génomique et Structurale, the Laboratoire Biologie à Grande Echelle, and the Genoscope.1 Astrobiologists from Russia had taken core samples from deep in the Siberian permafrost to look for signs of life, using them as models for the search for potential life on Mars. The French team obtained some of these samples and, under isolated conditions, mixed parts of the permafrost with the amoeba Acanthamoeba, hoping to draw out any dormant viruses. If the amoeba died, the French scientists would know that a virus must have been present in the sample; they could then isolate it to study it.2 This discovery marks the first time that genomics, transcriptomics, proteomics, and metagenomics—all classes of analytical techniques applicable to living organisms—were simultaneously used to characterize a virus.3

The name of the virus comes from molli, a French word that roughly translates to soft or flexible, and sibericum, for the location where it was found. The virus is a roughly spherical particle, approximately 0.6 μm long, and contains a genome of approximately 650,000 base pairs coding for over 500 proteins.3 While its size barely qualifies it as a giant virus, the size of its genome is enormous compared to many modern viruses. For example, HIV has only nine genes, and Influenza A only eight. Mollivirus uses the cell nucleus to replicate in the amoeba, which makes it host-dependent like most viruses. Its method of replication and other characteristic traits, such as a deficiency in certain key enzymes that allow synthesis of its DNA building blocks, make Mollivirus similar to common modern viral types, including human pathogens like Adenovirus (which causes respiratory infection), Papillomavirus (which includes HPV), or Herpesvirus (which includes herpes, mononucleosis, and chickenpox).3

Mollivirus is not the first giant virus to be revived from the Siberian permafrost. Three other groups of giant viruses have been discovered since 2003—Megaviridae (2003), Pandoraviridae (2013), and Pithovirus (2014)—using similar techniques, and Pithovirus was found in the same permafrost sample as Mollivirus. However, Mollivirus differs from these families of viruses in its shape, mode of replication, and metabolism. For example, the proteins of Mollivirus and Pithovirus bear little resemblance to each other. In addition, Pithovirus only requires the host cytoplasm to multiply, which makes it more similar to Poxvirus, a family that includes the now-eradicated smallpox virus.4 The discovery of Mollivirus suggests that ancient giant viruses are not rare and are highly diverse. It also shows that a variety of virus families, with different and potentially pathogenic characteristics, can survive in permafrost for thousands of years.

Although the analysis of the Mollivirus sample revealed very low concentrations of the virus, this discovery has important implications for public health. Only a few infectious viral particles are needed to cause the resurgence of an ancient virus. The rapid pace of global warming in recent years exacerbates this risk, as Arctic temperatures are rising at more than twice the global average and permafrost is melting.5 Climate change has opened up new sea routes through Arctic ice, improving access to, and enabling industrial exploitation of, Arctic regions that hold valuable mining and oil resources. An increasing number of companies are already mining for gold and tungsten in northern Siberia, which will lead to the excavation of millions of tons of permafrost that have been buried for thousands of years, much like the core sample that revealed these giant viruses.2 There is no way of predicting what that volume of permafrost may contain. Without safeguards in place, there is a real risk of reviving potentially pathogenic viruses thought to be long extinct. Even modern viruses may be preserved in the upper layers of permafrost.5

On the other hand, these giant viruses also hold promise for advancing scientific knowledge. Scientists may be able to find new metabolic pathways and biochemical processes that can be used to produce new pharmaceuticals and biomolecules. These ancient viruses may provide insights into the history of viral evolution, or even the origin of life.2

To better understand both the risks and the rewards posed by unearthing Mollivirus and other giant viruses, the French team is now digging even deeper into the Siberian permafrost, with the goal of studying samples up to a million years old.

Alissa Zhang ‘16 is a senior in Currier House, concentrating in Chemical and Physical Biology.

Works Cited

  1. Legendre, M. et al. PNAS 2015, 112, E5327-E5335.
  2. Christensen, J. Ancient squirrel’s nest leads to discovery of giant virus. CNN [online]. September 11, 2015. http://www.cnn.com/2015/09/11/health/ancient-squirrel-leads-to-giant-virus-discovery/ (accessed October 4, 2015).
  3. CNRS. New giant virus discovered in Siberia’s permafrost. Centre National de la Recherche Scientifique [online], September 7, 2015. http://www2.cnrs.fr/en/2617.htm (accessed October 4, 2015).
  4. Legendre, M. et al. PNAS 2014, 111, 4274-4279.
  5. Frankenvirus emerges from Siberia’s frozen wasteland. Phys.org [online], September 8, 2015. http://phys.org/news/2015-09-frankenvirus-emerges-siberia-frozen-wasteland.html (accessed October 4, 2015).

Invasion of the Brain Eaters

by Julia Canick

Meet 12-year-old Kali Hardig. Until recently, Kali was an average girl, and certainly no medical marvel. But that all changed in July 2013, when she became the third documented survivor in North America of infection by Naegleria fowleri.1

Naegleria fowleri aren’t your typical invaders of the central nervous system. They can cross the blood-brain barrier and destroy the brain in a matter of days—but, unlike other parasites, which enter the body through routes like bug bites and accidental oral ingestion, these amoebae take the road less traveled: the nose.2

Naegleria fowleri are single-celled amoebae found in warm freshwater.2 When contaminated water is inhaled through the nose, the parasites pass through the olfactory epithelium and travel along nerves that extend from the nose to the brain, ending up in the olfactory bulb, bathed in cerebrospinal fluid.3 From there, they can enter the brain, where they digest neurons in less than a day.2 They do this by producing pore-forming proteins that lyse mammalian cells on contact and by secreting enzymes that degrade mammalian tissue.4 The resulting infection is known as primary amoebic meningoencephalitis (PAM), and it can cause death one to eighteen days after symptoms arise.2

About five days after infection, initial PAM symptoms, such as nausea, fever, and vomiting, begin. Later symptoms include confusion, lack of attention, seizures, and hallucinations.2 The amoeba causes death soon after; since human beings are a dead-end host for the parasite, killing us offers Naegleria fowleri no evolutionary advantage.

Perhaps the most disconcerting aspect of the parasite isn’t its mechanism of invasion, but its evasion of the immune response. The parasite is resistant to cytokines, the signaling molecules that help trigger the body’s response to a foreign pathogen. N. fowleri may even turn cytokine-driven inflammation against the body of the host to worsen the disease. The amoebae are also resistant to the complement system, which helps antibodies and phagocytes clear pathogens from a host; N. fowleri can synthesize surface proteins that protect them against complement-mediated lysis.4

The amoebae actually thrive best when they are in their pathogenic state, in the human body. When in the infectious state, they are able to migrate and divide more quickly than they can when in water.4 Naegleria fowleri’s ability to infect and spread quickly gives it a whopping 97% fatality rate;2 Kali recently became the third survivor, out of 133 infected individuals.

Kali Hardig’s story is nothing short of a miracle. After swimming in several bodies of water, she developed a fever, head pain, and drowsiness.1 Her parents quickly drove her to the hospital, where a staining technique demonstrated the presence of amoebae in her body. The doctors knew they had to act quickly, given the infection’s short window for survival. They took multiple measures to treat her: they lowered her body temperature to reduce swelling and herniation1 and administered a combination of medications,5 including miltefosine, a drug that has been used to combat cancer. The key factor, however, was almost certainly the immediate detection of the amoeba;5 often, individuals don’t know about the infection until it’s too late.

Kali survived the infection, but others weren’t quite as lucky. This past August, Michael John Riley, Jr. encountered the parasite when he was swimming in a state park; he died within days.6 Others who have encountered the pathogen have shared similar fates.

Though infection is extremely rare, Naegleria fowleri are relatively common in the environment and were recently found in two water systems in Louisiana.7 Luckily, ingesting the amoeba orally is completely harmless; the only way to contract illness from N. fowleri is through the nose.2 Though scientists are working to find a drug that acts quickly enough to combat the infection, it is most pragmatic to focus on disease prevention. According to the Centers for Disease Control and Prevention, the best prevention is to avoid swimming in warm, untreated freshwater, and to exercise caution with neti pots, ritual nasal rinsing, and public drinking water.2 Humans can’t contract the infection from a properly cleaned, maintained, and disinfected pool, or by drinking contaminated water; the best protection is to avoid inhaling water that could contain the parasite.

Since the body’s immune response is part of what contributes to the parasite’s lethality, researchers are searching for a two-step treatment: immunosuppressive drugs, followed by drugs that combat PAM.8 The most successful medication has been miltefosine, the aforementioned cancer drug, but researchers are on the hunt for other treatments.2 The inflammatory response triggered by the body ends up backfiring and doing more harm than good; immunosuppression would counter this inflammation and buy more time for the patient. Then, drugs that attack primary amoebic meningoencephalitis could treat and, hopefully, eradicate the infection.

As N. fowleri continues to pop up in the news, scientists are learning more and more about what these amoebae are capable of. As our understanding deepens, we will be better equipped to devise ways to treat the infection.

Julia Canick ‘17 is a junior in Adams House, concentrating in Molecular and Cellular Biology.

Works Cited

  1. Main, D. How A 12-Year-Old Survived A Brain-Eating Amoeba Infection. Newsweek, Feb. 22, 2015. http://www.newsweek.com/how12-year-old-survived-brain-eating-amoeba-infection-308427 (accessed Oct. 4, 2015).
  2. Centers for Disease Control and Prevention. http://www.cdc.gov/parasites/naegleria/ (accessed Oct. 4, 2015).
  3. Kristensson, K. Nature Rev Neurosci. 2011, 12, 345-347.
  4. Marciano-Cabral, F.; Cabral, G.A. FEMS Immunol Med Microbiol, 2007, 51, 243-259.
  5. Linam, W. M. et al. Pediatrics 2015, 135, e744-e748.
  6. Yan, H. Brain-Eating Amoeba Kills 14-Year-Old Star Athlete. CNN, Aug. 31, 2015. http://www.cnn.com/2015/08/31/health/brain-eating-amoeba-deaths/ (accessed Oct. 4, 2015).
  7. Brain-Eating Amoeba Found In Another Louisiana Water System. CBS News, Aug. 18, 2015. http://www.cbsnews.com/news/brain-eating-amoeba-found-in-another-louisiana-water-system/ (accessed Oct. 4, 2015).
  8. Rivas, A. Suppressing Immune System Might Save People Infected By Brain-Eating Amoeba Naegleria Fowleri. Medical Daily, May 16, 2015. http://www.medicaldaily.com/suppressing-immune-system-might-save-people-infected-brain-eating-amoeba-333686 (accessed Oct. 4, 2015).

A New Horizon in Astronomy

by Alex Zapien

While we comfortably spend our days doing work, going outside, and even watching Netflix, history is being made: the frontier of human exploration is being pushed so far that it is literally out of this world.

One of the major reasons for this is the New Horizons spacecraft. New Horizons was specifically designed to fly by Pluto and its moons to collect valuable information and transmit it back to Earth. As Pluto was the last of the classical nine planets left unexplored by space probes, New Horizons allows us to explore the mysterious, icy world at the edge of our solar system.1 And it will not stop there—after its Pluto flyby, New Horizons will continue on to explore objects even farther out.

Before its launch on January 19, 2006, New Horizons was designed and integrated at the Johns Hopkins University Applied Physics Laboratory (APL) in Laurel, Maryland.1 Its primary structure includes a power source, a propulsion system, a command and data handling system, and an antenna. Starting with the power source, New Horizons runs, perhaps appropriately, on plutonium, and it uses less power to complete its mission to Pluto than a pair of 100-watt light bulbs. In fact, New Horizons uses so little power that, on average, each of its seven science instruments draws between two and ten watts—about the power of a night light—when turned on.1 New Horizons’ propulsion system consists of 16 small hydrazine-propellant thrusters mounted at eight locations. These thrusters provide a mere 4.4 newtons (1 pound) of force and are primarily used for small course corrections. The spacecraft’s “brain” is a radiation-hardened 12-megahertz Mongoose V processor, which distributes commands to subsystems and collects and sends sequences of information back to Earth. Finally, New Horizons’ protruding 2.1-meter-wide high-gain antenna allows it to communicate with mission control back on Earth.

On the day of its launch, this $700 million spacecraft left Earth with a speed of about 45 kilometers per second (100,000 mph). The sun’s gravitational pull, however, slowed the craft to just 19 kilometers per second (40,000 mph) by 2007. Fortunately, scientists managed to precisely calculate a flyby to Jupiter that would allow New Horizons to use the gas giant’s gravitational pull as a slingshot. As a result, the craft regained four kilometers per second (9,000 mph), shortening its trip by three years. After its nine and a half year, 4.8 billion kilometer (3 billion miles) journey, New Horizons whizzed by Pluto at 14 kilometers per second (30,000 mph) on July 14, 2015. What is especially impressive is that the spacecraft actually surpassed NASA’s predicted schedule, arriving 76 seconds earlier than expected.2
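
The parenthetical figures are simply rounded conversions of the metric speeds; a quick check (1 km/s is roughly 2,237 mph) confirms the rounding:

    # Rough check of the km/s-to-mph conversions quoted above.
    KM_S_TO_MPH = 3600 / 1.609344   # about 2,237 mph per km/s

    for label, km_s in [("launch", 45), ("by 2007", 19),
                        ("Jupiter assist gain", 4), ("Pluto flyby", 14)]:
        print(f"{label}: {km_s} km/s is about {km_s * KM_S_TO_MPH:,.0f} mph")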

Many new discoveries have already been made since New Horizons’ successful journey to Pluto. For instance, Pluto was found to have distinguishable red hues all over its surface, leading it to be known by some as the “second red planet.” In addition, Pluto’s size is now known precisely: it is 2,370 kilometers in diameter, larger than previously estimated. Nonetheless, the new measurement does not change Pluto’s status as a dwarf planet.3 On the bright side, Pluto has also been dubbed the “loving” planet, due to a geologic feature on its surface that takes the appearance of a heart.4 And finally, nicknames aside, Pluto is now confirmed to be the largest known object beyond Neptune in our solar system.

Although New Horizons has traveled incredible distances, it is still not even close to the edge of our solar system. After Pluto, NASA expects New Horizons to continue its journey, this time into the Kuiper Belt. No specific Kuiper Belt Object (KBO) has been targeted yet, but NASA has narrowed the list down to a few KBOs that were discovered in 2014. Currently, the plan is to have New Horizons start its new journey by 2017 and explore selected KBOs until 2020. What will happen after that is still unknown, but the possibilities in the universe are virtually endless. By testing the limits of space exploration, there is no doubt that we will find new horizons in the field of astronomy.

Alex Zapien ‘19 is a freshman in Canaday Hall.

Works Cited

  1. New Horizons: NASA’s Mission to Pluto. http://pluto.jhuapl.edu (accessed Sept. 21, 2015).
  2. Thompson, A. ArsTechnica: Scientific Method/Science & Exploration. http://arstechnica.com/science (accessed Sept. 21, 2015).
  3. Scharf, C. http://blogs.scientificamerican.com/life-unbounded/the-fastest-spacecraft-ever (accessed Sept. 22, 2015).
  4. Musser, G. Scientific American. http://www.scientificamerican.com/article/new-horizons-emerges-unscathed-from-pluto-flyby/ (accessed Sept. 23, 2015).

A Watchful Eye over Wildlife: Drone Technology & Conservation

by Caitlin Andrews

When we think of field biologists, most of us imagine scientists trekking through uncharted rainforests or across endless savannas, armed with only a notebook and a pair of binoculars. These intrepid heroes, such as Jane Goodall, have shown us how much there is to be learned when we leave behind the comforts of civilization and immerse ourselves in nature. Yet, the line between nature and civilization is becoming blurred as the human population expands and encroaches on wilderness. At the same time, technology is becoming increasingly integrated with fieldwork, particularly in the area of conservation. Whether fitting an animal with a GPS tracking device or collecting plant samples to be analyzed in the lab, field biologists rely more and more on technology to increase the scope and impact of their studies. And, with countless species facing imminent threats to their survival, biologists must also strive for efficiency in their methods, which can be challenging in remote and often dangerous research conditions.

One of the most promising technologies to emerge as a tool for conservation is a technology that has already begun to gain traction in mainstream society: the unmanned aerial vehicle. To some, UAVs—better known as “drones”—still seem uncomfortably close to something out of a science fiction movie. But drones have already proven useful in fields as wide-ranging as the military, agriculture, and cinematography. And, now that drones have entered the realm of science, it looks like they could soon change the face of conservation.

Why Drones?

If scope and efficiency are two of the main obstacles that field biologists face in their research, then drones could provide the ultimate solution. In their most basic form, field surveys involve a census of an animal population and an assessment of habitat conditions. When performed on foot, these surveys can cost hundreds of thousands of dollars to sustain for even a few years and, even then, they are so time-consuming and inefficient that only a rough estimate of population size or environmental conditions can be achieved. Manned planes or helicopters can provide an aerial view of an expanse of ocean or forest, but they pose a tremendous financial barrier which often outweighs any added benefits.1,2 These research methods can also be incredibly dangerous. Every time they go out into the field, field researchers take on tremendous risks, traversing perilous terrain, getting up close to wild animals, and, in some cases, encountering armed poachers.3 Surveys conducted by plane pose additional hazards. Far too often, well-meaning researchers put themselves—and others—in dangerous situations when they conduct low-altitude flight surveys over mountains, forests, or settled areas. From 1937 to 2000, two thirds of all job-related deaths reported among wildlife biologists working in the United States were attributed to aviation accidents—an astounding and disturbing figure.4

Drones circumvent nearly all of these risks, making them a promising choice for future studies. Compact and easy to operate, drones are relatively inexpensive when compared with manned aerial vehicles or on-foot surveys. ConservationDrones, an organization specifically aimed at developing low-cost drones for field research, has developed a drone for less than $2000.5 Miniature drones costing as little as $400 can be purchased online and later equipped with video and still cameras. Besides cameras, they can also be fitted with many types of sensors, from thermometers to pH meters to acoustic recorders; there is even the possibility that swarms of drones could function as a team, with each drone collecting specific information to be integrated into a larger dataset.6 Drones open up a range of possibilities for the scale on which data can be collected, as animals can be tracked over huge distances that no team of scientists could ever cover on foot. Perhaps most importantly, drones allow researchers to conduct their studies from a greater distance, making research conditions safer for them and for the wildlife they study.3

In the following pages, we will explore three case studies that exemplify the range of possible uses for drones in wildlife biology and conservation. At the same time, it is important to consider the challenges and ethical issues that might arise alongside this new technology as we try to assess what the future of drones and conservation might—and should—look like.

Marine Mammal Conservation Zones – Australia

In Australia, marine biologists have already had success using drones to identify which areas of the ocean would make the best marine mammal conservation zones. In a study conducted by Murdoch University’s Cetacean Research Unit,7 drones were flown over Shark Bay on Australia’s western coast. In an area of approximately 320 acres, drones took over 6,000 photographs at altitudes ranging from 500 to 1,000 feet. Researchers then analyzed each one of these pictures manually, attempting to count the number of dugongs, a marine mammal in the same order as manatees. Over 600 dugongs were reliably identified, but, even more amazingly, the researchers were able to recognize a wide array of other species, from schools of fish to whales to sea snakes. The breadth of species that could be seen, even from so high above, is promising, since only larger animals are typically distinguishable in drone photographs.

This simple census data might seem insignificant, but Murdoch University’s study serves to prove the value of drones for marine biology research. At present, the future of the technology seems limitless. Instead of having to manually identify the animals in each photograph, researchers hope to one day have advanced computer algorithms that are able to distinguish between all species of interest and even identify individual animals.7 There is also the possibility that drones could take to the water themselves. Human divers could be replaced by underwater robots equipped with propellers, sensors, and even sampling tools. This could be particularly exciting for those studying inaccessible deep sea vents, since underwater drones could dive down and return to the surface with samples of microorganisms for further study.6 While drone technology has a long way to go before this type of exploration is possible, the hope is that studies like those at Murdoch University will stimulate further research so that the future may not be as far off as it seems.

Ornithological Research & Drone Design – France

As drone technology begins to be applied to a range of species, many conservationists are concerned that drones may disturb—or even harm—the very animals they are trying to protect. To address these worries, a team of researchers in France conducted an extensive study on the impact of drones on birds.8 Drones are especially promising tools in the field of ornithology, since they could follow birds from the ground and into the air, perhaps even tracking their migratory routes. However, birds are also inherently susceptible to disturbance. Although drones are typically considered less disruptive than human observers, for birds, it could be more intrusive to be followed by a flying machine than to be watched by humans on the ground.

In their study, the French team exposed semi-captive and wild flamingos, mallards, and greenshanks to several drones. These drones varied in color, speed, and the angle from which they approached the birds—all factors that the authors hypothesized might impact the degree to which birds would be disturbed. Surprisingly, in 80% of trial flights, drones could get within 15 feet of the birds without any signs of distress. The birds did not appear to be affected by drone color or speed; however, they were more likely to be disturbed by drones approaching them from above, which, under natural conditions, would be indicative of a predator. While the authors advise launching drones from a distance and avoiding vertical approaches, they suggest that further research should be conducted to compare these results to the levels of disturbance elicited by human observers. It could be that drones, although foreign objects within a bird’s habitat, are actually less disruptive than one might think—which could open doors for a new phase of drone-conducted ornithological research.8

Human Land Use Changes & Orangutan Conservation – Borneo and Sumatra

The Southeast Asian islands of Borneo and Sumatra are notably the only places in the world where we can still find one of our closest relatives: the orangutan. Yet Borneo and Sumatra have more recently become notorious for having some of the highest rates of deforestation on the planet.9 Slash-and-burn deforestation is rampant, with most deforested areas being turned into oil palm farms. While there are regulations in place to protect some areas, these are largely ineffective, as many farmers start up plantations illegally in remote areas of the forest; there, the likelihood of being caught by rangers is slim to none. As the largest arboreal mammal, orangutans rely on large tracts of forest for food and protection. Having lost 80% of their forests over the past 20 years, orangutans may be doomed to extinction within the next three decades if nothing is done to slow the current rate of deforestation. Unfortunately, the likelihood of this seems very low when one considers the almost insurmountable demand for palm oil—largely used in food and cosmetic products—in the West.10

But not all have given up hope. ConservationDrones, founded by Lian Pin Koh and Serge Wich, is just one group aimed at saving orangutans, and they plan to achieve this via drone technology. Koh and Wich took one of their first prototype drones to the forests of Borneo and Sumatra to determine the feasibility of identifying orangutans and tracking human land use changes from above. Their drone—programmed to fly a specific, 25 minute route—was able to spot orangutans and their nests in the forest canopy, as well as elephants on the ground. After analyzing the photographs more closely, researchers could also clearly see which areas had been deforested and turned into farmland; they could even identify the specific crops that were being grown.5

Koh and Wich recognize the limitations of drones, including the fact that drones cannot fly below the forest canopy, which restricts which species they can be used to study. However, the possibility that drones could fly over uncharted areas of forest to document illegal land use changes has inspired them to continue their work, and they are currently working to upgrade their prototype for greater efficiency and a more diverse set of uses. They envision a time when drones could be programmed not just to fly a specific route but to fly directly to animals already fitted with radio collars. When drones find illegal oil palm plantations in the forest, they could also send GPS data back to a ranger station, which could immediately deploy a team to confront the farmers.5 The same concept could be applied to monitor poachers, and the presence of drones near endangered wildlife could hopefully act as a deterrent to illegal activities.3 Even more hopeful, some have considered ways in which drones could actually be used to reforest areas by dispersing seeds.11

Many people fear that drones may present a breach of human privacy, especially if they fly over settled areas and take data on human land use.2 As the technology advances, it will be important to consider how its use should be regulated. However, at the moment, the possible benefits of drone technology make it worthwhile to at least pursue further research. While deforestation rates are unlikely to be reversed, the fact that current trends could be slowed—even slightly—is promising, since it could give conservationists the time they need to come up with more long-lasting solutions.

Toward the Future

With studies like these, it looks as if the advancement of drone technology could help shape the future of conservation for the better. But, at the same time, the very same machines that are being used to help save animals are being employed for less noble uses, such as hunting. Many states, including Illinois and Colorado, are facing dilemmas over whether to ban drones used for the purpose of hunting; fortunately, many of them have chosen to side against the hunters, saying that these hunting methods are inhumane and unethical.12 However, this is only the beginning of the conversation. As drone technology becomes more and more common, it is bound to be applied to fields that come into conflict with one another. The question is how we, as a society, will choose to regulate these uses and what role we want drones to play in shaping our planet’s future.

Caitlin Andrews ’16 is a junior in Cabot House concentrating in Organismic and Evolutionary Biology with a secondary in Mind/Brain/Behavior.

Works Cited

  1. van Gemert, J.C. et al. European Conference on Computer Vision workshop 2014.
  2. Ogden, L.E. et al. BioScience 2013, 63(9): 776.
  3. Roden, M.; Khalli, J. UAVs Emerging as Key Addition to Wildlife Conservation Research. Robotics Tomorrow, Mar. 13, 2015. (accessed Mar. 31, 2015).
  4. Sasse, B. D. Wildlife Soc. Bulletin 2003, 31(4): 1000-1003.
  5. Koh, L. P.; Wich, S.A. Tropical Conservation Sci. 2012, 5(2): 121-132.
  6. Grémillet, D. et al. Open Journal of Ecology 2012, 2(2): 49-57.
  7. Hodgson, A. et al. PLoS ONE 2013, 8(11): e79556.
  8. Vas, E. et al. Biology Letters 2015, 11(2): 20140754.
  9. Sumatran Orangutan Conservation Programme. http://www.sumatranorangutan.org/ (accessed Mar. 31, 2015).
  10. Orangutan Conservancy. http://www.orangutan.com/ (accessed Mar. 31, 2015).
  11. Sutherland, W. J. et al. Trends in Ecology & Evolution 2013, 28(1): 16-22.
  12. Swanson, L. Proposed Bill Aims To Ban Drones Used for Hunting. Montgomery Patch. Mar. 27, 2015. (accessed Mar. 31, 2015).

Single Cell DNA Sequencing

by Jennifer Walsh

Every human grows from a single-celled embryo that contains an entire genome of determinants for what that embryo will become. For each one of us, this single cell became two, then four, and its genome became the genome of every cell in our body. However, over a lifetime of cell divisions and routine functioning, mutations accumulate in individual cells. The cells in our body, like highly infectious viruses, the bacteria in our gut, and the cells of a deadly tumor, carry varying discrepancies between their genomes. Current biology excels at finding a given mutation shared across a host of cells, one that may beget a genetic disorder or lactase persistence, but we still lack the ability to find discrepancies between individual cells within a larger population. However, new technologies with single cell precision have the potential to transform research ranging from microbiology to disease genetics.

The ability to extract an entire genome from a single cell could revolutionize our ability to differentiate genotypes within a population of cells and pinpoint cells that have spontaneous mutations in their genome that separate them from the others around them. Without single cell sequencing, genetic variation among single cells is generally intractable because sequencing techniques require many cells to provide enough input to create a readout. Single cell sequencing has the potential to enhance the study of topics ranging from cancerous tumor development to neuronal differentiation in the brain.1 With this broad set of motivations, scientists in the last decade have undertaken the task of finding a method to accurately and reliably sequence the genomes of single cells as the next step in the sequencing revolution.

How Single Cell Sequencing Works

Sequencing the DNA of a single cell relies on cumulative advances in three techniques that have been dramatically improved over the last couple of decades. Single cell sequencing relies on the ability to (1) isolate a single cell, (2) amplify its genome efficiently and accurately, and (3) sequence the DNA. One of the inherent difficulties, as well as advantages, in sequencing individual cells comes from being able to compare both small and large differences between the genomes of distinct cells; consequently, effective ways of sorting cells are critical to achieving this goal. Current sequencing techniques are not sensitive enough to sequence DNA directly from a cell without artificial amplification, in which many more copies of the DNA sequence are made accessible to be parsed in the sequencing process. This amplification does not need to be perfect, and it often involves multiple rounds of replication of the genome after it has been fragmented randomly. These fragments are then sequenced, and a coherent, linear genome sequence is assembled analytically. Most sequencing methods rely on having these smaller fragments of DNA to analyze, and for most methods, numerous copies of each fragment are necessary. Because the technology is so new, there is not yet a universal “best” method.1 However, single cell DNA sequencing, as it is happening in labs nationwide today, tends to involve each of the three steps outlined above, in some combination, as described in more detail below.2

Challenges Facing Single Cell Sequencing

The first problem is one of discovery: the primary goal of single cell sequencing is to find interesting differences in the DNA of individual cells within the same organism, system, or even tissue. Yet many inventive methods of modern DNA sequencing were developed long before the prospect of single cell sequencing was on the horizon, so they were not designed with such small amounts of starting material in mind.

Depending on the organism being examined, the most straightforward, yet unsustainably time-consuming, way to isolate a cell is often just to pick it out by hand with a micropipette. Another widely used method is single-cell fluorescence-activated cell sorting (FACS), which automates the selection process on the basis of specific cellular markers. By running the cells through a very thin column, barely larger than the cells themselves, the cells can be funneled into a single-file line. Then, by vibrating the system, the stream breaks into individual droplets, each carrying a cell, which are sorted by their characteristic fluorescence response to a laser. FACS represents only one of many imaginative strategies, and countless combinations of these tools offer different advantages and drawbacks for a given research goal. Such alternate strategies include the use of microfluidics,2 and different methods can be optimized for the isolation of different classes of cells.
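
In software terms, the sorting decision reduces to a threshold, or “gate”, on each droplet’s measured fluorescence. The sketch below is purely illustrative, with invented numbers, and is not any instrument’s actual control code:

    # Toy FACS-style gate: keep droplets whose fluorescence clears a threshold
    # set for the marker of interest (all values are invented).
    import random

    random.seed(1)
    THRESHOLD = 800.0   # hypothetical gate, in arbitrary fluorescence units

    # Simulated per-droplet readings: a dim background population plus a
    # brightly labeled subpopulation of interest.
    droplets = [random.gauss(300, 80) for _ in range(900)] + \
               [random.gauss(1200, 150) for _ in range(100)]

    selected = [signal for signal in droplets if signal > THRESHOLD]
    print(f"kept {len(selected)} of {len(droplets)} droplets for single-cell work")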

The second major obstacle that researchers face in attempting to develop single cell sequencing technologies is the inherent limitation of the amount of DNA present in a single cell. The accuracy of current sequencing machines depends on the number of copies of a given DNA fragment, and each cell carries only one copy (or, for diploid cells, two) of the desired genome. Therefore, the first and most important step of single cell sequencing is an amplification of the cell’s genome with minimal technical errors, which would otherwise introduce inaccuracies into the DNA sequence.

This amplification problem has already been tackled for RNA sequencing, where complementary DNA (cDNA) is amplified with sufficient accuracy because a cell typically carries multiple copies of each RNA transcript.3 RNA-sequencing processes have been optimized to require as few as 5-10 copies.2 As described below, PCR is the standard approach to amplification, but more recent advances, like MDA, provide additional benefits.

The polymerase chain reaction (PCR) has been a cornerstone of modern biology research; it exponentially amplifies a DNA segment chosen by the researcher. Cycles of DNA strand separation and base addition can be repeated as many times as needed to achieve the desired level of amplification of the original DNA fragment.
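
Because each cycle can roughly double the number of copies, yield grows exponentially; a back-of-the-envelope calculation (assuming an idealized, constant per-cycle efficiency) shows how quickly a single molecule becomes enough material to sequence:

    # Idealized PCR yield: copies = starting_copies * (1 + efficiency) ** cycles,
    # where efficiency = 1.0 means perfect doubling every cycle.
    def pcr_copies(starting_copies, cycles, efficiency=1.0):
        return starting_copies * (1 + efficiency) ** cycles

    print(f"30 perfect cycles from 1 molecule: {pcr_copies(1, 30):.2e} copies")
    print(f"30 cycles at 80% efficiency:       {pcr_copies(1, 30, 0.8):.2e} copies")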

Presented in a 2005 paper, Multiple Displacement Amplification (MDA) is a foundational amplification method for single-cell DNA sequencing. The MDA process amplifies the genome non-specifically through the elongation of random primers scattered throughout the genome. These primers then create duplicate, overlapping DNA fragments for sequencing, which can be pieced together to recreate the entire genome.4 This method replicates different parts of the genome at a more consistent rate than PCR because newly synthesized strands remain anchored to the DNA template and can displace one another as they are extended.5

The best way to get an accurate sequencing readout would be a process that mixed PCR and MDA to maximize both the number of copies and their fidelity to the original genome. Multiple Annealing and Looping-Based Amplification Cycles (MALBAC) integrates both of these previous methods and is currently the most successful amplification method for single cell sequencing. Developed by Professor Sunney Xie’s lab at Harvard University, MALBAC uses both MDA and PCR to amplify the genome in a way that minimizes the discrepancies in amplification rate between different DNA fragments.6,7 MALBAC begins with five cycles of MDA-like preamplification in which the resulting fragments loop back on themselves, so they cannot be amplified again unless the temperature is increased to denature the DNA template and the looped strands.

MALBAC is currently the most effective method for detecting many genetic abnormalities, from cells carrying an extra chromosome to single DNA base pair changes, because it produces a relatively uniformly amplified genome that allows for specificity in interpreting the sequencing results.2
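
A toy simulation illustrates why protecting early copies from being recopied matters (this is a deliberately simplified model, not the published MALBAC chemistry): when every copy can itself be copied, chance events in the first few cycles are compounded exponentially, whereas a quasi-linear start keeps the copy numbers of different starting fragments far more uniform before a final, even PCR scale-up.

    # Toy model of early amplification noise (illustrative only). Each trial
    # stands in for one genomic fragment; the spread across trials mimics
    # fragment-to-fragment coverage bias.
    import random

    random.seed(0)
    P_COPY = 0.7       # hypothetical chance a template strand is copied per cycle
    PRE_CYCLES = 5     # preamplification cycles
    TRIALS = 10000

    def exponential_start(cycles, p):
        """Every existing copy may be recopied each cycle (PCR-like)."""
        copies = 1
        for _ in range(cycles):
            copies += sum(1 for _ in range(copies) if random.random() < p)
        return copies

    def quasi_linear_start(cycles, p):
        """Only the original template is copied; new copies sit out (loop-protected)."""
        return 1 + sum(1 for _ in range(cycles) if random.random() < p)

    def cv(xs):
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        return var ** 0.5 / mean

    expo = [exponential_start(PRE_CYCLES, P_COPY) for _ in range(TRIALS)]
    lin = [quasi_linear_start(PRE_CYCLES, P_COPY) for _ in range(TRIALS)]

    # A uniform PCR stage afterwards scales every fragment by the same factor,
    # so the relative spread (coefficient of variation) set here persists.
    print(f"CV after 5 exponential cycles:  {cv(expo):.2f}")
    print(f"CV after 5 quasi-linear cycles: {cv(lin):.2f}")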

New Discoveries from Single Cell Genomics

Many discoveries have already been made as the technology for single cell sequencing continues to improve. Genetic variation in cancerous tumors represents a significant application for this technology: it is already known that tumors develop from spontaneous mutations and that tumors themselves are genetically heterogeneous.1 While introducing MALBAC, Zong et al. (2012) used the method to show that the base mutation rate of a cancer cell is ten times larger than the rate for a germline cell—a finding made possible by the reliable amplification rate. Furthermore, from analysis of the number of short, repeating DNA sequences (like a series of inserted G’s or repeated codons), scientists discovered that an early, genetically unstable state in tumor cells drives rapid tumor growth.8

Single cell sequencing can be highly valuable anywhere there is suspected genetic heterogeneity between cells, and other fascinating new opportunities for discovery lie in fields like neuroscience and the study of the gastrointestinal system.

Improvements for Future Discoveries

The most pressing issue in the development of single cell DNA sequencing, perhaps obviously, is ensuring the accuracy of the resulting sequence. Fortunately, letting a cell grow and divide on its own and then sequencing the resulting population of cells provides a reliable check on single cell sequencing accuracy. However, the problem of implementing a method that replicates the genome nearly perfectly and in its entirety still remains. MDA and PCR are biased toward certain parts of the genome and against others, influencing the sequence read out after amplification. MALBAC is a first-rate attempt to suppress extra replication of some genomic regions, but there is always room for improvement. Professor Xie, in an interview with Nature Methods, said, “By no means is MALBAC the end game. We’re trying to do better.”1

One path forward is to improve DNA amplification so that sequencing machines can analyze many copies of the cell’s original DNA without too many mutations introduced during amplification. Research along this path could focus on amplification techniques while using preexisting DNA sequencing technology. A second path is to streamline the entire process by removing the amplification step and determining the cell’s genome from its single copy of DNA. Instead of sequencing the product of extracting and replicating the cell’s DNA, scientists could read the cell’s DNA sequence by taking advantage of its existing cellular DNA machinery and processes.

A sequencing technique that relies on the cell to amplify its own genome has already been developed for short sections of the genome.9 This technique uses DNA bases that have been fluorescently tagged according to the identity of the nitrogenous base (A, T, G, or C). Unlike in other sequencing methods, the tags are cleaved off the bases as they are incorporated, rather than remaining in the DNA to be recognized later by a sequencing machine. By recording the release of these fluorescent molecules, which leaves the synthesized DNA strand intact, the sequence can be determined without significantly disturbing intracellular activity.

The ability to acquire and sequence the entire genome of an individual cell promises potentially transformative insights into biological systems and single-celled organisms, into spontaneous somatic mutations in the human body, and even into which genes we pass on to our offspring.

The prospect of acquiring this entire genome has already led to exciting scientific progress. Antibody genes in B cells are known to be genetically heterogeneous, so sequencing individual B cells could lead to breakthroughs in our understanding of the microscopic workings of the immune system. Efficient sequencing of a single cell also promises improvements in the safety and accuracy of prenatal genetic screening. Though we anticipate a wealth of knowledge becoming available with an ideal technology, our increasingly effective attempts have already produced everything from disproven hypotheses to exciting new insights.

Further reading:

For applications to bacteria: Lasken, Roger S. & Jeffrey S. McLean (2014). Recent advances in genomic sequencing of microbial species from single cells. Nature Reviews: Genetics. 15(9): 577-584.

RNA Sequencing: Wang Z, Gerstein M, Snyder M (2009). RNA-Seq: a revolutionary tool for transcriptomics. Nature Reviews: Genetics, 10(1): 57-63.

Jennifer Walsh ’17 is a sophomore in Lowell House concentrating in Physics.

Works Cited

  1. Chi, K.R. Nature Methods. 2014, 11(1): 13-17.
  2. Macaulay I.C.; Voet T. PLOS Genetics. 2014, 10(1). Retrieved March 28, 2015, from http://journals.plos.org/plosgenetics/.
  3. Brady G., Iscove N.N. Methods Enzymol. 1993, 225:611–623.
  4. Lasken R. et al. Nat. Reviews Genetics. 2014, 15(9): 577-584.
  5. Nawy, T. Nature Methods, 2014, 11(1): 18.
  6. Zong C. et al. Science, 2012, 338: 1622-1626.
  7. Reuell, P. One Cell is All You Need. Harvard Gazette, 2013.
  8. Navin N. et al. Nature, 2011, 472(7341): 90-94.
  9. Coupland P. et al. BioTechniques, 2012, 53(6): 365-372.

Genome Editing: Is CRISPR/Cas the Answer?

by Jackson Allen

For roughly the last 60 years, the focus of molecular biology and genetics has been to better understand the microscopic machinery that regulates the genome of every organism. Such scientists as Matthew Meselson, Rosalind Franklin, James Watson, and Francis Crick helped pave the way for contemporary understanding of molecular genetics. These scientists were responsible for discoveries including the structure of DNA, the molecule that encodes all of the genetic information of a person. Though the field of molecular biology has grown immeasurably since then, the most important discoveries in the field remain those that elucidate the mechanisms used to regulate genetic information in living organisms.

Unsurprisingly, scientists have begun to ask how they could apply this knowledge to conquer disease, correct genetic problems, or even learn more about genetics itself. The list of developments in molecular genetics is seemingly endless: genetic therapies promise to reprogram the body’s defenses to target cancer, analysis of the human genome has given us unprecedented understanding of our own genetics, and new technology can simulate the folding of proteins to assist in the development of new pharmaceuticals. However, no genetic tool today seems to hold more power and promise than CRISPR/Cas genome editing technology.

What is CRISPR/Cas?

First observed in 1987, the CRISPR/Cas system in nature forms the adaptive immune system of many bacteria and archaea (1). These single-celled organisms use short repeats of DNA called Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) to mark viral DNA that has been incorporated into their genome by Cas (CRISPR-associated) proteins. By incorporating the DNA sequences of attacking viruses into their own genome, these bacteria can destroy an attacking virus by cleaving its DNA with a Cas protein (2). The discovery that bacteria can easily and accurately modify their own genomes remained largely unexploited until the 2000s, when scientists showed that adding virus-derived spacer DNA to CRISPR loci makes bacteria resistant to those viruses (3). In 2012, scientists demonstrated that, using CRISPRs and Cas9 proteins, human cells could be genetically modified with a precision not seen with other genome-editing methods (4). Since then, researchers have used the CRISPR/Cas9 system to modify organisms including zebrafish, plants, and mice. Last year, researchers at the Koch Institute at MIT demonstrated that CRISPR/Cas9 could be used to cure mice of a genetic disease that prevents the breakdown of the amino acid tyrosine and eventually causes liver failure. After an injection of CRISPR RNA and Cas9 paired with DNA for an enzyme that breaks down tyrosine, the mice began to produce the enzyme. Within 30 days, the mice were cured of the disease and no longer required daily medication (5).

CRISPR/Cas Uses

The CRISPR/Cas9 system can be used for gene silencing as well as gene modification. Both outcomes have the potential to make important contributions to laboratory research and disease treatment. The CRISPR/Cas9 system relies on two main components: a Cas9 endonuclease that can cut DNA in the nucleus of a cell, and a guide RNA made of CRISPR RNA and trans-activating CRISPR RNA (tracrRNA). In nature, the CRISPR and tracrRNAs are separate molecules. However, researchers discovered that the two sequences could be combined into a single guide RNA, significantly reducing the complexity of the system (6). The CRISPR RNA directs the Cas9 endonuclease to the appropriate DNA cleavage site, while the tracrRNA stabilizes and activates the Cas9 endonuclease. When the protein is activated, it creates a double-stranded break in the target DNA, which leads to activation of cellular repair mechanisms (6). Double-stranded DNA repair often introduces random insertions or deletions in the gene, because neither strand can serve as a template for repair. These mutations often silence the affected gene and prevent binding of the guide RNA used to target it, so the CRISPR/Cas9 system will continue to cut the DNA until such a mutation is introduced. However, if a segment of single-stranded DNA that is complementary to either strand of the cleaved DNA is introduced, the cell will repair the cleavage using the single-stranded DNA as a template. Scientists have demonstrated that certain genetic mutations can be corrected by supplying this single-stranded DNA template to the cell’s own repair mechanisms (6).
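
To make the targeting step concrete, the sketch below scans a DNA sequence for a site matching a 20-nucleotide guide. It adds one detail not discussed above: the commonly used S. pyogenes Cas9 also requires an "NGG" protospacer-adjacent motif (PAM) immediately downstream of the matching site. The sequences and function names are invented for illustration, not taken from any of the cited studies.

```python
def reverse_complement(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def find_cas9_sites(dna, protospacer):
    """Return (position, strand) pairs where a 20-nt protospacer sits
    immediately 5' of an NGG PAM -- the geometry SpCas9 requires."""
    assert len(protospacer) == 20
    hits = []
    for strand, seq in (("+", dna), ("-", reverse_complement(dna))):
        for i in range(len(seq) - 23 + 1):
            target, pam = seq[i:i + 20], seq[i + 20:i + 23]
            if target == protospacer and pam[1:] == "GG":
                # Cas9 cleaves about 3 bp upstream of the PAM within the match.
                hits.append((i, strand))
    return hits

# Hypothetical 34-bp locus and a matching 20-nt guide sequence.
dna = "TTACGGATCCGAGTACCTGAACGGTTGGCATCGA"
guide = "GATCCGAGTACCTGAACGGT"
print(find_cas9_sites(dna, guide))   # -> [(5, '+')]
```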

In addition, the use of Cas9 nickase, a specialized version of the Cas9 endonuclease, has been shown to only cleave one strand of a cell’s DNA, reducing the frequency of off-target modifications while still allowing for DNA repair from a single-stranded DNA template (7). This specificity makes the CRISPR/Cas9 system more accurate and less likely to edit DNA at an undesired location. In fact, two studies in 2013 demonstrated that off-target modifications were reduced by 50 to 1500 times when Cas9 nickase was used (8).

However, treatments using CRISPR/Cas9 may not be limited to genetic diseases. A study at the Whitehead Institute used CRISPR/Cas9 to systematically analyze how silencing each of over 7,000 different genes affects resistance to chemotherapy in cancer cells (9). Finding genes that are essential to the survival of tumor cells could potentially lead to a treatment using CRISPR/Cas9 alone or in combination with other therapies. Targeted delivery of CRISPR/Cas9 therapy is also a possibility, although it is more difficult than the intravenous injections used in previous animal studies. Other scientists working in developmental medicine have used CRISPRs to screen mouse embryonic stem cells for genes that could provide resistance to bacterial toxins (10). Because CRISPR silences genes more effectively than RNA interference, screening studies that use it often report gene candidates that would have gone unnoticed in other types of screens (9).

Harvard Medical School Professor of Genetics George Church is one of the leaders in a field of scientists working to expand our knowledge about the CRISPR/Cas9 system. Church’s start-up, Editas Medicine, hopes to develop real-world treatments for genetic diseases using the most recent developments in CRISPR genomic editing. Church points out that the advantages of the CRISPR/Cas9 system over other types of gene editing are crucial for the practicality of treatments based on this science (11). The CRISPR/Cas9 system avoids the problems encountered in other methods of genome editing. For example, viral-vector delivered gene therapy can leave a dysfunctional gene in place even when inserting a healthy gene (11). By contrast, the CRISPR/Cas method is focused on the correction of genes already in an organism’s genome. This approach has the added benefit of leaving the gene in its correct chromosomal location, meaning the cell retains the ability to regulate the gene (11). “Editas is poised to bring genome editing to fruition as a new therapeutic modality, essentially debugging errors in the human software that cause disease,” said Editas director, Alex Borisy, in a recent interview with the McGovern Institute for Brain Research at MIT (11).

To be sure, the successes of CRISPR/Cas9 have not gone without scrutiny in the scientific community. Though most researchers working in the field would support the use of the CRISPR/Cas9 system for curing genetic diseases like cystic fibrosis or sickle-cell anemia, far fewer support the use of this technology for other genetic modifications like cosmetic changes. Even among those who back the use of CRISPR/Cas9 for targeting disease, the possibility of editing the human germline, cells with the potential to pass on their genetic information, has sparked a debate about the role humans should play in our own evolution. On March 19, 2015, a number of scientists, policymakers, and experts in law and ethics published an editorial in Science, calling for a moratorium on germline genome modification until the ethics of such modifications can be debated openly from a multi-disciplinary perspective (12). This group, which included one of the co-discoverers of CRISPRs, cited a need for greater transparency in discussions of the CRISPR/Cas9 system. The editorial, resulting from a January 2015 Innovative Genomics Initiative conference on genome editing, also sought more standardized benchmarking and evaluation of off-target modifications made by the CRISPR/Cas9 system, the effects of which remain largely unknown (12). However, these scientists were careful to point out the tremendous potential of CRISPR/Cas9 for curing genetic disease, which might tip the balance between risk and reward in favor of responsible genome editing (12).

The Future of CRISPR/Cas

The accuracy, efficiency, and cost of CRISPR/Cas9 make it an attractive alternative to other methods of genome modification. Though they have decades of research behind them, tools such as zinc-finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs) remain costly and complex to use. CRISPR recognition of target DNA, by contrast, depends not on a large, complex protein that must be synthesized for each new target, but on simple Watson-Crick base pairing between the target DNA and an RNA guide (9). Additionally, CRISPR/Cas9 can modify several genes at once (9). For these reasons, CRISPR/Cas9 may find its greatest niche in improving ongoing laboratory research, including the generation of genetically modified animals and cell lines. Other researchers prefer to work outside the realm of genetic disease, finding applications for CRISPR/Cas9 in agriculture, ecology, and even the preservation of endangered species. Though the technology behind CRISPR/Cas9 is still young, scientists and start-ups alike have begun to pioneer applications—from pathogen-, heat-, and drought-resistant crops to the recreation of the wooly mammoth and other extinct species (9, 13).

The coming years promise to be some of the most exciting for molecular biology. Although CRISPR/Cas9 may just be an application of the knowledge acquired by scientists in recent decades, its promises are truly groundbreaking. With potential uses from genetic diseases to the revival of extinct species, CRISPR/Cas9 could very well usher in a new age of applied molecular genetics. However, this paradigm shift in such a pioneering field of science will not be without its ethical questions and debates. The greatest struggle for science in the future may not be the capabilities of developing technology, but the restraint and responsibility necessary to use such powerful tools. For the first time, quick and efficient editing of an organism’s genome is a realistic possibility, giving humanity the power to control its own evolution at the genetic level—not to mention the ability to change the genetics of the animals and plants that inhabit our world. For the scientists developing this technology, the policymakers calling for open discussions of its ethical issues, and the millions of people who could benefit from it, the coming years will undoubtedly hold tremendous advances.

Jackson Allen ’18 is a freshman, planning to concentrate in Molecular and Cellular Biology.

Sources

  1. Ishino Y, Shinagawa H, Makino K, Amemura M and A Nakata. Nucleotide sequence of the iap gene, responsible for alkaline phosphatase isozyme conversion in Escherichia coli, and identification of the gene product. J. Bacteriology 1987, 169(12): 5429–5433.
  2. Horvath P, Barrangou R. CRISPR/Cas, the Immune System of Bacteria and Archaea. Science 2010, 327(5962): 167–170.
  3. Barrangou R, Fremaux C, Deveau H, Richards M, Boyaval P, Moineau S, et al. CRISPR provides acquired resistance against viruses in prokaryotes. Science 2007, 315: 1709–1712.
  4. Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA and E Charpentier. A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science 2012, 337(6096): 816–821.
  5. Yin H, Xue W, Chen S, Bogorad RL, Benedetti E, Grompe M, et al. Genome editing with Cas9 in adult mice corrects a disease mutation and phenotype. Nat. Biotech. 2014, 32: 551–553.
  6. Jinek M, East A, Cheng A, Lin S, Ma E, and J Doudna. RNA-programmed genome editing in human cells. eLife 2013, 2: e00471.
  7. Shen B, Zhang W, Zhang J, Zhou J, Wang J, Chen L, et al. Efficient genome modification by CRISPR-Cas9 nickase with minimal off-target effects. Nat. Meth. 2014, 11: 399–402.
  8. Ran FA, Hsu PD, Lin C-Y, Gootenberg JS, Konermann S, Trevino AE, et al. Double nicking by RNA-guided CRISPR Cas9 for enhanced genome editing specificity. Cell 2013, 154: 1380–1389.
  9. M. Baker. Gene editing at CRISPR speed. Nat. Biotechnol. 2014, 32: 309–312
  10. Koike-Yusa H, Li Y, Tan EP, Velasco-Herrera Mdel C, Yusa K. Genome-wide recessive genetic screening in mammalian cells with a lentiviral CRISPR-guide RNA library. Nat Biotech. 2014, 32: 267–73.
  11. Wang, Brian. 28 Nov. 2013. “George Church Has New 43 Million Dollar Startup Editas Medicine to Commercialize Precise CRISPR/Cas Gene Therapy.” Next Big Future.
  12. Baltimore, D., P. Berg, M. Botchan, D. Carroll, R. Charo, G. Church, J. Corn, G. Daley, J. Doudna, M. Fenner, H. Greely, M. Jinek, G. Martin, E. Penhoet, J. Puck, S. Sternberg, J. Weissman, and K. Yamamoto. A Prudent Path Forward for Genomic Engineering and Germline Gene Modification. Science 2015, 348(6230): 36-38.
  13. Temple, J. 8 Dec. 2014. “The Time Traveler: George Church Plans to Bring Back a Creature That Went Extinct 4,000 Years Ago.” Recode.

Astromycology: The “Fungal” Frontier

by Tristan Wang

Hollywood movies and horror novels have painted extraterrestrial life as green monsters, scouring the barren grounds of Mars and shooting any intruder with photon lasers. These disturbing visions, while far-fetched, do hold some truth about frightening outer-space life forms, though not in the ways we imagine. During its orbit as the first modular space station, the satellite Mir experienced attacks from the least suspected of extraterrestrial life forms: mold. Splotches of fungal hyphae covered windows and control panels and gradually ate away at the hull’s interior during the latter part of the station’s life, and with it, any notion of a “sterile spaceship”.1

The discipline of astrobiology attempts to answer the larger mysteries about life: its origin, the necessities for its survival, and its presence in other worlds. But astrobiology also has practical applications in considering how biological organisms may travel through space. In particular, human space travel would greatly benefit from a branch of fungal biology known as astromycology: the study of Earth-derived fungi in space. Fungi offer both an opportunity and a threat to human space travel; their potential roles are wide-ranging and relevant, from providing food and decomposing biological material to breaking down spacecraft. The interaction of intense radiation and the lack of gravity with fungal growth underlies both the opportunities and the threats that fungi pose to human space travel.

Ecology

Environments aboard orbiting spacecraft differ from terrestrial environments on Earth because of the lack of gravity and the higher radiation levels in outer space. Even given these obstacles, fungi seem to have found a way to inhabit space environments. The Mir spacecraft was reported to host several genera of fungi, including species of Aspergillus, Penicillium and Cladosporium, all of which are common molds of the phylum Ascomycota.1 What makes these genera special is their adaptability to a variety of environments. They are saprophytes (organisms that live on decaying matter) and have been shown to be resilient across a relatively wide range of temperatures and humidities.2,3 Food and environment are therefore not especially limiting to these opportunistic fungi, organisms able to spread quickly into uncolonized environments.

Fungi and plants both display the phenomenon of gravitropism, growth in response to gravity. Being able to grow in a particular direction is important for fungal development, particularly for reproduction and spore discharge. In ascomycetous fungi, for example, sexual spores are discharged into the air through tubular vesicles called asci.4 If the fruiting bodies of molds released spores toward the base of the aerial hyphae, reproduction would not be optimized. Studies have also shown that fungi respond most sensitively to gravity just behind the apex of growing hyphae, the hair-like filaments of fungi, although there seems to be a significant lag before a notable reaction.5 It has been proposed that this bending of hyphae is due to a particular chemical growth factor that originates from the apical portion of the stem.6

Whereas gravity affects the morphological shape of fungi, radiation is subtler in its physiological impact. Fungi seem to have peculiar adaptations for coping with radiation stress. Scientists have closely studied how radiation from the 1986 nuclear meltdown at Chernobyl affected the environment and its ecology, and darkened fungal organisms were retrieved from the reactor’s walls.7 Specifically, these fungi were melanized, meaning they were darkened by natural pigments.

Typically, fungi are fairly resistant to doses of ionizing radiation, and it appears that melanization reinforces this defense.8 Melanized fungi occur naturally around the world, from the Arctic to the Antarctic, but when researchers kept Penicillium expansum in space for seven months, they observed an increased presence of melanin layers.8 In fact, melanized Cryptococcus neoformans cells exposed to radiation at 500 times the background rate grew faster than non-melanized irradiated cells and non-irradiated counterparts.8 It is possible that melanin not only protects important fungal DNA from damage by free radicals but also provides some mechanism of energy utilization, similar to how plants capture light energy.8 Indeed, some studies have shown an increased rate of metabolism in melanized fungi when irradiated. The basis for fungal resistance to ionizing radiation may be genetic: genes related to the cell cycle and DNA processing appear to be upregulated upon irradiation and may help the organism adapt to its environment.8

Implications for Human Space Travel

Perhaps one of the better-known uses of fungi in space comes from edible mushrooms. Foodies and nutritionists alike praise the nutritional value and flavor of fungal-based foods. Edible mushrooms are typically rich in protein, complex carbohydrates and certain vitamins, such as vitamin D (in the form of ergosterols) and some B vitamins, while being low in fat and simple carbohydrates.9 The Food and Drug Administration (FDA) even ranks mushrooms as “healthy foods,” which raises the question of cultivating mushrooms as a sustainable food source in space.9 The popular edible oyster mushroom, Pleurotus ostreatus, has proven especially versatile in cultivation. While many edible mushrooms can feed on a variety of lignocellulosic substrates, P. ostreatus requires a shorter time to grow and produces a high ratio of fruiting body to substrate as it develops.10 All of these factors make the oyster mushroom one of the most widely cultivated edible mushrooms, and a good candidate for use in space.10

The greatest strength of fungi, breaking down materials, also happens to be their greatest drawback. The common fungi that find homes in household refrigerators are also adept at forming corrosive secondary compounds. Genera common to both spacecraft and Earth, such as Geotrichum, Aspergillus, and Penicillium, produce destructive compounds capable of hydrolysis, such as acetic acid.11 These chemicals, coupled with the penetration of hyphae (the mycelial root-like hairs of fungi), allow fungi to damage materials such as wood, stone and walls well below their surface layers.1,11 Not surprisingly, given the variety of substrates fungi can colonize, spacecraft are not immune to infestation.

Conclusion

At home, molds are considered pests, growing in our bathrooms, in the wet corners of houses and in old refrigerators. In nature, mushrooms play an integral part in nutrient cycling, breaking down lignin and other plant material. Out in space, however, we are only beginning to learn about the fungal presence. Fungal interactions with gravity and radiation seem to come straight out of a science-fiction novel, but the implications of fungi as nutritious yet destructive organisms are real. People have looked for extraterrestrial life for generations, and it seems that only now are we noticing the most interesting, important and fuzzy aliens so far. Perhaps one day scientists will find ways to use fungi to produce fresh food in space and to degrade biological waste, or perhaps fungi will be used to absorb radiation. Until then, our eyes are peeled.

Tristan Wang ’16 is a junior in Kirkland House concentrating in Organismic and Evolutionary Biology.

Works Cited

  1. Cook, Gareth. “Laura Lee News – Orbiting Spacecraft Turns out to Be Food for Aggressive Mold.” Laura Lee News – Orbiting Spacecraft Turns out to Be Food for Aggressive Mold. Conversation for Exploration, 1 Oct. 2000. Web.
  2. Pitt, John I. “Biology and Ecology of Toxigenic Penicillium Species.” Mycotoxins and Food Safety 504 (2002): 29-41.
  3. Wilson, David M., Wellington Mubatanhema, and Zelijko Jurjevic. “Biology and Ecology of Mycotoxigenic Aspergillus Species as Related to Economic and Health Concerns.” Mycotoxins and Food Safety 504 (2002): 3-17.
  4. Trail, Frances. “Fungal Cannons: Explosive Spore Discharge in the Ascomycota.” FEMS Microbiology Letters 276.1 (2007): 12-18.
  5. Moore, David, and Alvidas Stočkus. “Comparing Plant and Fungal Gravitropism Using Imitational Models Based on Reiterative Computation.” Advances in Space Research 21.8-9 (1998): 1179-182.
  6. Kher, Kavita, John P. Greening, Jason P. Hatton, Lilyann Novak Frazer, and David Moore. “Kinetics and Mechanics of Stem Gravitropism in Coprinus Cinereus.” Mycological Research 96.10 (1992): 817-24.
  7. Melville, Kate. “Chernobyl Fungus Feeds On Radiation.” Chernobyl Fungus Feeds On Radiation. Sci Gogo, 23 May 2007. Web.
  8. Dadachova, Ekaterina, and Arturo Casadevall. “Ionizing Radiation: How Fungi Cope, Adapt, and Exploit with the Help of Melanin.” Current Opinion in Microbiology 11.6 (2008): 525-31.
  9. Stamets, Paul. Mycelium Running: How Mushrooms Can Help save the World. Berkeley, CA: Ten Speed, 2005.
  10. Sánchez, Carmen. “Cultivation of Pleurotus Ostreatus and Other Edible Mushrooms.” Applied Microbiology and Biotechnology 85.5 (2010): 1321-337.
  11. Schaechter, Moselio. “Biodeterioration – Including Cultural Heritage.” Encyclopedia of Microbiology. Amsterdam: Elsevier/Academic, 2009. 191-205.

A Winning Combination Against Drug Resistance


by Ryan Chow

Earlier this year, President Obama announced the Precision Medicine Initiative. Proclaiming that the initiative would “lay the foundation for a new generation of lifesaving discoveries,” the President proposed setting aside $215 million to expedite the clinical translation of personalized genetics research.1 The initiative specifically highlights the development of patient-specific cancer therapies as an immediately promising area for breakthrough research. Accordingly, the National Cancer Institute (NCI) is budgeted to receive $70 million for this specific purpose.

In recent years, several next-generation cancer drugs have been approved for patients harboring certain genetic abnormalities and alterations. For instance, lung cancer patients with activating mutations in EGFR, a gene that regulates cell division, can now undergo erlotinib treatment. Similarly, patients with alterations in ALK, a gene that controls cancer progression, can receive crizotinib and ceritinib therapy.2-4 Such targeted therapeutics are designed to specifically counteract the cancer-promoting effects of these genetic mutations, largely leaving other cellular pathways intact. With their heightened specificity and efficacy profiles, these next-generation drugs have revolutionized the world of cancer therapy, improving patient prognosis and quality of life. It is no wonder, then, that there has been a strong push to fund further research into personalized cancer therapeutics.

But for all of their strengths, these new drugs often have a major caveat: cancers develop resistance to targeted therapeutics within one to two years of initial administration.5 The mechanisms underlying drug resistance have been extensively studied in vitro using established cancer cell lines.6 Although in vitro methods may not always faithfully recapitulate the progression of actual human cancers, these studies have been  critical in identifying secondary “bypass tracks” as facilitators of drug resistance; namely, a cancer may gradually evolve alternate strategies for promoting tumor growth upon pharmacological inhibition of the cancer-initiating mutation. Following this logic, one would hypothesize that combination cancer therapy could potentially overcome drug resistance by blocking both the primary oncogenic pathway as well as the secondary bypass track.

Looking for a way to efficiently identify combination therapies for drug-resistant lung cancers, researchers at Massachusetts General Hospital recently developed a screening platform to interrogate possible secondary bypass tracks within patient tumors that had already become resistant to a targeted therapeutic.7 The results of Crystal et al.’s study were striking: for many of the patient-derived cell lines, the high-throughput drug screen was able to clearly pinpoint the bypass track that the tumors had acquired, thereby uncovering a viable route for further pharmacological intervention. For instance, it was found that co-administration of MET inhibitors could resensitize tumor cells to EGFR inhibitors; importantly, statistical analysis demonstrated that the two inhibitors had synergistic effects that far exceeded the predicted efficacy if each drug was functioning independently.
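
The synergy claim rests on comparing the observed effect of a drug pair with the effect expected if each drug acted independently. Crystal et al.'s exact statistical procedure is not reproduced here; the sketch below uses the standard Bliss independence model as one common way to compute that expectation, with made-up inhibition values standing in for an EGFR inhibitor and a MET inhibitor.

```python
def bliss_expected(effect_a, effect_b):
    """Expected fractional inhibition if two drugs act independently
    (Bliss independence): E = A + B - A*B, with effects given as fractions 0-1."""
    return effect_a + effect_b - effect_a * effect_b

def bliss_excess(effect_a, effect_b, effect_combo):
    """Positive values indicate synergy beyond independent action."""
    return effect_combo - bliss_expected(effect_a, effect_b)

# Hypothetical numbers: each inhibitor alone vs. the combination.
egfr_alone, met_alone, combination = 0.20, 0.15, 0.75
print(f"expected if independent : {bliss_expected(egfr_alone, met_alone):.2f}")
print(f"excess over independence: {bliss_excess(egfr_alone, met_alone, combination):.2f}")
```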

With this study, the authors were able to identify a diverse range of potent, patient-specific combination therapies that successfully overcame drug resistance. Several intriguing patterns emerged from the aggregate data, revealing commonalities in the mechanisms of acquired resistance. However, one must keep in mind that these patterns, while biologically interesting, are not the key findings of the paper. Rather, the study acts as a proof-of-concept that patient-specific therapeutic strategies can be systematically discovered following the failure of first-line targeted therapies. In that sense, the work of Crystal et al. actually deemphasizes the importance of identifying general trends within the landscape of drug resistance; if we can identify combination therapies on a patient-by-patient level, there is no longer a need to make therapeutic decisions based on population-level correlations.

Paradoxically, their work also serves as a cautionary tale for personalized cancer genomics. Some of the most effective combination therapies that the authors identified through their drug screen could not have been predicted by traditional genetic analysis – that is, the combination therapies targeted pathways that did not possess any genetic alterations. The natural implication is that clinicians must not make treatment decisions based solely on patient genotypes, as the probability of missing important bypass tracks is far from negligible. One would be better off instead performing an entirely unbiased secondary drug screen on a patient-specific basis, in a manner akin to Crystal et al.

In this highly translational work, Crystal et al. have helped set the foundations for the future of personalized cancer medicine. Though their findings have not yet been tested in actual patients, the potential impact to human medicine is clear. Only time will tell if these patient-derived cell models can faithfully capture the complexities of drug resistance and thereby yield clinically effective therapeutic approaches.

Ryan Chow is a junior in Pforzheimer House, studying Human Developmental and Regenerative Biology.

Works Cited

  1. The White House: Office of the Press Secretary. “FACT Sheet: President Obama’s Precision Medicine Initiative.” The White House Briefing Room [Online], January 31, 2015. https://www.whitehouse.gov/the-press-office/2015/01/30/fact-sheet-president-obama-s-precision-medicine-initiative (accessed March 19, 2015).
  2. Tsao, M.S. et al. New England Journal of Medicine 2005, 353(2), 133–144.
  3. Shaw, A.T. et al. New England Journal of Medicine 2014, 370(13), 1189-1197.
  4. Shaw, A.T. et al. New England Journal of Medicine 2014, 371(21), 1963-1971.
  5. Chong, C.R.; Jänne, P.A. Nature Medicine 2013, 19, 1389-1400.
  6. Niederst, M.J.; Engelman, J.A. Science Signaling 2013, 6(294), re6.
  7. Crystal, A.S. et al. Science 2014, 346(6216), 1480-1486.


Mapping the Nervous System: Tracing Neural Circuits with Color Changing Proteins

by Christine Zhang

Stepping out the front door of my dorm, I am frequently greeted by a sharp gust of wind that convinces me to turn back and grab a coat. The reaction is almost instantaneous. But in that split second, the action of turning around requires some 100 billion action potentials, with signals transmitted across 20 quadrillion synapses.1 Given the complexity of the nervous system, it can be difficult to pinpoint the origin of neural circuits. Yet these details are critical to understanding and treating neurodegenerative diseases. There has been substantial research into methods for identifying active neural circuits, and at present, approaches that work at the synapse look the most promising.

There are billions of neurons in the human body, arranged end to end and separated by microscopic gaps known as synapses. As an action potential arrives at the end of a neuron, calcium channels in that neuron open and calcium ions rush in, triggering the release of neurotransmitters that carry the signal across the synapse.2 The steep difference in calcium concentration between the outside and inside of the neuron drives this influx, and the influx in turn triggers neurotransmitter release; intracellular calcium levels and the intensity of neuronal activity are therefore strongly correlated.
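
The pull on calcium can be made quantitative with the Nernst equation, which gives the membrane voltage at which the chemical gradient would be exactly balanced. With typical extracellular (~2 mM) and resting intracellular (~100 nM) concentrations, values assumed here rather than taken from the article, that equilibrium potential is strongly positive, so calcium floods inward the moment channels open.

```python
import math

def nernst_potential_mV(conc_out, conc_in, z, temp_c=37.0):
    """Equilibrium potential (mV) for an ion across a membrane: E = (RT/zF) ln(out/in)."""
    R, F = 8.314, 96485.0                 # gas constant J/(mol*K), Faraday constant C/mol
    T = temp_c + 273.15
    return 1000 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Typical calcium concentrations: ~2 mM outside, ~100 nM inside a resting neuron.
print(f"E_Ca ~ {nernst_potential_mV(2e-3, 100e-9, z=2):.0f} mV")   # roughly +130 mV
```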

At present, there are two widely accepted methods of detecting neural activity: genetically encoded calcium indicators (GECIs) and immediate early genes (IEGs). Both GECIs and IEGs monitor neuron functioning at the molecular level to give estimates of neural activity. However, both approaches are incomplete in design, hindered by complex set-ups and limited efficacy. GECIs track calcium concentrations directly but require sophisticated machinery and physical restraint of the subject, and they provide only a limited field of view.3 These requirements make GECIs feasible in few situations. In contrast, IEGs can be monitored over a larger time window and in freely moving animals, but they cannot monitor neural activity directly. Rather than tracking calcium levels, IEGs report the expression of intermediate genes, which is at best weakly correlated with neural electrical activity.3

In light of these difficulties, Benjamin Fosque, Yi Sun, and Hod Dana, researchers in the Department of Biochemistry and Molecular Biology at the University of Chicago, developed a new way to monitor neural activity. Their idea features a fluorescent protein that changes color from green to red under violet light. Fosque, Sun, and Dana engineered a fluorescent protein, called CaMPARI, that undergoes this color change preferentially in the presence of calcium. CaMPARI, or calcium-modulated photoactivatable ratiometric integrator, changes color 21 times faster in the presence of calcium than in its absence.3 The rate of fluorescent conversion, coupled with the intensity of CaMPARI’s fluorescence, conveys unprecedented levels of information for identifying and examining cell types, and it has groundbreaking potential.
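
A toy simulation conveys why that 21-fold rate difference makes CaMPARI an activity integrator: during a violet-light "snapshot" window, green protein converts to red quickly wherever calcium is high and slowly elsewhere, so the final converted fraction reports how active each cell was while the light was on. Only the 21-fold ratio comes from the paper; the rate constants, time step, and activity pattern below are invented for illustration.

```python
def photoconvert(active_windows, duration=10.0, dt=0.01,
                 k_free=0.02, ca_ratio=21.0):
    """Toy CaMPARI model: green -> red conversion under violet light,
    ~21x faster while calcium is elevated (i.e., the cell is firing)."""
    k_ca = ca_ratio * k_free
    green, red, t = 1.0, 0.0, 0.0
    while t < duration:
        active = any(start <= t < end for start, end in active_windows)
        k = k_ca if active else k_free
        converted = k * green * dt        # simple Euler step of the conversion
        green -= converted
        red += converted
        t += dt
    return red / (red + green)            # fraction photoconverted

# Hypothetical cells: one silent, one firing for 2 s of a 10 s light pulse.
print("silent cell:", round(photoconvert([]), 2))
print("active cell:", round(photoconvert([(3.0, 5.0)]), 2))
```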

When tested in vivo in Drosophila melanogaster and larval zebrafish to track whole-brain activity and neural pathways, CaMPARI proved highly successful.3 It combines the advantages of traditional neural activity tests without their drawbacks: it targets calcium directly, as GECIs do, while retaining the flexible time window and freedom of movement offered by IEGs. As an additional benefit, CaMPARI also enables follow-up experiments including electrical recordings, antigen detection, and genetic profiling of cells.

With enhanced tracking of neural activity, neurologists can study neurons more rigorously at the molecular level and observe the behavior of individual cells. Given its in vivo applications, CaMPARI can also contribute to developing personalized treatments for patients with neurodegenerative diseases and to understanding the exact mechanisms by which neural diseases affect the body. With a reliable and easily employed neural monitoring tool in hand, the potential scientific and medical gains are enormous.

Christine Zhang ‘18 is a freshman in Thayer Hall.

Works Cited

  1. Bryant, A. What is the synaptic firing rate of the human brain? Stanford Neuroblog, Aug. 27, 2013.
  2. Sudhof, T.; Malenka, R. Neuron 2008, 60(3), 469-476.
  3. Fosque, B. et al. Science 2015, 347(6223), 755-760.


Neurosurgeon or the Next Monet?

by Alexandra Rojek

The mysteries of the brain and its functioning are an object of fascination for researchers and ordinary people alike, but they are also the source of much difficulty in cases of brain cancer. To make a surgeon’s already difficult job of removing a cancerous tumor even harder, tumors found in the brain often do not have defined boundaries. In other types of tumor removal, the surgeon might excise an extra margin of tissue to ensure the tumor is completely and successfully removed; for brain tumors, taking such a margin is far riskier, since removing any healthy tissue could mean fundamental changes to an individual’s personality or identity, or severe deficits in physical functioning.

A neurosurgeon’s task of tumor removal may soon become much more clearly defined with the development of Tumor Paint, a dye derived from deathstalker scorpion venom that binds only to cancerous cells, allowing a neurosurgeon to see the boundaries of a tumor directly rather than guess at where healthy tissue ends and cancerous tissue begins.

The Tumor Paint dye is a fusion of a fluorescent dye, Cy5, with chlorotoxin, a non-toxic component of deathstalker scorpion venom; the tumor-binding chlorotoxin delivers the dye, which glows when exposed to near-infrared light. The conjugate can bind to malignant gliomas, medulloblastomas, prostate cancer, intestinal cancer, and sarcoma in mouse models. Remarkably, it can detect small numbers of metastatic cells in the lymph channels, raising the possibility of detecting metastases before they localize to secondary sites and become established tumors.1

Beyond being able to ‘paint’ tumor boundaries and aid the difficult and sensitive tasks of surgeons, the promise of chlorotoxin also expands to the delivery of nanoparticles as drug delivery vehicles specifically to cancerous tissue. Chemotherapy is based on the principle that since cancer cells grow and divide faster than healthy cells, they will be more affected by the negative effects of such drugs, but healthy cells are still affected regardless – resulting in the undesirable side-effects of traditional chemotherapy. Being able to deliver drugs, such as through nanoparticles, directly to cancer cells would allow drug discovery to pursue more potent compounds since side-effects would be minimized.

Chlorotoxin has the ability to target cancer cells on its own, and has even been developed as an imaging agent for use in MRI scans to identify tumors, when conjugated with superparamagnetic nanoparticles.2 In further work, it has also been shown that this nanoparticle-conjugated chlorotoxin can inhibit the invasive capacity of tumor cells, suggesting potential therapeutic benefit. The chlorotoxin conjugate was also observed to bind more strongly to cells that exhibited markers correlated with more aggressive and invasive stages of tumors, showing even more promise for its therapeutic applicability.3

What started as deadly scorpion venom has inspired the development of an injectable dye that identifies tumors and may allow the neurosurgeon to become a highly precise painter and hunter of cancerous tissue, holding the promise of transforming some patients’ prognoses – and the boundaries of what it might inspire beyond that are just beginning to be visible on the horizon.

Alexandra Rojek ‘15 is a senior in Currier House, concentrating in Chemical and Physical Biology.

Works Cited

  1. Veiseh, M. et al. Cancer Res. 2007. 67, 6882-6888.
  2. Veiseh, O. et al. Nano Lett. 2005. 5,1003-1008.
  3. Veiseh, O. et al. Small. 2009. 5, 256-264.

Precision Medicine: Revamping the “One-Size-Fits-All” Approach to Healthcare

by Eleni Apostolatos

“There’s no gene for fate,” declares Vincent, the main character of the 1997 science fiction thriller, GATTACA, after he decides to disguise his imperfect DNA and assume the genetic identity of Jerome Morrow—a man whose flawless genetic makeup makes him apt for space travel. While Vincent’s argument that genes do not necessarily determine fate is valid, we are entering an age in which Vincent’s statement can be interpreted in a new light: in healthcare, the study of genes is reshaping patients’ fates.

The correct identification of genes linked to diseases and the prescription of tailored prevention mechanisms and treatments to meet specific needs—a practice known as precision medicine—can alter patients’ futures. Physicians involved with precision medicine aim to redefine the use of personal genomics, the study of individuals’ DNA, in healthcare by detecting genetically rooted illnesses and predicting personalized responses to potential treatments. In recent years, precision medicine has gained increasing public attention and support. The emerging question is: how will the surge of genomics impact the medical field and the health arena, as our society’s emphasis on genomics draws closer to that of GATTACA’s genetically-oriented society?

Precision Medicine from the Start  

“It’s far more important to know what person the disease has than what disease the person has.” – Hippocrates

The notion of personalized medicine can be traced back several centuries to the time of Hippocrates. Some 2,500 years ago, Hippocrates wrote about the importance of presenting “different [drugs] to different patients, for the sweet ones do not benefit everyone, nor do the astringent ones, nor are all the patients able to drink the same things”.1 He arrived at the remarkable notion that every human is biologically unique. However, this notion was not significantly put into practice until much later, in the 20th and 21st centuries.

Traditional standardized care relies on a “one-size-fits-all” approach that has proved ineffective in the treatment of many diseases, including diabetes and cystic fibrosis. In standardized care, doctors are less prepared to anticipate their patients’ susceptibility to certain diseases and their responses to prescribed treatment. Doses are prescribed on the basis of statistically gathered data and later adapted to each patient’s specific response.2 As Dawn McMullan wrote in her article, this form of care “doesn’t have all that much to do with you specifically”;3 the diagnosis is a general one that is usually met with a pre-constructed and impersonal treatment. And for almost half a century, even though “scientists and clinicians suspected that a person’s genes could play a vital role in the response to medicines, genetic technology was not advanced enough to reveal which genes and which variations of those genes were relevant,” as noted in the study “Personalized Medicine versus era of ‘Trial and Error.’”2

In the 20th century, developments in science and technology enabled more focused healthcare; with the emergence of genetics, imaging and data mining, the medical device and pharmaceutical industries began to grow rapidly. The sequencing of the human genome at the outset of the 21st century made way for the sustained growth of personalized medicine: from being a theory that doctors tried to understand, personalized medicine began to be practiced. While scientists today still cannot pin down why patients have different manifestations of disease, in particular of cancer and autoimmune illness, recent developments in genomics and genetic technology have allowed for more personalized diagnosis and treatment and have eased the way for precision medicine and its revolutionary approach to personalized healthcare. The National Academy of Sciences (NAS) defined “personalized medicine” as “the use of genomic, epigenomic, exposure and other data to define individual patterns of disease, potentially leading to better individual treatment”.4 The momentum of genomics research has increased drastically in the past thirty years compared with its pace over the previous four centuries (as depicted in Figure 1), spurred by the invention of the polymerase chain reaction (PCR), an incredibly useful technique that is often considered, as Stephen A. Bustin writes in his book The PCR Revolution, “the defining technology of our molecular age”.5

Through PCR, researchers can amplify select sequences of DNA; metaphorically, “the needle-in-a-haystack stumbling block is magically recast as a solution that creates a haystack made up of needles”,5 enabling scientists to construct multiple copies of desired segments of DNA for use in the lab. PCR thus allowed for the detection of single-nucleotide changes in DNA and paved the way for the Human Genome Project, which was launched only five years later, in 1990.
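
The power of PCR comes from simple exponential arithmetic: each thermal cycle can at most double every template present, so even a handful of starting molecules becomes billions of copies within a few dozen cycles. The quick calculation below assumes ideal doubling and also shows a more realistic per-cycle efficiency; both numbers are illustrative, not drawn from the sources cited here.

```python
def pcr_copies(start_copies, cycles, efficiency=1.0):
    """Copies after PCR: each cycle multiplies the pool by (1 + efficiency),
    where efficiency = 1.0 means perfect doubling."""
    return start_copies * (1 + efficiency) ** cycles

print(f"ideal doubling, 30 cycles from 10 molecules: {pcr_copies(10, 30):,.0f}")
print(f"90% efficiency, same input                 : {pcr_copies(10, 30, 0.9):,.0f}")
```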

The Human Genome Project was an international project carried out to identify the sequence of the chemical base pairs that compose human DNA. Today, we are surrounded by the fruits of the Human Genome Project, which are some of the most promising discoveries in the history of humankind. Scott T. Weiss, the scientific director at Partners HealthCare Center for Personalized Genetic Medicine at Harvard Medical School, commented about the Human Genome Project: “Probably at no time in the history of medical research…has there been more potential and promise for discovery that will benefit mankind in terms of the health of the species as where we are right now as a result of the Human Genome Project”.3 In 2015, BioMed Research International revealed the growing impact of the Human Genome Project: “Currently, more than 1800 disease genes have been identified, more than 2000 genetic tests have become available, and in conjunction with this at least 350 biotechnology-based products have been released onto the market.”6

As a result of technological advancements that followed PCR and the Human Genome Project, physicians can now practice personalized medicine and provide “the right patient with the right drug at the right dose at the right time”,4 as the U.S. Food and Drug Administration indicates. This new form of medical treatment aims to tailor all stages of care—prevention, diagnosis, treatment and follow-up—to a patient’s specific needs.

Precision Medicine and Cancer

The advancements of precision medicine have been most noteworthy in the detection and treatment of cancer. Patients’ and tumors’ DNA are helping researchers and physicians determine the ideal treatment pathway for cancer patients. Patients with cancers such as melanomas, leukemias, breast cancers, and lung cancers go through molecular testing and provide physicians with sufficiently specific details about their condition to enable personalized and precise treatment, generally minimizing adverse side effects and unnecessary exposure to certain treatments.7

“If you didn’t know what happened to her, and you saw her now, you would have no idea what she has been through,” says Emily Whitehead’s mom of her daughter’s remarkable recovery. After being told by doctors at age 6 that she could not undergo any more aggressive chemotherapy, as it would no longer help her acute lymphoblastic leukemia (ALL), Emily was enrolled in a phase 1 clinical trial at the Children’s Hospital of Philadelphia, where she was the first pediatric patient to be treated with a novel type of cancer immunotherapy: genetically modified T-cells, taken from her blood and re-engineered to detect a protein on her leukemia cells, were used to fight her specific form of the disease.8 Recognized as part of a 2013 Breakthrough of the Year by Science Magazine, the genetically engineered T-cells, which Emily aptly dubbed “ninja warriors,” successfully treated her leukemia. After only 28 days of treatment, Emily was in remission—and has been for the past three years. If she had stayed with chemotherapy, the more general form of cancer treatment, these results wouldn’t have been possible.8

Precision medicine has revolutionized diagnosis in a number of cancers. For instance, physicians used to diagnose non-small-cell lung cancer simply by assessing the tumor’s location. Thanks to advances in precision medicine, physicians can now also use genetic mutations in the tumor to reach a diagnosis. More specific diagnosis also helps personalize therapy for greater effectiveness. Non-small-cell lung cancer is an example: a driver mutation can occur in the gene encoding the signaling protein EGFR, which normally promotes cell proliferation; the mutation increases EGFR activity and allows tumors to grow unrestricted. Gefitinib and erlotinib are drugs that block the abnormal activity of EGFR proteins carrying this driver mutation in non-small-cell lung cancer. These two drugs, however, are useful in only about 10 percent of patients with non-small-cell lung cancer, because they help only the patients who have the specific EGFR driver mutation.9 Diagnosis of this type can therefore facilitate treatment, as confirmed by the National Academy of Sciences (NAS), which, in the report “Improving Diagnosis Through Precision Medicine,” shows how useful and efficient it is to focus on the molecular profile of each individual patient rather than only assessing the general characteristics of the disease or the location of the cancer.9 While “this knowledge is starting to revolutionize the way medicine is practiced and is most vividly seen today in the diagnosis and treatment of patients with cancer,” there is great faith that “where we have yet to go next may prove the most useful and transformational development in medicine to date”,10 as elucidated in Davos 2015: The New Global Context.

How Precision Medicine is Already Changing the U.S. Healthcare System

In his State of the Union Address on January 20th, 2015, President Obama voiced his support for precision medicine. He announced the new Precision Medicine Initiative, which aims to deliver the right personalized treatment to patients and families and promises to give physicians the resources they need to treat the specific illnesses of their patients.

President Obama’s 2016 Budget dedicates $215 million to the National Institutes of Health (NIH), the Food and Drug Administration (FDA) and the Office of the National Coordinator for Health Information Technology (ONC) to increase research in precision medicine. The Precision Medicine Initiative plans to put together a comprehensive database of research findings, with genetic information and medical data from about a million Americans.11 The largest share of the investment, $130 million, is assigned to the NIH for a national research cohort of volunteers intended to deepen understanding of health and disease. A further large portion is assigned to the National Cancer Institute (NCI) to increase knowledge of the genomic drivers of cancer and to produce more effective treatments.2

The White House’s website states that “America is well-positioned to lead in a new era of medicine, as the country that eliminated polio and mapped the human genome”.2 This raises the question: is the rest of the world on board, or is only the U.S. healthcare system undergoing the shift towards precision medicine?

Currently, the U.S. is the country that promotes precision medicine and its research the most, but this personalized outlook on medicine has the potential to catch on elsewhere. The main difficulty posed by precision medicine in the global scene is the sharing of knowledge and procedures. As written in Davos 2015: The New Global Context: “Today, two people receiving world-class care in different parts of the world, both suffering from a known and identifiable cancer, may experience two very different outcomes. One person will succumb to a cancer for which the other has been successfully treated.”10 This is unfortunately “a tragedy of modern medicine, made even worse since this disparity may often be solvable, the information to do so is at hand.”10 Discussions and conferences are taking place to resolve the issue of shared information—such as the 2nd Precision Medicine Congress that will be held by Global Engage later this year in London.

Projections to the Future

As the cost of gene mapping steadily decreases, its feasibility and accessibility increase. “It cost us $400 million for that first genome,” says Dr. Francis S. Collins, the director of the National Institutes of Health. “Now a genome can be sequenced for a cost approximating $1,000”.12 Not only is the cost of DNA sequencing decreasing—with many companies in the market already offering sequencing-related services for around $100—but also, “it is argued that, by increasing treatment effectiveness in specific individuals and reducing risk and expenditure associated with treating patients with an inappropriate drug, such approaches herald a new era of cheaper, more effective healthcare”.13

Additionally, precision medicine can be used as a preemptive mechanism. Through precision medicine, healthcare can be steered more towards prevention than treatment, as indicated by the European Science Foundation: “Personalised medicine has the potential to embrace a truly pro-active and preventive approach to the health and wellbeing of all citizens”.13

However, some fears about precision medicine are also being voiced. “I don’t think anybody disagrees with the fact that we [patients] are different and we respond differently. But it’s hard to make changes,”3 explains Edward Abrahams, the president of the Personalized Medicine Coalition in Washington, D.C. “You want to see evidence before you’re willing to move away from one-size-fits-all traditional medicine.”3 What frightens some is precisely that precision medicine is tailored to them, and thus few other people will have tried the exact treatment they are prescribed.

Additionally, the public is expressing ethical concerns about the privacy of genetic profiles. What exactly is done with the genetic information? How will it be stored, and who will have access to it? These are some of the questions currently being asked. Some patients are concerned that insurance companies or employers might discriminate on genetic grounds. Genetic correlations suggesting greater intelligence or better behavior might also affect individuals’ identities. Maintaining privacy is an ongoing issue: GATTACA displays the potentially stifling effects of genomics; in its fantastical world, the privacy of citizens is completely infringed.

Yet, despite the fears and concerns, many believe that precision medicine will continue to grow in popularity, as Ralph Snyderman, M.D., Chancellor Emeritus at Duke University, relates in The Case for Personalized Medicine 2014: “Health care today is in crisis as it is expensive, reactive, inefficient, and focused largely on one-size-fits-all treatments for events of late stage disease. An answer is personalized, predictive, preventive, and participatory medicine.”14 Similarly, Margaret Hamburg, M.D., Commissioner of the U.S. Food and Drug Administration, and Francis Collins, M.D., Ph.D., Director of the National Institutes of Health, jointly commented optimistically in The Case for Personalized Medicine 2014, “As the field advances, we expect to see more efficient clinical trials based on a more thorough understanding of the genetic basis of disease. We also anticipate that some previously failed medications will be recognized as safe and effective and will be approved for subgroups of patients with specific genetic markers.”14

How will precision medicine develop in the following years? Will Obama’s Precision Medicine Initiative revolutionize healthcare in the United States? While current opinions and projections give us an idea of precision medicine’s promising future, only time will tell the role precision medicine will actually play in healthcare and society.

Eleni Apostolatos ‘18 is a freshman in Greenough Hall.

Works Cited

  1. Holst, L. “The Precision Medicine Initiative: Data-Driven Treatments as Unique as Your Own Body.” The White House. The White House, 30 Jan. 2015. Web. 7 Apr. 2015.
  2. Najeeb, Q et al. “Personalized Medicine versus Era Of ‘Trial and Error.’ ” Journal of Pharmaceutical and Biomedical Sciences, 2012. Web. 13 Apr. 2015.
  3. McMullan, D. “What Is Personalized Medicine?” Genome Magazine. N.p., n.d. Web. 5 Apr. 2015.
  4. “Paving the Way for Personalized Medicine.” U.S. Food and Drug Administration, Oct. 2013. Web. 5 Apr. 2015.
  5. Bustin, S.A. The PCR Revolution: Basic Technologies and Applications. Cambridge: Cambridge UP, 2010. Web. 13 Apr. 2015.
  6. Durmaz, A.A. et al. “Evolution of Genetic Techniques: Past, Present, and Beyond,” BioMed Research Intl, vol. 2015, Article ID 461524, 7 pages, 2015. doi:10.1155/2015/461524
  7. “FACT SHEET: President Obama’s Precision Medicine Initiative.” The White House. The White House, 30 Jan. 2015. Web. 2 Apr. 2015.
  8. “About Emily.” My Journey Fighting Leukemia. N.p., 2015. Web. 4 Apr. 2015.
  9. Insel, T. “Director’s Blog: Improving Diagnosis Through Precision Medicine.” National Institute of Mental Health. U.S. Department of Health and Human Services, 15 Nov. 2011. Web. 5 Apr. 2015.
  10. Pellini, M. “Not Knowing the Knowable.” Davos 2015: The New Global Context. World Economic Forum, 23 Jan. 2015. Web. 4 Apr. 2015.
  11. Powledge, T.M. “That ‘Precision Medicine’ Initiative? A Reality Check.” Genetic Literacy Project, 3 Feb. 2015. Web. 5 Apr. 2015.
  12. Pear, R. “U.S. to Collect Genetic Data to Hone Care.” The New York Times. 30 Jan. 2015. Web. 4 Apr. 2015.
  13. Look, E.F. “Personalised Medicine for the European Citizen.” European Science Foundation (2012): n. pag. European Science Foundation. Web. 5 Apr. 2015.
  14. The Case for Personalized Medicine (2014): n. pag. Personalized Medicine Coalition. Web. 7 Apr. 2014.

A Watchful Eye over Wildlife: Drone Technology & Conservation

by Caitlin Andrews

When we think of field biologists, most of us imagine scientists trekking through uncharted rainforests or across endless savannas, armed with only a notebook and a pair of binoculars. These intrepid heroes, such as Jane Goodall, have shown us how much there is to be learned when we leave behind the comforts of civilization and immerse ourselves in nature. Yet, the line between nature and civilization is becoming blurred as the human population expands and encroaches on wilderness. At the same time, technology is becoming increasingly integrated with fieldwork, particularly in the area of conservation. Whether fitting an animal with a GPS tracking device or collecting plant samples to be analyzed in the lab, field biologists rely more and more on technology to increase the scope and impact of their studies. And, with countless species facing imminent threats to their survival, biologists must also strive for efficiency in their methods, which can be challenging in remote and often dangerous research conditions.

One of the most promising technologies to emerge as a tool for conservation is one that has already begun to gain traction in mainstream society: the unmanned aerial vehicle. To some, UAVs—better known as “drones”—still seem uncomfortably close to something out of a science fiction movie. But drones have already proven useful in fields as wide-ranging as the military, agriculture, and cinematography. And, now that drones have entered the realm of science, it looks like they could soon change the face of conservation.

Why Drones?

If scope and efficiency are two of the main obstacles that field biologists face in their research, then drones could provide the ultimate solution. In their most basic form, field surveys involve a census of an animal population and an assessment of habitat conditions. When performed on foot, these surveys can cost hundreds of thousands of dollars to sustain for even a few years and, even then, they are so time-consuming and inefficient that only a rough estimate of population size or environmental conditions can be achieved. Manned planes or helicopters can provide an aerial view of an expanse of ocean or forest, but they pose a tremendous financial barrier which often outweighs any added benefits.1,2 These research methods can also be incredibly dangerous. Every time they go out into the field, researchers take on tremendous risks, traversing perilous terrain, getting up close to wild animals, and, in some cases, encountering armed poachers.3 Surveys conducted by plane pose additional hazards. Far too often, well-meaning researchers put themselves—and others—in dangerous situations when they conduct low-altitude flight surveys over mountains, forests, or settled areas. From 1937 to 2000, two-thirds of all job-related deaths reported among wildlife biologists working in the United States were attributed to aviation accidents—an astounding and disturbing figure.4

Drones circumvent nearly all of these risks, making them a promising choice for future studies. Compact and easy to operate, drones are relatively inexpensive when compared with manned aerial vehicles or on-foot surveys. ConservationDrones, an organization specifically aimed at developing low-cost drones for field research, has developed a drone for less than $2000.5 Miniature drones costing as little as $400 can be purchased online and later equipped with video and still cameras. Besides cameras, they can also be fitted with many types of sensors, from thermometers to pH meters to acoustic recorders; there is even the possibility that swarms of drones could function as a team, with each drone collecting specific information to be integrated into a larger dataset.6 Drones open up a range of possibilities for the scale on which data can be collected, as animals can be tracked over huge distances that no team of scientists could ever cover on foot. Perhaps most importantly, drones allow researchers to conduct their studies from a greater distance, making research conditions safer for them and for the wildlife they study.3

In the following pages, we will explore three case studies that exemplify the range of possible uses for drones in wildlife biology and conservation. At the same time, it is important to consider the challenges and ethical issues that might arise alongside this new technology as we try to assess what the future of drones and conservation might—and should—look like.

Marine Mammal Conservation Zones – Australia

In Australia, marine biologists have already had success using drones to identify which areas of the ocean would make the best marine mammal conservation zones. In a study conducted by Murdoch University’s Cetacean Research Unit,7 drones were flown over Shark Bay on Australia’s western coast. In an area of approximately 320 acres, drones took over 6,000 photographs at altitudes ranging from 500 to 1,000 feet. Researchers then analyzed each one of these pictures manually, attempting to count the number of dugongs, a marine mammal in the same order as manatees. Over 600 dugongs were reliably identified, but, even more amazingly, the researchers were able to recognize a wide array of other species, from schools of fish to whales to sea snakes. The breadth of species that could be seen, even from so high above, is promising, since only larger animals are typically distinguishable in drone photographs.
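
To give a sense of how raw photo counts become a population estimate, here is a minimal back-of-the-envelope sketch. The per-photo counts and photo footprint are invented values, not the survey's actual parameters, and the calculation ignores complications such as overlapping photos and imperfect detection.

```python
# Illustrative back-of-the-envelope population estimate from aerial survey photos.
# The counts and photo footprint below are made-up numbers, not data from the
# Murdoch University study.

photo_counts = [0, 2, 1, 0, 3, 0, 1]   # dugongs counted in each photo (hypothetical)
photo_footprint_acres = 5.0            # area covered by a single photo (assumed)
survey_area_acres = 320.0              # total area of the survey block

sampled_area = len(photo_counts) * photo_footprint_acres
density = sum(photo_counts) / sampled_area           # animals per acre in the sample
estimated_population = density * survey_area_acres   # extrapolated to the whole block

print(f"Estimated dugongs in the survey area: {estimated_population:.0f}")
```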

This simple census data might seem insignificant, but Murdoch University’s study serves to prove the value of drones for marine biology research. At present, the future of the technology seems limitless. Instead of having to manually identify the animals in each photograph, researchers hope to one day have advanced computer algorithms that are able to distinguish between all species of interest and even identify individual animals.7 There is also the possibility that drones could take to the water themselves. Human divers could be replaced by underwater robots equipped with propellers, sensors, and even sampling tools. This could be particularly exciting for those studying inaccessible deep sea vents, since underwater drones could dive down and return to the surface with samples of microorganisms for further study.6 While drone technology has a long way to go before this type of exploration is possible, the hope is that studies like those at Murdoch University will stimulate further research so that the future may not be as far off as it seems.

Ornithological Research & Drone Design – France

As drone technology begins to be applied to a range of species, many conservationists are concerned that drones may disturb—or even harm—the very animals they are trying to protect. To address these worries, a team of researchers in France conducted an extensive study on the impact of drones on birds.8 Drones are especially promising tools in the field of ornithology, since they could follow birds from the ground and into the air, perhaps even tracking their migratory routes. However, birds are also inherently susceptible to disturbance. Although drones are typically considered less disruptive than human observers, for birds, it could be more intrusive to be followed by a flying machine than to be watched by humans on the ground.

In their study, the French team exposed semi-captive and wild flamingos, mallards, and greenshanks to several drones. These drones varied in color, speed, and the angle from which they approached the birds—all factors that the authors hypothesized might impact the degree to which birds would be disturbed. Surprisingly, in 80% of trial flights, drones could get within 15 feet of the birds without any signs of distress. The birds did not appear to be affected by drone color or speed; however, they were more likely to be disturbed by drones approaching them from above, which, under natural conditions, would be indicative of a predator. While the authors advise launching drones from a distance and avoiding vertical approaches, they suggest that further research should be conducted to compare these results to the levels of disturbance elicited by human observers. It could be that drones, although foreign objects within a bird’s habitat, are actually less disruptive than one might think—which could open doors for a new phase of drone-conducted ornithological research.8
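
As a rough illustration of how such trial data might be summarized, the sketch below tabulates disturbance rates by approach angle using entirely synthetic records; it is not a reproduction of the French team's analysis.

```python
# Synthetic drone-approach trials, used only to show how disturbance rates might
# be tabulated by approach angle; these are not the study's measurements.
from collections import Counter

trials = [
    ("horizontal", False), ("horizontal", False), ("horizontal", True),
    ("horizontal", False), ("vertical", True), ("vertical", True),
    ("vertical", False), ("vertical", True),
]

attempts = Counter(approach for approach, _ in trials)
disturbed = Counter(approach for approach, reacted in trials if reacted)

for approach in attempts:
    rate = disturbed[approach] / attempts[approach]
    print(f"{approach:>10}: {rate:.0%} of trials elicited a disturbance response")
```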

Human Land Use Changes & Orangutan Conservation – Borneo and Sumatra

The Southeast Asian islands of Borneo and Sumatra are notably the only places in the world where we can still find one of our closest relatives: the orangutan. Yet Borneo and Sumatra have recently become notorious for having some of the highest rates of deforestation on the planet.9 Slash-and-burn deforestation is rampant, with most deforested areas being turned into oil palm farms. While there are regulations in place to protect some areas, these are largely ineffective, as many farmers start up plantations illegally in remote areas of the forest; there, the likelihood of being caught by rangers is slim to none. As the largest arboreal mammal, orangutans rely on large tracts of forest for food and protection. Having lost 80% of their forests over the past 20 years, orangutans may be doomed to extinction within the next three decades if nothing is done to slow the current rate of deforestation. Unfortunately, the likelihood of this seems very low when one considers the almost insatiable demand for palm oil—largely used in food and cosmetic products—in the West.10

But not all have given up hope. ConservationDrones, founded by Lian Pin Koh and Serge Wich, is just one group aimed at saving orangutans, and they plan to achieve this via drone technology. Koh and Wich took one of their first prototype drones to the forests of Borneo and Sumatra to determine the feasibility of identifying orangutans and tracking human land use changes from above. Their drone—programmed to fly a specific, 25-minute route—was able to spot orangutans and their nests in the forest canopy, as well as elephants on the ground. After analyzing the photographs more closely, researchers could also clearly see which areas had been deforested and turned into farmland; they could even identify the specific crops that were being grown.5

Koh and Wich recognize the limitations of drones, including the fact that drones cannot fly below the forest canopy, which restricts which species they can be used to study. However, the possibility that drones could fly over uncharted areas of forest to document illegal land use changes has inspired them to continue their work, and they are currently working to upgrade their prototype for greater efficiency and a more diverse set of uses. They envision a time when drones could be programmed not just to fly a specific route but to fly directly to animals already fitted with radio collars. When drones find illegal oil palm plantations in the forest, they could also send GPS data back to a ranger station, which could immediately deploy a team to confront the farmers.5 The same concept could be applied to monitor poachers, and the presence of drones near endangered wildlife could hopefully act as a deterrent to illegal activities.3 Even more hopeful, some have considered ways in which drones could actually be used to reforest areas by dispersing seeds.11
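
A minimal sketch of the kind of geofence alert described above might look like the following; the protected-area boundary, ranger station location, and detection coordinates are all invented for illustration.

```python
# A minimal sketch of how a drone might flag a suspected clearing inside a
# protected area and report it to a ranger station. The coordinates and the
# bounding box are invented for illustration.

import math

PROTECTED_AREA = {"lat_min": -1.60, "lat_max": -1.20,
                  "lon_min": 111.80, "lon_max": 112.30}  # assumed boundary
RANGER_STATION = (-1.45, 112.05)                          # assumed location

def inside_protected_area(lat, lon):
    return (PROTECTED_AREA["lat_min"] <= lat <= PROTECTED_AREA["lat_max"]
            and PROTECTED_AREA["lon_min"] <= lon <= PROTECTED_AREA["lon_max"])

def distance_km(p, q):
    # Haversine great-circle distance between two (lat, lon) points.
    r = 6371.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

detection = (-1.38, 112.10)  # GPS fix of a suspected clearing (hypothetical)
if inside_protected_area(*detection):
    print(f"Alert: clearing {distance_km(detection, RANGER_STATION):.1f} km "
          "from ranger station")
```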

Many people fear that drones may present a breach of human privacy, especially if they fly over settled areas and take data on human land use.2 As the technology advances, it will be important to consider how its use should be regulated. However, at the moment, the possible benefits of drone technology make it worthwhile to at least pursue further research. While deforestation rates are unlikely to be reversed, the fact that current trends could be slowed—even slightly—is promising, since it could give conservationists the time they need to come up with more long-lasting solutions.

Toward the Future

With studies like these, it looks as if the advancement of drone technology could help shape the future of conservation for the better. But, at the same time, the very same machines that are being used to help save animals are being employed for less noble uses, such as hunting. Many states, including Illinois and Colorado, are facing dilemmas over whether to ban drones used for the purpose of hunting; fortunately, many of them have chosen to side against the hunters, saying that these hunting methods are inhumane and unethical.12 However, this is only the beginning of the conversation. As drone technology becomes more and more common, it is bound to be applied to fields that come into conflict with one another. The question is how we, as a society, will choose to regulate these uses and what role we want drones to play in shaping our planet’s future.

Caitlin Andrews ’16 is a junior in Cabot House concentrating in Organismic and Evolutionary Biology with a secondary in Mind/Brain/Behavior.

Works Cited

  1. van Gemert, J.C. et al. European Conference on Computer Vision workshop 2014.
  2. Ogden, L.E. et al. BioScience 2013, 63(9): 776.
  3. Roden, M.; Khalli, J. UAVs Emerging as Key Addition to Wildlife Conservation Research. Robotics Tomorrow, Mar. 13, 2015. (accessed Mar. 31, 2015).
  4. Sasse, B. D. Wildlife Soc. Bulletin 2003, 31(4): 1000-1003.
  5. Koh, L. P.; Wich, S.A. Tropical Conservation Sci. 2012, 5(2): 121-132.
  6. Grémillet, D. et al. Open Journal of Ecology 2012, 2(2): 49-57.
  7. Hodgson, A. et al. PLoS ONE 2013, 8(11): e79556.
  8. Vas, E. et al. Biology Letters 2015, 11(2): 20140754.
  9. Sumatran Orangutan Conservation Programme. http://www.sumatranorangutan.org/ (accessed Mar. 31, 2015).
  10. Orangutan Conservancy. http://www.orangutan.com/ (accessed Mar. 31, 2015).
  11. Sutherland, W. J. et al. Trends in Ecology & Evolution 2013, 28(1): 16-22.
  12. Swanson, L. Proposed Bill Aims To Ban Drones Used for Hunting. Montgomery Patch. Mar. 27, 2015. (accessed Mar. 31, 2015).

Single Cell DNA Sequencing

by Jennifer Walsh

Every human grows from a single-celled embryo that contains an entire genome of determinants for what this embryo will become. For each one of us, this single cell became two, then four, and its genome became the genome of every cell in our body. However, over a lifetime of cell divisions and routine functioning, mutations in individual cells accumulate. The cells in our body, highly infectious viruses, the bacteria in our gut, and the cells of a deadly tumor all carry genomes that differ from one another to varying degrees. Current biology excels at finding a given mutation in a host of cells that may beget a genetic disorder or lactase persistence, but we still lack the ability to find discrepancies between individual cells within a larger population. However, new technologies with single cell precision have the potential to transform research ranging from microbiology to disease genetics.

The ability to extract an entire genome from a single cell could revolutionize our ability to differentiate genotypes within a population of cells and pinpoint cells that have spontaneous mutations in their genome that separate them from the others around them. Without single cell sequencing, genetic variation among single cells is generally intractable because sequencing techniques require many cells to provide enough input to create a readout. Single cell sequencing has the potential to enhance the study of topics ranging from cancerous tumor development to neuronal differentiation in the brain.1 With this broad set of motivations, scientists in the last decade have undertaken the task of finding a method to accurately and reliably sequence the genomes of single cells as the next step in the sequencing revolution.

How Single Cell Sequencing Works

Sequencing the DNA of a single cell relies on cumulative advances in three techniques that have been dramatically improved over the last couple of decades. Single cell sequencing relies on the ability to (1) isolate a single cell, (2) amplify its genome efficiently and accurately, and (3) sequence the DNA. One of the inherent difficulties, as well as advantages, of sequencing individual cells comes from being able to compare both small and large differences between the genomes of distinct cells. Consequently, effective ways of sorting cells are critical to achieving this goal. Current sequencing techniques are not sensitive enough to sequence DNA directly from a cell without any artificial amplification; many copies of each DNA sequence must be made available to be parsed in the sequencing process. This amplification does not need to be perfect, and often involves multiple rounds of replication of the genome after it has been fragmented randomly. These fragments are then sequenced, and a coherent, linear genome sequence is assembled analytically. Most sequencing methods rely on having these smaller fragments of DNA to analyze, and for most methods, numerous copies of each fragment are necessary. Because the technology is so new, there is not yet a universal “best” method.1 However, single cell DNA sequencing, as it is practiced in labs nationwide today, tends to involve, in some combination, each of the three steps outlined above and described in more detail below.2
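
To make the fragment-and-reassemble idea concrete, here is a toy sketch of shotgun-style fragmentation and coverage; the genome length, read length, and number of reads are arbitrary illustrative choices, and real pipelines operate on billions of bases.

```python
# Toy illustration of shotgun-style sequencing: a genome is fragmented at random,
# the fragments are "read", and coverage tells us how much of the genome the
# reads collectively span. The 80-character genome is purely illustrative.

import random

random.seed(0)
genome = "".join(random.choice("ACGT") for _ in range(80))

def random_fragments(seq, n_fragments, read_len):
    """Sample n_fragments random substrings of length read_len."""
    frags = []
    for _ in range(n_fragments):
        start = random.randrange(0, len(seq) - read_len + 1)
        frags.append((start, seq[start:start + read_len]))
    return frags

reads = random_fragments(genome, n_fragments=20, read_len=10)

covered = set()
for start, frag in reads:
    covered.update(range(start, start + len(frag)))

print(f"Fraction of genome covered by at least one read: {len(covered)/len(genome):.0%}")
```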

Challenges Facing Single Cell Sequencing

The first problem is one of discovery – the primary goal of single cell sequencing is to find interesting differences in the DNA of individual cells within the same organism, system, or even tissue. Many inventive methods of modern DNA sequencing were developed long before the prospect of single cell sequencing was on the horizon.

Depending on the organism being examined, the most straightforward, yet unsustainably time-consuming, way to isolate a single cell is often simply to pick it out by hand with a micropipette. Another widely used method is single-cell fluorescence activated cell sorting (FACS), which automates the selection process on the basis of specific cellular markers. By running the cells through a very thin column, barely larger than the cells themselves, the cells can be separated into a single-file line. Then, by vibrating the system, the cells separate into individual droplets that are sorted by their characteristic fluorescence response to a laser. FACS represents only one of many imaginative strategies, and countless combinations of these tools offer different advantages and drawbacks for a given research goal. Such alternate strategies include the use of microfluidics,2 and different methods can be optimized for the isolation of different classes of cells.
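
The core decision FACS hardware makes for each droplet can be sketched very simply: keep the event if its fluorescence crosses a chosen threshold. The readings and cutoff below are synthetic values used only for illustration.

```python
# A highly simplified sketch of fluorescence-based gating: keep a droplet if its
# fluorescence signal crosses a threshold for the marker of interest.
# The readings and the cutoff are synthetic.

droplet_fluorescence = [12.1, 88.4, 5.3, 95.0, 40.2, 102.7, 9.9]  # arbitrary units
GATE_THRESHOLD = 60.0  # assumed cutoff separating labeled cells from debris

sorted_cells = [(i, f) for i, f in enumerate(droplet_fluorescence) if f > GATE_THRESHOLD]
discarded = len(droplet_fluorescence) - len(sorted_cells)

print(f"Collected {len(sorted_cells)} droplets for sequencing; discarded {discarded}")
```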

The second major obstacle that researchers face in attempting to develop single cell sequencing technologies is the inherent limitation on the amount of DNA present in a single cell. The accuracy of current sequencing machines depends on the number of copies of a given DNA fragment, and each cell has only one copy of the desired genome. Therefore, the first and most important step of single cell sequencing is amplification of the cell’s genome with minimal technical errors, which would otherwise introduce inaccuracies into the DNA sequence.

This amplification problem has already been tackled for RNA sequencing, where complementary DNA (cDNA) is amplified with sufficient accuracy because every cell has multiple copies of every RNA transcript.3 RNA-sequencing processes have been optimized to require as few as 5-10 copies.2 As described below, PCR is the standard approach to amplification, but other recent advancements, like MDA, provide other advantages.

Polymerase Chain Reaction (PCR) has been a cornerstone of modern biology research; it exponentially amplifies a DNA segment chosen by the researcher. Cycles of DNA strand separation and base-pair addition can be repeated numerous times to achieve the desired level of amplification of the original DNA fragment.
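
The arithmetic behind that amplification is geometric growth. A minimal sketch, assuming a per-cycle efficiency below 100% (the exact value varies by reaction), looks like this.

```python
# PCR amplification is roughly geometric: each cycle nearly doubles the number
# of copies of the target fragment. A hedged model with imperfect per-cycle
# efficiency is copies = start * (1 + efficiency) ** cycles.

def pcr_copies(starting_copies, cycles, efficiency=0.9):
    """Estimate copy number after a given number of PCR cycles."""
    return starting_copies * (1 + efficiency) ** cycles

# A single cell contributes only one or two copies of each locus.
for cycles in (10, 20, 30):
    print(f"{cycles} cycles: ~{pcr_copies(1, cycles):,.0f} copies")
```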

Presented in a 2005 paper, Multiple Displacement Amplification (MDA) is a foundational amplification method for single-cell DNA sequencing. The MDA process amplifies the genome non-specifically through the elongation of random primers scattered throughout the genome. These primers then create duplicate, overlapping DNA fragments for sequencing, which can be pieced together to recreate the entire genome.4 This method replicates different parts of the genome at a more consistent rate than PCR because the overlapping fragments remain attached to the DNA template strand and can therefore displace one another.5

The best way to get an accurate sequencing readout would be a process that mixes PCR and MDA to maximize both the number of copies and their fidelity to the original genome. Multiple Annealing and Looping-Based Amplification Cycles (MALBAC) integrates both of these previous methods and is currently the most successful amplification method for single cell sequencing. Developed by Professor Sunney Xie’s lab at Harvard University, MALBAC uses both MDA and PCR to amplify the genome in a way that minimizes the discrepancies in amplification rate between different DNA fragments.6,7 MALBAC performs five cycles of MDA in which the fragments loop together so that they cannot be amplified again unless the temperature is increased to denature the DNA template and the looped strands.

MALBAC is currently the most effective method for detecting many genetic abnormalities, from an extra chromosome down to single DNA base-pair changes, because it produces a relatively uniformly amplified genome that allows for specificity in interpreting the sequencing results.2
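
To see why uniformity matters, the toy simulation below compares a noisy amplifier with a more uniform one and reports the spread in per-locus read depth; the noise parameters are invented for illustration, not measured properties of MDA or MALBAC.

```python
# Why uniformity matters: if different regions of the genome are amplified at
# very different rates, a true extra chromosome is hard to tell apart from
# amplification noise. The noise levels here are arbitrary illustrative values.

import random
import statistics

random.seed(1)

def simulate_coverage(n_loci, noise):
    """Relative read depth per locus after amplification with multiplicative noise."""
    return [max(0.05, random.gauss(1.0, noise)) for _ in range(n_loci)]

for label, noise in [("biased amplification", 0.6), ("uniform amplification", 0.1)]:
    depth = simulate_coverage(1000, noise)
    cv = statistics.stdev(depth) / statistics.mean(depth)
    print(f"{label}: coefficient of variation in read depth = {cv:.2f}")
```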

New Discoveries from Single Cell Genomics

Many discoveries have already been made as the technology for single cell sequencing continues to improve. Genetic variation in cancerous tumors represents a significant application for this technology where it is already known that tumors develop from spontaneous mutations and that tumors themselves are genetically heterogeneous.1 While introducing MALBAC, Zong et al. 2012 used this method to show that the base mutation rate of a cancer cell is ten times larger than the rate for a germline cell – a finding made possible by the reliable amplification rate. Furthermore, from analysis of the number of short, repeating DNA sequences (like a series of inserted G’s or repeated codons), scientists discovered that an early, genetically unstable state in tumor cells causes rapid tumor growth.8

Single cell sequencing can be highly valuable wherever genetic heterogeneity between cells is suspected, and other fascinating new opportunities for discovery lie in fields like neuroscience and the study of the gastrointestinal system.

Improvements for Future Discoveries

The most pressing issue in the development of single cell DNA sequencing, perhaps obviously, is ensuring the accuracy of the resulting sequence. Fortunately, letting the cell grow and divide on its own and then sequencing the resulting population of cells provides a reliable check on single cell sequencing accuracy. However, the problem of implementing a method that can replicate the genome nearly perfectly and in its entirety still remains. MDA and PCR are biased to work on certain parts of the genome and ignore others, influencing the sequence read off after amplification. MALBAC is a first-rate attempt to suppress extra replication of some genomic regions, but there is always room for improvement. Professor Xie, in an interview with Nature Methods, said, “By no means is MALBAC the end game. We’re trying to do better”.1

One path forward is to improve DNA amplification so that sequencing machines can analyze many copies of the cell’s original DNA without too many mutations introduced during amplification. Research to this end would focus on DNA amplification techniques and utilize preexisting DNA sequencing technology. However, a second path forward is to streamline the entire process by removing the amplification step and determining the cell’s genome from its single DNA copy. Instead of sequencing the result of extracting and replicating the cell’s DNA, scientists could glean the cell’s DNA sequence by taking advantage of its existing cellular DNA machinery and processes.

A sequencing technique that relies on the cell to amplify its own genome has already been developed for short sections of the genome.9 This technique uses DNA bases that have been fluorescently tagged according to the identity of the nitrogenous base (A, T, G, or C). Unlike in other sequencing methods, the tags are cleaved off the bases as they are incorporated, instead of remaining in the DNA to be recognized later by a sequencing machine. The release of these fluorescent molecules can be tracked and recorded, leaving the synthesized DNA strand intact, and the sequence can be determined without significantly disturbing intracellular activity.

The ability to acquire and sequence the entire genome of an individual cell promises possibly transformative understandings of biological systems and single-celled organisms, of spontaneous, somatic mutations in the human body, and even of which genes we pass on to our offspring.

The potential to acquire this entire genome has already led to exciting new scientific progress. Antibody genes in B cells are known to be genetically heterogeneous, so single-cell sequencing of antibody-producing cells could lead to breakthroughs in our understanding of the microscopic workings of the immune system. Efficient sequencing of a single cell also promises improvements in the safety and accuracy of prenatal genetic screening. Though we anticipate a wealth of knowledge becoming available with an ideal technology, our increasingly effective attempts have already led to everything from disproven hypotheses to exciting new insights.

Further reading:

For applications to bacteria: Lasken, Roger S. & Jeffrey S. McLean (2014). Recent advances in genomic sequencing of microbial species from single cells. Nature Reviews: Genetics. 15(9): 577-584.

RNA Sequencing: Wang Z, Gerstein M, Snyder M (2009). RNA-Seq: a revolutionary tool for transcriptomics. Nature Reviews: Genetics, 10(1): 57-63.

Jennifer Walsh ’17 is a sophomore in Lowell House concentrating in Physics.

Works Cited

  1. Chi, K.R. Nature Methods. 2014, 11(1): 13-17.
  2. Macaulay I.C.; Voet T. PLOS Genetics. 10(1). Retrieved March 28, 2015, from http://journals.plos.org/plosgenetics/.
  3. Brady G., Iscove N.N. Methods Enzymol. 1993, 225:611–623.
  4. Lasken R. et al. Nat. Reviews Genetics. 2014, 15(9): 577-584.
  5. Nawy, T. Nature Methods, 2014, 11(1): 18.
  6. Zong C. et al. Science, 2012, 338: 1622-1626.
  7. Reuell, P. One Cell is All You Need. Harvard Gazette, 2013.
  8. Navin N. et al. Nature, 2011, 472(7341): 90-94.
  9. Coupland P. et al. BioTechniques, 2012, 53(6): 365-372.

Genome Editing: Is CRISPR/Cas the Answer?

by Jackson Allen

For roughly the last 60 years, the focus of molecular biology and genetics has been to better understand the microscopic machinery that regulates the genome of every organism. Such scientists as Matthew Meselson, Rosalind Franklin, James Watson, and Francis Crick helped pave the way for contemporary understanding of molecular genetics. These scientists were responsible for discoveries including the structure of DNA, the molecule that encodes all of the genetic information of a person. Though the field of molecular biology has grown immeasurably since then, the most important discoveries in the field remain those that elucidate the mechanisms used to regulate genetic information in living organisms.

Unsurprisingly, scientists have begun to ask how they could apply this knowledge to conquer disease, correct genetic problems, or even learn more about genetics itself. The list of developments in molecular genetics is seemingly endless: genetic therapies promise to reprogram the body’s defenses to target cancer, analysis of the human genome has given us unprecedented understanding of our own genetics, and new technology can simulate the folding of proteins to assist in the development of new pharmaceuticals. However, no genetic tool today seems to hold more power and promise than CRISPR/Cas genome editing technology.

What is CRISPR/Cas?

First observed in 1987, the CRISPR/Cas system in nature constitutes the immune system of many bacteria and archaea (1). These single-celled organisms use short repeats of DNA called Clustered Regularly Interspaced Short Palindromic Repeats (CRISPRs) to mark viral DNA that has been incorporated into their genome by Cas (CRISPR Associated) proteins. By incorporating the DNA sequence of attacking viruses into their own genome, these bacteria can destroy an attacking virus by cleaving its DNA with a Cas protein (2). The discovery that bacteria can easily and accurately modify their own genomes remained largely underutilized until the early 2000s, when scientists began to engineer virus-resistant bacteria using spacer DNA similar to CRISPRs (3). In 2012, scientists demonstrated that, using CRISPRs and Cas9 proteins, human cells could be genetically modified with precision not seen with other genome editing methods (4). Since then, researchers have used the CRISPR/Cas9 system to modify organisms including zebrafish, plants, and mice. Last year, researchers at the Koch Institute at MIT demonstrated that CRISPR/Cas9 could be used to cure mice of a genetic disease that prevents the breakdown of the amino acid tyrosine and eventually causes liver failure. After an injection of CRISPR RNA and Cas9 paired with DNA for an enzyme that breaks down tyrosine, the mice began to produce this enzyme. Within 30 days, the mice were cured of the disease and no longer required daily medications (5).

CRISPR/Cas Uses

The CRISPR/Cas9 system can be used for gene silencing as well as gene modification. Both outcomes have the potential to make important contributions to laboratory research and disease treatment. The CRISPR/Cas9 system relies on two main components: a Cas9 endonuclease that can cut DNA in the nucleus of a cell and a guide RNA, made of CRISPR RNA and trans-activating CRISPR RNA (tracrRNA). In nature, the CRISPR and tracrRNAs are separate. However, researchers discovered that the two sequences could be combined into a single guide RNA, significantly reducing the complexity of the system (6). The CRISPR RNA directs the Cas9 endonuclease to the appropriate DNA cleavage site, while the tracrRNA stabilizes and activates the Cas9 endonuclease. When this protein is activated, it creates a double-stranded break in the target DNA, which leads to activation of cellular repair mechanisms (6). Double-stranded DNA repair often leads to random insertions or deletions in the gene, because neither strand can serve as a template for repair. These mutations often silence the affected gene and prevent binding of the guide RNA used to target the gene. Thus, the CRISPR/Cas9 system will continue to target the DNA until such a mutation is introduced. However, if a segment of single-stranded DNA that is complementary to either strand of the cleaved DNA is introduced, the cell will repair the DNA cleavage using the single-stranded DNA as a template. Scientists have demonstrated that certain genetic mutations can be corrected by introducing this single-stranded DNA template to the cell’s own repair mechanisms (6).
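
At its core, targeting works by sequence matching: the roughly 20-nucleotide CRISPR RNA spacer pairs with the complementary stretch of genomic DNA. The sketch below finds such a match in a made-up sequence; real guide design also checks for an adjacent PAM motif and screens for off-target matches elsewhere in the genome.

```python
# A minimal sketch of how a 20-nucleotide guide sequence specifies the Cas9
# cleavage site: the target is the stretch of genomic DNA that matches the
# guide by Watson-Crick base pairing. Both sequences here are made up.

genome = "TTGACCTGAAGCTTACGATCGTACGGATCCATGCTAGCTAGGCTTACG"
guide = "AAGCTTACGATCGTACGGAT"  # 20-nt spacer (hypothetical)

def find_target_sites(genome, guide):
    """Return the start positions where the guide matches the genome exactly."""
    hits = []
    for i in range(len(genome) - len(guide) + 1):
        if genome[i:i + len(guide)] == guide:
            hits.append(i)
    return hits

sites = find_target_sites(genome, guide)
print(f"Candidate cleavage site(s) at position(s): {sites}")
```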

In addition, the use of Cas9 nickase, a specialized version of the Cas9 endonuclease, has been shown to only cleave one strand of a cell’s DNA, reducing the frequency of off-target modifications while still allowing for DNA repair from a single-stranded DNA template (7). This specificity makes the CRISPR/Cas9 system more accurate and less likely to edit DNA at an undesired location. In fact, two studies in 2013 demonstrated that off-target modifications were reduced by 50 to 1500 times when Cas9 nickase was used (8).

However, treatments using CRISPR/Cas9 may not be limited to genetic diseases. A study at the Whitehead Institute used CRISPR/Cas9 to systematically analyze the effects of silencing over 7000 different genes on resistance to chemotherapy in cancer cells (9). Finding genes that are essential to the survival of tumor cells could potentially lead to a treatment using CRISPR/Cas9 alone or in combination with other therapies. Targeted delivery of CRISPR/Cas9 therapy is also a possibility, although more difficult than the IV injections that have been used in previous animal studies. Other scientists working in developmental medicine have used CRISPRs to screen mouse embryos for genes that could provide resistance to bacterial toxins (10). Because CRISPR is more effective at silencing genes than RNA interference, screening studies using CRISPR often report gene candidates that would have gone unnoticed in other types of screens (9).

Harvard Medical School Professor of Genetics George Church is one of the leaders in a field of scientists working to expand our knowledge about the CRISPR/Cas9 system. Church’s start-up, Editas Medicine, hopes to develop real-world treatments for genetic diseases using the most recent developments in CRISPR genomic editing. Church points out that the advantages of the CRISPR/Cas9 system over other types of gene editing are crucial for the practicality of treatments based on this science (11). The CRISPR/Cas9 system avoids the problems encountered in other methods of genome editing. For example, viral-vector delivered gene therapy can leave a dysfunctional gene in place even when inserting a healthy gene (11). By contrast, the CRISPR/Cas method is focused on the correction of genes already in an organism’s genome. This approach has the added benefit of leaving the gene in its correct chromosomal location, meaning the cell retains the ability to regulate the gene (11). “Editas is poised to bring genome editing to fruition as a new therapeutic modality, essentially debugging errors in the human software that cause disease,” said Editas director, Alex Borisy, in a recent interview with the McGovern Institute for Brain Research at MIT (11).

To be sure, the successes of CRISPR/Cas9 have not gone without scrutiny in the scientific community. Though most researchers working in the field would support the use of the CRISPR/Cas9 system for curing genetic diseases like cystic fibrosis or sickle-cell anemia, far fewer support the use of this technology for other genetic modifications like cosmetic changes. Even among those who back the use of CRISPR/Cas9 for targeting disease, the possibility of editing the human germline, cells with the potential to pass on their genetic information, has sparked a debate about the role humans should play in our own evolution. On March 19, 2015, a number of scientists, policymakers, and experts in law and ethics published an editorial in Science, calling for a moratorium on germline genome modification until the ethics of such modifications can be debated openly from a multi-disciplinary perspective (12). This group, which included one of the co-discoverers of CRISPRs, cited a need for greater transparency in discussions of the CRISPR/Cas9 system. The editorial, resulting from a January 2015 Innovative Genomics Initiative conference on genome editing, also sought more standardized benchmarking and evaluation of off-target modifications made by the CRISPR/Cas9 system, the effects of which remain largely unknown (12). However, these scientists were careful to point out the tremendous potential of CRISPR/Cas9 for curing genetic disease, which might tip the balance between risk and reward in favor of responsible genome editing (12).

The Future of CRISPR/Cas

The accuracy, efficiency, and cost of CRISPR/Cas9 make it an attractive alternative to other methods of genome modification. Though they have decades of research behind them, tools such as zinc-finger nucleases (ZFNs) and transcription activator-like effector nucleases (TALENs) still prove costly and complex to use. For example, CRISPR recognition of target DNA depends not on a large and complex protein that must be synthesized, but on simple Watson-Crick base pairing between the target DNA and a short RNA (9). Additionally, CRISPR/Cas9 can modify several genes at once (9). For these reasons, CRISPR/Cas9 may find its greatest niche in improving ongoing laboratory research, including the generation of genetically modified animals and cell lines. Other researchers prefer to work outside the realm of genetic diseases, finding applications for CRISPR/Cas9 in agriculture, ecology, and even the preservation of endangered species. Though the technology behind CRISPR/Cas9 is still young, scientists and start-ups alike have begun to pioneer applications—from pathogen-, heat-, and drought-resistant crops to the recreation of the woolly mammoth and other extinct species (9, 13).

The coming years promise to be some of the most exciting for molecular biology. Although CRISPR/Cas9 may just be an application of the knowledge acquired by scientists in recent decades, its promises are truly groundbreaking. With potential uses from genetic diseases to the revival of extinct species, CRISPR/Cas9 could very well usher in a new age of applied molecular genetics. However, this paradigm shift in such a pioneering field of science will not be without its ethical questions and debates. The greatest struggle for science in the future may not be the capabilities of developing technology, but the restraint and responsibility necessary to use such powerful tools. For the first time, quick and efficient editing of an organism’s genome is a realistic possibility, giving humanity the power to control its own evolution at the genetic level—not to mention the ability to change the genetics of the animals and plants that inhabit our world. For the scientists developing this technology, the policymakers calling for open discussions of its ethical issues, and the millions of people who could benefit from it, the coming years will undoubtedly hold tremendous advances.

Jackson Allen ’18 is a freshman, planning to concentrate in Molecular and Cellular Biology.

Sources

  1. Ishino Y, Shinagawa H, Makino K, Amemura M and A Nakata. Nucleotide sequence of the iap gene, responsible for alkaline phosphatase isozyme conversion in Escherichia coli, and identification of the gene product. J. Bacteriology 1987, 169(12): 5429–5433.
  2. Horvath P, Barrangou R. CRISPR/Cas, the Immune System of Bacteria and Archaea. Science 2010, 327(5962): 167–170.
  3. Barrangou R, Fremaux C, Deveau H, Richards M, Boyaval P, Moineau S, et al. CRISPR provides acquired resistance against viruses in prokaryotes. Science 2007, 315: 1709–1712.
  4. Jinek M, Chylinski K, Fonfara I, Hauer M, Doudna JA and E Charpentier. A programmable dual-RNA-guided DNA endonuclease in adaptive bacterial immunity. Science 2012, 337(6096): 816–821.
  5. Yin H, Xue W, Chen S, Bogorad RL, Benedetti E, Grompe M, et al. Genome editing with Cas9 in adult mice corrects a disease mutation and phenotype. Nat. Biotech. 2014, 32: 551–553.
  6. Jinek M, East A, Cheng A, Lin S, Ma E, and J Doudna. RNA-programmed genome editing in human cells. eLife 2013, 2: e00471.
  7. Shen B, Zhang W, Zhang J, Zhou J, Wang J, Chen L, et al. Efficient genome modification by CRISPR-Cas9 nickase with minimal off-target effects. Nat Meth.2014, 11: 399–402.
  8. Ran FA, Hsu PD, Lin C-Y, Gootenberg JS, Konermann S, Trevino AE, et al. Double nicking by RNA-guided CRISPR Cas9 for enhanced genome editing specificity. Cell 2013, 154: 1380–1389.
  9. M. Baker. Gene editing at CRISPR speed. Nat. Biotechnol. 2014, 32: 309–312
  10. Koike-Yusa H, Li Y, Tan EP, Velasco-Herrera Mdel C, Yusa K. Genome-wide recessive genetic screening in mammalian cells with a lentiviral CRISPR-guide RNA library. Nat Biotech. 2014, 32: 267–73.
  11. Wang, Brian. 28 Nov. 2013. “George Church Has New 43 Million Dollar Startup Editas Medicine to Commercialize Precise CRISPR/Cas Gene Therapy.” Next Big Future.
  12. Baltimore, D., P. Berg, M. Botchan, D. Carroll, R. Charo, G. Church, J. Corn, G. Daley, J. Doudna, M. Fenner, H. Greely, M. Jinek, G. Martin, E. Penhoet, J. Puck, S. Sternberg, J. Weissman, and K. Yamamoto. A Prudent Path Forward for Genomic Engineering and Germline Gene Modification. Science 2015, 348(6230): 36-38.
  13. Temple, J. 8 Dec. 2014. “The Time Traveler: George Church Plans to Bring Back a Creature That Went Extinct 4,000 Years Ago.” Recode.

Astromycology: The “Fungal” Frontier

by Tristan Wang

Hollywood movies and horror novels have painted extraterrestrial life as green monsters, scouring the barren grounds of Mars and shooting any intruder with photon lasers. These disturbing images, while far-fetched, do hold some truth about frightening life forms in outer space, but not in the ways we imagine. During its orbit as the first modular space station, Mir experienced attacks from the least suspected of invaders: mold. Splotches of fungal hyphae covered windows and control panels and gradually ate away at the hull’s interior during the latter part of the station’s life, and with it went any notion of a “sterile spaceship”.1

The discipline of astrobiology attempts to answer the larger mysteries about life: its origin, its necessities for survival, and its presence in other worlds. But astrobiology also has practical applications in considering how biological organisms may travel through space. In particular, human space travel would greatly benefit from a branch of fungal biology known as astromycology: the study of Earth-derived fungi in space. Fungi offer both an opportunity and a threat to human space travel. The issues raised by fungal stowaways are wide-ranging and relevant, from providing food and decomposing biological material to eating away at spacecraft. The interactions of intense radiation and the lack of gravity with fungal growth underlie the opportunities and threats that fungi pose to human space travel.

Ecology

Environments in orbiting spacecraft are often different from terrestrial environments back on Earth due to the lack of gravity and higher radiation levels in outer space. Even given these obstacles, fungi seem to have found a way to inhabit space environments. The Mir spacecraft was reported to host several genera of fungi, including species of Aspergillus, Penicillium and Cladosporium, all of which are known to be common molds of the phylum Ascomycota.1 What makes these genera so special is their adaptability, allowing them to survive in a variety of environments. These genera are known as saprophytes (organisms that live on decaying matter) and have been shown to be resilient across a relatively wide range of temperatures and humidities.2,3 Thus, food and environment are not as limiting to these opportunistic fungi, organisms able to spread quickly into uncolonized environments.

Fungi and plants have been shown to display the phenomenon of gravitropism, growth in reaction to gravity. Being able to grow in a particular direction is important for fungal development, in particular for reproduction and spore discharge. For example, in the case of ascomycetous fungi, sexual spores are discharged into the air through tubular vesicles called asci.4 If the fruiting bodies of molds released spores toward the base of the aerial hyphae, reproduction would not be optimized. Also, studies have shown that fungi tend to respond most sensitively to gravity just behind the apex of growing hyphae, the hair-like filaments of fungi, although there seems to be a significant lag time before a notable reaction.5 It has been proposed that this bending of hyphae is due to a particular chemical growth factor that originates from the apical portion of the stem.6

Whereas gravity affects the morphological shape of fungi, radiation is more subtle in its physiological impact. Fungi seem to have peculiar adaptations to coping with stress due to radiation. Scientists have studied closely how radiation from the nuclear meltdown at Chernobyl in 1986 affected the environment and its ecology. Darkened fungal organisms were retrieved from the reactor’s walls.7 Specifically, these fungi were melanized, which means they were darkened by natural pigments.

Typically, fungi are fairly resistant to doses of ionizing radiation, and it appears that melanin reinforces this defense.8 Melanized fungi occur naturally around the world, from the Arctic to the Antarctic, but when researchers kept Penicillium expansum in space for seven months, they observed an increased presence of melanin layers.8 In fact, Cryptococcus neoformans cells exposed to radiation 500 times the background rate grew faster than both irradiated non-melanized cells and non-irradiated counterparts.8 It is possible that melanin not only protects important fungal DNA from damage by free radicals, but also provides some mechanism of energy utilization similar to how plants capture light energy.8 Indeed, some studies have shown an increased rate of metabolism for melanized fungi when irradiated. The basis for fungal resistance to ionizing radiation may be genetic. Genes related to the cell cycle and DNA processing seem to be upregulated upon irradiation and may help the organism adapt better to its environment.8

Implications for Human Space Travel

Perhaps one of the better-known uses of fungi in space comes from edible mushrooms. It is not unusual for foodies and nutritionists to praise the nutritional value and flavor of fungal-based foods. Typically, edible mushrooms are rich in protein, complex carbohydrates and certain vitamins, such as vitamin D (in the form of ergosterols) and some B vitamins, while being low in fat and simple carbohydrates.9 The Food and Drug Administration (FDA) even ranks mushrooms as “healthy foods,” which raises the possibility of cultivating mushrooms as a sustainable food source in space.9 A popular edible mushroom, Pleurotus ostreatus, also known as the oyster mushroom, has proven useful because of its versatility in cultivation. Like many edible mushrooms, P. ostreatus can feed off a variety of lignocellulosic substrates, but it requires a shorter time to grow and has a high fruiting-body-to-substrate ratio when developing.10 All these factors make oyster mushrooms one of the most cultivated edible mushrooms, and a good candidate for use in space.10

The greatest strength of fungi, their ability to break down materials, also happens to be their greatest drawback. Common fungi that find homes in household refrigerators also happen to be adept at forming corrosive secondary compounds. Genera found both in spacecraft and on Earth, like Geotrichum, Aspergillus, and Penicillium, create destructive compounds capable of hydrolysis, such as acetic acid.11 These chemicals, coupled with the penetration of hyphae (the mycelial, root-like filaments of fungi), allow fungi to damage deep into surface layers like wood, stone and walls.1,11 Not surprisingly, given the variety of substrates fungi can colonize, spacecraft are not immune to infestation.

Conclusion

At home, molds are considered pests, growing in our bathrooms, in the wet corners of houses and in old refrigerators. In nature, mushrooms play an integral part in the cycling of nutrients, breaking down lignin and other plant material. Out in space, however, we are just beginning to learn about fungal presence. Fungal interactions with gravity and radiation seem to come straight out of a science-fiction novel, but the implications of fungi as a nutritional and yet destructive presence are real. People have looked for extraterrestrial life for generations, and it seems that only now are we noticing the most interesting, important and fuzzy aliens so far. Perhaps one day, scientists will find a way of incorporating fungi to aid in the production of fresh food out in space and the degradation of biological waste; or maybe fungi will be used in the absorption of radiation. Until then, our eyes are peeled.

Tristan Wang ’16 is a junior in Kirkland House concentrating in Organismic and Evolutionary Biology.

Works Cited

  1. Cook, Gareth. “Orbiting Spacecraft Turns out to Be Food for Aggressive Mold.” Laura Lee News: Conversation for Exploration, 1 Oct. 2000. Web.
  2. Pitt, John I. “Biology and Ecology of Toxigenic Penicillium Species.” Mycotoxins and Food Safety 504 (2002): 29-41.
  3. Wilson, David M., Wellington Mubatanhema, and Zelijko Jurjevic. “Biology and Ecology of Mycotoxigenic Aspergillus Species as Related to Economic and Health Concerns.” Mycotoxins and Food Safety 504 (2002): 3-17.
  4. Trail, Frances. “Fungal Cannons: Explosive Spore Discharge in the Ascomycota.” FEMS Microbiology Letters 276.1 (2007): 12-18.
  5. Moore, David, and Alvidas Stočkus. “Comparing Plant and Fungal Gravitropism Using Imitational Models Based on Reiterative Computation.” Advances in Space Research 21.8-9 (1998): 1179-182.
  6. Kher, Kavita, John P. Greening, Jason P. Hatton, Lilyann Novak Frazer, and David Moore. “Kinetics and Mechanics of Stem Gravitropism in Coprinus Cinereus.” Mycological Research 96.10 (1992): 817-24.
  7. Melville, Kate. “Chernobyl Fungus Feeds On Radiation.” Chernobyl Fungus Feeds On Radiation. Sci Gogo, 23 May 2007. Web.
  8. Dadachova, Ekaterina, and Arturo Casadevall. “Ionizing Radiation: How Fungi Cope, Adapt, and Exploit with the Help of Melanin.” Current Opinion in Microbiology 11.6 (2008): 525-31.
  9. Stamets, Paul. Mycelium Running: How Mushrooms Can Help save the World. Berkeley, CA: Ten Speed, 2005.
  10. Sánchez, Carmen. “Cultivation of Pleurotus Ostreatus and Other Edible Mushrooms.” Applied Microbiology and Biotechnology 85.5 (2010): 1321-337.
  11. Schaechter, Moselio. “Biodeterioration – Including Cultural Heritage.” Encyclopedia of Microbiology. Amsterdam: Elsevier/Academic, 2009. 191-205.

A Winning Combination Against Drug Resistance

by Ryan Chow

Earlier this year, President Obama announced the Precision Medicine Initiative. Proclaiming that the initiative would “lay the foundation for a new generation of lifesaving discoveries,” the President proposed setting aside $215 million to expedite the clinical translation of personalized genetics research.1 The initiative specifically highlights the development of patient-specific cancer therapies as an immediately promising area for breakthrough research. Accordingly, the National Cancer Institute (NCI) is budgeted to receive $70 million for this specific purpose.

In recent years, several next-generation cancer drugs have been approved for patients harboring certain genetic abnormalities and alterations. For instance, lung cancer patients with activating mutations in EGFR, a gene that regulates cell division, can now undergo erlotinib treatment. Similarly, patients with alterations in ALK, a gene that controls cancer progression, can receive crizotinib and ceritinib therapy.2-4 Such targeted therapeutics are designed to specifically counteract the cancer-promoting effects of these genetic mutations, largely leaving other cellular pathways intact. With their heightened specificity and efficacy profiles, these next-generation drugs have revolutionized the world of cancer therapy, improving patient prognosis and quality of life. It is no wonder, then, that there has been a strong push to fund further research into personalized cancer therapeutics.

But for all of their strengths, these new drugs often have a major caveat: cancers develop resistance to targeted therapeutics within one to two years of initial administration.5 The mechanisms underlying drug resistance have been extensively studied in vitro using established cancer cell lines.6 Although in vitro methods may not always faithfully recapitulate the progression of actual human cancers, these studies have been  critical in identifying secondary “bypass tracks” as facilitators of drug resistance; namely, a cancer may gradually evolve alternate strategies for promoting tumor growth upon pharmacological inhibition of the cancer-initiating mutation. Following this logic, one would hypothesize that combination cancer therapy could potentially overcome drug resistance by blocking both the primary oncogenic pathway as well as the secondary bypass track.

Looking for a way to efficiently identify combination therapies for drug-resistant lung cancers, researchers at Massachusetts General Hospital recently developed a screening platform to interrogate possible secondary bypass tracks within patient tumors that had already become resistant to a targeted therapeutic.7 The results of Crystal et al.’s study were striking: for many of the patient-derived cell lines, the high-throughput drug screen was able to clearly pinpoint the bypass track that the tumors had acquired, thereby uncovering a viable route for further pharmacological intervention. For instance, it was found that co-administration of MET inhibitors could resensitize tumor cells to EGFR inhibitors; importantly, statistical analysis demonstrated that the two inhibitors had synergistic effects that far exceeded the predicted efficacy if each drug was functioning independently.
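
One common way to quantify that kind of synergy is the Bliss independence model, in which the expected combined effect of two independently acting drugs is E_a + E_b - E_a*E_b; an observed effect above that expectation suggests synergy. The sketch below applies this model to invented inhibition values and is not a reproduction of the statistical analysis in Crystal et al.

```python
# Bliss independence: if two drugs acted independently, the expected combined
# fractional inhibition would be E_ab = E_a + E_b - E_a * E_b. Observed
# inhibition above that expectation suggests synergy. The values below are
# invented for illustration, not data from the Crystal et al. screen.

def bliss_expected(e_a, e_b):
    """Expected fractional inhibition if the two drugs act independently."""
    return e_a + e_b - e_a * e_b

e_egfr_inhibitor = 0.30   # fractional growth inhibition, drug A alone (hypothetical)
e_met_inhibitor = 0.25    # drug B alone (hypothetical)
e_combination = 0.85      # observed with both drugs (hypothetical)

expected = bliss_expected(e_egfr_inhibitor, e_met_inhibitor)
excess = e_combination - expected
print(f"Expected if independent: {expected:.2f}; observed: {e_combination:.2f}; "
      f"excess over Bliss = {excess:+.2f} (positive suggests synergy)")
```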

With this study, the authors were able to identify a diverse range of potent, patient-specific combination therapies that successfully overcame drug resistance. Several intriguing patterns emerged from the aggregate data, revealing commonalities in the mechanisms of acquired resistance. However, one must keep in mind that these patterns, while biologically interesting, are not the key findings of the paper. Rather, the study acts as a proof-of-concept that patient-specific therapeutic strategies can be systematically discovered following the failure of first-line targeted therapies. In that sense, the work of Crystal et al. actually deemphasizes the importance of identifying general trends within the landscape of drug resistance; if we can identify combination therapies on a patient-by-patient level, there is no longer a need to make therapeutic decisions based on population-level correlations.

Paradoxically, their work also serves as a cautionary tale for personalized cancer genomics. Some of the most effective combination therapies that the authors identified through their drug screen could not have been predicted by traditional genetic analysis – that is, the combination therapies targeted pathways that did not possess any genetic alterations. The natural implication is that clinicians must not make treatment decisions based solely on patient genotypes, as the probability of missing important bypass tracks is far from negligible. One would be better off instead performing an entirely unbiased secondary drug screen on a patient-specific basis, in a manner akin to Crystal et al.

In this highly translational work, Crystal et al. have helped set the foundations for the future of personalized cancer medicine. Though their findings have not yet been tested in actual patients, the potential impact to human medicine is clear. Only time will tell if these patient-derived cell models can faithfully capture the complexities of drug resistance and thereby yield clinically effective therapeutic approaches.

Ryan Chow is a junior in Pforzheimer House, studying Human Developmental and Regenerative Biology.

Works Cited

  1. The White House: Office of the Press Secretary. “FACT Sheet: President Obama’s Precision Medicine Initiative.” The White House Briefing Room [Online], January 31, 2015. https://www.whitehouse.gov/the-press-office/2015/01/30/fact-sheet-president-obama-s-precision-medicine-initiative (accessed March 19, 2015).
  2. Tsao, M.S. et al. New England Journal of Medicine 2005, 353(2), 133–144.
  3. Shaw, A.T. et al. New England Journal of Medicine 2014, 370(13), 1189-1197.
  4. Shaw, A.T. et al. New England Journal of Medicine 2014, 371(21), 1963-1971.
  5. Chong, C.R.; Jänne, P.A. Nature Medicine 2013, 19, 1389-1400.
  6. Niederst, M.J.; Engelman, J.A. Science Signaling 2013, 6(294), re6.
  7. Crystal, A.S. et al. Science 2014, 346(6216), 1480-1486.

 

 

Mapping the Nervous System: Tracing Neural Circuits with Color Changing Proteins

by Christine Zhang

Stepping out the front door of my dorm, I am frequently greeted by a sharp gust of wind that convinces me to turn back and grab a coat. The reaction is almost instantaneous. But in that split-second, the act of turning around involves some 100 billion action potentials, with signals crossing over 20 quadrillion synapses1. Given the complexity of the nervous system, it can be difficult to pinpoint the origin of neural circuits. Yet these details are critical to understanding and treating neurodegenerative diseases. There has been substantial research into methods for identifying active neural circuits, and at present, approaches that focus on synapses look the most promising.

There are billions of neurons in the human body, lined up side by side and separated by microscopic gaps known as synapses. As an action potential arrives at the end of a neuron, calcium channels in that neuron open and calcium ions rush in, triggering the release of neurotransmitters that carry the signal across the synapse2. The steep difference in calcium concentration inside and outside the neuron drives this influx, and the resulting rise in intracellular calcium is what triggers neurotransmitter release; thus, calcium levels and the intensity of neuronal activity are strongly correlated.

At present, there are two widely used methods of detecting neural activity: genetically encoded calcium indicators (GECIs) and immediate early genes (IEGs). Both GECIs and IEGs monitor neuron function at the molecular level to give estimates of neural activity, but each approach has significant limitations. GECIs track calcium concentrations directly but require sophisticated machinery and physical restraint of the subject, and they provide only a limited field of view3. These requirements make GECIs feasible in few situations. In contrast, IEGs can be monitored over a larger time window and in freely moving animals, but they do not measure neural activity directly. Rather than tracking calcium levels, IEG methods record the expression of intermediate genes, which is at best weakly correlated with neural electrical activity3.

In light of these difficulties, Benjamin Fosque, Yi Sun, and Hod Dana, researchers in the Department of Biochemistry and Molecular Biology at the University of Chicago, developed a new way to monitor neural activity. Their approach features a fluorescent protein that changes color from green to red under violet light. Fosque, Sun, and Dana engineered this fluorescent protein, called CaMPARI, to undergo the color change preferentially in the presence of calcium. CaMPARI, or calcium-modulated photoactivatable ratiometric integrator, converts from green to red about 21 times faster in the presence of calcium than in its absence3. Together, the rate of photoconversion and the intensity of fluorescence provide an unusually rich readout for identifying and examining active cell types, giving the method groundbreaking potential.
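
As a rough illustration of how that rate difference becomes a readout, the sketch below treats photoconversion as simple first-order kinetics; the rate constant, illumination time, and exact 21-fold ratio applied here are assumptions for illustration, not parameters reported in the study.

import math

# Minimal sketch under an assumed first-order photoconversion model: the
# fraction of CaMPARI converted from green to red after t seconds of violet
# light is 1 - exp(-k*t), with the rate constant k roughly 21x higher when
# calcium is bound (i.e., in recently active neurons).

K_WITHOUT_CA = 0.01               # hypothetical rate constant without calcium, 1/s
K_WITH_CA = 21 * K_WITHOUT_CA     # ~21-fold faster with calcium bound

def red_fraction(k: float, t: float) -> float:
    """Fraction of the protein photoconverted to red after t seconds of light."""
    return 1.0 - math.exp(-k * t)

t = 10.0  # seconds of violet illumination (hypothetical)
print(f"quiet neuron:  {red_fraction(K_WITHOUT_CA, t):.1%} red")
print(f"active neuron: {red_fraction(K_WITH_CA, t):.1%} red")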

When tested in vivo in Drosophila melanogaster and larval zebrafish to track whole-brain activity and neural pathways, CaMPARI proved highly effective3. It combines the advantages of the traditional tests without their drawbacks: like GECIs, it reports on calcium directly, and like IEGs, it offers a flexible time window and freedom of movement. As an additional benefit, CaMPARI leaves open the possibility of follow-up experiments, including electrical recordings, antigen detection, and genetic profiling of cells.

With enhanced tracking of neural activity, neuroscientists can more rigorously study neurons at the molecular level and observe the behavior of individual cells. Given its in vivo applications, CaMPARI could also contribute to developing personalized treatments for patients with neurodegenerative diseases and to understanding the exact mechanisms by which neural diseases affect the body. With a reliable and easily deployable method for monitoring neural activity, the potential scientific and medical gains are enormous.

Christine Zhang ‘18 is a freshman in Thayer Hall.

Works Cited

  1. Bryant, A. What is the synaptic firing rate of the human brain? Stanford Neuroblog, Aug. 27, 2013.
  2. Sudhof, T.; Malenka, R. Neuron 2008, 60(3), 469-476.
  3. Fosque, B. et al. Science 2015, 347(6223), 755-760.

 

Neurosurgeon or the Next Monet?

by Alexandra Rojek

The mysteries of the brain and its functioning are an object of fascination for researchers and ordinary people alike, but they are also the source of much difficulty in cases of brain cancer. A surgeon’s job of removing a cancerous tumor, already inherently difficult, is made harder still by the fact that tumors found in the brain often do not have defined boundaries. For other types of tumor, the surgeon might remove an extra margin of tissue to ensure complete and successful removal; for brain tumors, taking such a margin is far riskier, since removing any healthy tissue could mean fundamental changes to an individual’s personality or identity, or severe deficits in physical functioning.

A neurosurgeon’s task of tumor removal may become much more defined very soon with the development of Tumor Paint, a dye derived from deathstalker scorpion venom that binds only to cancerous cells, allowing a neurosurgeon to see the boundaries of a tumor directly rather than guess at where healthy tissue ends and cancerous tissue begins.

Tumor Paint is a fusion of a fluorescent dye, Cy5, with chlorotoxin, a non-toxic component of deathstalker scorpion venom; together, the tumor-binding chlorotoxin and the attached dye glow when exposed to near-infrared light. The conjugate can bind to malignant gliomas, medulloblastomas, prostate cancer, intestinal cancer, and sarcoma in mouse models. Remarkably, it can detect small numbers of metastatic cells in the lymph channels, presenting the opportunity to detect metastases before they localize to secondary sites and become established tumors.1

Beyond ‘painting’ tumor boundaries to aid the difficult and sensitive work of surgeons, chlorotoxin also shows promise for delivering drug-carrying nanoparticles specifically to cancerous tissue. Chemotherapy rests on the principle that, because cancer cells grow and divide faster than healthy cells, they are more affected by such drugs; healthy cells are still harmed, however, which produces the undesirable side effects of traditional chemotherapy. Delivering drugs directly to cancer cells, for example via nanoparticles, would allow drug discovery to pursue more potent compounds, since side effects would be minimized.

Chlorotoxin has the ability to target cancer cells on its own, and has even been developed as an imaging agent for use in MRI scans to identify tumors, when conjugated with superparamagnetic nanoparticles.2 In further work, it has also been shown that this nanoparticle-conjugated chlorotoxin can inhibit the invasive capacity of tumor cells, suggesting potential therapeutic benefit. The chlorotoxin conjugate was also observed to bind more strongly to cells that exhibited markers correlated with more aggressive and invasive stages of tumors, showing even more promise for its therapeutic applicability.3

What started as deadly scorpion venom and inspired the development of an injectable dye to identify tumors may allow the neurosurgeon to become a highly precise painter and hunter of cancerous tissue, holding the promise of transforming some patients’ prognoses – and the boundaries of what it might inspire beyond that are just beginning to be visible on the horizon.

Alexandra Rojek ‘15 is a senior in Currier House, concentrating in Chemical and Physical Biology.

Works Cited

  1. Veiseh, M. et al. Cancer Res. 2007, 67, 6882-6888.
  2. Veiseh, O. et al. Nano Lett. 2005, 5, 1003-1008.
  3. Veiseh, O. et al. Small 2009, 5, 256-264.

Slime Mold: The Small, Ugly, and Extraordinary

by Tristan Wang

Slime molds are some of the world’s ancient mysteries. From the independent unicellular amoeba to the cooperation of many individuals, these globs of ooze share biological functions that few other species or even kingdoms exhibit. Even though they are not seen conspicuously in our day-to-day lives, slime molds may hold the key to understanding fundamental problems in biology, such as spatial memory and altruism, through their social behavior.

Overview of Mycetozoa

Historically, scientists have not had much luck in classifying these mysterious critters. Animal biologists liked to point out the animalistic behavior of these slug-like creatures, while plant enthusiasts noted their stalk-like structures. Mycologists, in the meantime, argued that the root-like arms of feeding slime molds resembled a feeding fungus—an argument that eventually positioned the slime molds at the base of the fungal lineage (1).

Nowadays, scientists have classified these creatures under the infraphylum of Mycetozoa within the kingdom of Amoebozoa — or amoebas — which are commonly known to be tiny slug-like creatures that move via the flow of internal fluids within cells, a process called cytoplasmic flow (1). For easier classification, Amoebozoa is often referred to as a part of the kingdom of protists, a group that excludes fungi, plants, and animals. Within Mycetozoa, slime molds are further categorized into two main groups: cellular (dictyostelic) and plasmodial (myxogastric) slime molds.

Plasmodial slime molds live most of their lives as masses of protoplasm, a form of slime that crawls around before finding a suitable food substrate, like bacteria or other organic matter (2). Often, these masses of slime can be found in cool, moist areas, but, during reproduction, they move to a more open location where their spores can be carried by the wind (2). Cellular slime molds, on the other hand, live mostly as independent amoebas that feed until an external stimulus, such as a change in environment or food source, prompts the individual amoebas to congregate into a slug-like form (2). This mass of cells ultimately forms a stalk with a clump of spores sitting on top, from which the spores can be released into the air (2).

Plasmodial Slime Molds and Navigation

One of the most fascinating abilities of slime molds is their capacity to solve mazes. Several well-known videos attest to the mold’s ability not only to navigate labyrinths but also to find the most efficient route from the entrance to the exit (3). From this, it is not too far a leap to imagine practical uses for these critters. Scientists have applied the mold’s ability to organize itself to model the shortest distances between city train stations, using a miniaturized map with oatmeal flakes to represent the stops (4). Simulations like these show that the shape in which the slime mold organizes its protoplasm often resembles actual railway networks (4).
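
For comparison, the toy sketch below computes a shortest route on a small, hypothetical network of stations using Dijkstra’s algorithm. This is only a point of reference: the slime mold performs no such computation, yet the tube networks it settles into tend to approximate routes of this kind.

import heapq

# Illustrative only: a shortest-path search on a hypothetical "station" graph,
# for comparison with the efficient networks a plasmodium grows between
# oat-flake "stations". Edge weights are arbitrary distance units.

graph = {
    "A": {"B": 4, "C": 2},
    "B": {"A": 4, "C": 1, "D": 5},
    "C": {"A": 2, "B": 1, "D": 8},
    "D": {"B": 5, "C": 8},
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: returns (total distance, list of stations)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, length in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (dist + length, neighbor, path + [neighbor]))
    return float("inf"), []

print(shortest_path(graph, "A", "D"))  # (8, ['A', 'C', 'B', 'D'])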

Behind this ability to connect different parts of the organism is cytoplasmic streaming, the process by which nutrients are shuttled around the protoplasm (5). These protoplasmic connections allow efficient transport between food sources and growing parts of the organism (5). Scientists have also explored what happens as a slime mold moves. According to a paper in the Proceedings of the National Academy of Sciences, a slime mold’s plasmodium is made up of individual “oscillating units” that are influenced by external stimuli from neighboring units and the environment (6). When food is detected by receptor molecules, the region of cell membrane closest to the food allows cytoplasm to flow toward the attractant (6). As a result, the slime mold appears to grow toward a specific location.

Even more interesting is the slime mold’s ability to find its way to food sources, a phenomenon described as an externalized “spatial memory,” which is usually associated only with organisms of higher mental capacity. When slime molds move, they leave behind a mass of extracellular slime that contains distinctive sugar polymers (6). Slime molds strongly avoid slime they have already deposited, which allows an individual organism to mark the territory it has already traveled through (6). It has been theorized that this rudimentary spatial memory is the predecessor of the more sophisticated memory of other organisms (6).

Cellular Slime Mold and Altruism

Altruism is selfless concern for others over concern for oneself. In biology, however, this behavior is rarely found outside of animals. Currently, it is thought that altruism is favored either by selection for closely related kin (much as worker bees serve the queen bee) or by the expectation of reciprocation from the benefiting party (7).

When reproduction occurs in cellular slime molds, some individuals of the population must create a stalk that holds a mass of spores for dispersal (2). While the spores on top get the opportunity to reproduce, the individual amoebas that create the stalk die and do not get the chance to pass on their genes (2). However, in biology, there is always a catch.

One study that looked at pure cultures of cellular slime molds and their reproduction found that combinations of less related slime molds created a much smaller stalk relative to the spore capsule than did pure cultures of either variety (8). These results imply that when populations of differing slime molds encounter each other, neither invests heavily in a needed structure (9).

Indeed, some studies have noted that cheating occurs among coexisting slime molds. “Cheater” cultures may contribute a smaller portion of their cells to the formation of the stalk when cooperating with other populations (7). As a result, these unfair cultures gain a reproductive advantage. How, then, can a system of altruism persist when cheating pays? It has been shown that, when this trend of cheating is allowed to continue, it leads to the development of slime molds with and without stalks (7). When both populations coexist, the stalk-less slime mold takes advantage of the stalked population, but the stalk-less population cannot persist on its own, while the stalked population can (7). Thus, in a sense, nature punishes cheaters.

Conclusion

The study of slime molds has revealed these organisms to be not just masses of slime on the forest floor but charismatic creatures able to solve mazes and model transportation networks. Not only do slime molds play an integral part in our ecology, but their social behavior also provides important insight into the inner workings of communication, movement, and altruism. At least in biology, slime molds are truly extraordinary.

Tristan Wang ’16 is an Organismic and Evolutionary Biology concentrator in Kirkland House.

Works Cited

  1. Baldauf, S. L. “Origin and Evolution of the Slime Molds (Mycetozoa).” Proceedings of the National Academy of Sciences 94.22 (1997): 12007-2012.
  2. Stephenson, Steven L., and Henry Stempen. Myxomycetes a Handbook of Slime Molds. Portland, Or.: Timberland, 2000.
  3. Nakagaki, Toshiyuki. “Smart Behavior of True Slime Mold in a Labyrinth.” Research in Microbiology 152.9 (2001): 767-70.
  4. Yong, Ed. “Slime Mould Attacks Simulates Tokyo Rail Network.” Web log post. Scienceblogs.com. ScienceBlogs, 21 Jan. 2010.
  5. Adamatzky, Andrew, and Jeff Jones. “Road Planning With Slime Mould: If Physarum Built Motorways It Would Route M6/m74 Through Newcastle.” International Journal of Bifurcation and Chaos 20.10 (2010): 3065
  6. Reid, C. R., T. Latty, A. Dussutour, and M. Beekman. “Slime Mold Uses an Externalized Spatial “memory” to Navigate in Complex Environments.” Proceedings of the National Academy of Sciences 109.43 (2012): 17490-7494.
  7. Brannstrom, A., and U. Dieckmann. “Evolutionary Dynamics of Altruism and Cheating among Social Amoebas.” Proceedings of the Royal Society B: Biological Sciences 272.1572 (2005): 1609-616.
  8. Deangelo, M.J., V.M. Kish, and S.A. Kolmes. “Altruism, Selfishness, and Heterocytosis in Cellular Slime Molds.” Ethology Ecology & Evolution 2.4 (1990): 439-43.

 

Fear vs. Fact: The Modern Anti-Vaccination Movement

by Brendan Pease

“It just seemed like it was impossible,” said Kathryn Riffenburg, a resident of nearby Chicopee, Massachusetts. “We went from sitting in the hospital day by day, waiting for him to get better for almost two weeks, to doctors telling us we had a 50/50 chance he was going to make it.” Two years ago, “he” was Brady Riffenburg — Kathryn’s newborn son, who was born healthy but lived only nine weeks. Soon after birth, Brady contracted whooping cough, a disease named for the uncontrollable coughing fits that leave those afflicted unable to breathe. Complications can be even more serious, ranging from brain swelling to seizures. By the time Brady succumbed, the disease had caused swelling so severe that he was unrecognizable; his body was so ravaged that Kathryn chose to have a closed-casket funeral (1).

Yet Brady’s tragic death could have been easily prevented. In fact, a vaccine that Kathryn could have received during pregnancy would likely have saved Brady’s life. Unfortunately, what happened to the Riffenburgs is becoming increasingly common, as vaccination rates for preventable diseases drop. For example, new measles cases in the United States tripled in 2013, with outbreaks in eight distinct communities (2). California had 14,921 kindergartners this year who were not vaccinated because of their parents’ beliefs, the highest number of any state. From January to March of this year, there were 49 reported cases of measles in California; over the same period in 2013, there were four (1). Estimates of preventable deaths caused by low vaccination rates vary greatly, but many hover around 1,000 per year, a rate that most agree is currently increasing (3). As more and more Americans forgo vaccinating their children, is there any scientific basis to the claims that vaccines are harmful?

Vaccines: An Overview

In short, no. Vaccines work in a simple and elegant way. When humans get sick, our immune systems recognize the bacteria or viruses causing the disease and produce proteins called antibodies. In addition to helping the immune system kill the foreign invaders, antibodies remain in our blood, ready to recognize and quickly destroy the same invaders before they have a chance to spread and cause sickness. The role of a vaccine is to introduce a weakened or dead pathogen so that our immune system can create antibodies without our getting sick in the first place. Getting vaccinated and getting sick produce the same antibodies for each disease; vaccines simply spare us the disease itself. This is especially important for diseases with high mortality that our bodies cannot combat effectively without preexisting immunity. If enough people within a population get vaccinated, the population achieves “herd immunity”: the disease cannot spread because so many people are immune (4).
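
A rough back-of-the-envelope relation, not stated in the article, makes this concrete: if one case infects R0 other people in a fully susceptible population, then immunizing at least a fraction 1 - 1/R0 of the population prevents each case from replacing itself, so outbreaks fizzle out. The R0 values below are approximate textbook figures used purely for illustration.

# Illustrative sketch: herd immunity threshold = 1 - 1/R0, where R0 is the
# average number of people one case infects in a fully susceptible population.

def herd_immunity_threshold(r0: float) -> float:
    """Minimum immune fraction needed to stop sustained transmission."""
    return 1.0 - 1.0 / r0

# Approximate, illustrative R0 values:
for disease, r0 in [("measles", 15.0), ("pertussis", 12.0), ("seasonal flu", 1.5)]:
    print(f"{disease} (R0 ~ {r0:g}): ~{herd_immunity_threshold(r0):.0%} must be immune")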

Contrary to popular belief, vaccines have been around far longer than most of modern medicine. According to HistoryOfVaccines.org, a project of the College of Physicians of Philadelphia, there is evidence of smallpox inoculation occurring as early as 1000 CE in China. However, vaccination did not become widespread until Edward Jenner first used cowpox material to confer smallpox immunity in 1796. Nearly two centuries later, smallpox was eradicated in the 1970s using the same scientific principle, albeit with some modifications. The number of vaccines burgeoned in the early and mid-1900s as biologists gained more knowledge of microbiology and, later, the structure of DNA (5). Today, the CDC recommends that children get vaccines covering 16 different diseases, from hepatitis A to tetanus (4).

Although vaccines are overwhelmingly beneficial for human health, they are not entirely without flaws. As the CDC states on its website, “Like any medication, vaccines can cause side effects. The most common side effects are mild. On the other hand, many vaccine-preventable disease symptoms can be serious, or even deadly.” (4). These side effects, however, are rare and vastly outweighed by the benefits to public health. The risks are further mitigated by CDC guidelines specifying who should not get vaccinated: those who are on immunosuppressive drugs, are not feeling well, or are allergic to ingredients of the vaccine (4). Vaccines are also remarkably safer than they were 50 years ago, with adverse reactions and side effects becoming increasingly rare (2). In sum, the benefits of vaccination far outweigh the rare drawbacks.

The Anti-Vaccination Movement

Opposition to vaccines is nothing new; there are records of people harassing smallpox inoculators in the 1700s (5). However, the modern anti-vaccination movement has its origins in a study published in 1998 in The Lancet, a highly influential peer-reviewed medical journal. The study, “Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children,” suggested a link between the measles, mumps, and rubella (MMR) vaccine and autism. Following a sample of 12 children, 11 of whom were boys, the study concluded that “onset of [autism] behavioral symptoms was associated, by the parents, with measles, mumps, and rubella vaccination in eight of the 12 children, with measles infection in one child, and otitis media in another” (6). The study was widely criticized as being “filled with false and fraudulent data,” and it was fully retracted in 2010 (2).

But once the study was published, the damage was done. Vaccination rates plummeted, especially in the United Kingdom and the United States, and have stayed lower ever since, as anti-vaccination activists continue to cite the infamous study. After more than a decade of lower vaccination rates, the damage to human health is evident. For example, in 2010 there were nearly 10,000 cases of whooping cough in California alone, “causing the deaths of 10 infants under the age of 1 – the most in the state since 1947” (2). From 2007 to 2014, over 100,000 people caught preventable illnesses for which vaccines are widely available (3). Decades of progress in vaccine research have been undermined by anti-vaccination advocacy based largely on a retracted scientific study.

So if the anti-vaccination movement does not have science behind it, how exactly does it spread? One factor is celebrity activism. Several actors have participated in anti-vaccine campaigns, including Rob Schneider, who campaigned against a bill in California that would have made it harder to obtain personal exemptions from vaccine requirements (7). Another prominent anti-vaccine activist is Jenny McCarthy, who co-hosted The View for one season and spread her anti-vaccine views through the show; she has since backpedaled on some of them (8). The movement also persuades parents through fear-mongering, frightening them out of vaccinating their children with tactics that range from blatant denial of facts and bad science to listing vaccine ingredients that sound harmful and unnatural to those without a scientific background (9).

These scare tactics are apparent on VacTruth.com, a popular anti-vaccination website with over 58,000 likes on Facebook. The website was founded in 2009 by Jeffry Aufderheide, whose first son stopped reaching developmental milestones around the time he was vaccinated. One page of the website lists ingredients of vaccines such as “monkey kidney tissue” and “mouse brain,” taking advantage of the “ick” factor of animal brains and kidneys rather than discussing the science behind the ingredients. Package inserts—the pages of information listing possible rare side effects and ingredients that often come with vaccines—are viewed as evidence that vaccines are unsafe, despite the fact that all medications have similar warnings. Headlines under the News section include, “Boy Gets Diagnosed with Autism After 32 Shots,” “The CDC: A Truly Corrupt and Dangerous Organization,” and “The Scary Truth About the New York City Flu Mandate.” People who support vaccination are labeled “vaccine pushers,” and scientists and CDC workers are referred to as “liars” and “evil” (10). In essence, many anti-vaccination sites such as VacTruth.com gain followers through fear mongering rather than reason.

The Pro-Vaccination Response

Although the anti-vaccination movement continues to grow, efforts to convince parents to vaccinate their children have also increased. Websites like historyofvaccines.org, which are run by physicians and seek to convince people that vaccines are safe and natural, have gained traction online. Other websites reach out to those who are anti-vaccination and provide support for people who were once against vaccinating but whose views have changed. For instance, the website voicesforvaccines.org cites scientific studies demonstrating the efficacy of vaccines and features posts written by parents who used to be anti-vaccination. In her post “Leaving the Anti-Vaccine Movement,” Megan Sandlin describes the intense backlash she faced from friends when she became pro-vaccine, saying that she lost friends and was told that her daughters were going to get autism (9). Though the internet has allowed anti-vaccine communities to grow, it has also given rise to pro-vaccine websites and communities for newly pro-vaccine parents.

In addition, celebrities who oppose vaccination are commonly called out for their false and damaging claims and occasionally face professional consequences. State Farm dropped an ad campaign in September that featured actor and anti-vaccination activist Rob Schneider (7). When it was announced that Jenny McCarthy, who re-popularized the claim that vaccines cause autism, would co-host the seventeenth season of The View, a pro-vaccination website tallying how many people have died from preventable illnesses was created at the domain JennyMcCarthyBodyCount.com (3). Although fewer parents are vaccinating their children, the vast majority of Americans thankfully continue to believe that vaccines benefit human health; thus, advocating against vaccines remains socially and politically unpopular.

Yet despite pro-vaccination efforts, vaccination rates are still decreasing. So what can be done to ensure that further outbreaks of diseases such as measles and rubella do not occur? From a policy standpoint, states could lower the number of vaccine exemptions they give. Currently, 19 states allow certain children to be exempt from vaccine requirements due to their parents’ beliefs (2). However, eliminating or reducing these exemptions could be viewed as excessive government intervention in medical care, potentially making these efforts politically unpopular. The simplest way to raise vaccination rates is to continue awareness campaigns so that anti-vaccination celebrity activists will lose their influence. At times, it can be difficult to argue and engage with people who ignore scientific consensus and use incendiary rhetoric; however, the most convincing pro-vaccination efforts have not stooped to ad hominem attacks. To raise vaccination rates, the scientific community must ultimately do what it does best: remain objective and offer recommendations based on well-tested facts.

Brendan Pease ’17 is a Molecular and Cellular Biology concentrator in Kirkland House.

Works Cited

  1. Alcinder, Y. “Anti-Vaccine Movement is Giving Diseases a 2nd life.” USA Today. (April, 2014).
  2. Pearl, R. “A Doctor’s Take on the Anti-Vaccine Movement.” Forbes. (March, 2014).
  3. “Anti-Vaccine Body Count.” (Sept, 2014). Retrieved from http://www.jennymccarthybodycount.com/Anti-Vaccine_Body_Count/Home.html.
  4. “Vaccine Fact Sheet.” Centers for Disease Control and Prevention. (May, 2014). Retrieved from http://www.cdc.gov/vaccines/vpd-vac/fact-sheet-parents.html.
  5. “The History of Vaccines.” (2014). Retrieved from HistoryOfVaccines.org.
  6. Wakefield, A. et al. Ileal-lymphoid-nodular Hyperplasia, Non-specific Colitis, and Pervasive Developmental Disorder in Children. The Lancet 351, 637-641. (Feb., 1998).
  7. Blake, M. “State Farm dumps pitchman Rob Schneider over anti-vaccine views.” LA Times. (Sept, 2014).
  8. McCarthy, J. “The Gray Area on Vaccines.” Chicago Sun-Times. (April, 2014).
  9. Sandlin, M. “Leaving the Anti-vaccine Movement.” (2014). Retrieved from http://www.voicesforvaccines.org/leaving-the-anti-vaccine-movement/.
  10. “News.” (Oct, 2014). Retrieved from http://VacTruth.com/news/.

Mortality and Morality: The Ethics of Ebola

by Jackson Allen

If not controlled within sixty days, the United Nations warned recently, the current Ebola outbreak will lead to an unprecedented and unplanned situation (1). Over the past year, the epidemic has been building in three countries in West Africa: Liberia, Guinea, and Sierra Leone. Outside of public health circles, the outbreak remained largely under the radar until recently, seen as just another bout of disease in a far-off land. Beginning in March 2014, twenty-three deaths were attributed to a mysterious hemorrhagic fever (2). It was not until August that the World Health Organization (WHO) declared the outbreak in West Africa an international public health emergency, a full five months and over three thousand deaths later (3). Recently, the first cases of Ebola were confirmed in the United States and Europe, as the death toll surpassed 4,000 in West Africa (2). When coupled with a 70-90% mortality rate, the ominous CDC prediction of 1.4 million cases by January 2015 is a clear indication that the world must act (2).

This outbreak has brought to light many bioethical issues that are often not considered until an epidemic occurs. For example, much of the confusion in the initial stages of Ebola resulted from problems with the coordination of efforts to combat the disease (3). In a case like Ebola, we must ask which agency should be responsible for leading the international effort. Individually, the CDC, WHO, or groups like Doctors without Borders are unable to completely manage an epidemic of this scale (3). Clearly, there is a need for coordinated efforts by national governments, humanitarian aid organizations, pharmaceutical groups, and public health agencies like the CDC and WHO. However, this raises more questions related to power structure, ability to adjust to changing circumstances, and unified efforts for containment.

Secondly, the worldwide community must examine why quarantine—typically the most effective method of stopping disease spread—was ineffective at containing this epidemic. Ebola is not an airborne illness; it is only spread by contact with the blood or bodily fluids of those suffering from the disease. Thus, the outbreak should have been relatively easy to contain by isolating sick individuals (2). Still-increasing death tolls clearly demonstrate that something went wrong with this response. Fear of Western medicine, cultural differences, and inadequate resources all contributed to the unraveling of the situation in West Africa. Though the problem here is clear in hindsight, a better process for controlling this type of outbreak has eluded us thus far.

Finally, the world has recently seen the rise of several experimental drugs for the treatment of Ebola (4). The rush to produce an effective treatment raises many ethical questions. If the world’s scientists were to find a treatment or vaccine deemed safe after very limited human trials, it remains unclear to whom it would be distributed, and how. Suggestions include vaccinating healthcare workers and military personnel first (5). Others argue that citizens of the affected countries should be among the first vaccinated. However, there are serious ethical pitfalls with either route.

Quarantine

The primary public health procedures for containing a disease like Ebola focus on quarantine, which requires health care workers to isolate those affected for up to a few weeks (2). This is typically most effective in countries where governments and public health experts are able to take control swiftly and without opposition. In the chaos of this epidemic, however, quarantine has been largely ineffective (2). Medical clinics are frequently overrun by people who fear foreign intervention and medical practices that are not well explained or understood (6). The violence and fear accompanying the outbreak create a very difficult environment in which to manage an epidemic (7). The geography of West Africa also makes widespread quarantine more difficult, as rural villages often have no resources for medical treatment, and people are unable or too afraid to seek care (3, 7). Doctors Without Borders, an international humanitarian organization, found that its resources to contain the outbreak were wholly insufficient, even after expanding its operations (7). As the only available medical care for Ebola centers on isolation and the administration of fluids, lack of supplies is clearly a major problem. In August, a Doctors Without Borders report noted, “It is not currently possible to administer intravenous treatments” (7). The report also cited lack of safety equipment and problems with disposing of bodies as grave concerns for the future (7). As a result, many treatment centers have been forced to shut down or drastically reduce their capacity to care for patients.

A swift response in the early stages of the outbreak would likely have contained the virus while building trust among those living in affected countries. But by now, the outbreak is too large for quarantine alone to constitute a sufficient response. Even so, the world must continue its efforts to slow the spread of disease and save lives. The best method for doing so, however, remains unclear. Quarantine enforced by military personnel is one option, albeit one that would be costly, slow, and likely to further alienate the very people needing treatment. The world is unlikely to support such action in a sovereign country. Other methods of enforcing quarantine come with similar problems. For example, humanitarian organizations lack the power and jurisdiction to enforce quarantines. Additionally, containment efforts must not ignore the human side of this epidemic. In light of reports that those brought to medical centers often disappear into quarantine and are never seen again, we must place a high importance on communication with the families of those affected (7). The Agence France-Presse put a face on this aspect of the tragedy in a report from outside a Red Cross clinic in Monrovia, the capital of Liberia (7). A mother in the crowd described her worst fears over her son in the clinic: “We get no record from the authorities. They always say we should wait. I come here every day. I want to see my son! Maybe he is already dead” (7).

Chain of Command

Another serious problem with the initial response to the outbreak stems from confusion over the role of different leading health authorities. The CDC, for example, was very active in the initial stages of the epidemic but has since taken a less active role as WHO and others joined the fight against Ebola (3). For its part, WHO was lackluster in its initial response to the outbreak. “There’s no doubt we’ve not been as quick and as powerful as we might have been,” noted WHO Assistant Director General Dr. Marie-Paule Kieny (3).

Budget cuts in recent years have forced the reallocation of emergency preparedness funds to other WHO programs, limiting the response of the United Nations’ primary health agency, which recently estimated that $1 billion would be necessary to stamp out the disease in West Africa (7). The epidemic response department of WHO, which included anthropologists working to overcome cultural differences during outbreaks, was recently closed due to lack of funding (3). Much of the burden therefore falls on outside responders, in part because the countries affected—primarily Guinea, Sierra Leone, and Liberia—have limited capacity to contribute to the containment effort, a dynamic that feeds mistrust of Western healthcare workers. As the world’s health needs vastly outpace WHO funding, Director General Dr. Margaret Chan has called on the countries affected to take primary responsibility for management of the outbreak (3). WHO simply lacks the resources necessary to take control in the manner the world would like to see during an epidemic. The problem becomes much graver when epidemics hit countries that cannot contribute much to the public health response, as is the case with Ebola. The entire annual budget of Guinea, for example, is only $1.8 billion, less than WHO’s (8).

Experimental Drugs

In the months since the first cases of Ebola, the world has seen an incredibly swift response from scientists rushing to produce a potential treatment. The leading drug candidate, called ZMapp, was rushed into use before even beginning human clinical safety trials with the FDA (5). The first doses of ZMapp were given to two American aid workers. The two Americans recovered, as did two Liberian doctors and a British nurse who also received the experimental therapy; two other patients treated with ZMapp died of Ebola (5). The release of this news provoked outrage both in the West and in the countries currently affected by the outbreak. Three leading experts on Ebola, including Peter Piot, co-discoverer of the virus, called for the treatment to be given to patients in West Africa, and many Africans on the front lines of the outbreak saw the allocation as a sign of Western insensitivity (4). While the decision to give the experimental drug to Westerners first may seem questionable, the counterargument can be made that “experimenting” on African patients before examining the safety of ZMapp would be just as slippery a slope: it would be morally reprehensible to give the drug to under-informed patients who lack knowledge of its possible adverse effects.

However, the use of ZMapp in a few cases raises further ethical questions as Mapp Biopharmaceuticals ramps up its production capacity (9). As the world’s supply of ZMapp is built up over the coming months, a system for allocating the drug must be worked out. Healthcare workers are most likely to receive ZMapp first if they fall ill with Ebola, following the logic that these people are vital if other patients are to continue being treated (4). American military or government personnel would likely be very high on the list as well, especially given that development of ZMapp and other experimental treatments was partially funded by the National Institutes of Health (9). Though supplies of an experimental therapy would be very limited, there could be tremendous benefit from treating as many people in West Africa as possible, including the hope of building trust in Western medicine in Liberia, Sierra Leone, and Guinea. Currently, the CDC estimates that actual Ebola cases total two and a half times the official figure (2). Demonstrating that doctors can provide curative treatment could lead to increased reporting of Ebola cases and greater cooperation with efforts to quarantine the sick. Then again, one must carefully consider how people would react to the knowledge that only a very limited supply of treatments exists.

In the near future, modern science will likely produce therapies to combat the Ebola virus. Soon after will come arguments for and against every conceivable group receiving treatment first. Such debates invoke parallels to Seattle’s “God committee,” which in the 1960s infamously used social criteria like church membership and earnings to decide how to distribute extremely limited kidney dialysis treatments among terminally ill patients at Seattle’s Swedish Hospital (2).

The Future

The future of this year’s Ebola outbreak is foreboding, but the challenge of containment is not insurmountable. WHO recently reported to the United Nations that drastic changes to the worldwide response must be seen within sixty days, or the world may “face an entirely unprecedented situation for which we do not have a plan” (1). The epidemic could grow to 10,000 new cases per week (1). The death rate has also risen and may continue to climb as the infrastructure for providing medical care becomes overwhelmed (1).

In the coming months, the world’s scientists, humanitarian organizations, and leaders must find ways to turn the tide of the epidemic. This response must focus primarily on new treatments, safety procedures, and the expansion of access to medical care in West Africa. In essence, the world’s efforts aim to reduce R0, the basic reproduction number associated with the Ebola virus. The current R0 indicates that a contagious person infects, on average, about two new people (10). Bringing R0 below one new infection per infected individual will cause the disease to die out (10). This can be accomplished through treatments, vaccines, quarantine, sanitation, and other measures.
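
A minimal arithmetic sketch (an illustration, not an epidemiological model) shows why the threshold of one matters: with R0 new infections per case, the expected caseload in successive generations of transmission grows or shrinks geometrically.

# Illustrative only: expected cases per transmission generation if each case
# infects R0 others on average, starting from a hypothetical 100 cases.

def cases_by_generation(r0: float, initial_cases: int, generations: int) -> list:
    """Expected case counts for generation 0 through `generations`."""
    return [round(initial_cases * r0 ** n) for n in range(generations + 1)]

print(cases_by_generation(2.0, 100, 5))  # R0 = 2:  [100, 200, 400, 800, 1600, 3200]
print(cases_by_generation(0.7, 100, 5))  # R0 < 1: [100, 70, 49, 34, 24, 17]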

Yet, in these efforts to combat the outbreak, the world must not lose sight of ethical principles. Bringing the ethical issues of an epidemic to public discourse is crucial to our management of outbreaks. With all of the resources the developed world can—and presumably will—bring to bear on Ebola, we cannot ignore the common responsibility to do our best to assist those in need. Lackluster responses to even the first Ebola deaths in Africa have been the greatest ethical pitfall of the outbreak thus far. Looking toward the future, it is imperative that the worldwide response avoids repeating mistakes already made and works to overcome dilemmas already foreseen.

Jackson Allen ’18 is currently a freshman.

Works Cited

  1. Sixty Days To Beat Ebola, United Nations Warns. Sky News. 15 October 2014.
  2. I. Meltzer, C. Y. Atkins, S. Santibanez, B. Knust, B. W. Petersen, et al., Estimating the Future Number of Cases in the Ebola Epidemic — Liberia and Sierra Leone, 2014–2015. Centers for Disease Control and Prevention. Morbidity and Mortality Weekly Report 63(03); 1-14. 26 Sept. 2014.
  3. Fink, Cuts at W.H.O. Hurt Response to Ebola Crisis. New York Times. 03 Sept. 2014.
  4. Dixon, Africans, three Ebola experts call for access to trial drug. Los Angeles Times. 06 Aug. 2014.
  5. Dickenson, The Ethics of Ebola. Project Syndicate. 03 Sept. 2014.
  6. Report: Armed men attack Liberia Ebola clinic, freeing patients. CBS News. 17 Aug. 2014.
  7. Beds scarce, staff scarcer, in Liberia’s overrun Ebola wards. Agence France-Presse. 28 Sept. 2014.
  8. Guinea. The CIA World Factbook. 22 Jun. 2014.
  9. Questions and Answers on Experimental Treatments and Vaccines for Ebola. Centers for Disease Control and Prevention. 29 Aug. 2014.
  10. Doucleff. No, Seriously, How Contagious Is Ebola? National Public Radio. Shots: Health News from NPR. 2 Oct. 2014.

Monarch Butterflies and the Plight of Migratory Species

by Caitlin Andrews

Each year, on the last day of October, people in Mexico honor their ancestors and deceased loved ones during the holiday of Day of the Dead. Over three days of celebration, they march in parades wearing colorful masks and costumes, build ornate altars, and decorate gravestones with orange marigolds—gifts to the departed. Everywhere, there is color, and nowhere is this as true as in the forests of central Mexico, where, each year, the holiday coincides with the arrival of millions of monarch butterflies. At the end of a two-month, nearly 3,000-mile journey from across the United States and Canada to the same patch of Mexican forest, the monarchs have always appeared with such regularity that the local villagers have incorporated them into their traditions. These butterflies, or mariposas, the people say, are the souls of departed loved ones descending from the heavens for a visit to Earth (1).

Yet, in 2013, as Day of the Dead came and passed, there was something noticeably missing from the festivities. At first, people assumed that the monarchs’ absence meant that there was simply a delay in their migration cycle, since a cold spring that year had prevented the monarchs from returning to the north on time (2). But, while 33 million monarchs eventually arrived at their winter roosts in the forest, this number paled in comparison with the astounding 60 million butterflies estimated in the 2012 migration (3, 4). It was a record low both in terms of population size and forest coverage, with butterflies spanning only 1.5 acres, or approximately half of the previous year’s coverage. Most troubling of all, these numbers merely lent support to a long-observed pattern of decline in the butterflies’ populations, suggesting that this was not a matter of normal fluctuation but a sign that there were serious threats facing the monarch butterfly—an icon of Mexico’s ecology and culture (2).

THE PLIGHT OF MIGRATORY SPECIES

In the natural world, there are few sights more breathtaking than a great migration, whether it is flocks of birds navigating over the Appalachians, pods of whales coming up for air on their tireless journeys across oceans, or herds of zebras traversing the African plains. When we witness events like these, it is as if we are temporarily immune to the problems of our world. After all, it can be difficult to associate these throngs of animals with thoughts of endangered species or ecological destruction. Yet we may be too quick to assume that animals that come in the thousands or millions, like monarch butterflies, are not worth our attention or in need of protection. Recently, a group of conservationists has been calling for a reexamination of the threats to migratory species.

This movement could not have come at a better time, as migratory and non-migratory species alike are facing increasing pressures from a multitude of sources, including habitat loss, urbanization, overexploitation, and climate change. In some ways, migratory species are able to reap the benefits of their mobility, especially in light of rising global temperatures. As warm temperatures move northward, it may be easier for migratory species to shift their ranges in response, while non-migratory species might find it difficult to cover the necessary distances. However, migratory species also face unique challenges as a result of their lifestyles. First, climate change can disrupt important signals that animals use to decide when to begin their migrations, such as seasonal light and temperature changes (5). Additionally, because they travel over long distances, migratory animals often pass through many different habitats; thus, they may be vulnerable to a different set of threats at each stop along the way, while species with smaller ranges are likely exposed to a more limited set of dangers (6). Urbanization introduces new obstacles, both physical and chemical: on land, migration can be severely impeded by fences and dams, while marine and freshwater animals can face salinity changes and chemical pollutants. These barriers not only slow or prevent movement but can also cause serious injury and even death to animals that encounter them (7). Finally, when animals gather in large herds or flocks during their migratory season, it is an open invitation for hunters and poachers to exploit these large assemblies for mass killings—and quick profits (6).

However, perhaps the greatest challenge facing migratory species is the fact that, unlike humans, animals do not recognize park boundaries or international borders. Even if animals are protected within a park or country, their migrations will likely take them outside of these areas, into territories where they may or may not be safe. For example, American bison frequently stray out of Yellowstone National Park during their winter migrations; beyond the protection of the park, they may be killed by livestock herders, who fear that the bison will transmit diseases to their cattle (5). In some cases, it can be difficult to determine who is responsible for the protection of migratory species. This is especially true for oceanic species, as it can be unclear who “owns” the ocean and has the duty and authority to lead marine conservation efforts. More often than not, however, the greatest issue is coordination. When a species migrates across many borders, it can be difficult—if not impossible—to get every country within its range to cooperate, particularly if there are preexisting political disputes between them. Unfortunately, whenever politics are involved, conservation is bound to be pushed to the side (5).

“AN ENDANGERED PHENOMENON”: A REBRANDING OF CONSERVATION

Fortunately for the monarch butterflies and other migratory animals, a new understanding of the plight of these species has begun to take hold in the conservation movement. These efforts have largely been spearheaded by a group called The Convention on the Conservation of Migratory Species of Wild Animals (“CMS”). Backed by the United Nations, CMS states its mission as bringing together “the States through which migratory animals pass…[and laying] the legal foundation for internationally coordinated conservation measures throughout a migratory range” (8). CMS is the only global convention focusing entirely on the conservation of migratory species, and, since its establishment in 1979, it has made great strides in uniting countries behind conservation efforts. One of its greatest successes came just recently, with the enactment of “The Gorilla Agreement” in 2008. This legally-binding treaty was the first of its kind to focus on issues of gorilla conservation across all ten range nations, most notably the Democratic Republic of Congo and Rwanda. It was unclear whether these two nations would be willing to put aside their long-term political conflicts for the sake of the gorillas, but they ultimately did, and the agreement will ensure that all range nations agree to work toward improving the conservation of these incredible animals going forward (8).

Beyond politics, there is another issue that CMS and other organizations must confront: the image of migratory species. In his article “Animal Migration: An Endangered Phenomenon?”, Princeton professor David Wilcove explains that migratory species like the monarch butterfly may be in decline but are not necessarily “endangered” in terms of their overall numbers; therefore, they are often overlooked by conservationists, who tend to focus on animals facing imminent extinction. However, Wilcove writes, “migration is fundamentally a phenomenon of abundance and must be protected as such” (6). He advocates a rebranding of “endangered species” to include a separate scale for migratory animals, in which species are ranked based on patterns of population decline and habitat loss, not simply on numbers. By describing animal migration as an “endangered phenomenon,” Wilcove believes, conservationists may begin to rethink what it means for a species to be worth conserving today.

A RETURN TO MEXICO: THE HISTORY OF MONARCH BUTTERFLY CONSERVATION

While the monarch butterfly is commonly viewed as one of the most iconic migratory species, the US, Canada, and Mexico have failed to launch a unified effort to conserve the habitats along the monarchs’ migratory route. For many years, the majority of the threats facing monarchs appeared to be in Mexico, with illegal logging decimating the pine and fir tree forests where the monarchs roost each winter (2). Tree coverage is not only crucial for providing the monarchs with places to roost, but it also keeps the understory relatively warm and dry, which is essential during winters in the high altitude Mexican forests (1, 2). Fortunately, in the past decade, the Mexican government has recognized the value of the monarch butterfly, both ecologically and economically, as huge numbers of tourists flock to Mexico to witness the arrival of the butterflies each year. Large-scale illegal logging peaked in 2007 and has nearly become a non-issue, so it is with great bewilderment that Mexicans have continued to observe a decline in the number of monarchs returning to the forests each winter (2).

The fact of the matter is that Mexico can do all that it can to conserve the monarchs, but, if the butterflies are not protected along the rest of their 3,000 mile route, then a winter safe haven is just that—a temporary, seasonal sanctuary. All throughout the United States, monarchs are facing a threat arguably more potent than deforestation. In the Midwest in particular, industrial farmers are using herbicides that kill all ground cover except for their crops, which have built-in, engineered immunity. As a result, the abundance of milkweed—the plant that sustains newly-hatched monarch caterpillars—has drastically declined, with some states seeing as much as a 90% reduction in milkweed (3). Since 2000, farming has destroyed approximately 100 million acres of monarch habitat in the Midwest, and this number is only growing (2).

There have been some efforts to replant milkweed in the US, but most have failed. First, replanting is typically a losing battle against the ever-growing industrial farms that just end up spraying more herbicide each growing season. Also, a campaign encouraging residents to plant milkweed in their backyards ended up backfiring when people began to plant tropical varieties of milkweed. Because these varieties persisted through the winter, monarchs failed to recognize when it was time to migrate south, and many populations faced devastating losses as the weather grew too cold (2). Other campaigns have initiated citizen science projects for monitoring monarch populations, but this has only provided estimates of population size—and, as of yet, no solutions (1).

It is unclear what lasting effects decreasing populations will actually have on the monarchs’ migration, but we can only assume that, if current trends continue, the future of these butterflies and the ecosystems they inhabit may be grim. One of the most amazing aspects of the monarch migration is that the northward journey is undertaken by four successive generations of butterflies, each of which completes one leg of the trip. This means that the individuals making the journey south for the winter are not the same butterflies that previously roosted in Mexico. So, how do they find their way to a patch of forest that they have never been to? Some hypothesize that these individuals use chemical signals left behind by their predecessors. But, with fewer butterflies leaving Mexico at the end of the winter, there will be weaker signals left behind, and the monarchs going to Mexico the next year may have trouble finding their way without a strong signal to guide them home (4).

TOWARD THE FUTURE

With the passing of another year and another Day of the Dead in Mexico, it remains to be seen whether the monarch butterfly population has continued to dwindle. It is important to remember, however, that even after the monarchs have departed for their winter roosts, our obligation to them—and to Mexico—does not end. The monarchs may bring color to our backyards for only part of the year, but our responsibility does not leave with them. We must hold up our end of the bargain for the sake of the monarchs, and for the sake of the communities and ecosystems all along their migratory route.

The same holds true for other migratory species, both in the US and around the world. While range countries, in particular, have a duty to protect the species that pass through their land, the threats facing migratory animals are not local issues. Climate change and many other environmental problems are global threats, and, therefore, are the responsibilities of all of us. We have to ask ourselves what the world would look like without great migrations. Would we feel the same sense of awe at the sight of a lone wildebeest crossing the African plains or a single starling flying overhead as we would if an entire herd or flock of these animals passed before us? How colorless would our lives become if schools of salmon never made their upriver journey or the monarchs never returned in springtime? While this might seem a far-off fantasy, we must protect these species now if we wish to preserve these sights—not only for ourselves but for future generations, as well.

Caitlin Andrews ’16 is a junior in Cabot House concentrating in Organismic and Evolutionary Biology.

Works Cited

  1. “Why Fewer Monarch Butterflies Are Surviving Their Winter Migration to Mexico.” PBS, 24 Dec. 2013. Web.
  2. Wade, Lizzie. “Monarch Numbers in Mexico Fall to Record Low.” Science/AAAS, 29 Jan. 2014. Web.
  3. Robbins, Jim. “The Year the Monarch Didn’t Appear.” NY Times 22 Nov. 2013. Web.
  4. Plumer, Brad. “Monarch Butterflies Keep Disappearing. Here’s Why.” Washington Post 29 Jan. 2014. Web.
  5. Wilcove, David S., and Martin Wikelski. “Going, Going, Gone: Is Animal Migration Disappearing.” PLoS Biology 6.7 (2008): E188. Web.
  6. Wilcove, David S. “Animal Migration: An Endangered Phenomenon?” Issues in Science and Technology (Spring 2008). Web.
  7. Wolff, Wim. “The Significance of Artificial Barriers to Migration Across International Borders.” Convention on the Conservation of Migratory Species of Wild Animals: 14th Meeting of the CMS Scientific Council (2007). Web.
  8. Convention on the Conservation of Migratory Species of Wild Animals. Web. <http://www.cms.int/>.

Defeating Malaria: A Vision for the Future

by Serena Blacklow

“I was visited by a mother, Tibangwa Sarah, whose daughter had a severe fever. The malaria rapid diagnostic test was negative, so I wrote her a referral to the health centre for more tests. Instead, because she didn’t trust me, she went to the drug shop, bought the wrong drugs and the child died.”

– Katusabe Beatrice, a community health worker in Uganda (25)

An estimated 207 million cases of malaria occurred in 2012 (1). Malaria caused an estimated 627,000 deaths that year, 90% of them in sub-Saharan Africa (2). Yet these numbers mark a 42% decrease in the mortality rate since 2000 (1). Globally, 40% of the population is at risk of contracting malaria (1). Both the devastation and the global scale of the malaria crisis demand that international efforts be devoted to research and aid to quell the flow of cases.

Malaria thrives in tropical and subtropical environments, especially sub-Saharan Africa, Asia, and South America, where muggy weather creates ideal mosquito breeding conditions. Children under the age of 5 and pregnant women are the subpopulations most susceptible to the disease. In Africa, malaria kills one child per minute (1).

Infection

Malaria is caused by a parasitic microorganism from the genus Plasmodium. Four main species infect humans: P. falciparum, P. vivax, P. ovale, and P. malariae. Mosquitoes infect an individual during a bite by transferring parasites into the bloodstream. The parasites—at this point called sporozoites—travel to the liver, where they mature and reproduce (3). At this stage, they reenter the bloodstream as merozoites, which infect red blood cells and reproduce inside them. When these red blood cells burst, usually within 48-72 hours of infection, they release more parasites into the body, and the cycle continues.

Symptoms range from feverishness to organ failure and commonly include headaches, nausea, vomiting, aches, and an enlarged liver. However, in more severe cases—such as an infection by P. falciparum—other health problems can arise. Anemia (a decreased number of red blood cells in the bloodstream), respiratory difficulties, low blood pressure, kidney failure, hypoglycemia (low blood sugar), and cognitive dysfunction are other potential symptoms. Malaria can also result in death (4).

Treatments and Resistance

The first effective treatment for malaria was quinine. Quinine remained the primary treatment until the 1920s, when researchers began developing and testing new drugs. By the 1940s, chloroquine replaced quinine as the go-to treatment for a malarial infection. Resistance to chloroquine developed quickly, however, so its use declined, whereas quinine continues to be used as a first- or second-line treatment (5).

More recently, artemisinin derivatives have been incorporated as active ingredients into treatments. Experts recommend artemisinin-based combination therapy (ACT) as a first-line treatment for malaria when it is available, because using pure artemisinin alone risks fostering resistance. The four recommended ACTs, according to the World Health Organization, are artemether-lumefantrine, artesunate-amodiaquine, artesunate-mefloquine, and artesunate-sulfadoxine-pyrimethamine (6).

There are official, unofficial, and counterfeit drugs distributed as malaria treatments. Official drugs are those endorsed by institutions such as the CDC and administered by the WHO to the Ministries of Health in countries of need. Unofficial drugs are provided by countries such as China, Brazil, and India in “exchange” for something else in the future (7). Counterfeit drugs, by contrast, are sold locally and are usually composed of a mix of other drugs plus a small amount of an active antimalarial such as artemisinin. These low-quality “treatments” can cause health problems because of the other drugs mixed in. Even more significantly, they encourage drug resistance; the small amount of active ingredient is not enough to cure the patient, but it is just enough to expose the parasite long enough for resistance to develop (7).

Prevention Methods

The most common prevention methods for malaria are indoor residual spraying and mosquito netting. These relatively inexpensive solutions deter mosquitoes from biting by creating either a chemical or a physical barrier between the mosquito and the vulnerable human. Since a mosquito bite is the primary means of malarial transmission and mosquitoes are most active between dusk and dawn, these are among the most successful prevention strategies.

In a modern combination of these methods, insecticide-treated bed nets (ITNs) are now endorsed by the Centers for Disease Control and Prevention (CDC) as effective protection for sleeping families in high-risk areas (8). The newest development related to ITNs is the long-lasting insecticide-treated bed net (LLIN). LLINs have pyrethroid insecticide woven into their fabric and last three to five years before they need to be replaced, with each one costing $2.10 (9, 10). Though the use of these nets has been effective, resistance to pyrethroids is an emerging concern, since other insecticides have not yet been tried with nets and would be certain to increase costs (10).

In sub-Saharan Africa, a total of 450 million nets are needed to protect all at-risk households, which corresponds to roughly 150 million new nets annually, since each net lasts about three years. In 2004, manufacturers distributed only 6 million nets, but, in 2010, they distributed 145 million. In 2011 and 2012, however, these numbers fell substantially—to 92 and 70 million respectively—though they began to rise again, to 136 million in 2013 and a projected 200 million in 2014 (2). These delivery numbers need to remain high to ensure continual universal coverage. Any sag in coverage leaves the population vulnerable to malaria resurgence, since nets are among the most effective forms of protection.
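
The replacement arithmetic behind these figures is simple enough to check directly. The sketch below is a minimal, illustrative calculation that uses only the numbers quoted in this article (450 million nets for universal coverage, a roughly three-year lifespan, $2.10 per LLIN); the variable names and the rounding are my own.

```python
# Bed-net replacement arithmetic, using only figures quoted in this article.
TOTAL_NETS_NEEDED = 450_000_000   # nets needed to cover all at-risk households
NET_LIFESPAN_YEARS = 3            # approximate useful life of one LLIN
COST_PER_NET_USD = 2.10

# Steady-state replacement: the whole stock must turn over every three years.
annual_need = TOTAL_NETS_NEEDED / NET_LIFESPAN_YEARS
annual_cost = annual_need * COST_PER_NET_USD
print(f"Annual replacement need: {annual_need / 1e6:.0f} million nets "
      f"(about ${annual_cost / 1e6:.0f} million per year at $2.10 per net)")

# Compare recent delivery figures against that target.
deliveries = {2011: 92e6, 2012: 70e6, 2013: 136e6}
for year, delivered in deliveries.items():
    shortfall = annual_need - delivered
    print(f"{year}: {delivered / 1e6:.0f} million delivered, "
          f"{shortfall / 1e6:.0f} million short of the target")
```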

Immunity and Vaccines

Natural immunity provides another form of protection for some individuals. There is remarkable overlap between malaria-dense regions and those populated by individuals heterozygous for the sickle-cell trait. Those who are heterozygous do not have the same symptoms as homozygous individuals with the disease, but they retain the ability to produce sickled red blood cells, and infection by the malaria parasite can trigger this sickling. Sickled cells are removed from circulation more quickly, and this rapid clearing provides natural immunity; the parasites are unable to mature and reproduce in the short time that the sickled red blood cells are in the bloodstream (11). Though fully developed sickle cell anemia brings its own health concerns, this response works in an individual’s favor in the case of malaria infection.

Natural immunity is not universal, and the work towards eradication requires more than just administering treatments. Vaccines are also necessary to ensure protection for future generations. There is one malaria vaccine candidate finishing its Phase III clinical trial: RTS,S. The Malaria Vaccine Initiative, run by PATH, reports that “over 18 months of follow-up, RTS,S was shown to almost halve the number of malaria cases in young children (aged 5-17 months at first vaccination) and to reduce by around a quarter the malaria cases in infants (aged 6-12 weeks at first vaccination)” (12). The exact mechanism of RTS,S is unclear, however.  The WHO could recommend RTS,S for use as early as 2015 (12).

Distribution and Reporting

Distribution of bed nets and treatments from the World Health Organization goes through the Ministries of Health. The limits of this central supply are one obstacle to providing communities with the treatment and protection they need, but larger supply-chain issues also interfere with efficient distribution. Though treatments are delivered free in many places, getting those treatments to the places that need them is a central problem, and it has many causes. Distributors will not provide treatments until a test has returned positive, and delivery timelines are not always met. Also, since a government may base its distribution on previous data about outbreak patterns in particular areas, if a large outbreak occurs somewhere unexpected, redistribution is not done efficiently or effectively (7).

Correct distribution of malarial drugs would be far more efficient if diagnostic testing were universal. Currently, people are so accustomed to encountering malaria in their everyday lives that they sometimes incorrectly assume that anyone with a fever has malaria. Self-diagnosis results in ineffective use of antimalarial agents and unnecessary depletion of resources, undermining long-term goals for containing drug resistance and eliminating the disease (7). Implementing diagnostic tests, such as the Rapid Diagnostic Test (RDT), and making them available to the private sector would reduce reliance on self-diagnosis and alleviate this strain on already-scarce resources.

Village health care workers, usually organized through local grassroots groups, ensure compliance and work in delivery teams. Health care workers and volunteers use cell phones and rapid reporting systems to report cases back to clinics. These volunteers are not paid, so the Ministries of Health and partners such as PATH reward them with free cell phones and bikes (13). In the future, rewards should include free cell phone minutes, which would allow them to make more reporting calls.

Costs and Global Initiatives

The World Health Organization estimates that total international funding for malaria in 2012 was the equivalent of 2.5 billion U.S. dollars. Universal malaria control and care would require at least $5.1 billion in global resources per year between 2011 and 2020, so current efforts cover less than half of the estimated need (2).
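
For readers who want the gap spelled out, the short sketch below restates the two WHO figures quoted above and computes the shortfall; the rounding is my own.

```python
# A quick check of the funding-gap arithmetic, using the WHO figures quoted above.
funding_2012 = 2.5e9   # total international malaria funding in 2012 (USD)
annual_need = 5.1e9    # estimated annual need for universal control and care (USD)

coverage = funding_2012 / annual_need
gap = annual_need - funding_2012
print(f"2012 funding covered about {coverage:.0%} of the estimated annual need,")
print(f"leaving a gap of roughly ${gap / 1e9:.1f} billion per year.")
```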

A web of organizations lays the foundation for the international combat against malaria.  Below is a list of initiatives that have been taken to help reduce the threat of malaria worldwide:

Roll Back Malaria: Roll Back Malaria is a global partnership comprising more than 500 foundations, organizations, and institutions from malaria-endemic countries and their supporters (14). Its overall strategy for fighting malaria is outlined in the Global Malaria Action Plan, which details goals to reduce malaria mortality and reach elimination, as well as the strategies and funding needed to meet those goals.

The Global Fund: The Global Fund to Fight AIDS, Tuberculosis, and Malaria is an international nonprofit organization founded in 2002 that has since provided the equivalent of $8.8 billion for malaria programs in 97 countries. It has many partners, including the World Health Organization and Roll Back Malaria (15).

Global Health Initiative: President Obama announced the Global Health Initiative in 2009 to address gender equity and also to strengthen health systems, encourage partnerships, and promote research (16). The President’s Malaria Initiative is a core component.

The President’s Malaria Initiative (PMI): The President’s Malaria Initiative was established in 2005 with the goal of cutting in half the number of malaria cases in 70% of affected areas in sub-Saharan Africa between 2000 and 2014 (17). New goals for 2015-2020 include cutting the malaria mortality rate in focus countries by 33% (18). Relative to 2000 baseline levels, these goals would amount to an 80% reduction in mortality rates by 2020.

The World Health Organization (WHO): The World Health Organization provides Ministries of Health with antimalarial drugs and other prevention tools, such as nets. It also publishes detailed annual reports on funding; vector control; use of treatments, nets, and diagnostic tests; and case monitoring, among other data.

United States Agency for International Development (USAID): USAID is a U.S. government agency that supports a broad spectrum of international humanitarian and development investments. Its malaria programs support Roll Back Malaria’s Global Malaria Action Plan and the President’s Malaria Initiative (19).

Malaria Consortium: The Malaria Consortium has been working against malaria since 2003. The non-profit organization, in collaboration with the Global Fund, local governments, and the President’s Malaria Initiative in Uganda, distributed 10 million LLINs in 2013, trained 13,000 community health workers, and diagnosed and treated over 1 million malaria cases (15). It is funded in part by Comic Relief, a U.K.-based charity.

PATH: PATH is an international non-profit organization that, through the Malaria Vaccine Initiative and The Malaria Control and Elimination Partnership in Africa, collaborates with other partners to come up with innovative ways of developing and delivering malarial treatments and prevention measures, from diagnostic tests to vaccines (12).

Worldwide Antimalarial Resistance Network (WWARN): The Worldwide Antimalarial Resistance Network aims to research antimalarial drug resistance and map its emergence on the world stage. It advocates investment in better surveillance methods and better tracking of the effects of counterfeit or other ineffective drugs (20).

Centers for Disease Control and Prevention (CDC): The CDC helps set standards for malaria policies and also monitors these policies around the world. It collaborates with the World Health Organization, the President’s Malaria Initiative, and Roll Back Malaria (21). In the US, the CDC helps ensure that malaria does not reemerge as a threat and informs travelers of the risk of malaria abroad.

Zambia: A Brief Case Study

The President’s Malaria Initiative has provided Zambia with $127 million since 2005. In 2012, it provided 3 million RDTs, and it will supply 3.5 million more in 2014 (18). Currently, a mass drug administration is being tested in Zambia to see whether it will yield a decrease in malaria cases or a change in the genetics of the parasite population (7). Hopefully, tests such as this one will pave the way for the most efficient distribution of treatments: will administration decrease case counts most effectively if the drugs are distributed en masse, by household when there is at least one infected person, or on a case-by-case basis (7)?

We are so close yet still so far from eliminating malaria as a global public health threat. Complete elimination is a vision for the future, but what we do now will have an enormous impact on that future.

Serena Blacklow ’17 is a sophomore concentrating in Engineering Sciences.

Works Cited

  1. World Health Organization (2014). Malaria. Retrieved October 2014 from http://goo.gl/vekhxg
  2. World Health Organization (2013). World Malaria Report 2013. Retrieved October 2014 from http://goo.gl/sC4rl9
  3. A.D.A.M. Medical Encyclopedia (2013). Malaria. Retrieved October 2014 from http://goo.gl/Ns7dyI
  4. Bartoloni, A, et al. (2012). Mediterr. J. Hematol. Infect Dis. 4(1).
  5. Achan, J., et al. (2011). Malaria Journal. 10(144).
  6. World Health Organization (2008). World Malaria Report 2008. Retrieved October 2014 from http://goo.gl/yQp2QB
  7. Sarah Volkman (Principal Research Scientist, Dept. of Immunology and Infectious Diseases, Harvard School of Public Health; Director, Malaria Diversity Project, Broad Institute) personal communication, October 16, 2014
  8. Centers for Disease Control and Prevention (2014). Insecticide-Treated Bed Nets. Retrieved October 2014 from http://goo.gl/Zdo8NU
  9. Global Malaria Programme. Insecticide-Treated Mosquito Nets: a WHO position statement. Retrieved October 2014 from http://goo.gl/0OjlvS
  10. Toe, Kobie H., et al. (2014). Emerging Infectious Diseases. 20(10).
  11. Hedrick PW (2011). Heredity. 107(4): 283-304.
  12. Malaria Vaccine Initiative (2014). Malaria vaccine candidate reduces disease over 18 months of follow-up in late-stage study of more than 15,000 infants and young children. Retrieved October 2014 from http://goo.gl/doRz4h
  13. Cheers, Imani M. “How Cell Phones Are Helping Fight Malaria.” PBS NewsHour, Health. April 25, 2013. Retrieved October 2014 from http://goo.gl/Fi4m7Y
  14. Roll Back Malaria (2014). RBM Mandate. Retrieved October 2014 from http://goo.gl/nFM1N4
  15. The Global Fund to Fight AIDS, Tuberculosis and Malaria (2014). Malaria. Retrieved October 2014 from http://goo.gl/dkQ8HE
  16. United States Department of Health and Health Services. Global Health Initiative. Retrieved October 2014 from http://goo.gl/n7hVwZ
  17. President’s Malaria Initiative (2014). About. Retrieved October 2014 from http://www.pmi.gov/about
  18. President’s Malaria Initiative (2014). President’s Malaria Initiative Malaria Strategy 2015-2020 Draft for External Review. Retrieved October 2014 from http://goo.gl/kzkIxF
  19. USAID.gov: http://goo.gl/1ePX9n
  20. Worldwide Antimalarial Resistance Network (2014). Our Work. Retrieved October 2014 from http://www.wwarn.org/about-us/our-work
  21. Centers for Disease Control and Prevention (2012). CDC’s Global Malaria Initiatives. Retrieved October 2014 from http://goo.gl/7AcZAf
  22. Kaiser Family Foundation (2013). Estimated Malaria Cases, 2012. Retrieved October 2014 from http://goo.gl/4MTEUE
  23. Dondorp, A. M., et al. (2010). Nature Reviews Microbiology. 8: 272-280.
  24. Rector and Visitors of the University of Virginia (2007). The Mosquito Net. Retrieved October 2014 from http://goo.gl/j6JwBI
  25. Malaria Consortium (2013). 2003-2013: a decade in communicable disease control and child health. Retrieved October 2014 from http://goo.gl/K5R8nX

 

Shale Gas: The Future of Energy Production?

by Eleni Apostolatos

Science classes introduce us to the rather abstract concept of energy—a system’s ability to do work. The world’s deep dependence on energy makes this abstraction concrete; from charging our phones to powering our hospitals, energy drives humans’ daily activities. Regardless of where on the globe we stand, we all need energy to do work.

Energy use has increased considerably over the past decades, and it is predicted to continue escalating in years to come. With global warming on the radar, the stakes for finding environmentally clean sources have never been higher.  One energy source in particular has captured the attention of the energy sector and has recently surged in popularity: shale gas.  However, the scientific community and environmentalists alike remain divided on many key aspects of the energy source.  Is shale gas the future of energy production, or is it a hazardous energy source that is not clean or safe enough for the future?

Shale Gas: The Basics

Shale gas is natural gas found within a particular type of sedimentary rock called shale. Shale forms from the gradual buildup of fine sediment rich in organic matter, and these organic-rich rocks can hold large deposits of natural gas trapped within them. Unlike more common sedimentary rocks, such as sandstones and limestones, shales have extremely low permeability; in other words, they restrict outward gas flow. Shale gas is thus considered an unconventional gas.

Unconventional gas was previously deemed unproductive and expensive because of the number of wells required to access a section of the rock. Over the past six decades, two methods have made release of the tightly trapped gas far more efficient: horizontal drilling and hydraulic fracturing.

Horizontal drilling, as its name suggests, involves turning the drill bit roughly 90 degrees so that it bores along the rock layer. It is effective because it permits contact with greater lengths of shale without requiring as many wells (1). The second method, hydraulic fracturing, is a more elaborate and controversial process. Also known as fracking, hydraulic fracturing ruptures shale rock by forcing thousands or millions of gallons of water and other fluids into the shale formation. Sand and other chemicals are then pumped into the openings to prevent the fractures from closing. The released gas flows out of the well—its path determined by whether the wells were drilled horizontally, vertically, or directionally—and is used to generate energy. The combination of fracking with horizontal drilling is generally most efficient, as it makes previously unproductive rock units prolific sources of energy.

Like all fossil fuels, shale gas is composed mostly of hydrocarbons—molecules built from carbon and hydrogen. When fossil fuels are burned, the combustion reaction converts hydrocarbon molecules into carbon dioxide, water, and heat. The formation of these new bonds releases energy that can be put to other uses, while the carbon dioxide typically escapes into the air.
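
As a point of reference, the balanced equation below shows the complete combustion of methane, the principal hydrocarbon in natural gas. It is a representative reaction rather than a full account of shale gas chemistry, since produced gas also contains heavier hydrocarbons and impurities, and the heat of combustion shown is an approximate standard value.

```latex
% Complete combustion of methane, the main component of natural gas:
\[
  \mathrm{CH_4} + 2\,\mathrm{O_2} \;\longrightarrow\; \mathrm{CO_2} + 2\,\mathrm{H_2O}
  \qquad \Delta H_c \approx -890\ \mathrm{kJ/mol}
\]
```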

So how clean is the combustion of shale gas? Many environmentalists prefer natural gas to other fossil fuels, namely coal and oil, because it emits less carbon dioxide. Daniel P. Schrag, the Sturgis Hooper Professor of Geology and Professor of Environmental Science and Engineering at Harvard University, writes: “Natural gas has roughly half the carbon content of the average coal per unit energy, thus producing half as much carbon dioxide when combusted for heat or electricity.” He claims that “burning natural gas. . . results in a reduction in carbon dioxide emissions of nearly a factor of three” (2). Additionally, unlike other fossil fuels, such as oil, natural gas requires minimal processing in its preparation for use.
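
A rough, back-of-the-envelope check of the “half the carbon per unit energy” comparison can be made from standard combustion chemistry. The sketch below is illustrative only: it treats natural gas as pure methane and coal, very crudely, as pure carbon, and uses approximate textbook enthalpies of combustion; real coals contain hydrogen, moisture, and ash, which pushes the true ratio closer to the “roughly half” that Schrag cites.

```python
# Back-of-the-envelope CO2 intensity comparison: methane vs. an idealized coal proxy.
M_CO2 = 44.0  # grams of CO2 per mole

# fuel name -> (moles of CO2 released per mole burned, heat released per mole in kJ)
fuels = {
    "methane (CH4)": (1, 890.0),        # CH4 + 2 O2 -> CO2 + 2 H2O
    "carbon (coal proxy)": (1, 394.0),  # C + O2 -> CO2
}

intensity = {}
for name, (mol_co2, heat_kj) in fuels.items():
    # grams of CO2 emitted per megajoule of heat released
    intensity[name] = mol_co2 * M_CO2 / (heat_kj / 1000.0)
    print(f"{name}: about {intensity[name]:.0f} g CO2 per MJ")

ratio = intensity["methane (CH4)"] / intensity["carbon (coal proxy)"]
print(f"Methane emits roughly {ratio:.0%} as much CO2 per unit of heat as the coal proxy.")
```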

Natural Gas by the Trends

As global energy demand has grown through the years, the world has become increasingly reliant on natural gas. Global natural gas consumption more than quadrupled from 23 trillion cubic feet (Tcf) in 1965 to a whopping 104 Tcf in 2009. The rise of natural gas outpaced the global rise in energy consumption; the proportion of global energy consumption derived from natural gas rose from 15.6% in 1965 to 24% today. In a study on natural gas released by the MIT Energy Initiative in 2011, researchers found that “over the past half century, natural gas has gained market share on an almost continuous basis” (3).

One factor contributing to the rise of shale gas is its availability in many countries with high energy demands. Indeed, rock formations across the world provide the reserves necessary to support a continued reliance on natural gas. In mid-2013, the United States Energy Information Administration (EIA) released an assessment of global shale gas resources, reporting that the estimated shale gas reserves in the U.S. and 41 other countries add up to 32% of the world’s recoverable natural gas resources (4). Of the countries analyzed for the report, China, Argentina, Algeria, the U.S., and Canada rank as the five with the largest shale gas reserves; North America has the most supply overall, with the U.S., Canada, and Mexico all ranking high.

According to MIT’s report, the increased use of natural gas around the globe is attributed to it being “one of the most cost-effective means by which to maintain energy supplies while reducing CO2 emissions. In a carbon-constrained economy, the relative importance of natural gas is likely to increase even further” (3). With the wide availability of shale gas, there is great potential for a transition to natural gas.

Criticisms of Shale Gas

However, the hard data on carbon dioxide emissions and worldwide trends do not tell the whole story of shale gas; some residents near fracking sites have claimed that the chemicals used in the process are contaminating their water and soil. Contrary to common assumptions, natural gas itself isn’t the main subject of controversy—the handling of natural gas is. Most concerns relate to the chemicals used in fracking, which can contaminate underground aquifers, polluting water and leaking into places where human activity is likely. Additionally, contact between these toxic chemicals and fish or farmland can prove damaging, as unknown and possibly harmful substances can enter the food chain (5). The wastewater produced, which amounts to around 30-50% of the initial fracking fluid, can be radioactive depending on the chemicals injected into the rock formations (6). Often, the composition of the fracturing fluids is kept hidden by companies that do not want to disclose it for competitive reasons (5).

For example, consider the experiences of Pam Judy, a resident of Carmichaels, Pennsylvania, who claims that shale gas operations near her home polluted her water and property with dangerous chemicals. Judy noticed that, after activity began at a shale gas compressor station built 780 feet from her family’s house, they began to suffer from “extreme headaches, runny noses, sore/scratchy throats, muscle aches and a constant feeling of fatigue.” She tells of her two children’s nosebleeds and her own “dizziness, vomiting and vertigo to the point that [she] couldn’t stand and was taken to an emergency room.” Her daughter commented that, at times, she felt as though she had “cement in her bones” (7).

To understand these symptoms, Judy had blood and urine tests performed, which revealed that her body contained “measurable levels of benzene and phenol.” She then convinced the Pennsylvania Department of Environmental Protection (DEP) to perform air quality studies of the organic compounds found in her yard. After four days of 24-hour canister air sampling, the results verified the presence of “16 chemicals including benzene, styrene, toluene, xylene, hexane, heptane, acetone, acrolein, propene, carbon tetrachloride and chloromethane to name a few” (7).

A number of residents near sites handling shale gas are alarmed; many have experienced symptoms similar to those Judy’s family has suffered, and others claim to be in even more alarming situations. Even though it is difficult to prove causality in these cases, the conditions are troubling enough to warrant further analysis, and protesters in different parts of the world have already begun demanding action—with “fracktivists” organizing rallies in the United Kingdom, Romania, France, and Spain (8).

On the subject of wastewater, John H. Quigley, former secretary of Pennsylvania’s Department of Conservation and Natural Resources, declared, “In shifting away from coal and toward natural gas, we’re trying for cleaner air, but we’re producing massive amounts of toxic wastewater with salts and naturally occurring radioactive materials, and it’s not clear we have a plan for properly handling this waste” (5). In the past, companies have disposed of wastewater by storing it in wells under impermeable rock or in isolated basins, leaving it out to evaporate, or simply filtering it before dumping it into the sea. The risk of leaks is very high, and, according to some confidential studies by the United States Environmental Protection Agency (EPA) and the drilling industry, “radioactivity in drilling waste cannot be fully diluted in rivers and other waterways” (5).

Another concern is that methane can inadvertently escape during the extraction and transport of shale gas. Shale gas contains high levels of methane, and methane leaks can be disastrous, as it is 10 times more potent as a climate-altering agent than carbon dioxide (6). Scientists at Cornell University investigated this question, considering the methane emissions in the production, distribution, and consumption of natural gas. They concluded that the production of shale gas causes methane to leak at as much as twice the rate of conventional gas wells (2). They explain that the leak results from the well-completion phase that finalizes fracking.

On the other hand, a number of tests managed by the EPA and the Ground Water Protection Council (GWPC) “have confirmed no direct link between hydraulic fracturing operations and groundwater contamination” (9). Additionally, an in-depth study conducted by MIT concluded that, “with 20,000 shale wells drilled in the last 10 years, the environmental record of shale-gas development is for the most part a good one” (10).

Different studies often provide conflicting results and conclusions. Some say that wastewater from fracking can easily contaminate water supplies and cause devastating medical effects, while others argue that the process is safe and that claims of pollution are the product of fear-mongering and flawed science. Most argue that more research must be done before an accurate judgment on the safety of fracking can be made. According to Durham University geoscientist Professor Andrew Aplin, “We need more data before we can decide whether shale gas could, or should, be part of our energy mix” (11). While some countries and regions agree with this logic and have placed moratoria on fracking, others have already given shale gas the green light and are adapting procedures and regulations as more tests are performed.

The Future of Shale Gas

The discovery of large shale gas reservoirs has prompted some to refer to North America as “the next Middle East” (12); the U.S.’s leadership as an innovative producer of shale gas in recent years has established it as a possible catalyst for global energy change. In 2012, the University of Pennsylvania’s Wharton School wrote, “U.S. companies have sparked a shale gas revolution: U.S. shale gas production climbed from virtually zero in 2000 to a level where it is contributing a quarter of U.S. natural gas today” (13). Over a third of the natural gas consumed by the U.S. now consists of shale gas—and the transition to natural gas is anticipated to continue (14).

The U.S. has a number of major shale plays, including the Eagle Ford Play, which runs 400 miles from Southwest Texas to East Texas; the Bakken Shale Play in the Williston Basin, around the Montana and North Dakota area; and the Marcellus Shale Play, in West Virginia and Pennsylvania, which in July was recorded to account for 40% of the U.S. shale gas production (15). Judy’s personal anecdote refers to shale gas handling in the latter region.

In its Annual Energy Outlook 2014 projections report, the EIA estimates that total U.S. natural gas consumption will increase from 25.6 trillion cubic feet (Tcf) in 2012 to 31.6 Tcf in 2040. Along with consuming a greater amount of natural gas, the U.S. is also projected to become a net exporter due to “production [levels growing] faster than use” (15). Projections indicate that the U.S. will go from being a net importer of 1.5 Tcf of natural gas in 2012 to a net exporter of 5.8 Tcf by 2040, with 56% of the increase in natural gas production coming from the growing development of shale gas. According to the EIA, “Shale gas provides the largest supply of growth in U.S. natural gas supply” (16).
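
For readers who want the arithmetic spelled out, the short sketch below simply restates the EIA figures quoted above and computes the implied changes; the 2040 numbers are projections, not measurements, and the variable names are my own.

```python
# A restatement of the EIA Annual Energy Outlook 2014 projections quoted above.
consumption_tcf = {2012: 25.6, 2040: 31.6}   # U.S. natural gas consumption (Tcf)
net_exports_tcf = {2012: -1.5, 2040: 5.8}    # negative values are net imports (Tcf)

growth = (consumption_tcf[2040] - consumption_tcf[2012]) / consumption_tcf[2012]
swing = net_exports_tcf[2040] - net_exports_tcf[2012]

print(f"Projected consumption growth, 2012-2040: {growth:.0%}")
print(f"Projected swing from net imports to net exports: {swing:.1f} Tcf")
```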

Through its greater reliance on natural gas and its shale gas initiatives, the U.S. has already begun prompting other countries to venture into shale gas drilling; about a dozen countries have performed experimental tests on shale gas wells, and some, such as Canada and China, have already begun to register commercial production of shale gas (17).

“The role of natural gas in the world is likely to continue to expand under almost all circumstances, as a result of its availability, its utility and its comparatively low cost,” predicts MIT’s report (3). KPMG International Cooperative, a Swiss entity, makes the same point in one of its publications on global energy use: “Shale gas has the potential to turn the world’s energy industry on its head. It’s abundant. It’s cheap. It burns cleaner than fossil fuels. And it’s being found almost everywhere” (18). Even though it is too early to declare with full certainty that shale gas will claim a larger share of global energy production in future years, it is safe to say that it has the potential to set a new trend and revolutionize the energy industry. Studies are still under way to confirm shale gas’s benefits and to ensure that properly regulated production will not lead to negative environmental consequences, and their results will be crucial in determining shale gas’s place in the near and far future.

Eleni Apostolatos ’18 is currently a freshman in Greenough Hall.

Works Cited

  1. Blackmon, David. “Horizontal Drilling: A Technological Marvel Ignored.” Forbes. Forbes Magazine, 28 Jan. 2013. Web. Oct. 2014.
  2. Schrag, Daniel P. “Is Shale Gas Good for Climate Change?” Dædalus, the Journal of the American Academy of Arts & Sciences (n.d.): 72-80. American Academy of Arts & Sciences, Spring 2012. Web. Oct. 2014.
  3. McRae, Gregory, and Carolyn Ruppel. The Future of Natural Gas. MIT Study on the Future of Natural Gas. Massachusetts Institute of Technology, June 2010. Web. Oct. 2014.
  4. “Technically Recoverable Shale Oil and Shale Gas Resources: An Assessment of 137 Shale Formations in 41 Countries Outside the United States.” U.S. Energy Information Administration – EIA – Independent Statistics and Analysis. U.S. Department of Energy, 13 June 2013. Web. Oct. 2014.
  5. Urbina, Ian. “Regulation Lax as Gas Wells’ Tainted Water Hits Rivers.” The New York Times. The New York Times, 26 Feb. 2011. Web. Oct. 2014.
  6. “Between a Rock and a Hard Place: Fracking Is the Most Efficient Method for Accessing Deep Reserves.” The Report: Oman 2013. Oxford Business Group, 8 Jan. 2013. Web. 25 Oct. 2014.
  7. Judy, Pam. “Personal Account from the Marcellus Shale.” Marcellus-Shale.us. Marcellus-Shale.us, 20 July 2011. Web. Oct. 2014.
  8. “Global Frackdown: World Protests Shale Gas Production.” RT News. Autonomous Nonprofit Organization “TV-Novosti”, 24 Dec. 2013. Web. Oct. 2014.
  9. “Executive Summary.” US Environment Protection Agency. N.p., June 2004. Web. 14 Feb. 2012.
  10. Brooks, David. “Shale Gas Revolution.” The New York Times – The Opinion Pages. The New York Times, 03 Nov. 2011. Web. Oct. 2014.
  11. Anscombe, Nadya. “Fracking for Shale Gas: Geologists Demand More Data for UK.” Editorial. Engineering & Technology Magazine n.d.: n. pag. The Institution of Engineering and Technology, June 2014. Web. Oct. 2014.
  12. Ng, Zhi Y. “Is North America the Next Middle East for Energy?” CNBC. CNBC LLC, 21 Mar. 2012. Web. Oct. 2014.
  13. “The Once and Future U.S. Shale Gas Revolution.” Knowledge@Wharton. Wharton School of the University of Pennsylvania., 29 Aug. 2012. Web. Oct. 2014.
  14. Kelly, Ross. “No Shale-Gas Revolution Yet for Australia.” The Wall Street Journal. Dow Jones & Company, 25 Sept. 2014. Web. Oct. 2014.
  15. Nickelson, Ron. “The Seven Major U.S. Shale Plays.” (n.d.): n. pag. Clover Global Solutions, 2012. Web. Oct. 2014.
  16. “Annual Energy Outlook 2014.” (n.d.): n. pag. U.S. Energy Information Administration, Apr. 2014. Web. Oct. 2014.
  17. “North America Leads the World in Production of Shale Gas.” U.S. Energy Information Administration – Today in Energy. U.S. Department of Energy, 23 Oct. 2013. Web. Oct. 2014.
  18. KPMG International. “Shale Gas – A Global Perspective.” KPMG International Cooperative, 2011. Web. Oct. 2014.

Chimeras and the Making of Human Organs

by Francisco Galdos

For centuries, humans have marveled at the ancient myths of chimeras—from Homeric references to beasts part lion and part goat, to Kafka’s frightening tale of the metamorphosis of a man into an insect, to the ancient Greek legend of the winged horse Pegasus, who allowed the hero Bellerophon to kill the wretched Chimera (ironic, since Pegasus himself was a chimeric animal!). Indeed, if one looks at animals in nature, it does not seem all that unreasonable to believe that similarities between animals could make way for fantastic chimeric beasts such as Homer’s Chimera. Those very similarities between species were what eventually led Charles Darwin to draft his theory of evolution, and they are what allow evolutionary biologists today to find molecular evidence pointing toward the common ancestors of various species.

In biology, the term “chimera” refers to an individual that has cells or tissues from a different individual. Today, even human chimeras exist. Patients with failing heart valves often have their valves replaced by pig valves, a process known as xenotransplantation (1). For patients with failing organs, scientists and physicians have even proposed going a step further and transplanting whole organs from pigs into humans with organ failure. A wide range of complications comes with this idea, however. For one, even transplants between genetically non-identical humans pose risks of organ rejection by the immune system, so transplants between species carry an even greater risk of immune rejection. According to the United States Organ Procurement and Transplantation Network, more than 123,000 men, women, and children currently need lifesaving transplants, with another name being added to the list every 10 minutes. Moreover, organ demand surpasses organ availability, putting at risk the lives of many patients in need of these lifesaving organs (2).

With the rise of the era of regenerative medicine, scientists have begun to ask whether it will be possible to use a patient’s own cells to regenerate entire organs. In 2006, Nobel Prize winner Shinya Yamanaka discovered a way to make pluripotent stem cells from differentiated cells, enabling scientists to use a patient’s own adult cells to make pluripotent cells that can differentiate into any cell of the body (3). With the rise of this technology, scientists have begun exploring the possibility of generating entire organs for transplantation, a task that aims to recapitulate the way the body makes our organs during embryonic development. With this goal in mind, however, Hiro Nakauchi’s group at Stanford University is taking a different approach from building organs outside the context of normal development. Each organ in the body fulfills a particular role and develops within a highly regulated environment, which makes engineering organs in the lab an enormous and perhaps impractical feat. To get around the difficulty of making whole organs outside a developmental context, Nakauchi’s ultimate goal is to grow human organs inside an animal that is anatomically quite similar to us—the pig.

Several questions arise from Nakauchi’s work. How do you grow a human organ in a pig? Don’t pigs have organs of their own? And how do you keep pig cells out of the human organ? Wouldn’t their presence pose the same problems as xenotransplantation? To answer these questions, Nakauchi’s group published a paper in 2013 that aimed to make chimeras between two genetically distinct pigs. During embryonic development, a fertilized egg begins to divide into exponentially increasing numbers of cells. Eventually, a structure known as a blastocyst forms, in which a small mass of pluripotent cells goes on to develop into any cell of the body. For organs to develop, certain genes must be turned on that allow these pluripotent cells to start changing their identity into the specific cells needed to make particular organs. Take, for example, the pancreas. A protein known as Pdx1 is required to turn on the genes that allow the whole pancreas to develop. Without Pdx1, an animal can be born without a pancreas (2). Importantly, some genetic programs override others; for example, the protein Hes1 turns on the genes that make the biliary system, and this developmental path overrides the pancreatic path that Pdx1 specifies. To take advantage of this system, Nakauchi’s team generated pig embryos carrying a gene that lets the Pdx1 protein itself turn on production of Hes1. This left the embryo unable to produce a pancreas, since Hes1 overrides the pancreatic program in order to produce the biliary system. The next step was to see whether injecting pluripotent cells from a normal pig without this Pdx1-Hes1 system could complement an embryo unable to produce a pancreas. They injected the normal pig’s pluripotent cells into the embryos of the pancreas-defective pigs and saw that the normal pluripotent cells gave rise to a fully functioning pancreas (4). This process, known as blastocyst complementation, works because each organ in the body has a specific function and thus occupies a unique niche. Since normal pluripotent cells can develop into any cell and fill any niche in the body, transplanting pluripotent cells from a normal embryo into an organ-deficient embryo can replace the missing organ (2).

This remarkable process produces chimeric pigs, since the resulting animal is composed of cells from two different individuals. As Nakauchi reports, the group has even done this across species with rats and mice, transplanting mouse pluripotent cells into the embryos of rats that are unable to form a pancreas. In their review, they report that they were able to see the full development of a mouse pancreas in the rat (2). If the same species boundary can be crossed with human cells, human pluripotent cells could be transplanted into pancreas-deficient pig blastocysts. Since all of the cells of the pancreas would come from the human cells, embryonic development would guide these human cells toward making a fully functional pancreas, which could perhaps be transplanted into patients with diabetes to effectively cure their disease. Moreover, if this system can be developed for organs such as the heart, it could become possible to generate fully functional human hearts in pigs for transplantation into patients with heart failure (2).

As Nakauchi’s group reports, various problems still need to be solved. One is the possible mixing of pig cells into the human organs grown in the pig, which could cause immune problems once the organ is transplanted into humans. Another is that the transplanted human pluripotent cells could give rise to cells other than the organ of interest, which might leave the pigs with human neurons or other human cells, raising ethical questions about whether the process “humanizes” the pig’s brain or body. If these problems are addressed, it may become possible to grow fully functional organs from patient-derived pluripotent cells in pigs and transplant them back into those patients, one day providing a solution to the organ shortage. If this is achieved, then these pigs would be acting much like Pegasus—as chimeras majestically conquering disease in the human body.

Francisco Galdos ’15 is a Human Developmental and Regenerative Biology concentrator in Quincy House.

Works Cited

  1. Manji, R. A., Menkis, A. H., Ekser, B. & Cooper, D. K. Porcine bioprosthetic heart valves: The next generation. American heart journal 164, 177-185, doi:10.1016/j.ahj.2012.05.011 (2012).
  2. Rashid, T., Kobayashi, T. & Nakauchi, H. Revisiting the Flight of Icarus: Making Human Organs from PSCs with Large Animal Chimeras. Cell stem cell 15, 406-409, doi:10.1016/j.stem.2014.09.013 (2014).
  3. Takahashi, K. & Yamanaka, S. Induction of pluripotent stem cells from mouse embryonic and adult fibroblast cultures by defined factors. Cell 126, 663-676, doi:10.1016/j.cell.2006.07.024 (2006).
  4. Matsunari, H. et al. Blastocyst complementation generates exogenic pancreas in vivo in apancreatic cloned pigs. Proceedings of the National Academy of Sciences of the United States of America 110, 4557-4562, doi:10.1073/pnas.1222902110 (2013).

The Making of a Beta Cell: Toward a Cure for Diabetes

by Francisco Galdos

Two blocks from my apartment in Bogotá is a small bakery that specializes in making a typical Colombian bread known as a roscón. Roscones are made from a sweet dough filled with a caramel-like spread known as arequipe and are baked in the shape of a large donut. Fresh out of the oven, the roscón is sprinkled with sugar that melts perfectly at the edges of the delicacy. Eaten with a fresh cup of Colombian coffee, the sugar from the roscón complements the smoky taste of the coffee to create an unforgettable experience, and yet this is all so easy. I go to the bakery, sit with my coffee and my roscón, and simply eat. I don’t really have to think too much about the process of eating. In fact, no one really has to think about what goes on when we eat; our digestive system simply seems to take care of the whole thing. The sugars from the roscón are broken down in the mouth by salivary enzymes and further broken down in the intestines, and a surge of glucose enters the bloodstream to circulate throughout the body. As the level of glucose rises, a cell in the pancreas known as the beta cell detects the rising levels and secretes insulin, allowing my cells to take up the glucose in my blood that came from my roscón. The balance is perfect. I can eat ten roscones in one sitting, and still my beta cells work to keep my blood glucose within an ideal range. I simply go on with my life letting my cells do all of the work.

Imagine, for a day, that I had to do the work of my beta cells. As soon as I eat the roscón, glucose begins to be absorbed into my blood, and if the glucose rises too far above the ideal range, anything from blindness to severe nerve damage would eventually take hold. What is the solution? I take insulin. But wait! I gave myself too much—now my cells have eaten up too much of my blood glucose, and I’m about to go into a coma! You give me a banana, and here we are again—my glucose is too high. Somehow, the beta cell is able to take care of this tedious job, and yet, in 2010, more than 25.8 million Americans had to play the role of their beta cells because of the loss or dysfunction of these cells due to a disease known as diabetes.

There are two types of diabetes: type 1 and type 2. In type 1, patients’ immune systems attack their own beta cells, leaving them with fewer beta cells and a lifelong dependence on insulin shots. In type 2, patients’ bodies become resistant to insulin, and the beta cell often overworks itself to the point that it begins to malfunction and die, again leaving patients dependent on insulin. Since Charles Best and Frederick Banting discovered insulin in 1921, scientists have long sought treatments and cures for diabetes. Just last month, scientists published efforts to create an artificial pancreas that could effectively replace the beta cell. Even with such refined technology, however, the pancreatic beta cell still does a better job than our glucose-sensing computers and insulin pumps. Physicians have attempted to transplant beta cells from cadavers into patients with diabetes, and although these transplants work temporarily, they have had only limited success in making patients insulin-independent. Stem cell biologists have studied the development of beta cells and have attempted to generate them in vitro for transplantation into patients. Many names were given to the cells that came out of these efforts: beta-like cells, polyhormonal cells, fetal-like beta cells, and the list goes on (1). In all of these attempts to differentiate pluripotent stem cells into beta cells, many cells did not end up secreting insulin in proportion to the glucose they were exposed to, and some produced hormones that are not normally made in beta cells. Essential factors were missing from the process, as was the final step of making mature beta cells like those found in healthy people.

On October 9, 2014, after more than a decade of work aimed at generating fully functional beta cells, a team led by Douglas Melton at the Harvard Stem Cell Institute succeeded in generating beta cells from pluripotent stem cells. When transplanted into mice with diabetes, these beta cells were able to better regulate the animals’ blood glucose levels, providing what could be a possible cure for type 1 diabetes (2). How was Melton’s group able to achieve such remarkable success? A good place to start is an overview of where beta cells come from. During embryonic development, the zygote—the fertilized egg—begins to divide and eventually makes a structure known as a blastocyst, which has a group of cells clustered together to form the inner cell mass. This blastocyst implants in the uterine wall, and, as development continues, the cells of the inner cell mass continue to divide. Eventually, gastrulation occurs, in which the cells take up their positions and differentiate into three layers called the “germ layers.” Each of these germ layers creates specific parts of the body, with the endoderm making the tissues of the gut as well as our cell of interest—the beta cell (1). At each step in the development of a beta cell, specific genes are expressed that are responsible for specifying the cell’s function and the next step in its developmental path. In the case of the beta cell, the pluripotent cell of the early embryo develops into definitive endoderm. Neighboring tissues and cells send signals instructing the definitive endoderm to differentiate into the next step along the pathway to becoming a pancreatic cell (1). Signals from a structure called the notochord allow the definitive endoderm to differentiate into pancreatic endoderm, which begins to turn on key pancreatic genes necessary for the function of pancreatic cells. At this point, cells called “pancreatic progenitors” are made, which are capable of making all of the cells of the pancreatic islets of Langerhans—home of the beta cell and the neighboring cells that are important for maintaining glucose regulation and metabolism. Prior to Melton’s discovery, scientists were able to make these progenitors and were even able to make insulin-producing cells; however, these cells did not secrete insulin in proportion to the glucose they were given—a key function of a beta cell—and they often expressed genes that are not normally expressed in beta cells.

Scientists have used small molecules and proteins in vitro to drive the generation of these various intermediate steps in the development of beta cells. These small molecules or proteins often interact with other proteins in a cell that may activate or inhibit certain signaling pathways, eventually leading to changes in gene expression. As a result of these changes, the cell will differentiate and change its function, making it challenging to figure out the right combination of factors needed to obtain the right cell. The right cocktail for getting from the pancreatic progenitor to the fully functional beta cell was missing from previous studies, and after testing over 150 combinations of more than 70 compounds, Melton’s group found 11 factors that allowed them to take the final step of making a functional beta cell (2). More importantly, when they compared their derived cells to cells taken from the pancreata of human cadavers and to the previously derived polyhormonal cells, the stem cell-derived beta cells responded comparably to the cells from the human pancreata, indicating that they secrete insulin much as beta cells do in the human body. Gene expression analysis also found that the derived beta cells had patterns similar to those of the primary cells from human cadavers. As Melton discusses in his paper, work remains to be done to understand the molecular biology of how the factors they identified allow the progenitors to differentiate into beta cells. Notably, if pancreatic progenitors are transplanted into a mouse, they are able, over a period of six months, to differentiate and mature into fully functional beta cells, but how this happens remains unknown. Melton’s group is now beginning trials in non-human primates. If these trials are effective in controlling glucose levels in primates, it may well be possible to see the beginning of human clinical trials that could lead to a treatment and perhaps an effective cure for diabetes. With this exciting finding, the field can now begin to dissect how a beta cell regulates glucose with such fine-tuned precision, and scientists may be able to find a solution to one of the most prevalent health care problems in the world.

Francisco Galdos ’15 is a Human Developmental and Regenerative Biology concentrator in Quincy House.

Works Cited

  1. Pagliuca, F. & Melton, D. How to make a functional β-cell. Development (Cambridge, England) 140, 2472-2483, doi:10.1242/dev.093187 (2013).
  2. Pagliuca, F. W. et al. Generation of Functional Human Pancreatic β Cells In Vitro. Cell 159, 428-439, doi:10.1016/j.cell.2014.09.040 (2014).

Current Developments in Ebola

by Carrie Sha


By March of 2014, it was clear that something was wrong. In treatment centers throughout Guéckédou, a town of some 300,000 in southern Guinea, a number of patients had high fevers, diarrhea, and unexplainable pain (1). Medical responders from Médecins Sans Frontières (Doctors Without Borders) who were already stationed in Guinea for an earlier project were puzzled. After sending initial reports back to Europe, these responders shipped blood samples to European labs, and full-length genome sequencing eventually identified the virus as Zaire ebolavirus (EBOV) (2).

The Zaire virus was by no means new to Africa. As early as 1976, Zaire outbreaks had an 88% case fatality rate in what is now the Democratic Republic of the Congo (3). Yet the 2014 outbreak featured a novel strain of Ebola, one that had evolved in parallel with strains in the Congo and belonged to a separate clade. Through the work of epidemiologists, the strain was traced back to December 2013 and the virus’s likely host to the bat family Pteropodidae (2). This was only the beginning of what, as of now, has been seven months of grueling effort in West Africa to contain the spread of Ebola. While the efforts of humanitarians in Liberia, Sierra Leone, and Guinea form the backbone of the containment process, researchers around the globe have naturally looked to drugs to find a way to stop the outbreak.

The Power of Antibody Cocktails

After nearly a decade of experimentation with anti-Ebola drugs, the most effective treatments against EBOV have turned out to be cocktails of monoclonal antibodies (4). The body produces antibodies in response to antigens—foreign materials, such as viruses. These antibodies are attached to the surface of B-cells, which trigger the immune response. Whereas polyclonal antibodies are a mixed bag of receptors that all respond to the same antigen, monoclonal antibodies are produced by clones of a single B-cell (5). One of these antibody treatments is MB-003, which has been tested somewhat successfully in non-human primates, such as rhesus macaques. While in past control studies the monkeys all died approximately ten days post-infection with EBOV, the treated monkeys had a 43% survival rate, meaning that three out of seven survived (6). Of the three that survived, two showed a near-complete recovery, whereas the third still had symptoms of the Ebola infection (6). Another antibody drug developed in 2013, ZMAb, provides a similar effect, allowing two out of four cynomolgus macaques to survive EBOV infection (7). This time, however, the researchers also experimented with non-antibody drugs in combination with ZMAb. They combined ZMAb with Ad-IFN, an adenovirus vector shown in 2007 to be effective against Western equine encephalitis virus, with a 100% survival rate (8). Although Ad-IFN alone, without ZMAb, has no potency against Ebola, the combination of the two drugs allowed three out of four cynomolgus macaques and four out of four rhesus macaques to survive post-infection (7). While the exact mechanism by which these drugs interact to enhance survival remains unknown, the cocktail treatments clearly had an effect, increasing survival from 50% to 75% and 100%.
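
To keep the numbers straight, the short sketch below simply tabulates the survival fractions reported in the studies cited above and converts them to percentages; the cohort labels are my own, and the small group sizes mean the percentages carry large uncertainty.

```python
# Survival figures for the antibody studies summarized above, as stated in the text.
cohorts = {
    "MB-003, rhesus macaques": (3, 7),
    "ZMAb alone, cynomolgus macaques": (2, 4),
    "ZMAb + Ad-IFN, cynomolgus macaques": (3, 4),
    "ZMAb + Ad-IFN, rhesus macaques": (4, 4),
}

for label, (survived, total) in cohorts.items():
    print(f"{label}: {survived}/{total} survived ({survived / total:.0%})")
```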

What is ZMapp?

The use of a cocktail drug is exactly the basis for the current drug development effort against Ebola, ZMapp. This project, led by researchers around the world, screens for the best monoclonal antibody components, seeking to combine the benefits of ZMAb/Ad-IFN and MB-003 (4). In doing so, the researchers also sought to extend the number of days post-infection at which the drug can still be administered effectively, mimicking the real-world delays in identifying infections in West Africa. Testing first on guinea pigs, they identified c13C6 as the essential component of MB-003. They then combined c13C6 with different components of ZMAb, creating three cocktails: ZMapp1, ZMapp2, and ZMapp3. Guinea pigs that received ZMapp1, which contains the c2G4 and c4G7 components of ZMAb, showed the greatest survival, at four out of six. ZMapp2 offered moderate survival, with three out of six guinea pigs surviving. Both ZMapp1 and ZMapp2 were then tested in rhesus macaques. The results: six out of six monkeys survived after ZMapp1, and five out of six survived after ZMapp2 (4).

Facing the Future

Despite this rapid progress in drug discovery over less than a year, the future of these drugs remains uncertain. One of the major obstacles to deploying antibody cocktails in the field is the rigorous process and standards of drug testing. In the words of Armand Sprecher, a representative of Doctors Without Borders, "There are rumors that we are spreading disease, harvesting organs, and other horrible things. Bringing in unlicensed things to experiment on people could be very counterproductive" (9).

Carrie Sha ’17 is currently a sophomore in Mather House.

Works Cited

Infographic:

  1. World Health Organization (2014). Ebola virus disease. Retrieved from http://www.who.int/mediacentre/factsheets/fs103/en/.
  2. Centers for Disease Control and Prevention (2014). Review of Human-to-Human Transmission of Ebola Virus. Retrieved from http://www.cdc.gov/vhf/ebola/transmission/human-transmission.html.
  3. Boston Children’s Hospital and Harvard Medical School (10.8.2014). Modeling Ebola in West Africa: Cumulative Cases by Date of Reporting. Retrieved from http://healthmap.org/ebola/#projection.
  4. Centers for Disease Control and Prevention (2014). 2014 Ebola Outbreak in West Africa – Outbreak Distribution Map. Retrieved from http://www.cdc.gov/vhf/ebola/outbreaks/2014-west-africa/distribution-map.html.
  5. Vogel, G. (2014). How deadly is Ebola? Statistical challenges may be inflating survival rate. Science Insider. Retrieved from http://news.sciencemag.org/africa/2014/09/how-deadly-ebola-statistical-challenges-may-be-inflating-survival-rate.
  6. Centers for Disease Control and Prevention (2014). CDC Media Briefing: update on the Ebola outbreak in West Africa. Retrieved from http://www.cdc.gov/media/releases/2014/t0902-ebola-outbreak.html.

Article:

  1. Baize, S. et al. (2014). Emergence of Zaire Ebola Virus Disease in Guinea. N Engl J Med, 371, 1418-1425; City Population (2014). Guéckédou (Prefecture). Retrieved from http://www.citypopulation.de/php/guinea-admin.php?adm1id=82.
  2. Baize, S. et al. (2014). Emergence of Zaire Ebola Virus Disease in Guinea. N Engl J Med, 371, 1418-1425.
  3. World Health Organization (2014). Ebola virus disease. Retrieved from http://www.who.int/mediacentre/factsheets/fs103/en/.
  4. Qiu et al. (2014). Reversion of advanced Ebola virus disease in nonhuman primates with ZMapp. Nature, 514, 47–53.
  5. Randox Life Sciences. Polyclonal Antibodies vs. Monoclonal Antibodies. Retrieved from http://www.randox-lifesciences.com/articles/57?articleSectionId=1.
  6. Pettitt et al. (2013). Therapeutic Intervention of Ebola Virus Infection in Rhesus Macaques with the MB-003 Monoclonal Antibody Cocktail. Sci. Transl. Med., 5(199).
  7. Qiu et al. (2013). mAbs and Ad-Vectored IFN-α Therapy Rescue Ebola-Infected Nonhuman Primates When Administered After the Detection of Viremia and Symptoms. Sci. Transl. Med., 5(207).
  8. Wu et al. (2007). Pre- and post-exposure protection against Western equine encephalitis virus after single inoculation with adenovirus vector expressing interferon alpha. Virology, 369, 206–213.
  9. Enserink, M. (2014). Ebola drugs still stuck in lab. Science, 345(6195), 364-365.

To Infinity and Beyond: The Launch of SpaceX’s First Reusable Rocket

by Shree Bose

In a generation where rapid innovations have reshaped the way we interact, learn, and work, Silicon Valley technologist Elon Musk is attempting to revolutionize a field that is literally out of this world: the way we explore space. Since the 1950s, with international tensions running high, space has been a stage for national competition, the space race between the United States and the Soviet Union defining exploration for decades. In the last few years, however, with government funding for national agencies like NASA shrinking, international collaboration has flourished, enabling joint projects like the International Space Station (ISS).

In this new landscape, the previously dominant national space organizations are beginning to face private-sector competition from companies like Musk's SpaceX and Sir Richard Branson's Virgin Galactic. As human civilization inches toward milestones once confined to science fiction, from colonies on Mars to a real understanding of how biology fares in space, these companies have begun to accelerate the process, innovating at an unprecedented rate to change the face of space exploration forever.

One such step came recently with the launch of SpaceX's Falcon 9 rocket, designed to be reusable by landing its first stage on a floating platform roughly 200 miles off the Florida coast in the Atlantic Ocean. The rocket carried a Dragon capsule loaded with almost 5,200 pounds of science experiments, spare parts, food, water, and other supplies. After the two vehicles separate, the Dragon capsule continues on to the ISS, while the Falcon 9 first stage uses GPS-guided navigation to make its descent and, ideally, land intact.

While perhaps conceptually simple, this reusability radically departs from traditional rocket design, reducing costs, according to Musk, by almost a factor of 100. Each previous SpaceX resupply mission used a separate disposable rocket with a price tag of roughly $60 million; a rocket that can be recovered and reflown would change that arithmetic entirely. So on January 10, 2015, at 4:47 AM EST, the SpaceX Falcon 9 launched from Florida's Cape Canaveral Air Force Station, with Musk giving the landing attempt 50-50 odds of success.

Ultimately, the 14-story rocket did make it back to the floating spaceport, a major success in itself; however, the landing was hard, and the rocket could not be flown again without new equipment. In his tweet about the outcome, Musk noted that the mission "bodes well for the future". And in a world where progress can be made quickly and efficiently, this first attempt does point to a future of cheaper and better space travel. Through these innovative strategies for cutting costs and optimizing design, we may within a few years see pushes toward manned Mars missions, the colonization of other planets, and progress for humankind as a whole.

Shree Bose ’16 is a Molecular and Cellular Biology concentrator in Dunster House.

Works Cited

  1. Klotz, Irene. “SpaceX Rocket Nails Launch but Narrowly Misses Landing Test.” Reuters. Thomson Reuters, 10 Jan. 2015. Web. 10 Jan. 2015. <http://goo.gl/qzJ7GY>.
  2. Musk, Elon. “Rocket Made It to Drone Spaceport Ship, but Landed Hard. Close but No Cigar This Time. Bodes Well for the Future Tho.” Twitter. Twitter, 10 Jan. 2015. Web. 10 Jan. 2015. <https://twitter.com/elonmusk/status/553855109114101760>.

Humans, Computers, and Everything in Between: Towards Synthetic Telepathy

by Linda Xu

When you imagine telepathy, your mind probably jumps immediately to science fiction: the Vulcans of Star Trek, Legilimency in Harry Potter, or the huge variety of superheroes and super-villains who possess powers of telekinesis or mind control. Twenty years ago, these concepts would have been mere fictional speculation, but today, in neuroscience labs around the world, new research is turning the startling possibility of brain-to-brain communication into reality.

Imagine this: a man wearing a strange polka-dot cap sits in front of a computer screen, watching an animated rocket fly from a pirate ship toward a city. In the game, the only defense against the rocket is a cannon, which can be fired by pressing a key on a keyboard in an adjacent room. As the man watches the rocket make its first flight from the ship, he thinks about moving his finger, without actually moving anything at all.

In the adjacent room, another capped man — his cap connected to the first man’s through wires, a computer, and the internet — sits with his hand relaxed over the keyboard, unable to see the first man and oblivious to the impending doom of his animated city. Suddenly, his brain receives a jolt of electrical stimulation, and his finger involuntarily jerks down on the key, bringing down the rocket. Together, these two men have successfully saved the city, and more importantly, they have achieved the once unthinkable task of direct brain-to-brain communication (1).

If you haven’t been keeping up with the current neuro-technology scene, the above scenario may sound like nothing more than a tale of scientific fancy. However, incredibly enough, this exact experiment was completed just a few months ago at the University of Washington and is only the latest in a long string of milestones toward “synthetic telepathy.” In this article, we will touch upon each of these milestones, and most notably on the development of the brain computer interface, a key stepping stone in the path to the brain-to-brain interface. After a brief discussion of the research itself, we will then take a look at the ethics of the technology and what these advancements really mean for the rest of us.

From Synapses to Sensation

In order to fully appreciate brain interface technology, one must go back to the basics of neuroscience. The central nervous system (the brain and spinal cord) is made up of cells called neurons, which have the unique characteristic of being able to communicate with one another through electrical and chemical signals passed across connections called synapses (2). Using this communication system, one neuron can send a “message” to another neuron or even to an entire network of neurons, allowing for an immense number of possible firing patterns. This complexity is the primary reason why, to this day, the exact mechanisms by which neural firing patterns give rise to phenomena like memory, consciousness, sensory experience, and motor action remain largely unknown.

Despite this obstacle, scientists have discovered ways to manipulate the brain without completely understanding the mechanisms behind its activity. Twentieth-century neurosurgeon Wilder Penfield was at the forefront of this advance, earning the title of “the greatest living Canadian” for his famous neural stimulation experiments (3). During his surgeries, Penfield probed the brains of his locally anesthetized but conscious patients and observed the effect that stimulating each area had on the patient. Remarkably, Penfield found that stimulation of specific areas of the brain corresponded to very specific functions and areas of the body; probing the temporal lobes, for instance, would cause the patient to undergo vivid memory recall, while probing another region might produce the feeling of being rubbed on the stomach or pinched on the left foot (3).

This rudimentary “brain-poking” experiment was what eventually led scientists to the theory that thought and behavior could be predicted by measuring patterns of activity in the brain. In other words, Penfield’s observations implied that someone could get an idea of what you were doing or thinking simply by taking relatively basic measurements of the changes in your brain activity. From this theory, the path was paved for the development of the first brain computer interface, a device that could connect a brain to a computer that would record and interpret its activity. Three basic steps would be required to achieve this landmark development: reliable detection of the electrical activity of the brain, accurate interpretation of the meaning of this activity, and prompt transformation of this interpreted activity into useful action.
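
To make those three steps concrete, here is a deliberately toy sketch in Python that wires them together: a detector reduces a window of raw samples to a single feature, an interpreter maps that feature onto a command, and an actuator turns the command into output. Everything here, from the threshold to the simulated signal, is a hypothetical placeholder rather than anything used in the studies discussed below.

```python
import numpy as np

def detect(window: np.ndarray) -> float:
    """Step 1 -- detection: reduce a window of raw EEG samples to one feature (mean power)."""
    return float(np.mean(window ** 2))

def interpret(feature: float, threshold: float) -> bool:
    """Step 2 -- interpretation: map the feature onto an intended command."""
    return feature > threshold

def act(command: bool) -> None:
    """Step 3 -- action: turn the interpreted intention into a useful output."""
    if command:
        print("command detected: fire")

# Toy loop over one second of simulated data at 250 Hz, processed in 0.25 s windows.
fs = 250
signal = np.random.randn(fs) * 1e-5          # stand-in for a single EEG channel, in volts
for start in range(0, fs, fs // 4):
    act(interpret(detect(signal[start:start + fs // 4]), threshold=1e-10))
```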

The Dawn of the Brain Computer Interface 

A solution to the first step in creating a brain computer interface (reliable detection of the electrical activity of the brain) was already on its way by 1929, when German neurologist Hans Berger recorded the first human electroencephalogram (EEG) (4). By placing external electrodes on the scalp of a 17-year-old surgical patient, Berger was able to measure the electrical potential across the electrodes and thus detect the changing electrical activity of the neurons in the patient’s brain. To this day, the EEG remains one of the most common methods of recording brain activity, favored over fMRI and PET scans for its low cost and relatively simple procedure. The next step, then, was to translate this recorded brain activity into meaningful information.
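
For a feel of what modern EEG “detection” amounts to computationally, the sketch below estimates signal power in the classic alpha band (8-13 Hz) from a single simulated channel. The sampling rate, band limits, and synthetic signal are illustrative assumptions, not parameters from Berger’s recordings or any particular BCI.

```python
import numpy as np
from scipy.signal import welch

def bandpower(channel: np.ndarray, fs: int, band=(8.0, 13.0)) -> float:
    """Estimate power in a frequency band (default: alpha) from one EEG channel."""
    freqs, psd = welch(channel, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.trapz(psd[mask], freqs[mask]))

# Ten seconds of synthetic "EEG" sampled at 250 Hz: a 10 Hz rhythm buried in noise.
fs = 250
t = np.arange(0, 10, 1 / fs)
eeg = 20e-6 * np.sin(2 * np.pi * 10 * t) + 5e-6 * np.random.randn(t.size)
print(f"alpha-band power: {bandpower(eeg, fs):.2e} V^2")
```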

This step was taken in the 1970s by researchers at UCLA, funded by none other than the US Defense Advanced Research Projects Agency (DARPA), the leading national organization for military defense research. In a project aptly named the Brain-Computer Interface project, researchers strove to develop the first interface between a brain and a computer that would not only detect brain activity but also interpret it and use it for communicative or medical purposes (5). In a resulting publication, researcher Jacques Vidal laid out the basics of BCI research, exploring the potential methods, limitations, and possibilities for the development of an EEG-based BCI, a device that he believed still lay in the far future (5).

While laying the foundations for BCI research, Vidal could not have predicted the rapid string of breakthroughs that would follow in the coming decades. In fact, as early as the late 1960s, researchers like Eberhard Fetz had already demonstrated that brain activity could be deliberately controlled to a significant extent, most famously by training a monkey to increase the firing rate of specific neurons (6). In the 1980s, correlations between brain activity and motor response were specified in great detail, allowing scientists to pinpoint the exact neurons and firing patterns behind specific movements (7). And in 1999, Miguel Nicolelis, who would go on to create the first animal brain-to-brain interface just fourteen years later, trained rats to control a robotic limb using only their brains (8).

By the 21st century, the media had caught up to the breakneck pace of BCI research with the popularization of fMRI image reconstruction, touted by reporters as giving scientists the ability to “pee[r] into another man’s mind” (9). In these studies, subjects viewed specific images, anything from a simple black plus sign to a fully colored landscape, while their brain activity was recorded by an fMRI machine. Based solely on this recorded activity, scientists were able to use computer algorithms to “reconstruct” the image viewed by the subject with startling accuracy (10). Dubbed “mind-reading” and floated as a possible avenue toward an accurate lie detector, fMRI image reconstruction brought the incredible possibilities of BCI research into the public eye.
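
At their core, reconstruction studies like these fit a model that maps recorded voxel activity back onto image features. The sketch below is a cartoon of that idea using simulated data and an off-the-shelf ridge-regression decoder; the dimensions, noise level, and linear “encoding” are all invented for illustration and bear no relation to the actual pipelines in the cited work.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_images, n_pixels, n_voxels = 200, 64, 500

# Simulated stimuli (8x8 binary patches) and simulated voxel responses that are
# a noisy linear mixture of the pixel values -- a stand-in for real fMRI data.
images = rng.integers(0, 2, size=(n_images, n_pixels)).astype(float)
encoding = rng.normal(size=(n_pixels, n_voxels))
voxels = images @ encoding + 0.5 * rng.normal(size=(n_images, n_voxels))

# Fit a linear decoder from voxel activity back to pixel intensities on 150 images,
# then "reconstruct" a held-out image from its simulated brain response alone.
decoder = Ridge(alpha=10.0).fit(voxels[:150], images[:150])
reconstruction = decoder.predict(voxels[150:151]).reshape(8, 8)
print(np.round(reconstruction, 1))
```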

The Brain-to-Brain Interface: From Reading Minds to Controlling Minds

However, where things start to get truly exciting — and truly controversial — is not in brain-computer interface research, but in the pioneering field of brain-to-brain interface (BBI) research. While BCI connects a brain to a computer that then interprets brain activity, BBI connects a brain to another brain, which can then receive information from the first brain or even be induced to perform specific behaviors, as in the example of the telepathic cannon game. Arguably, BBI is not significantly different than BCI; a brain can simply be seen as a more sophisticated, organic computer, and the BBI as just the next logical extension of the BCI. Nevertheless, there is an undeniable sense of intrigue that comes with the idea of connecting living brains.

It is difficult to overstate the sheer magnitude of possibilities in this new field of research, but it is also important to not stray too far from the raw experimental findings. Research in this area began most notably in the aforementioned Nicolelis Lab at Duke University, in a study on rat brain-to-brain interface. In the 2013 study, “A Brain-to-Brain Interface for Real-Time Sharing of Sensorimotor Information,” two rats were placed in separate cages and each given a choice of two levers — one that resulted in a reward of water and one that did not. A rat dubbed the “encoder” rat was shown a flash of light above the correct lever and was trained to learn this association. The “decoder” rat, on the other hand, was given no visual cues, but its brain received the stimulation from the cortical area of the “encoder” rat through the BBI. In a breakthrough finding, Nicolelis discovered that the “decoder” rat was able to make the correct choice of lever with over 70% accuracy, with no cues or information except for the learned knowledge “sent” by the neural activity of the “encoder” rat’s brain (11).

Within the year, researchers at Harvard Medical School were able to connect a human brain and a rat brain through BBI and move a rat’s tail with 94% accuracy, using only the deliberate neural activity of the human’s brain (12). Four months later, the experiment described in the introduction was completed at the University of Washington, demonstrating the first successful use of a human brain-to-brain interface (1).

Hopes and Considerations for the Future

We are only in the infancy of BCI and BBI research, and as the technology continues to take leaps and bounds into the future, more and more areas of our lives will feel the impact of these advances. In particular, prosthetic limbs, prosthetic vision and hearing devices, and communication devices for paralyzed patients are all being implemented as prototypes, such as the robotic arm created at Brown University in 2012 that allowed a woman who had been paralyzed for nearly 15 years to drink from a bottle of coffee (13). Outside of the medical field, research in military communication technology has continued to progress, as demonstrated in the work of Gerwin Schalk, who recently published a study on the successful identification of spoken and imagined words from electrocorticographic (ECoG) signals (14).

Although advances in games and entertainment may seem trivial compared to the more “practical” developments of medicine and technology, the impact of brain interface technology in everyday life is certainly worth pondering as well. Imagine, for instance, being able to play virtual video games in which you control your character simply by thinking an action or imagining a scenario. Companies like NeuroSky, Mindflex, and Necomimi are already putting out BCI products for the public, including a game that uses “brain force” to navigate a ball through a maze and a pair of costume cat ears that wiggle, perk up, or lay flat depending on your neural activity. As research continues, devices such as these are sure to be welcomed into the entertainment world and could even be used for educational or therapeutic purposes, for adults and children alike.

Undoubtedly, brain interface technology has both the power and the potential to do incredible good. With this in mind, it is crucial to also recognize the possibility for ethical wrongdoing. Concerns with privacy, autonomy, enhancement, and consequent aggravation of social stratification are only a handful of the ethical issues on the horizon, and these concerns are only intensified by the fear of media exaggeration and inaccuracy. Furthermore, philosophical questions of human existence will become increasingly important as research progresses. What does it mean for brains to be “connected?” What kind of information can be taken and shared between living brains? What distinguishes a human from a machine, and what — if anything — distinguishes a brain from a computer? These questions may have been impossible to answer in the past, but with the advancement of brain-to-brain interface technology, we may reach satisfying answers at last.

In the end, the future of a world with brain interface technology relies on the preparation and research done today. Consideration of the ethical issues to come, as well as thorough discussion of the boundaries that must be set, will help to ensure that ethical lines are not crossed in our enthusiasm to push the limits of technology. From medicine and military technology to games and recreation, brain interfacing truly has the potential to change the world. By maintaining a judicious balance between scientific progress and ethical caution, we can ensure that these changes are for the better.

Linda Xu is a sophomore in Eliot concentrating in Neurobiology.

References

  1. R. P. N. Rao and A. Stocco. (2013). Direct brain-to-brain communication in humans: a pilot study. [Online]. Available: http://homes.cs.washington.edu/~rao/brain2brain/index.html. [2014, Feb. 24].
  2. M. F. Bear et al., Ed., Neuroscience (Lippincott Williams & Wilkins, Philadelphia, ed. 2, 2007).
  3. Wilder Penfield. PBS. [Online]. Available: http://www.pbs.org/wgbh/aso/databank/entries/bhpenf.html. [2014, Feb. 24].
  4. L. Haas, Hans Berger (1873–1941), Richard Caton (1842–1926), and electroencephalography. J Neurol. Neurosurg. Psychiatry 74, 9 (2003).
  5. J. J. Vidal, Toward direct brain-computer communication. Annu. Rev. Biophys. Bioeng. 2, 157-180 (1973).
  6. E. E. Fetz. Operant conditioning of cortical unit activity. Science 163, 955-958 (1969).
  7. A. P. Georgopoulos, J. T. Lurito, M. Petrides, A. B. Schwartz, J. T. Massey. Mental rotation of the neuronal population vector. Science 243, 234-236 (1989).
  8. J. K. Chapin, K. A. Moxon, R. S. Markowitz, M. A. L. Nicolelis. Nature Neuroscience 2, 664-670 (1999).
  9. J. Wise. Thought Police: How Brain Scans Could Invade Your Private Life. Popular Mechanics. [Online]. Available: http://www.popularmechanics.com/science/health/nueroscience/4226614. [2014, Feb. 25].
  10. F. Tong and M. S. Pratte. Decoding patterns of human brain activity. Annual Review of Psychology 63, 483-509 (2012).
  11. M. Pais-Vieira, M. Lebedev, C. Kunicki, J. Wang, M. A. L. Nicolelis. A brain-to-brain interface for real-time sharing of sensorimotor information. Scientific Reports 3, 1-10 (2013).

  12. S. Yoo, H. Kim, E. Filandrianos, S. J. Taghados, S. Park. Non-invasive brain-to-brain interface (BBI): establishing functional links between two brains. PLoS ONE 8, e60410 (2013).
  13. L. R. Hochberg, D. Bacher, B. Jarosiewicz, N. Y. Masse, J. D. Simeral et al. Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372-375 (2012).
  14. X. Pei, D. L. Barbour, E. C. Leuthardt, G. Schalk. Decoding vowels and consonants in spoken and imagined words using electrocorticographic signals in humans. J. Neural Eng. 8, e046028 (2011).

Brain and Language on the Fly

The Neuroscience of Linguistic Improvisation

by Elizabeth Beam

 

It’s like when I’m on the mic I can squish a 

Sucka like a vice grip, my pen put ya

In the slaughterhouse cause your style’s been butchered

I’ll spin chainsaw, take off like the blades on, my brain’s on

Hyperdrive, someone put the brakes on

-Eminem

While maintaining a beat and a storyline, Eminem rapped the above lyrics as part of a high-energy, fast-paced, yet off-the-cuff performance that lasted over eight minutes on live radio (1). Though the lines printed here are not accompanied by Eminem’s vocals, you can hear as you read them how the sounds of the words flow through the slant rhymes “squish a,” “put ya,” and “butchered.” Meanwhile, the meaning of the words mounts through the extended metaphor, comparing Eminem’s defeat of an imagined opponent to a gruesome farmhouse slaughter. As remarkable as it is, this example is just one among the routinely astonishing feats of language that freestyle rappers can accomplish off the tops of their heads.

Aside from the exceptional talent that some individuals have for freestyle, we are all capable of playing with language, and we do not need to step up to a microphone to do so. Sociolinguist Joan Swann observed this firsthand one afternoon while eavesdropping on a family in the park (2). As the speakers tossed scraps of food to the birds, they commented on one pigeon that was aggressively chasing the others away:

A:  He might look scruffy but he’s seen off that one over there

B:  Obviously a thug amongst pigeons

C:  Al Capigeon

D:  The godfeather

[Laughter overlaps C & D]

This brief exchange exemplifies the sort of linguistic ingenuity that speakers display in everyday conversation. After speaker B anthropomorphizes the pigeon as a “thug,” speakers C and D riff off of the idea with references to The Godfather. The puns “Al Capigeon” and “godfeather” draw attention to the phonological similarity between “pigeon” and “Capone,” between “feather” and “father.” These jokes are not just silly, but impressively clever, eliciting laughter from the other participants before C and D have finished speaking. It is also important to note that these speakers had not been holding “Al Capigeon” and “godfeather” in arsenals of puns to be dispensed at just the right moment. Rather, the puns were invented on the spot, uttered as soon as they came to mind.

When we crack jokes and craft sonically pleasing rhythms on the fly, we engage basic cognitive systems to do something extraordinarily complex. The cognitive systems involved range from language to motivation to memory to emotion to motor control—systems that are well studied on their own but rarely altogether. The trouble with breaking down verbal creativity into simpler systems is that the superordinate behavior cannot be reconstructed easily from its parts. To do so would be like trying to solve a jigsaw puzzle without looking at the picture on the box; child’s play is turned into a formidable challenge because the design on any single piece offers very little information about the image that emerges when all the pieces are put together. Furthermore, the brain is not a picture that can be laid flat on a living room table but a three-dimensional and dynamic structure comprised of upwards of 100 billion neurons that modulate one another over time.

So, why haven’t cognitive neuroscientists pushed the pieces aside and studied verbal creativity directly? It is critical to note that normal behaviors occur in settings very different from the environment inside a neuroimaging scanner. Understandably, the above exchange between picnickers could not have transpired if the fauna of the park were exchanged for the white plastic bore of an fMRI machine. Moreover, studies conducted in the scanner are not leisurely afternoons in the park; each session is comprised of timed blocks that must be short and consistent across subjects. Because neuroscientists cannot eavesdrop on the brain like a sociolinguist in the park, they have instead investigated verbal creativity through simplified assessments like anagram puzzles (3). However, while solving anagrams and advancing an interesting conversation may both rely on the generation of insightful ideas expressed through the elements of language, one would expect the obvious differences on the behavioral level between speaking and re-arranging letters to manifest themselves in the brain.

It is here that freestyle rap meets neuroscience. Whereas the tasks designed to study verbal creativity are disconnected from real behaviors, and the real behaviors of normal individuals are compromised in the scanning environment, freestyle rappers can defy these constraints because they are specially trained to do so. For freestyle rappers, spitting a few improvised lines during a timed block in the scanner is comparable to rapping live on stage within the rules of a competition. To take advantage of this, Siyuan Liu and colleagues recently assembled a cohort of 12 freestyle rappers for a neuroimaging study (4). The neuroscientists asked the rappers in some segments of the scan to deliver improvised raps on the spot and in others to recite raps they had memorized.

By daring to study verbal creativity as it occurs in the wild, Liu et al. gained access to the brain state in which live, inventive speech unfolds. They discovered that the network of systems engaged by freestyle rap is specialized, relying more heavily on drive and memory selection than the network for rehearsed rap. The nature of control also differs in freestyle, shifting from top-down self control to faster and more fluid control by motor regions. These findings go beyond mapping anatomical correlates of verbal creativity to the brain, lending a bird’s-eye perspective to the complex and dynamic brain state that emerges when multiple networks converge in real time. Furthermore, expanding our scope from a dozen freestyle rappers to the billions of speakers conversing constantly across the globe, this study may guide us toward a more general understanding of the mind as it engages in everyday verbal creativity.


Taking Flight: Verbal Drive and Creativity

The brains of freestyle rappers must be able to shift from the cruising speed of conversational speech into the high gear of a rapid and rhythmic verbal performance. Liu et al. found that, in their cohort of rappers, the medial prefrontal cortex (mPFC) was the key to this creative ignition (4). The mPFC was especially active in improvised conditions, compared to conditions in which participants recited memorized lyrics, suggesting that it plays a role in the generation of original, on-the-spot rap. The mPFC was also more active at the beginning than at the end of a segment, consistent with the idea that it helps get the rap going.

The results of this study align well with the narrative that other neuroscientists are crafting of creative drive and the brain. Like Liu et al., the neuroscientific community has flagged the mPFC as a noteworthy region for motivation. In task-based studies, mPFC activity has been found to increase as the payoffs for good performance are raised (5). The mPFC is also preferentially activated when viewing scenes that are later described with high enthusiasm, suggesting that this region may play a role in developing the urge to communicate (6). To tie these relations between drive, language, and the mPFC to artistic creativity, neuroscientists could adapt their neuroimaging tools to studying the link between mental illness and creativity, as described in accounts of numerous eminent writers. Whereas bipolar writers like Robert Lowell and Lord Byron suffered from creative block during their depressions, they were most prolific when they were manic (7). Perhaps these mood disturbances were accompanied by the changes in mPFC connectivity that have been identified in individuals with depression and bipolar disorder (8).

While these correlations are promising, they offer more questions than they answer. If it is true that the mPFC controls verbal creative drive, by what mechanism does it do so? Furthermore, thinking beyond the brain, what would this understanding of a neurobiological mechanism mean for rappers and for everyday speakers? To step towards answers to these questions and past the standard neuroimaging paradigm of mapping a behavior to a brain region, Liu et al. sought to understand how the level of mPFC activity varies with the quality of a freestyle performance. They judged the improvised raps by factors like variation in content, use of fresh language, and coherence of the rhyme scheme. Apparently, the better the rap, the greater the mPFC activation—raising the possibility that, through the mPFC, there is a relationship between creative drive and skill.

To make sense of this, like many peculiar observations in biology and human nature, it helps to turn to the principles of Charles Darwin. According to the Darwinian model for creativity, the number of ideas that are novel and useful increases proportionally with the total number of ideas (9). The model predicts that a rapper spitting rhymes is more likely to succeed as long as he or she keeps at it. The popular journalist Malcolm Gladwell has rendered this relationship formulaic, claiming that 10,000 hours is the time it takes for a person to gain mastery in a skill (10). This runs counter to the once well-accepted belief that artists are born with special talents that elude the rest of us. While freestyle rappers do seem to have an unusual skill for rapping quickly and easily in the spur of the moment, there is hope for any one of us that, with enough time and effort, we could write the next chart-topping hit.

Thinking Backwards and Forwards: Autocueing and Memory

In isolation, the system for driving speech would be like an automobile engine without the rest of the car. There must be another system supplying the ideas that form the content of what we say. These ideas come from our memories—from the facts and stories we read in books, from the words we learn to define and say aloud, and from our day-to-day experiences living in the world. Furthermore, while a detailed and well-organized memory is required for speech, our conversations and Eminem’s raps would be rather dull if the brain could do no more than retrieve random memories exactly as they were encoded. For performances of verbal creativity, our brains must also be able to select memories and to recombine them in new and compelling ways.

Cognitive neuroscientist Merlin Donald posits that, before there was language, there was an expansion in human memory (11). More crucial than the size of memory stores, however, was the development of the ability to tap into them voluntarily. This “self-initiated access to memory,” or briefly “autocueing,” allowed us not only to retrieve items relevant to a given set of circumstances but more amazingly to retrieve irrelevant items at will. Eventually, autocueing enabled humans to invent, recall, and actively string together symbols into words, sentences, and stories. Rather than an immutable record of history, human memory is flexible, allowing for the dynamic reorganization of items from the past into novel constructs that are useful in the present and the future.

The sequence of events that occurred during the early evolution of human memory is now recapitulated in the way that humans access memory during speech. The inferior frontal gyrus (IFG) guides the retrieval of semantic knowledge from memory in a top-down manner, allowing for target memories to be selectively recalled and articulated (12). A connectivity analysis by Liu et al. (2012) reveals that, during freestyle rap, there is a strong positive correlation between activation of the IFG and the mPFC. This could mean that, when the creative drive thought to be mediated by the mPFC is increasing, so too is the pull on semantic memory stores by the IFG. Although these correlative analyses cannot establish causation, this relationship is consistent with the possibility that Donald’s autocueing system is enhanced in freestyle rap.
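
In its simplest form, the functional connectivity described here comes down to correlating the average time series extracted from two regions of interest. The sketch below illustrates the computation on simulated BOLD signals; the scan length, noise level, and shared fluctuation are arbitrary assumptions standing in for preprocessed data from mPFC and IFG masks.

```python
import numpy as np

rng = np.random.default_rng(42)
n_volumes = 240                                    # e.g., an 8-minute scan at TR = 2 s

# Simulate two regional time series that share a common fluctuation plus noise.
shared = rng.normal(size=n_volumes)
mpfc = shared + 0.8 * rng.normal(size=n_volumes)   # stand-in for the mPFC mean signal
ifg = shared + 0.8 * rng.normal(size=n_volumes)    # stand-in for the IFG mean signal

# Functional connectivity as the Pearson correlation between the two regions.
r = np.corrcoef(mpfc, ifg)[0, 1]
print(f"mPFC-IFG correlation: r = {r:.2f}")
```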

Working memory, often referred to as the mental “sketchpad,” is where the mind can scribble, scratch out, and rewrite the thoughts in its consciousness. One might expect working memory to be engaged during freestyle performances, allowing rappers to play with ideas and actively organize them into rhyming, rhythmic words. Curiously, Liu et al. (2012) observe the opposite trend. The dorsolateral prefrontal cortex (dlPFC), previously shown to support working memory during creative endeavors, is actually deactivated during freestyle rap (13,14). Moreover, dlPFC activity is inversely correlated with activation of the critically involved mPFC.

A reasonable explanation for this counterintuitive phenomenon is timing. Freestyle rappers speak so quickly that they do not have time to consciously evaluate and revise the content of their utterances before articulating them. As soon as items are retrieved from memory, they are incorporated into the verbal output stream. The ability to guide the search for associations between memories at high speed may be a distinguishing quality of rappers. Future studies should test this possibility by comparing the performance of freestyle rappers and normal speakers on the same memory and verbal creativity tasks. It is also possible that, during freestyle, the memory system interacts in a special way with yet another system—a system that streamlines the motor output.

The Flow: Cognitive Control and Motor Supervision

The paradox of freestyle rap is that, as demanding as it is on the brain to produce fast-paced utterances that are not just semantically coherent but poetically and rhythmically structured, the performance can feel just as effortless to the rapper. As previously noted, the brain region involved in cognitive control and working memory is selectively deactivated during freestyle rap. Merlin Donald would contend that the apparent lack of conscious control that rappers show during their freestyle performances is the expected result after extensive practice within the genre. This “automatization” is simply “the end result of a process of repeated sessions of rehearsal and evaluation” (15). The process of automatization is not at odds with consciousness at all but rather one of its natural consequences.

To be sure, rappers do wield some form of control during performances. While Eminem unabashedly delves into the vulgar, the violent, and the bizarre in his freestyles, he rarely misses a beat. In place of conscious cognitive control, it is motor output monitoring that keeps the verbal flow steady during freestyle rap. Merlin Donald’s insights into motor control are perhaps his most notable. “My key proposal,” Donald writes in his treatise on human nature, “is that the first breakthrough in our cognitive evolution was a radical improvement in voluntary motor control that provided a new means of representing reality” (9). This representational system conveys memories through symbolic body movements. In prehistoric times, it may have yielded a form of culture through mimesis that preceded the invention of language.

The neural system that first enabled people to control body movements has been refined since the onset of mimetic culture. Modern humans are able to monitor and adjust articulatory movements during speech, thanks in part to the cingulate motor cortex (16). In a functional connectivity analysis, Liu et al. demonstrate that there is a strong positive correlation between medial prefrontal and cingulate motor activation during improvised rap. Furthermore, the authors postulate based on anatomical studies in monkeys that there is a direct functional connection between medial prefrontal and cingulate regions, and other studies in humans have shown that portions of the mPFC are continuous with the cingulate (17, 18). As information about intentions and motor plans is transmitted along this alternative route, the drive to speak associated with the mPFC bypasses the self-conscious dlPFC.

By incorporating practiced behaviors into automatic processing in analogous ways, humans can streamline processes they know well and build on top of those processes in a hierarchical fashion, creating new processes that are ever more complex. Consider language: a child first learns to associate words with objects and other referents, then works painstakingly to combine words into grammatically correct sentences, and finally is able to produce language relatively automatically, concentrating less intensely on the forms and meanings of the words and beginning instead to focus on other functions of the speech act (19). With enough practice, the child might someday rap as easily as he or she carries on a conversation.

Let It Free, Let It Fly

Having toured the systems that intersect in the network for verbal improvisation, it is at last possible to integrate the neuroscientific findings into a complete picture of the freestyle brain at work. Regions like the mPFC, IFG, and cingulate motor area that show increased activity are thought to give rise to the distinct cognitive characteristics of verbal improvisation, including enhanced motivational drive, memory access, and motor monitoring. At the same time, working memory and self-conscious control are diminished. The attributes of freestyle rap are synthesized by the brain into what has been called a “flow state” during which the performer is so intensely engaged that the words feel as if they are pouring forth involuntarily (20). It is as if the brain takes over when the rapper steps up to the microphone, producing a stream of language that is spontaneous yet poetically crafted, guttural yet rhythmic.

Considering how well-suited the brain is for freestyle, one might be tempted to conclude that it evolved for the express purpose of enabling humans to rap. However, Donald notes that, when evolution appears to proceed in favor of humans, it bears no real concern for them. “Our major cultural achievements,” Donald remarks, “have evidently been the delayed by-products of biological adaptations for something else” (9). If freestyle rap can be regarded as one of these hallmarks of the human behavioral repertoire, then what is the “something else” from which it is derived? Taking away the beat, the pace, and the dominance of the poetic function, freestyle rap bears a striking resemblance to the linguistic improvisation that characterizes our everyday conversational play. The jokes and puns we all make are the basis for the remarkable extemporaneous speech that distinguishes freestyle rap. Every time we speak, we heed the words of musician Tunde Adebimpe when he sings, “Let it follow that we let it free, let it fly” (21).

Elizabeth Beam is a research assistant in a neuroscience lab at Harvard University.

References

  1. Eminem (2009, May 23). The Tim Westwood Show. BBC Radio 1Xtra. London, UK: British Broadcasting Corporation.
  2. Maybin, J., & Swann, J. (2007). Everyday creativity in language: Textuality, contextuality, and critique. Applied Linguistics, 28(4), 497-517.
  3. Fink, A., Benedek, M., Grabner, R. H., Staudt, B., & Neubauer, A. C. (2007). Creativity meets neuroscience: Experimental tasks for the neuroscience of creative thinking. Methods, 42, 68-76.
  4. Liu, S., Chow, H. M., Xu, Y., Erkkinen, M. G., Swett, K. E., Eagle, M. W., Rizik-Baer, D. A., & Braun, A. R. (2012). Neural correlates of lyrical improvisation: An fMRI study of freestyle rap. Scientific Reports, 2(834), 1-8.
  5. Kouneiher, F., Charron, S., & Koechlin, E. (2009). Motivation and cognitive control in the human prefrontal cortex. Nature Neuroscience, 12, 939-945.
  6. Falk, E.B., O’Donnell, M.B., & Lieberman, M.D. (2012). Getting the word out: Neural correlates of enthusiastic message propagation. Frontiers in Human Neuroscience, 6(313), 1-14.
  7. Jamison, K. R. (1993). Touched with fire: Manic-depressive illness and the artistic temperament. New York, NY: The Free Press.
  8. Price, J. L., & Drevets, W. C. (2012). Neural circuits underlying the pathophysiology of mood disorders. Trends in Cognitive Sciences, 16(1), 61-71.
  9. Simonton, D. K. (1999). Origins of genius: Darwinian perspectives on creativity. London: Oxford University Press.
  10. Gladwell, M. (2008). Outliers: The story of success. New York, NY: Little, Brown, & Company.
  11. Donald, M. W. (2004). The definition of human nature. In Rees, D., & Rose, S. (Eds.), The new brain sciences: Perils and prospects (p. 34-53). Cambridge, UK: Cambridge University Press.
  12. Badre, D., Poldrack, R. A., Pare-Blagoev, E. J., Insler, R. Z., Wagner, A. D. (2005). Dissociable controlled retrieval and generalized selection mechanisms in ventrolateral prefrontal cortex. Neuron, 47(6), 907-918.
  13. Dietrich, A. (2004) The cognitive neuroscience of creativity. Psychonomic Bulletin & Review, 11(6), 1011-1026.
  14. Vandervert, L. R., Schimpf, P. H., & Liu, H. (2007). How working memory and the cerebellum collaborate to produce creativity and innovation. Creativity Research Journal, 19(1), 1-18.
  15. Donald, M. W. (2001). A mind so rare: The evolution of human consciousness. New York, NY: Norton.
  16. Picard, N. & Strick, P. L. (1996). Motor areas of the medial wall: a review of their location and functional activation. Cerebral Cortex, 6, 342-353.
  17. Petrides, M. & Pandya, D. N. (2007). Efferent association pathways from the rostral prefrontal cortex in the macaque monkey. Journal of Neuroscience, 27, 11573-11586.
  18. Öngür, D., Ferry, A. T., & Price, J. L. (2003). Architectonic subdivision of the human orbital and medial prefrontal cortex. Journal of Comparative Neurology, 460, 425-449.
  19. Jakobson, R. (1956). Metalanguage as a linguistic problem. In Rudy, S., & Waugh, L. R. (Ed.), Selected writings VII: Contributions to comparative mythology (p. 113-121). Berlin: Mouton.
  20. Csikszentmihalyi, M. (1996). Creativity: flow and the psychology of discovery and invention. New York, NY: Harper Collins.
  21. Adebimpe, T. (2006). Province. On Return to Cookie Mountain. Santa Monica, CA: Interscope Records.

 

A Commentary on Medical Education

by Lauren Claus

The practice of medicine is filled with intimate and delicate moments; physicians are entrusted with tasks such as delivering a painful diagnosis, encouraging a patient to embark on a weight loss program, or calming the anxieties of new parents-to-be. These situations all require strong interpersonal skills, a comforting demeanor, and a deep sense of empathy. However, the road to medicine is typically thought to require different skills such as strong standardized-testing abilities and the capability to lead large groups or organizations. Although this paradox is often overlooked, it raises important questions about the current state and future goals of medical education.

On one hand, the amount of competition that premedical students encounter seems inevitable, simply because so many students are interested in pursuing medical studies. According to the Association of American Medical Colleges, 48,014 people applied to medical schools in the United States in 2013 (2). With so many applicants, a system of standardized evaluation becomes necessary, and thus every premedical student feels the pressure to demonstrate aptitude through the Medical College Admission Test, grade point average, extracurricular leadership, and research experience.

These requirements are certainly not opposed in spirit to the practice of medicine. Of course, the personal reflection required to craft a personal statement and the scientific knowledge required to participate in research do well to prepare students for medical school. More generally, physicians must have the ability to think and act quickly to help their patients. Also, the current system of premedical education allows students to take a gap year before beginning medical school, which can provide a necessary opportunity to reflect on the vocation of medicine outside of a competitive framework. However, perhaps the traditional path to medical school would be improved if it contained an additional component that emphasized less tangible but equally important aspects of practicing medicine, such as experiences with illness, healing, or suffering. Although premedical students typically volunteer and shadow in clinical settings, they do not usually have the opportunity to follow the trajectories of individual patients or prepare for the emotional complexities of practicing an imperfect science.

Many physicians describe their experiences in these areas through writing. Rafael Campo, a physician at Harvard Medical School and an accomplished poet, recounts his encounters with patients in verse and sees the writing itself as part of practicing medicine. He also suggests that such poetry can become a powerful tool for building compassion, a trait that is difficult to measure but is “what most patients seem to feel is most lacking in medicine these days” (3). To this end, Campo leads writing workshops for medical students to introduce them to ways of connecting their medical experiences with broader ideals (1).

It is important to note that the position Campo articulates is not opposed to the extensive use of highly sophisticated scientific technology and treatments to help patients; rather, it is concerned with “losing sight of some of the truths of the experience of illness” (2). The problem is not the advances themselves, but the possibility that they may foster an attitude that loses focus on the human side of treatment. Campo offers an example: a doctor should “perform all those technical competencies” and focus on how best to treat the patient, but still “be able to warm the hand of the patient dying in the ICU despite all the IVs and ventilator settings” (2).

In the same way, medical education is not wrong in its emphasis on applying scientific knowledge quickly and accurately, but it could improve with an increased focus on medicine as an interpersonal career. Such a focus is difficult to provide, however, because personal qualities such as empathy, compassion, and clear communication are not as easily taught or measured as a traditional scientific background. Because of this inherent difficulty, it may continue to fall to individual premedical students and physicians to embody the principles of empathy and compassion that are implied, but never made explicit, in the healthcare systems in which they practice.

Lauren Claus ’16 is an English concentrator in Adams House.

References

  1. J. Davis, Rafael Campo’s Student Physicians Embrace Poetry to Hone Art of Healing (PBS, 2014).
  2. L. Ward, Medical School Applicants, Enrollments Reach All-Time Highs (Association of American Medical Colleges, 2013).
  3. R. Campo, Interview by Courtney Davis (Poets.org)


The 3D Bioprinting Revolution

by Suraj Kannan

Perhaps no technology has grown as rapidly and promised so much in the last decade as 3D printing. Although the first industrial 3D printer was built in the 1980s, improvements in design and function over the last five years have driven a dramatic rise in production and usage; indeed, forecasts predict that sales of 3D printing products and services will reach $10.8 billion by 2021, up from $2.2 billion in 2012 (1). The customizable and fast nature of 3D printing has made it an integral tool for rapid prototyping in a variety of industrial and research settings, from academia to aerospace and the military. 3D printing has also increasingly been applied to producing a wide variety of objects, ranging from household tools, furniture, and utensils to cars, aircraft, and weaponry (2-4). Along the way, this new technology has prompted ethical debates over gun control and intellectual property (4, 5). With the first consumer 3D printers now appearing on the market for hobbyists, it is easy to understand The Economist’s comment that 3D printing “may have as profound an impact on the world as the coming of the factory did” (6).

A particular application of 3D printing that has already shown promising leads is tissue engineering. While 3D printing has long been used to fabricate biotechnology devices, recent interest has turned toward printing cells in customizable fashion to produce functional tissues. Building on earlier lithographic methods as well as breakthroughs in developmental biology, bioprinting aims to develop tissues and organs that can play a role in laboratory investigation and disease modeling as well as in therapeutics. With advances coming both from large research universities such as Harvard and from companies such as Organovo, bioprinting is likely to become one of the biggest areas of investment and research in this decade.

Bioprinting: A Customizable Bottom-Up Approach

The classic definition of tissue engineering, as laid out by Langer and Vacanti, is of “an interdisciplinary field that … [works] toward the development of biological substitutes that restore, maintain, or improve tissue function or a whole organ” (7). Traditionally, tissue engineering has followed a top-down approach, in which a scaffold (either synthetic, natural, or from a decellularized organ) is seeded homogeneously with cells and then matured in a bioreactor (8, 9). While the strategy has yielded some of the first clinical successes of tissue engineering, it does not allow for sufficient spatial and temporal control of cells and growth factors seeded on the scaffold. Thus, the top-down approach is limited in the amount of complexity it is able to produce in synthesized tissues.

3D bioprinting instead utilizes a bottom-up approach, in which the individual components of the tissue are patterned to allow for formation of complex tissue architecture. By utilizing computer-aided design (CAD) tools, researchers can carefully control the placement of cells, materials, and morphogens to replicate the types of organization found in the human body. These strategies often draw on the self-assembly and growth factor-driven mechanisms of cells to allow for formation of functional biomimetic tissues (8).
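
As a loose illustration of what computer-aided placement can look like at the level of code, the toy sketch below generates a serpentine deposition path for one printed layer; the layer dimensions, filament spacing, and waypoint format are arbitrary stand-ins for real slicing software and printer toolpaths.

```python
def serpentine_layer(width_mm=10.0, depth_mm=10.0, spacing_mm=0.5, z_mm=0.0):
    """Return ordered (x, y, z) waypoints tracing a back-and-forth raster over one layer."""
    path = []
    y, row = 0.0, 0
    while y <= depth_mm:
        # Alternate the travel direction on each row to avoid retracing the layer.
        x_start, x_end = (0.0, width_mm) if row % 2 == 0 else (width_mm, 0.0)
        path.append((x_start, y, z_mm))
        path.append((x_end, y, z_mm))
        y += spacing_mm
        row += 1
    return path

layer = serpentine_layer()
print(f"{len(layer)} waypoints; first three: {layer[:3]}")
```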

Perhaps the most popular form of 3D bioprinting has been extrusion printing, in which filaments of material are forced through a nozzle to build up the 3D structure (10); in this method, the delivery mechanism is in direct contact with the “bio-ink.” A contact-free alternative has been developed using thermal ink-jet printing. Here, a pulse of current through the heating element of the printhead vaporizes a small pocket of ink; the expanding bubble raises the pressure in the chamber and ejects a droplet from the nozzle, after which the bubble collapses and fresh ink is drawn in (11). The bio-ink therefore never touches the delivery mechanism.

A number of parameters must be taken into account in developing 3D bioprinters. For example, the desired resolution helps determine which type of bioprinter to use. Because tissues require both macro-scale and micro-scale control, multiple techniques must be combined to produce both gross architecture and detailed micropatterning of cells and growth factors. Similarly, the selection of material, or bio-ink, is crucial, and a great deal of investigation has been devoted to the discovery and development of new bio-inks, including hydrogel mixtures (used with extrusion printers) and water-based inks (for thermal ink-jet printers). Cell viability is a third factor of interest: while extensive optimization of both extrusion and thermal ink-jet printing has allowed up to 90% of cells to remain viable after printing, the forces and stresses that cells experience during the printing process are a topic of current research (10-13).
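
To give a feel for how these parameters interact, here is a rough, back-of-the-envelope sketch, based only on conservation of volume, that estimates the width of an extruded filament from the ink flow rate and the speed of the moving stage; the numbers are illustrative assumptions, not values from the cited studies.

```python
import math

def filament_diameter_um(flow_rate_ul_per_s: float, stage_speed_mm_per_s: float) -> float:
    """Approximate printed line width: the deposited cross-section A = Q / v (1 uL = 1 mm^3),
    treated as a circular filament of diameter d = sqrt(4A / pi)."""
    area_mm2 = flow_rate_ul_per_s / stage_speed_mm_per_s
    return math.sqrt(4.0 * area_mm2 / math.pi) * 1000.0   # convert mm to micrometers

# Example: extruding hydrogel at 0.05 uL/s while the stage moves at 10 mm/s
# gives a line roughly 80 micrometers wide.
print(f"estimated line width: {filament_diameter_um(0.05, 10.0):.0f} um")
```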

Early Successes and the Challenge of Vascularization

While a great deal of effort is currently devoted to technical refinement of 3D printers to ensure viability, some groups have already succeeded in generating functional tissues. For example, Cui et al. at the Scripps Research Institute generated synthetic cartilage consisting of human chondrocytes in a polyethylene glycol (PEG) hydrogel (11). More recently, Duan et al. at Cornell University constructed aortic valve conduits composed of multiple cell types with custom cell distributions in an alginate/gelatin hydrogel (14). Exciting as these successes are, progress with 3D-printed tissue has been limited by the same challenge that confronts every other avenue of tissue engineering: vascularization. Without blood vessels, nutrients and oxygen cannot reach cells deep within a thick tissue, and wastes cannot be carried away, leading to cell death throughout the construct. Avascular tissues produced by 3D printing have therefore by necessity been very thin, a constraint that prevents the generation of larger-scale organs and tissues.

A recent and astonishing breakthrough in 3D-printed tissue engineering came in February 2014 from the Lewis lab at the Harvard School of Engineering and Applied Sciences (SEAS). The team utilized a custom-built bioprinter with four printheads as well as several novel bio-inks, including a gelatin-based ink that provides structure for the scaffold and two cell-containing inks (15). Perhaps the most novel aspect of the investigation was the use of a Pluronic-based bio-ink that undergoes a seemingly counterintuitive solid-to-liquid phase transition when cooled below 4°C. The researchers could therefore print 3D structures threaded with complex networks of Pluronic ink which, upon cooling, liquefied to leave open channels within the construct. These channels were subsequently endothelialized to produce vasculature. Using this technology, the Lewis group printed constructs composed of patterned human umbilical vein endothelial cells and neonatal dermal fibroblasts along with custom-built vasculature. This vasculature could in turn be perfused in a bioreactor to supply nutrients and oxygen within the construct. These results speak to the possibility of using 3D bioprinting to produce tissues of far greater complexity than previously achieved by other methods of tissue engineering.

Organ Printing and The Future

While 3D printing has a number of potential applications in basic science and the study of cellular and tissue function, bioprinting has primarily captured the public imagination because of the role it could play in the clinic. Early clinical uses of 3D bioprinting have shown some success. For example, in 2012, physicians at the University of Michigan used 3D printing to construct a bioresorbable airway splint for three-month-old Kaiba Gionfriddo, who suffered from recurrent airway collapses (16). Other successes include two case studies in which printed bone was used to reconstruct a patient’s jaw and skull (5). 3D bioprinting is appealing to physicians and patients alike because it allows rapid production of tissues that can be personalized for each patient (for example, by using patient MRI/CT data). While the limited clinical work thus far has involved avascular and sometimes even acellular tissues, innovations in vascularization in the lab suggest the possibility of future production of organs such as the heart, lung, and pancreas.

Certainly, some progress in this direction has already been made. Viewers of TED will likely recall Dr. Anthony Atala’s talk, in which he printed a miniature kidney on stage (19). Organovo, a San Diego company focused on developing functional 3D-bioprinted organs, aims to release data on its printed liver by 2015, while others have predicted 3D-printed hearts within the decade (17, 18). This research has also provoked a great deal of discussion over the ethics of 3D-printed tissues. Concerns range from general objections to tissue engineering and organ construction to worries about construct quality and the role of intellectual property in the world of 3D bioprinting. In particular, the question of who may produce 3D-printed organs must be addressed before further clinical development can proceed.

In light of these challenges, it is perhaps too optimistic to expect 3D-bioprinted organs to be available to patients within the next decade, though, as isolated case studies have shown, such constructs can succeed when used. Technical optimization, particularly in vascularization, cell viability, and printing resolution, will allow improved functionality and complexity in printed tissues. Outside the lab, leaders in ethics and policy will need to tackle some of the stickier issues regarding intellectual property and quality assurance in the generation and use of 3D-printed tissues. In spite of these obstacles, bioprinting remains perhaps the most promising avenue for pursuing the regenerative medicine of tomorrow.

Suraj Kannan ‘14 is a concentrator in Biomedical Engineering. 

Acknowledgements

Many thanks to Dr. Jennifer Lewis and David B. Kolesky, both of whom humoured my requests to hear everything about their magnificent research.

References

  1. TJ McCue, “3D Printing Stock Bubble? $10.8 Billion By 2021.” Forbes. December 30, 2013. http://www.forbes.com/sites/tjmccue/2013/12/30/3d-printing-stock-bubble-10-8-billion-by-2021/
  2. Alexander George, “3-D Printed Car Is as Strong as Steel, Half the Weight, and Nearing Production.” Wired. February 27 2013. http://www.wired.com/autopia/2013/02/3d-printed-car/
  3. Paul Marks, “3D printing: The world’s first printed plane.” NewScientist. August 01, 2011. http://www.newscientist.com/article/dn20737-3d-printing-the-worlds-first-printed-plane.html#.Ux4SLx_LI7x
  4. “Ready, Print, Fire: The regulatory and legal challenges posed by 3D printing of gun parts.” The Economist. February 16, 2013.
  5. John F. Hornick, 3D Printing and the Future (or Demise) of Intellectual Property. 3D Printing 1(1), 14 – 23 (2014).
  6. “Print Me a Stradivarius: How a new manufacturing technology will change the world.” The Economist. February 10, 2011.
  7. Robert Langer, Joseph Vacanti, Tissue engineering. Science 260(5110), 920–926 (1993).
  8. Raphaël Devillard et al., Cell Patterning by Laser-Assisted Bioprinting. Methods in Cell Biology 119, 159 – 174 (2014).
  9. Bertrand Guillotin, Fabien Guillemot, Cell patterning technologies for organotypic tissue fabrication. Trends in Biotechnology 29(4), 183 – 190 (2011).
  10. Cameron J. Ferris et al., Biofabrication: an overview of the approaches used for printing of living cells. Applied Microbiology and Biotechnology 97, 4243 – 4258 (2013).
  11. Xiaofeng Cui et al., Thermal Inkjet Printing in Tissue Engineering and Regenerative Medicine. Recent Patents on Drug Delivery and Formulation 6(2), 149 – 155 (2012).
  12. Vladimir Mironov et al., Bioprinting: A Beginning. Tissue Engineering 12(4), 631 – 634 (2006).
  13. Phil G. Campbell, Lee E. Weiss, Tissue engineering with the aid of inkjet printers. Expert Opinion on Biological Therapy 7(8), 1123 – 1127 (2007).
  14. Bin Duan et al., 3D Bioprinting of Heterogeneous Aortic Valve Conduits with Alginate/Gelatin Hydrogels. Journal of Biomedical Materials Research Part A 101(5), 1255 – 1264 (2013).
  15. David B. Kolesky et al., 3D Bioprinting of Vascularized, Heterogeneous Cell-Laden Tissue Constructs. Advanced Materials (2014).
  16. David A. Zopf et al., Bioresorbable Airway Splint Created with a Three-Dimensional Printer. New England Journal of Medicine 368(21), 2043 – 2045 (2013).
  17. Organovo Homepage. http://www.organovo.com/
  18. Liat Clark, “Bioengineer: the heart is one of the easiest organs to bioprint, we’ll do it in a decade.” Wired. November 21 2013. http://www.wired.co.uk/news/archive/2013-11/21/3d-printed-whole-heart
  19. Anthony Atala, Printing a Human Kidney. TED2011. Filmed March, 2011. http://www.ted.com/talks/anthony_atala_printing_a_human_kidney

Feature image by Jonathan Juursema, licensed under CC BY-SA 2.0.

Exploring the Avian Mind

by Caitlin Andrews

In June 1977, in a small laboratory at Purdue University, Irene Pepperberg stood with her arm outstretched toward a large bird cage, trying to coax a quivering Grey Parrot out of the cage and onto her hand. Just one year earlier, Pepperberg had received her doctorate in theoretical chemistry, having devoted years of her life to drawing up mathematical models of complex molecular structures and reactions, first as an undergraduate at MIT and then through her graduate work at Harvard (1, 2). Yet, here she was, completely spellbound by this trembling, sentient creature, whom she had named “Alex,” an acronym for the “Avian Learning Experiment,” of which he was to be the subject and star. “Here was the bird I hoped—and expected—would come to change the way people think about the minds of creatures other than ourselves,” Pepperberg writes in her memoir, Alex & Me: “Here was the bird that was going to change my life forever” (1).

From Chemistry to Cognition

To many, Irene Pepperberg’s decision to leave chemistry behind in pursuit of the new and largely uncharted field of animal cognition represented an unfathomable risk. But, looking back, Dr. Pepperberg knows that it was the right choice. “I was actually no longer intrigued by chemistry,” she says, “figuring that what was taking me years and years would soon be done in days via better computers” (2). In the late 70s, as she faced an uncertain job market, particularly for women in chemistry, she knew that she needed to find a new path. Although she had always loved animals, it was only when she began watching the NOVA television program on PBS that she realized that there were people using real science to study animals and to draw parallels between the animal and human minds.

Thinking back to her childhood in New York City, she remembered the pets with whom she had spent countless hours: a series of talking parakeets that had provided her with the type of companionship craved by a self-proclaimed shy and “nerdy” only child. As she watched TV programs about apes using sign language, dolphins exhibiting evidence of abstract thinking, and scientists unearthing the proximate mechanisms behind birdsong, Pepperberg realized that she had already encountered a subject that could provide just as much insight into the minds of animals (1). “I figured that a talking parrot would be an even better subject,” she says. “Birds and humans diverged about 280 million years ago, yet they have so many similar capacities, including vocal learning….I began reading, studying the field, and realized that, as [American zoologist] Donald Griffin said, communication was a window into the animal mind” (2).

The Alex Years

From the start, Pepperberg’s respect for animals and awareness of their needs, along with her technical background, proved to be a promising combination. She ensured that her studies would be representative of Grey parrots in general, as opposed to one particularly outstanding subject, by asking a pet store employee to select a random bird from the flock for her. When she finally coaxed Alex out of his cage at the lab, she kept careful journals of her interactions with him. And, right away, she got to work, using a two-person, interactive modeling technique to demonstrate the association between vocal words, or “labels,” and the objects that Alex encountered around the lab. In addition, each time she gave Alex one of these objects, she reinforced the label by repeating it and talking about its properties, while Alex watched and listened (1).

Over the first several weeks in the lab, Alex began to vocalize on his own, although, at first, his utterances were more “noise” than “speech.” But, gradually, Pepperberg was able to discern precise labels from Alex’s vocalizations; when shown a piece of paper, Alex would make a two-syllable sound, which Pepperberg would reward by giving him the piece of paper, until, eventually, he began to shape the sounds from ay-ah, to ay-er, to pay-er, and, finally, paper (1). Pepperberg and her assistants added more object labels—key, wood, wool—until Alex began to demonstrate an understanding that each label represented a category of objects that shared a certain property, such as shape or texture. For example, Alex could identify both a silver key and a red key as “keys,” transferring the label to a colored key that he had not encountered before. While this concept might seem simple to humans, Pepperberg knew that, for an animal like Alex, this was a significant accomplishment. As she writes in Alex & Me, “This kind of vocal cognitive ability had never before been demonstrated in nonhuman animals, not even in chimpanzees” (1).

Pepperberg often cites the interactive “model/rival” technique, which she used to train Alex, as a major reason for their success. Initially developed by German ethologist Dietmar Todt, the technique involves an animal subject and two trainers; while one trainer acts as the principal trainer and questioner, the other acts as a model for the desired behavior (e.g., labeling the object) and as a rival competing with the animal for the principal trainer’s attention. As Alex picked up more labels, adding colors and numbers to his already-extensive repertoire of object labels, it was crucial that he had humans to model proper pronunciation and label usage; mostly, these were students who came to work in the lab. Pepperberg also found that it was important for Alex to learn that the same people did not always act as principal trainers or as models; sometimes she asked Alex questions, and sometimes she modeled correct (or incorrect) behavior and was rewarded (or not rewarded) by a student trainer (1). It was important for the humans to make these occasional mistakes, and to be scolded for them, so that Alex could observe the consequences of errors.

The work was not always easy. First, Dr. Pepperberg was dealing with a highly intelligent animal who could pick and choose when he wanted to work, much more a colleague than a research subject. Additionally, as she moved among various universities, she found that the fledgling field of animal cognition was not always met with the enthusiasm she had hoped for. But the media picked up on Alex’s story and began to follow Pepperberg’s work (1). In his prime, with over 100 words in his vocabulary, “Alex made it clear to the scientific community that a ‘birdbrain’ could do the same things as an ape brain, and sometimes even those of a child’s brain,” Pepperberg says. “Alex and I were not the first to study avian cognition, but we had the widest impact, thanks to media coverage” (2).

Studying an animal who could communicate verbally set Pepperberg’s studies apart, because she could ask Alex questions and he could answer directly, giving insight into how he perceived the world around him. On the most basic level, he could identify an object’s material, color, and shape, and he could ask his trainers to take him somewhere (e.g. Wanna go back) or bring him something (e.g. Want banana). He also had a grasp for numbers; if shown a tray of assorted objects, Alex could answer questions about a particular subset of those objects (e.g. How many yellow wool?). He also showed evidence of being able to add small values, and, Pepperberg says, he could also “infer the cardinality of new number labels from their ordinal position on the number line—something no other nonhuman has yet accomplished” (1, 2). Alex understood concepts of “bigger” and “smaller,” as well as “same” and “different”—an important distinction, since it showed that Alex understood that several labels could be applied to a single object (1, 3). For example, given two square pieces of wood differing only in color, he could identify that the color was “different,” while the other properties were the “same”; if none of the properties differed among a pair of objects, he would indicate this by saying “none” (1).

Sometimes, Alex’s most impressive work came when it was least expected. One day, while testing number comprehension, Pepperberg presented Alex with a tray containing sets of different numbers of objects of various colors—2, 3, and 6 items. Because the sets were all different colors, she could ask Alex, “What color three?” But Alex, as he often did when he became bored with a particular study, insisted on avoiding the correct response. This time, he did so by answering “five,” even though there was no set of five objects on the tray. She repeated the question; he repeated his answer. Thinking that she could beat Alex at his own game, Pepperberg asked him, “What color five?” “None,” Alex replied, taking Pepperberg by surprise, as he transferred a concept that he had only ever used in reference to “same/different” or “bigger/smaller” to an entirely new context (1). “Western civilization didn’t have ‘zero’ until about 1600,” Pepperberg says. “And Alex transferred the ‘null’ concept himself” (2).

In her three decades of work with him, Dr. Pepperberg got to know Alex more deeply than most any researcher ever gets the chance to know her subject. Working with a single animal for such a long time is “fascinating,” she says, “because one gets to know so much about the individual—not just what is studied, but all the personality quirks and the temperament.” Some of these “quirks” were incorporated into published studies, such as how Alex spontaneously invented his own label for an apple—which he called a “banerry”—out of a combination of the labels “banana” and “cherry” (1); as Pepperberg says, this provided evidence that “Alex clearly did more than repeat what he learned vocally; he parsed his labels to make new ones, much as do humans” (2). But other examples of Alex’s quirks serve only as anecdotes to illustrate the unique individual that he was—like how he called cake “yummy bread” when he first tried it, or how he would say “You be good. I love you,” as Pepperberg left the lab each night.

These were his last words to Dr. Pepperberg, as their pioneering studies came to an abrupt halt in 2007 when Alex died suddenly at the age of 31. Although, at that point, Pepperberg’s research had involved several other parrots in addition to Alex, his death was devastating to her and to many around the world. However, Pepperberg sees her work, and the field of animal cognition more broadly, as still emerging. She sees broad-ranging potential in the field, with implications for animal welfare and conservation as well as for the development of teaching methods for children with cognitive deficits. “When I started, the field barely existed; ‘animal cognition’ was almost considered an oxymoron,” she says. “Today we have journals that are specifically devoted to the field….Only by continuing to study a variety of species will we really understand the various capacities of different ‘minds’” (1, 2).

A Return to Harvard

In July 2013, Irene Pepperberg returned to the campus where, almost four decades earlier, she had received her doctorate in theoretical chemistry, not knowing the path she would set out upon soon after graduating. She had been a Research Associate in the Vision Lab at Harvard since 2005 and had been teaching classes in animal cognition and human-animal communication at the College and the Extension School, but her research base had been at Brandeis for the past decade. After securing lab space at Harvard, she moved to William James Hall in July, bringing with her Griffin, an 18-year-old African Grey Parrot who had lived and learned alongside Alex for much of his life.

Having had Griffin since he was only seven and a half weeks old, Dr. Pepperberg knows Griffin’s quirks just as she knew Alex’s. And Griffin is certainly his own bird. He gets “self-conscious” when he struggles with a particular label and is more hesitant than Alex was, something Alex would sometimes exploit by prodding Griffin to produce the correct vocalization, while at other times he seemed to want to help by hinting at the correct label (1). Although Griffin speaks less now that Alex is gone, he has a list of impressive accomplishments to call his own. Most recently, he showed an understanding of the benefits of sharing, choosing to share a reward rather than act selfishly so long as his partner was also willing to share (4). He has also done work with optical illusions, demonstrating an ability to recognize occluded objects and providing insight into the commonalities between how birds and humans perceive the same visual illusions (2).

But, Griffin has not been alone in the Harvard lab. In addition to a dozen human research assistants, he also gained a new companion in October. Hatched in April 2013, Athena is Dr. Pepperberg’s first female African Grey. “So far, working with Athena seems to be a cross between my early work with Alex and that with Griffin,” Pepperberg says. “We learned a lot from both of the previous birds and are implementing some of that with Athena” (2). Over Athena’s first six months in the lab, research assistants have been working with her constantly on vocal labels and even audio recording her progress, from her very first warbles to her recently more distinct-sounding “wood” and “key” labels. “What will be really interesting is that new computer techniques and analysis tools will let us track her vocal development in ways that I couldn’t manage with Alex or Griffin,” Pepperberg says (2).

While, at the moment, Griffin is still warming up to the idea of having a new “little sister” around, Pepperberg hopes that Griffin will act as a model for Athena as she begins to learn new vocal labels. With only two birds, she will not be able to draw any definite conclusions about sex differences in cognition. However, she may be able to say something about how cognitive abilities develop over the lifetime of an individual, comparing Athena’s abilities with Griffin’s, and also tracking Athena’s progress over time, much in the same way that she did with Alex.

When asked about her experiences at Harvard, Dr. Pepperberg enthusiastically says, “So far, it’s been terrific!” She sees many opportunities for collaboration with other members of the Psychology Department, and she is excited by the possibilities for the future. As she writes in Alex & Me, “Alex left us as a magician might exit the stage: a blinding flash, a cloud of smoke, and the weaver of wizardry is gone, leaving us awestruck at what we’d seen, and wondering what other secrets remained hidden…wondering what else he would have done had he stayed” (1). As Dr. Pepperberg embarks on the next leg of her journey, it is clear that, although Alex is gone, some of these secrets may yet be revealed through a new set of voices.

Caitlin Andrews ‘16 is a sophomore in Eliot House concentrating in Organismic and Evolutionary Biology.

References

  1. I.M. Pepperberg, Alex & Me (HarperCollins, New York, 2008).
  2. I.M. Pepperberg, personal interview.
  3. I.M. Pepperberg, The Alex Studies (Harvard Univ Press, Cambridge, MA, 1999).
  4. F. Péron, L. Thornburg, B. Gross, S. Gray, I.M. Pepperberg, Human-Grey parrot (Psittacus erithacus) reciprocity: a follow-up study. Animal Cognition (2014).

 

Overstepping your Passion? The Science of Obsession

by Carrie Sha

The famous early twentieth-century writer Franz Kafka once counseled, “Follow your most intense obsessions mercilessly.” Although his advice seems to be a simple call to follow our passions, it can easily lead us astray. After all, Shakespeare’s Hamlet was haunted by “what dreams may come after we have shuffled off this mortal coil,” Oscar Wilde’s Dorian Gray was tormented by his fear of losing beauty, and F. Scott Fitzgerald’s Jay Gatsby was destroyed by his preoccupation with wealth and the past. Obsession, in other words, can cause us to lose control: we may hurt those we love, be plagued by unwanted sexual thoughts, or be driven by a relentless need for perfection. The fast-paced, competitive 21st-century environment forces us to question the wisdom of Kafka’s words even further. But before we can pass judgment on the validity of his advice, we must first define this unique phenomenon called obsession.

What is the nature of obsession? Where do we draw the line between passion and obsession? Is obsession a necessity for creativity and dedication or a mental disorder? According to the Diagnostic and Statistical Manual of Mental Disorders, an obsession is “recurrent and persistent thoughts, urges, or images that are experienced, at some time during the disturbance as intrusive and inappropriate, and that cause marked anxiety and distress.” (1) Moreover, “the person attempts to suppress or ignore such thoughts, impulses, or images or to neutralize them with some other thought or action.” (1) On the other hand, passion is “an intense desire or enthusiasm for something; the zealous pursuit of an aim.” (2) Thus, the key difference between an obsession and passion seems to be about control. While obsessive individuals are controlled by unwanted thoughts, passionate individuals make deliberate decisions based on their interests.

Since the time of Kafka, advances in medicine and neurobiology have allowed us to map the neural circuitry implicated in many common psychological disorders, including obsessive-compulsive disorder (OCD) and post-traumatic stress disorder (PTSD). Newly developing drugs may enable us to not only understand but also better control these disorders. Yet, the growing understanding of the science behind our compulsions prompts us to question the proper role of obsession in our passions, anxieties, and day-to-day “neuroticism.” So, what exactly can neurobiology tell us about these obsessive disorders and their relationship to “normal” psychology?

When Careful Becomes Compulsive: Obsessive-Compulsive Disorder (OCD) 

Evolutionarily, double- or even triple-checking whether the stove is off may seem to have its advantages. Rewind 200,000 years, and this extra attention to danger is comparable to an early human’s heightened sensitivity to the presence of predators. By natural selection, the careful survived while the less observant likely did not; heightened awareness, in other words, seems to increase an individual’s chances of survival. But what divides the extra careful from the mentally ill? According to the International OCD Foundation, the difference is intensity (3). Checking the stove every hour to quell recurring fears of the house burning down is characteristic of OCD; checking the stove once before leaving the house is not.

This intensity is not only excessive but often harmful. The roughly 2.2 million Americans diagnosed with OCD exhibit behaviors that seriously undermine their quality of life (4). This common anxiety disorder is characterized by clusters of symptoms, including compulsive checking, contamination fears, and intrusive thoughts (5). Imagine sitting in class every day terrified that you may accidentally hurt your teacher by dropping a pencil on the floor, or having to spend the seven minutes before class washing your hands repeatedly in the bathroom. OCD prevents patients from rationalizing their fears and, in doing so, stands in the way of a healthy, productive life.

But what is the biological basis that shifts carefulness into compulsiveness? OCD patients commonly show changes in orbitofrontal circuitry, characterized predominantly by increased cerebral glucose metabolism, that is, faster processing of sugars to fuel brain activity (6). The orbitofrontal loop traces a pathway from the basal ganglia, a group of subcortical nuclei involved in forming habits and emotions, to the frontal cortex. Since the basal ganglia are normally associated with developing motor patterns, research suggests that the same structures may be involved in forming repetitive thoughts, one of the hallmarks of OCD (7). More specifically, the fault in the orbitofrontal loop seems to stem from an imbalance of the neurotransmitters serotonin and dopamine, chemicals essential to forming emotions. In an unaffected person, these neurotransmitters are released to create a feeling of euphoria when something good happens. Researchers have modeled the persistent “checking” behavior of OCD in mice using chemicals that continually activate dopamine and serotonin receptors, which suggests that these signaling pathways are overstimulated in OCD patients (8). Accordingly, the most common drug treatment for OCD is a selective serotonin reuptake inhibitor (SSRI), which prevents neurons from reabsorbing released serotonin and thereby effectively increases its extracellular concentration (6). Patients who do not respond to SSRIs (approximately 50 percent of patients) are given analogous dopamine-targeting treatments (6). These drug treatments are often used in conjunction with cognitive behavioral therapy, the most common form of which is Exposure and Response Prevention (ERP) (9). Unlike in a typical counseling session, OCD patients in ERP are prompted to name their fears and to learn to stop coping with those fears through compulsive actions. The combination of drug treatment and cognitive behavioral therapy may ultimately reduce the repetitive “checking” behavior characteristic of OCD.
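The logic of reuptake inhibition can be captured in a deliberately simplified first-order model (an illustration, not a result from the cited studies). If serotonin is released into the synapse at a constant rate $R$ and cleared by reuptake at a rate proportional to its extracellular concentration $S$, with rate constant $k$, then

$$ \frac{dS}{dt} = R - kS \quad\Longrightarrow\quad S_{\mathrm{ss}} = \frac{R}{k}. $$

An SSRI lowers the effective reuptake constant $k$, so the steady-state concentration $S_{\mathrm{ss}}$ rises even though release is unchanged. Real serotonin signaling also involves autoreceptor feedback and receptor adaptation, which this sketch deliberately ignores.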

A Recurring Nightmare: Post-Traumatic Stress Disorder (PTSD) 

OCD is, of course, only one of a number of anxiety disorders characterized by recurring, obsessive thoughts. Another commonly discussed mental illness defined by a loss of control is post-traumatic stress disorder (PTSD). Normally, the body responds to stress via the “fight-or-flight” response, in which a surge of adrenaline allows an organism to quickly decide whether to fight or flee from a potentially life-threatening situation (10). This response helps protect the individual from physical and psychological harm by increasing the speed of response in an emergency. Extreme stress, however, such as a horrific accident or disaster, can durably shift a victim’s mental state. PTSD patients become obsessed with the event itself, and their minds replay it constantly. Research shows that some of this obsession can be traced to a gene responsible for producing stathmin, a protein found in the amygdala, the brain’s emotional control center (11). The general fear response is governed by a balance between activation of the amygdala and activation of the frontal cortex and the hippocampus, a medial temporal lobe structure responsible for memory formation; the control of fear is thus linked to previous experiences. In PTSD patients, decreased frontal control causes the amygdala to become overactive and the patients to be dominated by persistent fear. Recent research attributes this amygdala overactivity to a lack of synaptic plasticity, the ability of neuronal connections to strengthen or weaken depending on how they are used (12). This synaptic plasticity may in turn be controlled by an increased transcription rate of brain-derived neurotrophic factor (BDNF), a key protein in forming long-term memories (13). The altered regulation of BDNF thus hints at an association between the encoding of the memory of the traumatic incident and the constant fear exhibited by PTSD patients. The currently prescribed medications for PTSD are sertraline and paroxetine, both SSRIs that interfere with this fear response (14). The similar drug treatments for PTSD and OCD further suggest that properly balancing neurotransmitters may be key to controlling obsessions.

What Now? 

The line between passion and obsession is thin and at times entirely indecipherable. While psychology and neurobiology attempt to draw a clear boundary, we ultimately must decide for ourselves when and how “obsession” becomes pathological.

Yet, unlike the generations before us, we have the advantages of biological innovations and a greater societal awareness to add onto our innate ability to control our fates.

Carrie Sha ‘17 is a freshman in Thayer Hall.

References

  1. Greenberg, William M. (2013, Sept 23). Obsessive-Compulsive Disorder. Retrieved from http://emedicine.medscape.com/article/1934139-overview#a0101.
  2. Passion. (n.d.). In Oxford English Dictionary. Retrieved from http://www.oed.com/view/Entry/.
  3. International OCD Foundation. (2012). Obsessions and Compulsions. Retrieved from http://www.ocfoundation.org/o_c.aspx.
  4. National Institute of Mental Health (2014). What is Obsessive-Compulsive Disorder? Retrieved from http://www.nimh.nih.gov/health/topics/obsessive-compulsive-disorder-ocd/index.shtml.
  5. OCD-UK (2013). The Different Types of Obsessive-Compulsive Disorder. Retrieved from http://www.ocduk.org/types-ocd.
  6. Menzies et al. (2008). Integrating evidence from neuroimaging and neuropsychological studies of obsessive-compulsive disorder: The orbitofronto-striatal model revisited. Neuroscience & BioBehavioral Reviews, 32, 525-549.
  7. Graybiel, A.M. & Rauch, S.L. (2000). Toward a Neurobiology of Obsessive-Compulsive Disorder. Neuron, 28, 343-347.
  8. Eagle et al. (2014). The dopamine D2/D3 receptor agonist quinpirole increases checking-like behaviour in an operant observing response task with uncertain reinforcement: A novel possible model of OCD. Behavioral Brain Research, In Press.
  9. International OCD Foundation. (2012). Cognitive Behavior Therapy (CBT). Retrieved from http://www.ocfoundation.org/cbt.aspx.
  10. Brown, T.M., & Fee, E. (2002). Walter Bradford Cannon: Pioneer Physiologist of Human Emotions. Am J Public Health, 92(10), 1594-1595.
  11. National Institute of Mental Health (2014). What is Post-Traumatic Stress Disorder (PTSD)? Retrieved from http://www.nimh.nih.gov/health/topics/post-traumatic-stress-disorder-ptsd/index.shtml.
  12. Mahan, A.M. & Ressler, K.J. (2012). Fear conditioning, synaptic plasticity and the amygdala: implications for posttraumatic stress disorder. Trends in Neurosciences, 35, 24-35.
  13. Pape, H., and Pare, D. (2010). Plastic Synaptic Networks of the Amygdala for the Acquisition, Expression, and Extinction of Conditioned Fear. Physiol Rev, 90(2), 419-463.
  14. Davidson, J., Landerman, L.R., Clary, C.M. (2004). Improvement of anger at one week predicts the effects of sertraline and placebo in PTSD. Journal of Psychiatric Research, 38, 497-502.

Optogenetics: A New Frontier

by Jen Guidera

Neuroscientists often try to correlate observable behavior with activity in the brain. This is a grand undertaking, with the human brain containing an estimated 86 billion neurons and 100 trillion synapses (9, 10). Given the size and complexity of the brain, you may be surprised to learn that one of the most fruitful fields in neurobiology, electrophysiology, focuses on establishing correlations between brain activity and behavior at the level of single cells. Working at such a fine scale is attractive because of its inherently higher resolution: single cells potentially offer more information than entire brain regions.

A new technique called optogenetics is revolutionizing how we study the brain at the level of single neurons. In this news brief, we will explain how optogenetics works, tell the short story of its birth, and finally present a few of its current applications.

At its simplest, optogenetics involves firing neurons with light (3). Normally, neurons fire when they receive a burst of positive charge from one or more upstream neurons. This burst causes protein channels embedded in the membrane of the receiving neuron to open. Importantly, these channels allow positive charge to flow into the cell when opened, so that the resulting influx sends another burst of positive charge on to the next neuron.

Optogenetics uses protein channels that both mimic and differ from the cell’s own protein channels. Like the cell’s own protein channels, channels used in optogenetics also allow positive charge to flow into the cell when open, generating a signal sent to other neurons. However, unlike the cell’s own protein channels, channels used in optogenetics open in response to light (2).

Furthermore, drawing on techniques from genetics, scientists can express these light-responsive protein channels in specific populations of neurons, allowing them to fire only certain neuron types. Together, these two differences, response to light and expression in chosen cell populations, give neuroscientists new and exciting control over neurons: over when neurons fire, and over which neurons fire.
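To make this control concrete, the following toy simulation (a sketch with invented parameter values, not a model of any real opsin or of the cited experiments) shows a leaky integrate-and-fire neuron whose only input current flows through a hypothetical light-gated channel: the cell spikes when the light is pulsed and stays silent in the dark.

```python
# Toy model: a leaky integrate-and-fire neuron driven only by a hypothetical
# light-gated conductance. All parameter values are illustrative assumptions.

def simulate(light_on, t_max=0.5, dt=1e-4):
    """Return the spike times (in seconds) of the toy neuron."""
    v_rest, v_thresh, v_reset = -70e-3, -50e-3, -70e-3  # membrane voltages (V)
    tau_m = 20e-3          # membrane time constant (s)
    r_m = 100e6            # membrane resistance (ohm)
    i_light = 400e-12      # depolarizing current while the channel is open (A)

    v, spikes = v_rest, []
    for step in range(int(t_max / dt)):
        t = step * dt
        # The light-gated channel passes current only while the light is on.
        i_in = i_light if light_on(t) else 0.0
        v += ((v_rest - v) + r_m * i_in) / tau_m * dt
        if v >= v_thresh:              # threshold crossed: the neuron "fires"
            spikes.append(t)
            v = v_reset
    return spikes

# 20 ms light pulses delivered at 10 Hz drive spiking; constant darkness does not.
pulsed_light = lambda t: (t % 0.1) < 0.02
print(len(simulate(pulsed_light)), "spikes with pulsed light")
print(len(simulate(lambda t: False)), "spikes in darkness")
```

The two runs differ only in the light schedule, which is the point of the sketch: the experimenter, not the network, decides when the targeted cell fires.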

Optogenetics has a fairly recent history. Its inspiration can be traced back to a tiny light-sensitive protein channel discovered in 2002 in a species of single-celled green algae (1). The protein, called channelrhodopsin-1, is elegant in its simplicity: it is an ion channel that opens in response to light. When opened, the channel allows positive ions to flow into the cell, helping the alga pinpoint where it must swim to receive more light, an important input for photosynthesis.

A few years after the discovery of channelrhodopsin, scientists had the idea to insert the channel into the membrane of neurons (2). The thinking was that channelrhodopsin could be genetically engineered into only certain populations of neurons. Then, acting in the same way as native mammalian ion channels, these light-sensitive ion channels could be opened by shining light on the neurons, allowing positive ions to rush into the neurons and generating the electrical signal that neurons use to communicate.

Although conceptually simple, putting this idea into practice could potentially be very messy, recalled the pioneering optogeneticist and MIT professor Ed Boyden in his account of the field’s birth and early history (2). Would an ion channel that had evolved in a single-celled organism be compatible with mammalian neurons, which evolved separately over hundreds of millions of years? And would the channels be powerful enough to depolarize the relatively larger mammalian neurons? Despite these possible complications, in 2005, just three years after the discovery of the light-gated ion channel, scientists genetically engineered channelrhodopsin into hippocampal neurons (3).

Since the discovery of channelrhodopsin-1, many more light-sensitive ion channels have been identified, differing in the wavelength of light they respond to, how long they remain open, and what type of ions they allow into the cell (3). These channels open up new possibilities for temporally precise, population-specific control of neurons.

In the decade since its birth, optogenetics has been used to study basic neural circuits, including the innate escape response (4) and the proboscis extension reflex in fruit flies (5). The technique has also been applied to more complex circuits, such as those involved in anxiety (6). Beyond uncovering the neural underpinnings of particular behaviors, optogenetics has the potential to be incorporated into novel therapies for currently intractable conditions, including depression and drug addiction. For example, optogenetic stimulation of medial prefrontal cortex neurons in a mouse model of depression has been shown to relieve depressive symptoms (7). In the case of drug addiction, optogenetic stimulation of a distinct population of neurons projecting to the nucleus accumbens has been shown to reverse the neural and behavioral effects of cocaine addiction in mice (8).

Only ten years old, optogenetics is a burgeoning field with a bright future.

References

  1. Channelrhodopsin-1: a light-gated proton channel in green algae. G. Nagel, D. Ollig, M. Fuhrmann, S. Kateriya, A. M. Musti, E. Bamberg, P. Hegemann. Science. 296, 2395-8 (2002).
  2. A history of optogenetics: the development of tools for controlling brain circuits with light. Boyden. F1000 Biol Reports. 3 (2011).
  3. The development and application of optogenetics. L. Fenno, O. Yizhar, K. Deisseroth. Annu. Rev. Neurosci. 32, 389-412 (2011).
  4. Manipulation of an innate escape response in Drosophila: photoexcitation of acj6 neurons induces the escape response. G. Zimmerman, L. Wang, A. G. Vaughan, D. S. Manoli, F. Zhang, K. Deisseroth. PLOS ONE. 4, 1-10 (2009).
  5. Motor control in a Drosophila taste circuit. M. D. Gordon, K. Scott. Neuron. 61, 373-384 (2009).
  6. Diverging neural pathways assemble a behavioral state from separable features in anxiety. S. Kim et al. Nature. 496, 219-223 (2013).
  7. Antidepressant effect of optogenetic stimulation of the medial prefrontal cortex. H. E. Covington et al. The Journal of Neuroscience. 30, 16082-16090 (2010).
  8. Reversal of cocaine-evoked synaptic potentiation resets drug-induced adaptive behavior. V. Pascoli et al. Nature. 481, 71-75 (2011).
  9. Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. F. A. Azevedo et al. J Comp Neurol. 513, 532-41 (2009).
  10. Synapses and dendritic spines as pathogenic targets in Alzheimer’s disease. W. Yu and B. L. Neural Plast. 2012, 1-8 (2011).

 

 

Psychoactive Fungi: The World Before and After Psilocybin

by Tristan Wang

In 1960, on a summer day in Cuernavaca, Mexico, Harvard psychology professor Timothy Leary and several friends ingested a bowlful of psilocybin mushrooms, an experience that Leary later described as “the deepest religious experience of my life.” Upon returning to Harvard, Leary and his associate, Richard Alpert, immediately formed the “Harvard Psilocybin Project”, later known as the “Harvard Psychedelic Club”, with the intent to survey the psilocybin experiences of graduate students and faculty members in the Boston area. When the researchers denied participation to the curious undergraduate Andrew Weil, he published an exposé in the school newspaper about the club. The article ultimately sent Leary and Alpert packing, but more importantly, it introduced psilocybin to the general American public (Lattin 2010).

The active ingredient Leary ingested that summer day was the hallucinogen psilocybin, a chemical that rose to fame in the 1960s in both scientific and recreational circles. Psilocybin’s ubiquity gave it a powerful cultural influence. While it is now banned for recreational use, psilocybin compounds remain a fascination in the fields of mycology and psychology (Tylš 2013).

Ecology and Production

Psilocybin use can be traced back to rituals performed thousands of years ago in places such as Mexico and New Guinea (Tylš 2013; Guzmán 1998). The mushrooms’ widespread use was due in part to the extensive native habitats of psilocybin fungi. In the grand scheme of classification, psilocybin mushrooms are a subset of neurotropic fungi, fungi whose chemical compounds have a special affinity for neural tissue (Guzmán 1998). Neurotropic fungi include not only psilocybin-containing fungi but also fungi that contain neurotropic chemicals such as ibotenic acid (found in some Amanita species) and ergotamine (found in ergot fungi) (Guzmán 1998). Psilocybin-containing fungi are the most numerous of these; they are mostly concentrated within the genus Psilocybe, which contains 116 hallucinogenic species (Guzmán 1998). Neurotropic fungi have been identified as far north as Alaska and Siberia, and as far south as Chile and New Zealand.

Psilocybin is produced throughout the mushroom, particularly in the carpophores (fruiting bodies) and mycelia (the hair-like structures that make up the thallus, or main body, of the fungus) (Leung 1965). The concentration of psilocybin in a mushroom may range from 0.2% to 1% of its total dry weight (Tylš 2013). Although it is not clear why Psilocybe fungi produce this hallucinogen, several theories attempt to explain the enigma. For instance, some species of the genus, including the most common psychoactive species, P. semilanceata, have been reported to have antimicrobial properties (Ranadive 2013), and psilocybin may contribute to them. Alternatively, psilocybin could simply be a by-product of metabolism, serving no function.

Chemical Properties 

Psilocybin (O-phosphoryl-4-hydroxy-N,N-dimethyltryptamine) is a psychoactive hallucinogen, but the molecule responsible for its mental effects is its metabolized form, psilocin (Tylš 2013). A hallucinogen is a substance that induces distortion of cognitive processes, including changes in perception and disposition (Nichols 2004). Generally, hallucinogens are relatively safe physiologically and do not induce dependence (Nichols 2004). Psilocybin is water-soluble, whereas psilocin is lipid-soluble, which may explain the latter’s lasting effect (Tylš 2013).

Figure: The molecular structure of psilocybin. Image courtesy of Wikimedia Commons.

When psilocybin is ingested, it is quickly metabolized: it is dephosphorylated into psilocin within the intestinal mucosa by the enzyme alkaline phosphatase (Tylš 2013). In animal studies, psilocin was detected in several parts of the brain, including the neocortex, hippocampus, and extrapyramidal motor system, as well as in the kidneys and liver (Tylš 2013). In the brain, psilocin has a strong affinity for serotonin (5-HT) receptors, eliciting effects often associated with serotonin itself, which is well known for its anti-depressive role (Tylš 2013; Passie 2002; Neumeister 2002). At high enough doses, psilocin may produce altered perception, including confusion, disorientation, hyperactivity, and staring into empty space (Tylš 2013; Berger 2005).

Current Status

After its widespread recreational use, psilocybin was classified as a Schedule I drug in 1970 (Tylš 2013). Nowadays, psilocybin is re-emerging as a popular research topic because of its potential therapeutic value (Tylš 2013). Psilocybin also remains an important model for psychosis and schizophrenia, as it induces psychotic symptoms and may help us gain a more thorough understanding of brain activity in general (Tylš 2013). Given today’s changing views on drug use in medicine and recreation, it has become even more important to fully understand this ancient yet fascinating drug.

Tristan Wang ‘16 is a sophomore in Kirkland House and a prospective Organismic and Evolutionary Biology concentrator.

References

  1. Lattin, Don. The Harvard Psychedelic Club: How Timothy Leary, Ram Dass, Huston Smith, and Andrew Weil Killed the Fifties and Ushered in a New Age for America. New York: HarperOne, 2010.
  2. Tylš, Filip, Tomáš Páleníček, and Jiří Horáček. “Psilocybin – Summary of Knowledge and New Perspectives.” European Neuropsychopharmacology (2013): n. pag.
  3. Guzmán, Gastón, John W. Allen, and Jochen Gartz. “A Worldwide Geographical Distribution of the Neurotropic Fungi, an Analysis and Discussion.” Ethnomycological Journals: Sacred Mushroom Studies 14 (1998): 189-280.
  4. Leung, Albert Y., A. H. Smith, and A. G. Paul. “Production of psilocybin in Psilocybe baeocystis saprophytic culture.” Journal of Pharmaceutical Sciences 54.11 (1965): 1576-579.
  5. Ranadive, Kiran R., Mugdha H. Belsare, Subhash S. Deokule, Neeta V. Jagtap, Harshada K. Jadhav, and Jitendra G. Vaidya. “Glimpses of Antimicrobial Activity of Fungi from World.” Journal on New Biological Reports 2.2 (2013): 142-62.
  6. Nichols, David E. “Hallucinogens.” Pharmacology & Therapeutics 101.2 (2004): 131-81.
  7. Passie, Torsten, Juergen Seifert, Udo Schneider, and Hinderk M. Emrich. “The Pharmacology of Psilocybin.” Addiction Biology 7.4 (2002): 357-64.
  8. Neumeister, A. “Tryptophan Depletion, Serotonin, and Depression: Where Do We Stand?” Psychopharmacology Bulletin 37.4 (2002): 99-115.
  9. Berger, Kyan J., and David A. Guss. “Mycotoxins Revisited: Part II.” The Journal of Emergency Medicine 28.2 (2005): 175-83.

Politics of HIV/AIDS and the Singing Brain

by Quang Nguyen

Globally, over 35 million people were living with HIV in 2012 (1). In addition to the severe physical and immunological deterioration associated with the progression of the illness, HIV/AIDS creates a significant neuropsychological burden on those infected and their social networks. This additional suffering contributes to decreases in medical adherence, increases in risky sexual behaviors, and difficulties with status disclosure, acceptance, and coping (2). The resulting anxiety and pessimism reduce the efficacy of care and treatment. Nevertheless, this emotional damage may create a clinical and social opportunity for music, a “harmonic medicine,” to serve as a counterbalance to the negative emotions caused by HIV-related stigma (3).

Music is all around us. Alongside speech, writing, and performance, music has evolved into one of the most powerful social, cultural, and political practices. The power of this sonic language stems largely from its ability to create “sensations, imagination, and experience[s]” that persuasively trigger certain emotions and behavioral changes (4). If you have ever shivered just from listening to a song, you have experienced the emotional power of music. These music-induced sensations are the result of neuronal activation in the orbitofrontal and cingulate cortices of the brain (3). Through complex biochemical interactions, music then encourages a voluntary “musical participation” between those affected and others.

On a more sociological level, music can not only embody one’s political values and experiences but also effectively propagate one’s opinions, ideology, arguments, and beliefs. In fact, social advocates and political activists around the world have used music as a global resource in the fight against HIV/AIDS since the early 1980s, when the disease was mislabeled Gay-Related Immune Deficiency (GRID) out of fear and ignorance (5). In this article, we will take a brief journey into the brain, the center of the nervous system, to investigate how a simple melody of tones with no “intrinsic reward value” can effectively unite and empower people in their political and emotional responses to the HIV/AIDS pandemic (6).

Political Consequences of Musical Re-indexing 

In his 2011 book, AIDS, Politics, and Music in South Africa, Fraser McNeill reveals the multifaceted power of locally recognized music to spread safe-sex messages among the general public and to educate and influence people’s decisions about HIV/AIDS. For example, consider the following pro-protection message:

Khondomu ndi bosso!
Condom is the boss!
I thivhela malwadze!
It prevents sickness!
Khondomu nga i shume.
Use condoms!
Khondomu ndi bosso!
Condom is the boss!

Here, the flexibility of music is at work. These phrases are a modified version of a famous anti-apartheid song associated with Joe Modise, South African Minister of Defense from 1994 to 1999 (7). Taking advantage of the area’s strong musical culture, a group of young women in Venda, in the Thulamela Municipality of South Africa’s Vhembe District, who called themselves peer educators and adopted the slogan “Prevention is Better than Cure,” chose to sing these succinct phrases while performing the Venda python dance (domba) to convey their message. In an adaptation of a famous Lutheran hymn, the phrase “Jesus is number one!” was likewise changed into “Condom, condom, condom is number one, no matter what the people say, but condom is number one!” (7) By changing the words of a song, one can change its original meaning and focus attention on a new target. In this case, by “indexing” AIDS to the Boer, an Afrikaans term for the Dutch-descended settlers that came to stand for supporters of the Nationalist government, the peer educators channeled people’s memories and experiences of the anti-apartheid struggle into the current struggle against AIDS (7). Another instance of “re-indexing” through lyric change comes from a song named “I ya vhulaya,” or “It kills,” which contributed to public engagement with the 2010 recommendations on when to start antiretroviral therapy (ART) (7). Currently, the WHO recommends ART for all adults and adolescents with a CD4 count at or below 500 cells/mm3, prioritizing those with counts at or below 350 cells/mm3 and those with advanced HIV disease (8). Through music, government policy on a given issue becomes subject to wider public review: when the “melody, meter, … timbre” and lyrics of the music people listen to generate sufficient emotional attention, musically influenced listeners can push collectively for change (3).

Although cultural, social, and traditional conflicts present some of the most challenging roadblocks to international interventions, especially in developing countries like Nigeria, where laws against same-sex marriage have been particularly harsh, they also provide a unique opportunity for the musical arts. In these contexts, music can participate in infectious disease prevention as a unifying catalyst for “retraditionalization” in an ever-evolving community (7). After all, every human culture has some form of music, and all humans are neurologically and socially capable of “creating and responding to music” (9). This capacity of music to let people “sing about what [they] cannot talk about,” accompanied by an explanation of what causes infection, can be used to drive positive political energy in the response to HIV/AIDS (7).

Neuroarchitecture of Musical Emotion 

The work of the peer educators in the examples above is not only social and political action; it also builds upon a sophisticated “neuroarchitecture” of musical emotion (10). Have you ever had a song stuck in your head? If so, you have experienced “involuntary musical imagery” (INMI), or the “earworm” phenomenon, in which an unwanted, familiar, and most likely overlearned tune keeps repeating in your mind, sometimes uncontrollably (11). This common “sticky music” phenomenon, postulated to result from neural playback circuits, reveals musical transliminality, a hypersensitivity to music due to its powerful cognitive penetrability (11). Given this music-specific mental penetration, albeit one with a short “life expectancy,” music is a promising, largely untapped resource that, when used well, can influence our emotional and mental states in a variety of useful ways (11).

Researchers have therefore asked: can music trigger emotional changes strong enough to induce particular actions? Which of the brain changes induced by musical engagement have been recognized, intentionally or unintentionally, and used as tools in HIV/AIDS politics? With the help of current and developing neurotechnology, we are much closer to solving the mystery of how the brain perceives music.

Musical Rhythm as the Brain’s Temporal Timer 

Just as music facilitates communication among people in a community, it also shapes the communication of information across the auditory cortex, located at the juncture of the Sylvian fissure and the adjacent Heschl’s gyrus in the human brain (3). At this juncture, the core auditory cortical fields are concentrated, organized by the frequency-specific components of rhythm and timbre relayed from the thalamus (3). Musical sounds, with their uniquely structured rhythmic patterns, serve as “sensory timers” that have been shown to improve the recovery of motor function in neurologically damaged patients with stroke, Parkinson’s disease, and traumatic brain injury (12). Musical rhythm creates a meaningful sound pattern in time that parallels the “oscillatory ‘rhythmic’ synchronization codes of neural information processing” in the brain (12). In turn, this complex and expansive cortical process has an additive effect that streamlines the transfer of “sensory and cognitive-perceptual information” (6, 12).

It has thus been suggested that music is “written in the time code of rhythm,” as its sound patterns resemble the oscillatory, rhythmic processing of the brain (12). This temporal feature of music constitutes the neurobiological foundation of perception and learning (12). When we listen to music, dopamine is released in the nucleus accumbens, engaging the reward pathway that operates in perception, addiction, motivation, and emotion (3, 6). Specifically, Salimpoor et al. (2013) showed that our appreciation of desirable new music reflects not only the auditory cortical processes shaped by our listening history but also our temporal expectation of the music’s rewarding value, based on an implicit understanding of musical sound and structure. Temporal control in the oscillatory circuits of the brain’s speech centers is essential for coordinating movement, memory, and other executive functions. Existing studies provide strong prima facie evidence that music can stimulate compensatory neuronal networks for brain areas whose functions are compromised; music may thus also contribute to neural plasticity. Indeed, listening to music with lyrics has been documented to elicit wider bilateral neural activity than purely verbal materials, and regular self-directed music listening has been shown to increase the compensatory capacity of different brain regions in patients with unilateral middle cerebral artery (MCA) stroke (14).

Retrieval of Musical Information and Neuronal Memory 

Coupled with rhythmic patterning, musical “chunking” through melody acts as an effective mnemonic device, which is essential in memory coding (12). For instance, despite their impaired performance on standard memory tests involving word lists, stories, or figures, patients with Alzheimer’s disease have been shown to retain musical information and to skillfully play previously learned songs (14). If patients with neurological disorders can retain music-specific information, it is unsurprising that music can also be used to recreate or trigger profound musical emotions in healthy individuals, especially when that musical information is strongly associated with a particular past event. Neuroscientific studies support this postulation, linking such differential activation to stimulation of the anterior parabelt regions surrounding the core auditory cortical fields, thereby connecting auditory stimuli and memory (3). Therefore, music not only allows one to form new memories of particular events by interpreting and responding to an associated musical piece, but it may also help one create a stable neuronal memory that is resistant to certain neurodegenerative forces.
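As a loose illustration of why “chunking” lightens the memory load, the toy sketch below rewrites a note sequence as repeated motifs, so that far fewer units need to be held in memory; the motifs and melody are invented purely for illustration.

```python
# Toy illustration of musical "chunking": a melody built from a few repeated
# motifs can be stored as a short list of motif labels instead of raw notes.
# The motifs and melody here are invented purely for illustration.

motifs = {
    "A": ["C", "E", "G"],          # a rising arpeggio
    "B": ["G", "F", "E", "D"],     # a stepwise descent
}

# A 14-note melody built entirely from the two motifs above.
melody = ["C", "E", "G", "G", "F", "E", "D",
          "C", "E", "G", "G", "F", "E", "D"]

def chunk(notes, motifs):
    """Greedily rewrite a note list as motif labels wherever a motif matches."""
    chunks, i = [], 0
    while i < len(notes):
        for label, pattern in motifs.items():
            if notes[i:i + len(pattern)] == pattern:
                chunks.append(label)
                i += len(pattern)
                break
        else:                      # no motif matched: keep the raw note
            chunks.append(notes[i])
            i += 1
    return chunks

chunked = chunk(melody, motifs)
print("raw notes to remember:", len(melody))      # 14 items
print("chunks to remember:   ", len(chunked))     # 4 items
print("chunked form:         ", chunked)          # ['A', 'B', 'A', 'B']
```

Storing four motif labels instead of fourteen notes is, in miniature, the compression that makes melodic structure such an effective mnemonic scaffold.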

Cognitive Emotion in HIV/AIDS-related Depression 

As neurocognition and immunity are progressively compromised in HIV-infected patients, HIV-related depression impedes the efficacy of ART by further worsening their emotional state. Clinically depressed individuals are poorly equipped to regulate negative moods, as they often have difficulty accepting and reframing negative automatic thoughts (NATs) (15). Inevitably, the accumulation of this emotional distress impairs their “adaptive cognitive coping strategies,” continually increasing their susceptibility to more advanced neurocognitive sequelae such as HIV-associated dementia. Beyond other people’s perceptions, compromised behavioral regulation of emotion in HIV-infected individuals can be further complicated by their own internalization of the stigma (15). All of these behavioral and social challenges, however, open the door for “harmonic medicine” to play a role in both the socio-behavioral and the neuropsychological aspects of HIV/AIDS treatment (3).

Harmonic Medicine in Emotional Neuroscience 

Modulation of amygdala reactivity has also been shown to depend on the recruitment of various frontal brain regions, including the dorsolateral prefrontal cortex (DLPFC), orbitofrontal cortex (OFC), and anterior cingulate cortex (ACC). This “amygdala-frontal coupling” has been linked to the ability to self-regulate negative emotion and to trigger emotion-related behaviors during distress (16). As a result, music, a highly emotional stimulus, can serve as a powerful harmonic medicine for the self-regulation of negative emotions, an important cognitive coping ability that tends to be only minimally induced in HIV-infected individuals by cognitive and behavioral interventions alone (15). Furthermore, Särkämö and colleagues (2008) showed that listening to music every day helped prevent negative mood in patients with middle cerebral artery stroke, who experienced less frequent episodes of depression and confusion (13).

Just as emotions can be classified as happy or sad, the emotional responses that music elicits through its complex melodic organization can be classified by mood, each associated with different bilateral signal changes in the amygdala and limbic system of the listener (3, 18). Researchers have also suggested that the increased concentration of hormones such as prolactin in tears shed during extreme emotional experiences, for example during childbirth or orgasm, may be an evolutionary mechanism that facilitates the encoding of those experiences into memory (3). This may help explain the physical changes people undergo when watching an orchestra perform. It is therefore no exaggeration to say that music plays an important role in human emotional experience.

Furthermore, foundational elements of music, such as pitch, tone, and rhythm, each engage different parts of the brain in a sophisticated bilateral cortical collaboration, creating a combined effect on emotion, memory, and perception (13). In fact, the elaborate interactions among sensory-motor, auditory, and frontal cortical networks have been proposed to elicit musical emotions, thereby influencing behavioral decisions by engaging higher-order cognitive-affective regions in the brain through the acquired “incentive salience” of music (6). Therefore, music not only helps lessen the social and cultural burdens on people living with HIV/AIDS by allowing their pain to be musically heard and defragmented, but it may also help alleviate their psychological challenges by changing or maximizing the brain’s music-dependent plasticity under conditions of external and internal stress (3, 12, 13).
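To make the idea of separable musical features a little more concrete, the sketch below pulls a crude pitch estimate and a crude event-rate (rhythm) estimate out of a synthetic tone sequence, using only a Fourier transform and a smoothed amplitude envelope. The signal, thresholds, and window sizes are invented, and real music-feature extraction is considerably more sophisticated.

```python
# Crude, illustrative separation of "pitch" and "rhythm" features from a
# synthetic tone sequence. Signal, thresholds, and window sizes are invented.
import numpy as np

sr = 8000                                    # sample rate (Hz)
freq = 440.0                                 # every note is A4 for simplicity
note_s, beat_s, n_beats = 0.2, 0.5, 8        # 0.2 s notes, one per half second

# Build 4 seconds of audio: a short tone at each beat, silence in between.
t_note = np.arange(0, note_s, 1 / sr)
note = np.sin(2 * np.pi * freq * t_note)
gap = np.zeros(int((beat_s - note_s) * sr))
signal = np.tile(np.concatenate([note, gap]), n_beats)

# "Pitch" feature: dominant frequency in the Fourier spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / sr)
print(f"dominant frequency: {freqs[spectrum.argmax()]:.1f} Hz")

# "Rhythm" feature: count note onsets from a smoothed loudness envelope.
window = int(0.01 * sr)                      # 10 ms smoothing window
envelope = np.convolve(np.abs(signal), np.ones(window) / window, mode="same")
active = (envelope > 0.1).astype(int)        # crude loudness gate
onsets = np.flatnonzero(np.diff(np.concatenate(([0], active))) == 1)
duration_s = len(signal) / sr
print(f"estimated event rate: {len(onsets) / duration_s:.1f} notes per second")
```

Even this crude split shows that the “what pitch” and “when” aspects of the same sound can be read out independently, a computational analogue of the parallel cortical processing described above.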

Neuropsychological Integration of Music in HIV/AIDS Prevention, Treatment, Care, and Politics 

Music is the building block of the audience’s emotional experience and the medium through which performers express their message. In the past, the therapeutic value of music was interpreted primarily in terms of its indirect effects on cultural and emotional wellbeing (12). Only recently has this understanding been expanded to include the neurology underlying music’s healing power. This wider view suggests potential applications of music both as therapy and as a means of confronting the politics of HIV/AIDS, a highly stigmatized and emotionally charged disease with a long history (12).

Music’s power to change emotions and behaviors has been harnessed in many HIV prevention programs, especially in developing countries where music is often a central cultural value. As HIV/AIDS imposes heavy negative emotional changes that can subsequently result in neurocognitive deficits, music, along with neuroscience, psychology, and medicine, is an invaluable resource. With proper implementation, it may help improve current disease control, programming, and policy planning. Because HIV/AIDS incubates fear and emotional distress in those affected, music offers an inexpensive healing power, free of adverse effects, that can promote HIV/AIDS prevention. Furthermore, with its capacity to change emotions and alter perceptions and social behavior, music also carries a politics of its own. As McNeill (2011) points out, music is a social solvent through which politics, religions, and traditions can homogenize for the better (7). Lastly, it is important to recognize that gender differences play a crucial role in emotional reactions to music, and that certain marginalized populations are more vulnerable to social stigma than others. Thus, musical applications, whether political acts or medical and educational programs, must remain culturally appropriate, gender-relevant, population-specific, and, most importantly, ethically justified (11).

Reverend Jackson Muteeba, director of the Integrated Development and AIDS Concern (IDAAC) in Iganga, Uganda, asserts that only behavior can serve as the “metaphoric ‘language’” that “AIDS can hear, […] understand, […] for the people to come to terms with the realities of the disease-both cultural and medical” (19). Creative music can evolve to empower stigmatized HIV-infected individuals while educating others, infected or not.

Quang Nguyen ‘16 is a sophomore at Duke University majoring in Cellular and Molecular Biology.

 

References 

  1. Joint United Nations Programme on HIV/AIDS (UNAIDS), Global Report. UNAIDS Report on the Global AIDS Epidemic 2013. (2013).
  2. R. Lyimo et al., Stigma, Disclosure, Coping, and Medication Adherence Among People Living with HIV/AIDS in Northern Tanzania. AIDS Patient Care STDS. 28, 98-105 (2014).
  3. A. J. Kobets, Harmonic Medicine: The Influence of Music Over Mind and Medical Practice. Yale J. Biol. Med. 84, 161-167 (2011).
  4. T. Turino, Music as Social Life: The Politics of Participation (University of Chicago Press, Chicago, 2008).
  5. A. E. Lyman, John Corigliano’s Of Rage and Remembrance: Community and Ritual in the Age of AIDS. American Choral Review. 54, 1-7 (2012).
  6. V. N. Salimpoor et al., Interactions Between the Nucleus Accumbens and Auditory Cortices Predict Music Reward Value. Science. 340, 216-219 (2013).
  7. F. G. McNeill, AIDS, Politics, and Music in South Africa (Cambridge University Press, New York, 2011).
  8. World Health Organization (WHO), Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection. (2013).
  9. The Origins of Music (MIT Press, Cambridge, 2001).
  10. A. Sel, B. Calvo-Merino, Neuroarchitecture of musical emotions. Rev. Neurol. 56, 289-297 (2013).
  11. C. P. Beaman, T. I. Williams, Earworms (stuck song syndrome): Towards a natural history of intrusive thoughts. Br. J. Psychol. 101, 637-653 (2010).
  12. M. H. Thaut, The Future of Music in Therapy and Medicine. Ann. N. Y. Acad. Sci. 1060, 303-308 (2005).
  13. T. Särkämö et al., Music listening enhances cognitive recovery and mood after middle cerebral artery stroke. Brain. 131, 866-876 (2008).
  14. A. Cowles et al., Musical Skill in Dementia: A Violinist Presumed to Have Alzheimer’s Disease Learns to Play a New Song. Neurocase. 9, 493-503 (2003).
  15. R. C. McIntosh, J. S. Seay, M. H. Antoni, N. Schneiderman, Cognitive vulnerability for depression in HIV. J. Affect. Disord. 150, 908-915 (2013).
  16. S. J. Banks, K. T. Eddy, M. Angstadt, P. J. Nathan, K. L. Phan, Amygdala–frontal connectivity during emotion regulation. Social Cognitive and Affective Neuroscience. 2, 303-312 (2007).
  17. H. T. Ghashghaei, C. C. Hilgetag, H. Barbas, Sequence of information processing for emotions based on the anatomic dialogue between prefrontal cortex and amygdala. Neuroimage. 34, 905-923 (2007).
  18. S. Koelsch, Towards a neural basis of music-evoked emotions. Trends Cogn. Sci. 14, 131-137 (2010).
  19. G. F. Barz, Singing for Life: HIV/AIDS and Music in Uganda (Routledge, New York, 2006).